Every time I turn those on (plus timestamping), it breaks something. The last time I tried, it broke FTP-based transfers of new IOS images, and I had to disable it or use TFTP to get a non-corrupted image (SRA). The time before that, it occasionally caused BGP keepalives to be missed, which dropped the session (SXF). It may work now, or there may be more subtle Cisco bugs lurking, who knows. :)
I tried that, no dice. I thought it would actually work.
You can confirm what MSS is actually being used in show ip bgp neighbors, under the "max data segment" line. I believe in modern code there is a way to turn on PMTUD for all BGP neighbors (or individual ones), which may or may not depend on the global ip tcp path-mtu-discovery setting. I don't recall off the top of my head, but you should be able to confirm what size messages you're actually trying to send. FWIW I've run extensive tests on BGP with a > 9000 byte MSS (though numbers that large are completely irrelevant, since BGP's maximum message size is 4096 bytes) and never hit a problem. I once saw a bug where Cisco miscalculated the MSS when doing TCP MD5 (off by the number of bytes the TCP option takes, I forget in which direction), but I'm sure that's fixed by now too. :)
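If memory serves (so treat the exact syntax as an approximation rather than the definitive command set), the knobs look roughly like this, with the AS number and neighbor address as placeholders:

router bgp 64512
 ! turn on TCP path MTU discovery for all BGP sessions on the box
 bgp transport path-mtu-discovery
 ! or just for one particular neighbor
 neighbor 192.0.2.1 transport path-mtu-discovery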
Below is a snapshot of the neighbor in question.

Datagrams (max data segment is 4410 bytes):
  Rcvd: 6 (out of order: 0), with data: 4, total data bytes: 278
  Sent: 6 (retransmit: 5), with data: 2, total data bytes: 4474

Could there be a problem with the total data bytes exceeding the size of the max data segment? Below is the router (7206 NPE-400) I am trying to establish the BGP session with.

<snip>
  Description: cr1.AUSTTXEE
  Member of peer-group TelWest-iBGP for session parameters
  BGP version 4, remote router ID 67.214.64.97
  BGP state = Established, up for 00:00:02
  Last read 00:00:02, hold time is 180, keepalive interval is 60 seconds
  Neighbor capabilities:
    Route refresh: advertised and received(old & new)
    Address family IPv4 Unicast: advertised and received
  Message statistics:
<snip>
  Datagrams (max data segment is 4410 bytes):
    Rcvd: 4 (out of order: 0), with data: 1, total data bytes: 64
    Sent: 5 (retransmit: 0, fastretransmit: 0), with data: 3, total data bytes: 259
cr2.CRCHTXCB#
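To re-check just those counters from scratch, something like this should work (the neighbor address below is only a placeholder, and clearing the session will of course reset it):

clear ip bgp 192.0.2.1
show ip bgp neighbors 192.0.2.1 | include segment|retransmit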
Here is more of the configuration to do with TCP information.
ip tcp selective-ack
ip tcp window-size 65535
ip tcp synwait-time 10
ip tcp path-mtu-discovery
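Given the breakage described above, if these globals turn out to be the culprit, backing them out should just be the matching "no" forms (exact syntax from memory, so treat this as a sketch):

no ip tcp selective-ack
no ip tcp window-size
no ip tcp synwait-time
no ip tcp path-mtu-discovery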