Date: Mon, 17 Jun 2013 22:04:52 -0600 From: Phil Fagan <philfagan@gmail.com> ... you could always thread the crap out of whatever it is you're transacting across the link to make up for TCP's jackknives...
What is a TCP jackknife? Cheers. Jakob.
It is also called a "sawtooth" or similar terms. Just google "tcp sawtooth" and you will see many references, and images that depict the traffic pattern.

HTH,

Fred Reimer | Secure Network Solutions Architect
Presidio | www.presidio.com
3250 W. Commercial Blvd Suite 360, Oakland Park, FL 33309
D: 954.703.1490 | C: 954.298.1697 | F: 407.284.6681 | freimer@presidio.com
CCIE 23812, CISSP 107125, HP MASE, TPCSE 2265
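For a rough sense of the ceiling that sawtooth puts on a single flow, the well-known Mathis et al. approximation, rate ≈ MSS / (RTT × √p), is easy to evaluate. A quick sketch with purely illustrative numbers:

    # Mathis-model ceiling for one TCP flow: (MSS / RTT) / sqrt(loss).
    # 1460-byte MSS, 160 ms RTT, and 0.01% loss are assumed values.
    awk 'BEGIN { mss = 1460; rtt = 0.160; p = 0.0001
                 bps = (mss * 8 / rtt) / sqrt(p)
                 printf "~%.1f Mbit/s per flow\n", bps / 1e6 }'

With those numbers a single flow tops out around 7.3 Mbit/s, which is why the parallel-session trick quoted above helps.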
Thanks Fred. Sawtooth is more familiar. How much of that do you actually see in practice?

Cheers,
Jakob.
Sorry; yes, sawtooth is the more accurate term. I see this on a daily basis with large data-set transfers, generally when the data set is a large multiple of the initial window. I've never tested medium latency (<100 ms) with payloads small enough that threading out many thousands of sessions might pay off. However, medium latency with large files (50 MB-10 GB) threads well in the sub-200-session range and does a pretty good job of filling several gig links. None of this is scientific; just my observations from the wild, influenced by end-to-end tunings per environment.

--
Phil Fagan
Denver, CO
970-480-7618
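The "threading" Phil describes can be as simple as splitting one HTTP download into byte ranges and fetching the ranges over parallel TCP connections. A minimal sketch, assuming the server honors Range requests; the URL and part count here are placeholders:

    # Fetch one large file over several parallel TCP flows.
    # Assumes the server supports HTTP Range requests; URL is hypothetical.
    URL=http://example.com/big.iso
    PARTS=8    # keep <= 10 so the final cat sorts part.* correctly
    SIZE=$(curl -sI "$URL" | awk 'tolower($1)=="content-length:" {print $2+0}')
    CHUNK=$(( (SIZE + PARTS - 1) / PARTS ))
    for i in $(seq 0 $((PARTS - 1))); do
        START=$(( i * CHUNK ))
        END=$(( START + CHUNK - 1 ))
        curl -s -r "$START-$END" -o "part.$i" "$URL" &   # one TCP flow per range
    done
    wait
    cat part.* > big.iso && rm part.*

Each flow runs its own congestion window, so a loss event only resets one-Nth of the aggregate rate, which is what blunts the sawtooth.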
Dear All,

We deal with TCP window size all day, every day, across the Southern Cross cable from LA to Australia, which adds around 160ms... I've given up looking for a solution to get around the physics of sending TCP traffic a long distance at high speed.... UDP traffic, however, comes in very fast....

Kindest Regards,

James Braunegg
W: 1300 769 972 | M: 0488 997 207 | D: (03) 9751 7616
E: james.braunegg@micron21.com | ABN: 12 109 977 666
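The window size James is fighting is set by the bandwidth-delay product: to keep a pipe full, the sender needs a window of at least bandwidth × RTT. A quick back-of-the-envelope check, using the 160 ms figure above and an assumed 1 Gbit/s path:

    # Bandwidth-delay product: the TCP window needed to fill the pipe.
    # 1 Gbit/s and 160 ms RTT are assumed numbers for illustration.
    awk 'BEGIN { bw = 1e9; rtt = 0.160
                 bdp = bw * rtt / 8
                 printf "BDP = %.0f bytes (~%.0f MB)\n", bdp, bdp / 1048576 }'

At 10 Gbit/s the same math calls for roughly a 200 MB window, far beyond most default stack settings, which is why untuned transfers crawl on this path.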
On Tue, 18 Jun 2013 15:53:48 -0000, James Braunegg said:
We deal with TCP window size all day, every day, across the Southern Cross cable from LA to Australia, which adds around 160ms... I've given up looking for a solution to get around the physics of sending TCP traffic a long distance at high speed....
http://www.extremetech.com/extreme/141651-caltech-and-uvic-set-339gbps-inter...

It's apparently doable. ;)

A quick cheat sheet for the low-hanging fruit: http://www.psc.edu/index.php/networking/641-tcp-tune

Though to get to *really* high throughput, you may have to play some games with TCP slow start so it's not quite as slow (otherwise, for long hauls, it can literally take hours to reopen the window after a packet burp at 10G or higher).

Also, you may want to look at CoDel or related queueing disciplines to minimize the amount of trouble that bufferbloat can cause you at high speeds.
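For reference, the host-side tuning that cheat sheet walks through looks roughly like the following on Linux. The buffer sizes, gateway, and interface name are illustrative assumptions, not recommendations:

    # Illustrative long-fat-pipe tuning on Linux; numbers are examples only.
    sysctl -w net.core.rmem_max=67108864    # allow up to 64 MB receive buffer
    sysctl -w net.core.wmem_max=67108864    # allow up to 64 MB send buffer
    sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
    sysctl -w net.ipv4.tcp_congestion_control=cubic

    # The "slow-start games": a larger initial congestion window.
    # Gateway and interface below are placeholders.
    ip route change default via 192.0.2.1 dev eth0 initcwnd 10

    # A CoDel-based queueing discipline to keep bufferbloat in check:
    tc qdisc replace dev eth0 root fq_codel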
Dear Valdis,

Thanks for your comments. While I know you can optimize servers for TCP windowing, I was talking more about network backhaul, where you don't have control over the server sending the traffic, i.e. backhauling IP transit over the Southern Cross cable system.

Kindest Regards,

James Braunegg
On Wed, 19 Jun 2013 00:24:15 -0000, James Braunegg said:
Thanks for your comments. While I know you can optimize servers for TCP windowing, I was talking more about network backhaul, where you don't have control over the server sending the traffic.
If you don't have control over the server, why are you allowing your customer to make their misconfiguration your problem? (Mostly a rhetorical question, as I know damned well how this sort of thing ends up happening)
On Tue, Jun 18, 2013 at 08:47:41PM -0400, Valdis.Kletnieks@vt.edu wrote:
Maybe his customers are connecting to normal Internet servers. There are a lot of servers out there with strangely low limits on window size.

For example, on speedtest.net under Palo Alto there's "Fiber Internet Center", which seems to have a window size of 128k. It requests files from 66.201.42.23, and if you do something like:

    curl -O http://66.201.42.23/speedtest/random4000x4000.jpg

then ping 66.201.42.23, divide 1000 by the latency (for example, 1000 / 160), then multiply by 128, that number is about what curl will show on a fast connection. speedtest.net seems to use 2 parallel connections, which raises the speed slightly, but it seems reasonably common to come across sites with sub-optimal TCP/IP configurations. A while back I noticed www.godaddy.com seems to use a 2-packet initial window, and if you use a proxy server that sends Via, they seem to disable compression, so the web page loads very slowly from a remote location through a proxy.

With recent default Linux kernels, network speeds can be very good over high-latency connections, both for small files and for larger files, assuming minimal packet loss. The combination of a 10-packet initial window and CUBIC congestion control helps both small and large transfers, and Linux has been improving its TCP/IP stack a lot. But there are still quite a few less-than-ideal TCP/IP peers around.

Also, big buffers really help with microbursts of traffic on fast connections, and small buffers can really increase the sawtooth effects of TCP. With all this talk of bufferbloat, in my experience sfq works better than codel for long-distance throughput.

Ben.
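Ben's arithmetic (window divided by RTT) can be scripted for any window-limited server. A minimal sketch, reusing the host and the 128 KB window figure from his example; it assumes the host answers ICMP echo:

    # Estimate window-limited throughput as window / RTT.
    # Host and window size are taken from the message above.
    HOST=66.201.42.23
    WINDOW_KB=128
    RTT_MS=$(ping -c 5 -q "$HOST" | awk -F/ '/^rtt|^round-trip/ {print $5}')
    awk -v w="$WINDOW_KB" -v rtt="$RTT_MS" \
        'BEGIN { printf "~%.0f KB/s (window-limited)\n", w * 1000 / rtt }'

At 160 ms this prints about 800 KB/s, matching the 1000 / 160 × 128 figure in the message.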
participants (6):
- Ben Aitchison
- Fred Reimer
- Jakob Heitz
- James Braunegg
- Phil Fagan
- Valdis.Kletnieks@vt.edu