Does anybody know any more about Fast TCP:

http://story.news.yahoo.com/news?tmpl=story&cid=581&ncid=581&e=6&u=/nm/20030604/tc_nm/technology_internet_dc_3

Is it real? Is it open source? Are there any implementations available?

Mike.

+----------------- H U R R I C A N E - E L E C T R I C -----------------+
| Mike Leber       Direct Internet Connections      Voice 510 580 4100 |
| Hurricane Electric        Web Hosting Colocation    Fax 510 580 4151 |
| mleber@he.net                                      http://www.he.net |
+-----------------------------------------------------------------------+
On Wed, 4 Jun 2003, Mike Leber wrote:

> Does anybody know any more about Fast TCP:
> Is it real?
> Is it open source?
> Are there any implementations available?
Here's the white paper detailing it: http://netlab.caltech.edu/pub/papers/fast-030401.pdf

Here is their home page: http://netlab.caltech.edu/FAST

It doesn't look like they have production code available at this point, but it looks like it could be interesting.

allan
--
Allan Liska
allan@allan.org
http://www.allan.org
Glad this came up, as I have been reading this paper. Does Figure 1 in the paper seem reasonable? Will 100 RED TCP flows really fill only 90% of a 155 Mbps pipe, but 87% of a 2.4 Gbps connection and 75% of a 4.8 Gbps connection? This seems strangely non-linear to me.

A more fundamental question is: is this really useful except in the case of very high-bandwidth single flows (such as e-VLBI, particle physics, or uncompressed HDTV)? After all, isn't the current standard practice not to come close to fully utilizing backbone bandwidth?

Regards
Marshall Eubanks

On Wednesday, June 4, 2003, at 10:40 PM, Allan Liska wrote:
[snip]
T.M. Eubanks
e-mail : tme@multicasttech.com
http://www.multicasttech.com
Test your network for multicast : http://www.multicasttech.com/mt/
Our New Video Service is in Beta testing: http://www.americafree.tv
> Glad this came up as I have been reading this paper -
> Does Figure 1 in the paper seem reasonable ? Will 100 RED TCP flows really only fill 90% of a 155 Mbps pipe but 87% of a 2.4 Gbps connection and 75% of a 4.8 Gbps connection ? This seems strangely non-linear to me.
> A more fundamental question is, is this really useful except in the case of very high bandwidth single flows (such as e-VLBI or particle physics or uncompressed HDTV). After all, isn't the current standard practice not to come close to fully utilizing backbone bandwidth ?
I think the idea is that (similar to the 1 Gb/s single-stream test a few months ago) the concerns of academics are not exactly in line with those of network operators.

The idea with a non-stabilized TCP Vegas on a very fast pipe [with a small number of streams] is that as delays get large (relative to the size of the network connection) you have a very long/impossible window to grow into to fully utilize the full bandwidth. With TCP Reno (which it seems they have the biggest fault with) a single packet drop causes far more severe problems. Since RED causes packet drops, high-speed streams that get RED'd are in an immense world of pain. Further, since a typical delayed-ack window is only 100 ms, this is a lot of data that isn't transmitted over the network, or is retransmitted and resequenced.

If you have many streams (where each one represents a small portion of your network link, whether backbone or CPE) you can easily fill your pipe; this is common experience. If you aren't using RED [or similar] to manage congestion, you are good with a smaller number of streams. When you have a single stream (or a small number of streams) you need larger windows, more tolerance for latency, and a large willingness to buffer data rather than drop it.

I think this is all well understood at a common-sense level. I think the academics (the practice, not the people) are the ones that will figure out some idealized set of variables for a slightly modified equation from the ones we all use for bits-in-flight calculations. I think they mention in the paper that they will start by stabilizing TCP Vegas for a high-latency, high-speed link. I could be wrong (about my understanding or what is considered common sense).

I am not sure why sending a single large/high-speed stream today (>1 Gb/s) is such an improvement over sending multiple today-sized streams of data, but I guess that is the difference between a get-it-done-right and a get-it-done-now mentality.

Deepak Jain
AiNET
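[Editor's aside: the bits-in-flight calculation mentioned above is the standard bandwidth-delay-product rule of thumb. A minimal numerical sketch of it, with illustrative figures only and no connection to the FAST code itself:]

```python
# Sketch of the bits-in-flight (bandwidth-delay product) rule of thumb:
# to keep a pipe full, a TCP sender needs window >= bandwidth * RTT.
# Numbers are illustrative only; this is not FAST TCP code.

def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Window (in bytes) needed to fill a pipe of given bandwidth and RTT."""
    return bandwidth_bps * rtt_s / 8

def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed window over a given RTT."""
    return window_bytes * 8 / rtt_s

# A 1 Gb/s stream over a 100 ms RTT path needs a ~12 MB window...
print(required_window_bytes(1e9, 0.100) / 2**20)   # ~11.9 MiB

# ...while the classic 64 KB window caps a 100 ms path at ~5 Mb/s.
print(max_throughput_bps(64 * 1024, 0.100) / 1e6)  # ~5.24 Mb/s
```

This illustrates why many parallel streams fill a long fat pipe when a single stream cannot: each stream only needs a window of bandwidth-share times RTT.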
At 11:41 PM 04-06-03 -0400, Deepak Jain wrote:
> I am not sure why sending a single large/high speed stream today (>1Gb/s) is such an improvement over sending multiple today-streams of data, but I guess that is the difference between a get-it-done-right and a get-it-done-now mentality.
The bot-owners would tend to disagree. This will improve their kill ratio without having to significantly increase the size of their bot-herds. Now we can have someone be the recipient of some FAST-love.

-Hank
Hi, NANOGers.

Did someone say...bot? /me twitches :)

] I am not sure why sending a single large/high speed stream today (>1Gb/s) is
] such an improvement over sending multiple today-streams of data, but I guess
] that is the difference between a get-it-done-right and a get-it-done-now
] mentality.

For those who herd bots, this in theory provides the capability to get-it-done-right *AND* get-it-done-now. :/

Thanks,
Rob.
--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);
> The bot-owners would tend to disagree. This will improve their kill ratio without having to significantly increase the size of their bot-herds. Now we can have someone be the recipient of some FAST-love.
Now, now... I am not all that pessimistic. We only need our FAST-IDS and our FAST-ACLs and our FAST-Firewalls to handle the possibility that a single stream could fill a large majority of the network connections today.

Deepak Jain
AiNET
Subject: RE: Fast TCP?
Date: Wed, Jun 04, 2003 at 11:41:22PM -0400
Quoting Deepak Jain (deepak@ai.net):
> I am not sure why sending a single large/high speed stream today (>1Gb/s) is such an improvement over sending multiple today-streams of data, but I guess that is the difference between a get-it-done-right and a get-it-done-now mentality.
Because we RE network operators have customers, especially in the astronomy field, that want to push 1 Gbit/s streams in realtime from various radio telescopes all over Europe. Moreover, they want them to end up in one place, i.e. converge ;-)

So, we need to come up with technologies that can sustain multi-Gbit/s (preferably) TCP streams over 50-100 ms RTT links. And, we've got the OC192 backbones to do it, if TCP were up to it..

--
Måns Nilsson   Systems Specialist
+46 70 681 7204   KTHNOC   MN1334-RIPE

... bleakness ... desolation ... plastic forks ...
MN> Date: Thu, 5 Jun 2003 08:02:33 +0200
MN> From: Mans Nilsson

MN> So, we need to come up with technologies that can sustain
MN> multi-Gbit/s (preferably) TCP streams over 50-100 ms RTT
MN> links. And, we've got the OC192 backbones to do it, if TCP
MN> were up to it..

10 Gbps * 100 ms * 2 = 2 Gbit = 1/4 Gbyte

I guess one can run huge windows, insane SACK, eschew anything resembling slow start, modify the recovery algorithm, and still call it TCP as long as it fits in an IP protocol 6 packet. Of course, in the absence of bw*delay-based autotuning, I suppose servers should have plenty of mbuf memory. ;-)

Oh, wait: a few thousand slow-moving TCP streams could nuke a server without harming the clients, so slow start is still an issue.

Also witness the BGP data/keepalive mechanism. Messages are sent at least every <x so often>, and frequently contain data (or at least a keepalive instead of data). If ACKs were sent in the same way, and packet fragments could be passed to the application layer before all segments were received, in order to alleviate mbuf issues...

UDP, anyone?

Eddy
--
Brotsman & Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita
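[Editor's aside: Eddy's back-of-the-envelope figure checks out. A quick sanity check of the arithmetic, with the factor of two taken as given from his message:]

```python
# Sanity check of "10 Gbps * 100 ms * 2 = 2 Gbit = 1/4 Gbyte" above.
link_bps = 10e9    # ~OC192-class link, rounded to 10 Gbps
rtt_s = 0.100      # 100 ms round-trip time
factor = 2         # Eddy's factor of two (e.g. retransmit headroom)

bits = link_bps * rtt_s * factor
print(bits / 1e9)      # 2.0  (Gbit buffered)
print(bits / 8 / 1e9)  # 0.25 (Gbyte, i.e. 1/4 Gbyte of mbuf memory)
```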
On Thu, Jun 05, 2003 at 06:34:23AM +0000, eddy+public+spam@noc.everquick.net said: [snip]
> Also witness the BGP data/keepalive mechanism. Messages are sent at least every <x so often>, and frequently contain data (or at least a keepalive instead of data). If ACKs were sent in the same way, and packet fragments could be passed to the application layer before all segments were received in order to alleviate mbuf issues...
>
> UDP, anyone?
The folks over at digitalfountain have a pretty spiffy product that does kind of a UDP encapsulation between their end points (TCP -> df box -> UDP -> df box -> TCP), which works quite well for fast transfers over high-latency links (satellite, etc.). It also lets you get the most out of your available pipes (i.e. 95% utilization of a DS-3 vs. a much lower figure using TCP to transfer the same amount of data).

(I'm not affiliated with digitalfountain in any way other than being a customer and sharing an office with a beta tester. :))

--
Scott Francis || darkuncle (at) darkuncle (dot) net
illum oportet crescere me autem minui
Hello;

e-VLBI streams can easily sustain packet losses. IMHO these streams should be sent over UDP with application-layer congestion control, minimal FEC if necessary, and "worse than best effort" QoS (because VLBI has little money but an almost infinite ability to generate bits). These TCP-based tools may be useful for other applications, but I do not think that they are the right path for e-VLBI.

Regards
Marshall

On Thursday, June 5, 2003, at 02:02 AM, Mans Nilsson wrote:
[snip]
On Wed, Jun 04, 2003 at 11:41:22PM -0400, Deepak Jain wrote:
> causes far more severe problems. Since RED causes packet drops, high speed
> streams that get RED'd are in an immense world of pain. Further, since a
In some experience I've had, RED did not cause drops. In fact, I have some data showing how drops increased without RED: <http://condor.depaul.edu/~jkristof/red/>

I'd like to see (or actually perform myself, if I could :-) some actual tests. If anyone has any updated data doing AQM on high-speed links or large streams, please post pointers.

John
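[Editor's aside: for readers unfamiliar with RED's drop behavior being debated here, a minimal sketch of the classic RED marking probability (linear ramp between two queue thresholds, computed on an averaged queue size). The threshold and probability values are illustrative, not tuning recommendations:]

```python
# Minimal sketch of classic RED marking/drop probability.
# Parameter values here are illustrative only.

def red_drop_probability(avg_queue: float,
                         min_th: float = 5.0,
                         max_th: float = 15.0,
                         max_p: float = 0.02) -> float:
    """Probability RED marks/drops an arriving packet, given the
    exponentially averaged queue size (in packets)."""
    if avg_queue < min_th:
        return 0.0          # below min threshold: never drop
    if avg_queue >= max_th:
        return 1.0          # at/above max threshold: drop everything
    # Linear ramp from 0 to max_p between the two thresholds.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(4.0))    # 0.0  (queue below min_th)
print(red_drop_probability(10.0))   # 0.01 (halfway up the ramp)
print(red_drop_probability(20.0))   # 1.0  (queue above max_th)
```

The point under debate follows directly from the ramp: whether RED "causes" drops depends on whether the averaged queue would have stayed below min_th without it, or overflowed the buffer (forcing tail drops) without early marking.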
IMHO, the way the article reads, it sounds like an implementation of dynamic window sizing.

Regards,
Christopher J. Wolff, VP CIO
Broadband Laboratories, Inc.
http://www.bblabs.com

-----Original Message-----
From: Mike Leber
Sent: Wednesday, June 04, 2003 7:28 PM
To: nanog@merit.edu
Subject: Fast TCP?

[snip]
participants (11)

- Allan Liska
- Christopher J. Wolff
- Deepak Jain
- E.B. Dreger
- Hank Nussbacher
- John Kristoff
- Mans Nilsson
- Marshall Eubanks
- Mike Leber
- Rob Thomas
- Scott Francis