On Tue, Apr 09, 2002 at 07:18:35PM +0000, E.B. Dreger wrote:
> > Date: Tue, 09 Apr 2002 11:16:24 -0700
> > From: Paul Vixie <paul@vix.com>
> >
> > my expectation is that when the last mile goes to 622Mb/s or 1000Mb/s,
> > exchange points will all be operating at 10Gb/s, and interswitch trunks
> > at exchange points will be multiples of 10Gb/s.
> I guess Moore's Law comes into play again. One will need some pretty
> hefty TCP buffers for a single stream to hit those rates, unless latency
> _really_ drops. (Distributed CDNs, anyone? Speed of light ain't getting
> faster any time soon...)
To transfer 1Gb/s across 100ms I need to be prepared to buffer at least 25MB of data. According to pricewatch, I can pick up a high density 512MB PC133 DIMM for $70, and use $3.50 of it to catch that TCP stream. Throw in $36 for a GigE NIC, and we're ready to go for under $40. Yeah, I know that's the cheapest garbage you can get, but this is just to prove a point. :) I might only be able to get 800Mb/s across a 32-bit/33MHz PCI bus, but whatever.

The problem isn't a lack of hardware, it's a lack of good software (both on the receiving side and, probably more importantly, the sending side), a lot of bad standards coming back to bite us (1500-byte packets are about as far from efficient as you can get), a lack of people with enough know-how to actually build a network that can transport it all (heck, they can't even build decent networks to deliver 10Mb/s; @Home was the closest), and just a general lack of things for end users to do with that much bandwidth even if they got it.

-- 
Richard A Steenbergen <ras@e-gerbil.net>       http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177 (67 29 D7 BC E8 18 3E DA B2 46 B3 D8 14 36 FE B6)
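A quick sanity check of the arithmetic above, sketched in Python. The 2x-bandwidth-delay-product rule of thumb for sender buffers is my assumption for how 12.5 MB of in-flight data becomes the ~25 MB budgeted in the post; the $70 / 512 MB DIMM price is the pricewatch figure quoted above, and decimal (SI) units are used throughout:

```python
# Back-of-the-envelope check of the buffer-sizing numbers in the post.

link_bps = 1_000_000_000    # 1 Gb/s stream
rtt = 0.100                 # 100 ms round-trip time

bdp = link_bps * rtt / 8    # bandwidth-delay product, in bytes
print(bdp / 1e6)            # -> 12.5 (MB in flight at any instant)

# A sender typically budgets roughly 2x the BDP so it can keep the
# pipe full while still holding unacknowledged data for retransmit,
# which lands on the ~25 MB figure above (assumption, not from post).
buf_mb = 2 * bdp / 1e6
print(buf_mb)               # -> 25.0

dimm_mb, dimm_usd = 512, 70.0
print(round(buf_mb / dimm_mb * dimm_usd, 2))   # -> 3.42, i.e. ~$3.50 of DIMM

# The 1500-byte MTU complaint, quantified: packets per second needed
# to sustain the stream (payload only, so this is a lower bound).
print(int(link_bps / 8 / 1500))                # -> 83333 packets/sec
```

At over 83k packets per second, per-packet costs (interrupts, header processing) dominate, which is the sense in which a 1500-byte MTU is "about as far from efficient as you can get" at these speeds.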