http://slashdot.org/article.pl?sid=04/09/03/0534206

6.63 Gbps

The article ended with hardware specs: "S2io's Xframe 10 GbE server adapter, Cisco 7600 Series Routers, Newisys 4300 servers using AMD Opteron processors, Itanium servers and the 64-bit version of Windows Server 2003."

Is there a 10GE OSM for the 7600s?

Deepak
On Fri, 3 Sep 2004, Deepak Jain wrote:
Is there a 10GE OSM for the 7600s?
http://www.cisco.com/en/US/products/hw/modules/ps4835/ps5136/index.html

But if there is no other traffic, why would you need deep packet buffers to beat records? (It doesn't say if this was on a production network with other traffic or not, so I don't know)

--
Mikael Abrahamsson    email: swmike@swm.pp.se
http://www.cisco.com/en/US/products/hw/modules/ps4835/ps5136/index.html
But if there is no other traffic, why would you need deep packet buffers to beat records? (It doesn't say if this was on a production network with other traffic or not, so I don't know)
I wouldn't suspect it's on a production network at all. :)

DJ
On Fri, Sep 03, 2004 at 07:07:42PM +0200, Mikael Abrahamsson wrote:
But if there is no other traffic, why would you need deep packet buffers to beat records? (It doesn't say if this was on a production network with other traffic or not, so I don't know)
Yes, it was:

http://ultralight.caltech.edu/lsr_06252004/

I wasn't involved with this project, but the LSR rules require use of production, shared networks and a minimum of two router hops:

http://lsr.internet2.edu/

Bill.
On Fri, 3 Sep 2004, Bill Owens wrote:
On Fri, Sep 03, 2004 at 07:07:42PM +0200, Mikael Abrahamsson wrote:
But if there is no other traffic, why would you need deep packet buffers to beat records? (It doesn't say if this was on a production network with other traffic or not, so I don't know)
Yes, it was:
http://ultralight.caltech.edu/lsr_06252004/
I wasn't involved with this project, but the LSR rules require use of production, shared networks and a minimum of two router hops: http://lsr.internet2.edu/
Bill.
digging around here will answer some of your load questions:

http://globalnoc.iu.edu/

path is caltech -> cenet -> abilene(LOSA) -> startap -> cern

Abilene/LOSA:
http://stryper.uits.iu.edu/abilene/aggregate/summary.cgi?rrd=losa

CERN (see monthly at bottom):
http://stryper.uits.iu.edu/starlight/summary.cgi?network=m10-cern&data=bits
On Fri, 3 Sep 2004, Lucy E. Lynch wrote:
digging around here will answer some of your load questions:
With the background "noise" being approximately half a gigabit/s and the PC seemingly having a theoretical limit of 7.5 gigabit/s (according to the article), buffering shouldn't have happened during the test, so deep packet buffers weren't an issue.

--
Mikael Abrahamsson    email: swmike@swm.pp.se
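For anyone who wants that arithmetic spelled out, here is a rough sketch in Python using the figures quoted above; the 10GE link speed of the bottleneck is the only added assumption.

# Rough headroom check based on the figures quoted in the thread/article.
background_bps = 0.5e9    # approx. background "noise" on the path
record_bps     = 6.63e9   # the record transfer itself
sender_cap_bps = 7.5e9    # theoretical limit of the PC, per the article
link_bps       = 10e9     # assumed 10GE bottleneck link

peak = max(record_bps, sender_cap_bps) + background_bps
print("worst-case offered load: %.2f Gbps of %.0f Gbps" % (peak / 1e9, link_bps / 1e9))
print("headroom: %.2f Gbps" % ((link_bps - peak) / 1e9))
# Even at the sender's theoretical ceiling plus the background load (8 Gbps),
# the link never saturates, so sustained queueing -- and hence deep buffers --
# shouldn't come into play.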
At 02:51 AM 4/09/2004, Deepak Jain wrote:
http://slashdot.org/article.pl?sid=04/09/03/0534206
6.63 Gbps
The article ended with hardware specs: "S2io's Xframe 10 GbE server adapter, Cisco 7600 Series Routers, Newisys 4300 servers using AMD Opteron processors, Itanium servers and the 64-bit version of Windows Server 2003."
Is there a 10GE OSM for the 7600s?
there isn't any 10GE OSM, but there are certainly 10GE modules for the 6500/7600 with deep buffers.

most recently, Guido Appenzeller, Isaac Keslassy and Nick McKeown have written a paper that they presented at SIGCOMM 2004 about "sizing router buffers" that is very informative and goes against conventional wisdom on the amount of buffering required in routers/switches.

for folk who aren't aware of some of the hardware limitations faced today, the paper provides a fairly good degree of detail on some of the technical tradeoffs in router/switch design and some of the technical hurdles posed by ever-increasing interface speeds. while Moore's law means that processing speeds get faster and faster, the same pace of innovation has not occurred in either RAM speed or chip packaging, both of which have fallen significantly behind Moore's law and the growth curve of interface speeds.

cheers,

lincoln.

NB. not speaking for my employer; this is an area of research that is of personal interest to me. but of course, my employer spends a lot of time looking at this.
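For a feel for the numbers involved, here is a back-of-the-envelope Python sketch comparing the classic rule of thumb (one bandwidth-delay product of buffering) with the sqrt(N) rule the paper argues for. The RTT and flow count below are assumed values for illustration, not figures from the paper.

# Two buffer-sizing rules, side by side.  Inputs are illustrative assumptions.

def classic_buffer(link_bps, rtt_s):
    """Old rule of thumb: one bandwidth-delay product of buffering (bytes)."""
    return link_bps * rtt_s / 8

def small_buffer(link_bps, rtt_s, n_flows):
    """Appenzeller/Keslassy/McKeown: BDP divided by sqrt(N long-lived flows)."""
    return classic_buffer(link_bps, rtt_s) / (n_flows ** 0.5)

link  = 10e9     # 10GE interface
rtt   = 0.250    # assumed 250 ms worst-case RTT used for sizing
flows = 10000    # assumed number of concurrent long-lived flows

print("classic rule: %.0f MB" % (classic_buffer(link, rtt) / 1e6))
print("sqrt(N) rule: %.1f MB" % (small_buffer(link, rtt, flows) / 1e6))
# ~312 MB under the classic rule versus ~3.1 MB under the sqrt(N) rule --
# the latter is roughly the order of buffering a "LAN" linecard actually has.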
most recently, Guido Appenzeller, Isaac Keslassy and Nick McKeown have written a paper that they presented at SIGCOMM 2004 about "sizing router buffers" that is very informative and goes against conventional wisdom on the amount of buffering required in routers/switches.
In the paper

http://klamath.stanford.edu/~keslassy/download/tr04_hpng_060800_sizing.pdf

they state as follows:

-----------------
While we have evidence that buffers can be made smaller, we haven't
tested the hypothesis in a real operational network. It is a little
difficult to persuade the operator of a functioning, profitable network
to take the risk and remove 99% of their buffers. But that has to be the
next step, and we see the results presented in this paper as a first
step towards persuading an operator to try it.
----------------

So, has anyone actually tried their buffer sizing rules?

Or do your current buffer sizing rules actually match, more or less, the sizes that they recommend?

--Michael Dillon
Michael Dillon writes:
In the paper http://klamath.stanford.edu/~keslassy/download/tr04_hpng_060800_sizing.pdf
That's also in the (shorter) SIGCOMM'04 version of the paper.
they state as follows:

-----------------
While we have evidence that buffers can be made smaller, we haven't
tested the hypothesis in a real operational network. It is a little
difficult to persuade the operator of a functioning, profitable network
to take the risk and remove 99% of their buffers. But that has to be the
next step, and we see the results presented in this paper as a first
step towards persuading an operator to try it.
----------------
So, has anyone actually tried their buffer sizing rules?
Or do your current buffer sizing rules actually match, more or less, the sizes that they recommend?
The latter, more or less. Our backbone consists of 1 Gbps and 10 Gbps links, and because our platform is a glorified campus L3 switch (Cisco Catalyst 6500/7600 OSR, mostly with "LAN" linecards), we have nowhere near the buffer space that was traditionally recommended for such networks. (We use the low-cost/performance 4-port variant of the 10GE linecards.)

The decision for these types of interfaces (as opposed to going the Juniper or GSR route) was mostly driven by price, and by the observation that we don't want to strive for >95% circuit utilization. We tend to upgrade links at relatively low average utilization - router interfaces are cheap (even 10 GE), and on the optical transport side (DWDM/CWDM) these upgrades are also affordable.

What I'd be interested in: in a lightly-used network with high-capacity links, many (1000s of) active TCP flows, and small buffers, how well can we still support the occasional huge-throughput TCP (Internet2 land-speed record :-)? Or conversely: is there a TCP variant/alternative that can fill 10Gb/s paths (with maybe 10-30% of background load from those thousands of TCP flows) without requiring huge buffers in the backbone?

Rather than over-dimensioning the backbone for two or three users (the "Petabyte crowd"), I'd prefer making them happy with a special TCP.

-- Simon.
On Mon, 6 Sep 2004, Simon Leinen wrote:
Rather than over-dimensioning the backbone for two or three users (the "Petabyte crowd"), I'd prefer making them happy with a special TCP.
Tune your max window size so it won't be able to use more than, say, 60% of the total bandwidth; that way (if the packets are paced evenly) you won't ever overload the 10GE link with 30% background "noise".

--
Mikael Abrahamsson    email: swmike@swm.pp.se
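As a rough illustration of the window-clamping idea, here is a quick Python sketch of how large a window keeps a single flow at roughly 60% of a 10GE path. The 150 ms RTT is an assumed value; the link speed and fraction are from the suggestion above.

# Back-of-the-envelope window clamp: cap a single TCP flow at a fraction
# of a 10GE path.  RTT is an assumption for illustration.

link_bps = 10e9      # 10 Gigabit Ethernet
rtt_s    = 0.150     # assumed transatlantic-ish RTT
fraction = 0.60      # leave 40% headroom for background traffic

target_rate = fraction * link_bps          # bits/s
max_window  = target_rate * rtt_s / 8      # bytes (rate = window / RTT)

print("max window to stay under %d%%: about %.0f MB" % (fraction * 100, max_window / 1e6))
# -> about 112 MB of window/socket buffer; anything larger and a single
#    flow could, in bursts, exceed its 60% share of the link.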
Mikael Abrahamsson writes:
On Mon, 6 Sep 2004, Simon Leinen wrote:
Rather than over-dimensioning the backbone for two or three users (the "Petabyte crowd"), I'd prefer making them happy with a special TCP.
Tune your max window size so it won't be able to use more than, say, 60% of the total bandwidth; that way (if the packets are paced evenly) you won't ever overload the 10GE link with 30% background "noise".
Hm, three problems:

1.) Ideally the Petabyte folks would magically get *all* of the currently "unused bandwidth" - I don't want to limit them to 60%. (Caveat: unused bandwidth of a path is very hard to quantify.)

2.) When we upgrade the backbone to 100GE or whatever, I don't want to have to tell those people they can increase their windows now.

3.) TCP as commonly implemented does NOT pace packets evenly.

If the high-speed TCP

1.) notices the onset of congestion even when it's just a *small* increase in queue length, or maybe a tiny bit of packet drop/ECN (someone please convince Cisco to implement ECN on the OSR :-),

2.) adapts quickly to load changes, and

3.) paces its packets nicely as you describe,

then things should be good. Maybe modern TCPs such as FAST or BIC do all this, I don't know. I'm pretty sure FAST helps by avoiding filling up the buffers.

As I said, it would be great if it were possible to build fast networks with modest buffers, and use end-to-end (TCP) improvements to fill the "needs" of the Petabyte/Internet2 Land Speed Record crowd.

-- Simon.
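As a toy illustration of what "paces its packets nicely" means on the sending side - spreading a window's worth of segments evenly over time instead of sending them back-to-back - here is a small Python sketch. The MSS and target rate are assumed values, and real pacing would live in the TCP stack or the NIC, not in application code.

import time

# Toy sender-side pacing: space segment transmissions so the average rate
# stays at a target, rather than blasting the whole window in one burst.

MSS        = 1460                      # assumed bytes per segment
target_bps = 6e9                       # pace to ~60% of a 10GE path
gap_s      = MSS * 8 / target_bps      # time between segment transmissions

def paced_send(segments, send_fn):
    """Send each segment, sleeping so the average rate stays at target_bps."""
    next_tx = time.monotonic()
    for seg in segments:
        now = time.monotonic()
        if next_tx > now:
            # time.sleep() can't really hit microsecond gaps; this only
            # shows the bookkeeping, not a usable high-speed pacer.
            time.sleep(next_tx - now)
        send_fn(seg)
        next_tx += gap_s

# Example: "send" 10 dummy segments through a no-op function.
paced_send([b"x" * MSS] * 10, lambda seg: None)
print("inter-segment gap: %.2f microseconds" % (gap_s * 1e6))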
On Mon, 6 Sep 2004, Simon Leinen wrote:
As I said, it would be great if it were possible to build fast networks with modest buffers, and use end-to-end (TCP) improvements to fill the "needs" of the Petabyte/Internet2 Land Speed Record crowd.
I do believe that the high-speed (more than 10% of core network speed) TCP connection is almost exclusive to the research/academic community.

I think you're on the right track in thinking about developing a new TCP implementation solely for the application you describe above, and that it makes more economic sense to do this than to build with considerably more expensive equipment with larger buffers.

I guess the cost of developing a tweaked TCP protocol would be in the neighbourhood of the cost of a couple of OC192 linecards for the GSR :/

--
Mikael Abrahamsson    email: swmike@swm.pp.se
At 10:24 PM 06-09-04 +0200, Mikael Abrahamsson wrote:
On Mon, 6 Sep 2004, Simon Leinen wrote:
As I said, it would be great if it were possible to build fast networks with modest buffers, and use end-to-end (TCP) improvements to fill the "needs" of the Petabyte/Internet2 Land Speed Record crowd.
I do believe that the high-speed (more than 10% of core network speed) TCP connection is almost exclusive to the research/academic community.
I think you're on the right track in thinking about developing a new TCP implementation solely for the application you describe above, and that it makes more economic sense to do this than to build with considerably more expensive equipment with larger buffers.
I guess the cost of developing a tweaked TCP protocol would be in the neighbourhood of the cost of a couple of OC192 linecards for the GSR :/
Ask Mentat: http://www.mentat.com/tcp/tcpdata.html

-Hank
participants (8)
- Bill Owens
- Deepak Jain
- Hank Nussbacher
- Lincoln Dale
- Lucy E. Lynch
- Michael.Dillon@radianz.com
- Mikael Abrahamsson
- Simon Leinen