ATM Wide-Area Networks (was: sell shell accounts?)
From: Dave Siegel <dave@rtd.net>
Subject: Re: sell shell accounts?
To: freedman@netaxs.com (Avi Freedman)
Date: Fri, 19 Jul 1996 16:01:01 -0700 (MST)
Cc: vansax@atmnet.net, richards@netrex.com, agislist@interstice.com, nanog@merit.edu
Acceptable arguments are:
  o Switches can handle more throughput
That's difficult to quantify in theory *or* practice. [...]
I may misunderstand your assertion, but it doesn't seem all that difficult to quantify, at least to some coarse level.

We have been using wide-area ATM switches at OC-3c for some time. It is pretty clear that the switches can handle OC-3c. The early switches had relatively small output buffers, so they tended to lose cells before TCP could throttle back in congested circumstances. We are now using switches with much larger output buffers, and TCP appears able to throttle back fairly gracefully. After we upgraded our switches to use much larger output buffers, most of the problems we experienced were related to the hosts (e.g., poor TCP implementations, poor ATM interfaces, etc.). By the way, I believe that most people who are using ATM wide-area networks in production or as part of the Internet are using static bandwidth allocation to avoid a number of problems.

We also have a number of OC-12c interfaces for our switches. They appear to work, but we haven't really had a chance to stress them yet. (Part of the problem is that it is hard to find OC-12c data sources and sinks.) The current generation of routers appears unlikely to support OC-12c, particularly since their backplanes are only about the same speed.

On a more theoretical note, switches, being circuit-switched, make the complicated decisions when the connection is established (or when PVCs are configured), and need to make only relatively simple decisions to switch each cell. This probably scales very well in terms of speed, although at some point it might have some difficulty in scaling to a very large number of simultaneous connections. Routers, on the other hand, have to make somewhat more complicated decisions per packet. This has some limitations in terms of speed and number of simultaneous "connections."

-tjs
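The per-cell versus per-packet distinction above can be made concrete with a small sketch. The tables, port names, VPI/VCI values, and prefixes below are invented for illustration; they are not taken from any of the equipment discussed in this thread.

import ipaddress

# ATM switch: the complicated work happens at connection setup (or when a PVC
# is configured), which installs an exact-match entry.  Each cell then needs
# only a constant-time lookup on its (VPI, VCI) pair.
vc_table = {
    (0, 100): ("port2", 0, 200),   # (in VPI, in VCI) -> (out port, out VPI, out VCI)
}

def switch_cell(vpi, vci):
    # O(1) per cell; a cell with no established VC is simply discarded
    return vc_table.get((vpi, vci))

# Router: every packet needs a longest-prefix match against the routing table.
# (A linear scan is used here for clarity; real routers use radix trees or
# similar structures.)
prefix_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "port1",
    ipaddress.ip_network("10.1.0.0/16"): "port2",
}

def route_packet(dst):
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, port in prefix_table.items():
        if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, port)
    return best[1] if best else None

print(switch_cell(0, 100))        # ('port2', 0, 200)
print(route_packet("10.1.2.3"))   # 'port2' -- the most specific prefix wins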
Acceptable arguments are:
  o Switches can handle more throughput
That's difficult to quantify in theory *or* practice. [...]
I may misunderstand your assertion, but it doesn't seem all that difficult to quantify, at least to some coarse level.
We have been using wide-area ATM switches at OC-3c for some time. It is pretty clear that the switches can handle OC-3c. The early switches had relatively small output buffers, so they tended to lose cells before TCP could throttle back in congested circumstances. We are now using switches with much larger output buffers, and TCP appears able to throttle back fairly gracefully.
It's difficult to quantify with respect to the role ATM plays in end-to-end performance on the Internet. There is very little research on how ATM affects overall performance. Even Ameritech and PacBell restricted most of their performance evaluation to performance within the switch, and only extended their scope if they leased an ADSU to a customer.

Even with the improved buffering, if you fill your pipe into the ATM switch, your ATM switch still becomes a packet shredder, compared to the more graceful packet drops seen on a clear channel line.

The performance of a switched technology is inseparable from the equipment attached to it. You cannot evaluate the performance of the system by analyzing small chunks of it and drawing conclusions about the whole system. This is what I'm talking about. It's easy to show that ATM switches can switch cells much faster than routers can switch packets (with current commercially available routers), but it's much more difficult to quantify how ATM improves performance in the big picture.

Dave

--
Dave Siegel                     Sr. Network Engineer, RTD Systems & Networking
(520)623-9663 x130              Network Consultant -- Regional/National NSPs
dsiegel@rtd.com                 User Tracking & Acctg -- "Written by an ISP,
http://www.rtd.com/~dsiegel/     for an ISP."
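The "packet shredder" point is essentially arithmetic: a 1500-byte IP packet is segmented into roughly 32 ATM cells, and with plain cell-level drops the loss of any single cell invalidates the whole AAL5 frame, while that frame's surviving cells still cross the congested link (absent early packet discard). A rough back-of-the-envelope sketch; the cell-loss rates below are assumed for illustration, not measured on any network mentioned here.

import math

CELL_PAYLOAD = 48                 # payload bytes per ATM cell
packet_bytes = 1500               # a common MTU-sized IP packet
# AAL5 adds an 8-byte trailer and pads out to a whole number of cells
cells_per_packet = math.ceil((packet_bytes + 8) / CELL_PAYLOAD)   # 32 cells

for cell_loss in (0.001, 0.01):
    # With independent random cell drops, a packet survives only if every
    # one of its cells survives, so the loss rate is amplified per packet.
    packet_loss = 1 - (1 - cell_loss) ** cells_per_packet
    print(f"cell loss {cell_loss:.1%} -> effective packet loss ~{packet_loss:.1%}")

# Roughly: 0.1% cell loss gives about 3% packet loss; 1% gives about 27%,
# and the other ~31 cells of each shredded packet still consume capacity.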
In message <199607231657.LAA14164@uh.msc.edu>, Tim Salo writes:
On a more theoretical note, switches, being circuit-switched, make the complicated decisions when the connection is established (or when PVCs are configured), and need to make only relatively simple decisions to switch each cell. This probably scales very well in terms of speed, although at some point it might have some difficulty in scaling to a very large number of simultaneous connections.
Routers, on the other hand, have to make somewhat more complicated decisions per packet. This has some limitations in terms of speed and number of simultaneous "connections."
-tjs
Tim et al,

Delete now if you have anything better to do. :-)

Just addressing your theoretical note here for the moment.

Routers scale well with respect to the "number of connections" if there are a large number of sources and destinations per routing prefix (i.e., good aggregation). Routers scale O(logN) with respect to the number of prefixes used for forwarding if a radix tree is used, or require O(N) storage for hashed lookup methods (failure of the host-based cache - proven about 2 years ago - dead horse dept). Switches scale O(N) with the number of connections established and torn down. Routers are not affected by setup/tear-down. So the (theoretical) question is whether the O(logN) swamps the forwarding lookup in the router model before the O(N) connection setup overhead kills the switch.

Back to real-world considerations. Which scales better depends on things like average packet size for the router limits and average connection duration for the switch limits. Good ol' HTTP is a nightmare for either one. As a result, hybrid approaches start to look attractive, using routers on the periphery and building fat pipes through the switches. What you end up with there is all sorts of traffic on the same fat pipe (or forget about setup scaling), and so out the window go "the advantages of ATM QoS."

The state of the art for ISP needs (where the barrage of tinygrams and very short flows is felt full force) is PVC pipes between routers to offload the routers a little bit. In practice the advantage is very little (but enough to have MCI doing it). For all the investment made in custom silicon for ATM, it may turn out that a general purpose processor and a good router design (such as the DEC Alpha used in the BBN router) will take us to OC12.

Hope it's been amusing. I'd say the jury is still out on this one. :-)

Curtis
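To make the lookup-scaling half of this argument concrete, here is a minimal binary-trie longest-prefix-match sketch. It is illustrative only (no vendor's forwarding path looks like this, and the prefixes are invented), but it shows why per-packet lookup cost is bounded by the address length rather than by the number of prefixes installed, whereas a connection-oriented switch pays its bookkeeping cost per flow at setup and teardown time.

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # one branch per address bit
        self.next_hop = None

def insert(root, prefix, length, next_hop):
    """Install an IPv4 prefix given as a 32-bit int plus a mask length."""
    node = root
    for i in range(length):
        bit = (prefix >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def longest_prefix_match(root, addr):
    """At most 32 bit inspections per packet, regardless of table size."""
    node, best = root, None
    for i in range(32):
        if node.next_hop is not None:
            best = node.next_hop
        bit = (addr >> (31 - i)) & 1
        if node.children[bit] is None:
            return best
        node = node.children[bit]
    return node.next_hop if node.next_hop is not None else best

root = TrieNode()
insert(root, 0x0A000000, 8,  "port1")            # 10.0.0.0/8
insert(root, 0x0A010000, 16, "port2")            # 10.1.0.0/16
print(longest_prefix_match(root, 0x0A010203))    # 'port2' -- 10.1.2.3 hits the /16
print(longest_prefix_match(root, 0x0A7F0001))    # 'port1' -- 10.127.0.1 falls back to the /8

# A switch, by contrast, does an O(1) VC-table lookup per cell but must create
# and destroy state for every flow; with a barrage of short HTTP flows, that
# per-flow setup work, not the per-cell lookup, is where the pressure lands.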
participants (3)
- Curtis Villamizar
- Dave Siegel
- salo@msc.edu