At 22:38 +0300 9/20/02, Petri Helenius wrote:
Under the best possible circumstances, most of the extra delay is due to the fact that routers do "store and forward" forwarding, so you have to wait for the last bit of the packet to come in before you can start sending the first bit over the next link. This delay is directly proportional to the packet size and inversely proportional to the bandwidth. Since ATM uses very small "packets" this isn't as much of an issue there.
But by doing SAR at the ends of the PVC you'll end up suffering the same latency anyway, and since most people run their ATM PVCs at a rate smaller than the attached line rate, this delay is actually larger in many cases.
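To put rough numbers on both points, here's a quick back-of-envelope sketch; the packet size, hop count, line rate and PVC rate are all illustrative assumptions, not anyone's real network:

# Back-of-envelope: per-hop serialization delay from store-and-forward IP
# forwarding vs. the reassembly delay the SAR function adds at the end of
# an ATM PVC.  All numbers are illustrative assumptions.

PACKET_BYTES = 1500        # full-size IP packet
HOPS = 10                  # assumed number of store-and-forward hops
LINE_RATE_BPS = 622e6      # OC-12 line rate
PVC_RATE_BPS = 45e6        # PVC shaped well below the line rate

def serialization_delay(size_bytes, rate_bps):
    """Time to clock one packet onto a link at the given rate."""
    return size_bytes * 8 / rate_bps

# Store and forward: every hop waits for the last bit before sending the
# first, so each hop adds one serialization time at the line rate.
ip_delay = HOPS * serialization_delay(PACKET_BYTES, LINE_RATE_BPS)

# ATM PVC: cells cut through the switches, but the reassembler at the far
# end still has to wait for the last cell, which arrives at the PVC rate.
sar_delay = serialization_delay(PACKET_BYTES, PVC_RATE_BPS)

print(f"IP, {HOPS} hops at OC-12:   {ip_delay * 1e6:.0f} us")
print(f"SAR on a {PVC_RATE_BPS/1e6:.0f} Mb/s PVC: {sar_delay * 1e6:.0f} us")

With those assumed numbers, ten store-and-forward hops cost about 190 us while the SAR on the slower PVC costs about 270 us, which is the point being made above.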
However, the real problem with many hops comes when there is congestion. Then the packet suffers a queuing delay at each hop. Now everyone is going to say "but our network isn't congested," and it probably isn't when you look at the 5-minute average, but short-term congestion (a few ms to several seconds) happens all the time because IP is so bursty. This adds to the jitter.
If you either do the math at OC-48 or above, or just look at how many places are able to generate severe, even sub-second bursts on any significant backbone, you'll figure out that 99.9% of the time there aren't any. If you burst your access link, then it's not a backbone hop-count issue.
It doesn't matter whether those extra hops are layer 2 or layer 3, though: this can happen just as easily in an Ethernet switch as in a router. Because ATM generally doesn't buffer cells but discards them, this also isn't much of an issue for ATM.
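For a sense of scale on that kind of transient congestion, here's a quick sketch; the burst size, packet size and link rates are made-up assumptions:

# Sketch of the transient-congestion point: a short burst hits an output
# port faster than the link can drain it, so packets behind it pick up
# queuing delay (jitter) that a 5-minute utilisation graph never shows.
# Burst size, packet size and link rates are made-up assumptions.

PACKET_BITS = 1500 * 8
BURST_PACKETS = 200          # burst dumped in from a faster upstream link
GIGE_BPS = 1e9
OC48_BPS = 2.488e9

for name, rate in (("1 Gb/s", GIGE_BPS), ("OC-48", OC48_BPS)):
    added_delay = BURST_PACKETS * PACKET_BITS / rate
    print(f"{name}: a {BURST_PACKETS}-packet burst adds ~{added_delay*1e3:.1f} ms "
          f"of delay and drains again just as fast")

A 200-packet burst adds a couple of milliseconds of jitter on a 1 Gb/s port and under a millisecond at OC-48, and it's gone long before any averaged graph would show it, which cuts both ways in the argument above.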
Most ATM switches have thousands of cell buffers per interface, or tens of thousands to a few million shared across all interfaces. There is one legendary piece of hardware with buffer space for 32 cells. Fortunately they didn't get too many of them out there.
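For scale, here's what those buffer counts work out to in time, at an assumed OC-3 line rate:

# For scale: how much time the cell-buffer counts quoted above cover at an
# assumed OC-3 (155.52 Mb/s) line rate.

CELL_BITS = 53 * 8
OC3_BPS = 155.52e6
cell_time = CELL_BITS / OC3_BPS      # ~2.7 us to clock one cell out

for cells in (32, 8000, 1_000_000):
    print(f"{cells:>9} cells ~= {cells * cell_time * 1e3:.3f} ms of buffering")

So the legendary 32-cell box could absorb less than 0.1 ms of overload, while the bigger shared buffers cover tens of milliseconds to a couple of seconds.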
However, when an ATM network gets in trouble it's much, much worse than some jitter or even full-blown congestion, so I'll take as many hops as I must to avoid ATM (but preferably no more than that) any day.
Depends on your ATM hardware; most of the vendors fixed their ATM gear to make decisions based on packets rather than individual cells, which kind of defeats the idea of shredding packets into 53-byte cells in the first place.
Really? Then I guess Juniper made a mistake chopping every packet into 64-byte cells ;-) . From a hardware standpoint, it speeds up the process significantly. Think of a factory with a cleaver machine: it knows exactly where to chop the pieces because there is a rhythm. It takes no "figuring out." By chopping everything into set sizes you don't need to "search" for headers or different parts of the packet; it's always at a "set" byte offset. Well, I've done a poor job of explaining it, but it does speed things up.
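Here's a rough sketch of that "cleaver" idea; the 64-byte size just follows the example above and the payload is dummy data:

# Rough illustration of the fixed-size-chunk argument: once everything is
# chopped into equal cells, the position of any byte is pure arithmetic --
# no parsing or searching for boundaries.  Cell size and payload are
# assumptions for illustration.

CELL_SIZE = 64

def chop(packet: bytes):
    """Split a packet into fixed-size cells, padding the last one."""
    cells = []
    for offset in range(0, len(packet), CELL_SIZE):
        cell = packet[offset:offset + CELL_SIZE]
        cells.append(cell.ljust(CELL_SIZE, b"\x00"))   # pad the tail cell
    return cells

packet = bytes(range(256)) * 6          # 1536-byte dummy packet
cells = chop(packet)

# Finding where byte N of the packet lives is just division and modulo --
# the "rhythm" of the cleaver machine, no header search required.
n = 1000
print(f"{len(cells)} cells; byte {n} is in cell {n // CELL_SIZE}, "
      f"offset {n % CELL_SIZE}")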
Pete
--
David Diaz
dave@smoton.net [Email]
pagedave@smoton.net [Pager]
Smotons (Smart Photons) trump dumb photons