On Thu, 6 Nov 1997, Paul D. Robertson wrote:
I'm sure that's part of it; I initially saw a lot of dropped packets through a couple of ATM clouds. I'm seeing some improvement from some of the providers; however, given the trumpeting of ATM (magic-bullet syndrome), it seems it's just not something that happens correctly by default. Before we go off on the 'nothing happens correctly by default' tangent, it's just been my general observation that whenever my packets have transited ATM, my latency has been less than ideal. I would have figured that oversubscription would show up more as lost packets and timed-out connections (which were also seen, but more easily screamed about) than as latency, but I suppose that depends on how oversubscribed the line is.
We run ATM between POPs over our own DS3, simply because it gives us the ability to flexibly divide it into multiple logical channels. Right now we don't need all of it, so I'm not concerned about only getting 34 Mbps of payload data across the DS3. When we get closer to that, we may need to investigate other solutions. What we see on a 250-mile DS3 running ATM is an 8 ms RTT (ICMP echoes), never varying. I don't have a similar-mileage circuit running HDLC or PPP over DS3 to compare with, but assuming a propagation speed of 0.7c, the round-trip time just to cover the distance is 3.8 ms. Adding the various repeaters and mux equipment along the way, then going through our ATM switches on each end and to a router on each end and the processing there, that doesn't sound bad to me. We may also add voice circuits across the link at some point.
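For what it's worth, here is the back-of-the-envelope arithmetic behind that 3.8 ms figure, written out as a small Python sketch. The 0.7c velocity factor is the same assumption as above; repeaters, muxes, switches, and router processing are deliberately ignored.

# Minimum round-trip propagation delay for a 250 mi circuit, assuming 0.7c.
# Equipment and queueing delay are not modeled.
C_MILES_PER_SEC = 186_282      # speed of light in vacuum, miles/second
VELOCITY_FACTOR = 0.7          # assumed signal velocity over the physical path

def min_rtt_ms(one_way_miles):
    """Round-trip propagation delay in milliseconds, distance only."""
    return (2 * one_way_miles) / (C_MILES_PER_SEC * VELOCITY_FACTOR) * 1000

print(f"250 mi circuit: {min_rtt_ms(250):.1f} ms minimum RTT")   # ~3.8 ms
# The remaining ~4 ms of the observed 8 ms RTT is equipment and processing delay.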
Perhaps he is referring to latencies that some believe are incurred by ATM 'packet shredding' when it is applied to the typical packet-size distributions encountered on the Internet, which fall between the 53-byte ATM cell size and any even multiple thereof?
Some reports that I have seen show a direct disadvantage for data where a large portion of 64-byte TCP ACKs, etc., are inefficiently split across two 53-byte ATM cells, wasting a considerable amount of 'available' bandwidth: one 64-byte packet is SARed into two 53-byte ATM cells, wasting 42 bytes of space. If a large portion of Internet traffic followed this model, ATM may not be a good solution.
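A minimal sketch of that segmentation arithmetic, assuming AAL5 (an 8-byte trailer, with the result padded out to a multiple of the 48-byte cell payload); other adaptation layers or LLC/SNAP encapsulation would shift the numbers slightly:

import math

CELL_PAYLOAD = 48    # bytes of payload per ATM cell
CELL_TOTAL = 53      # payload plus the 5-byte cell header
AAL5_TRAILER = 8     # AAL5 CPCS trailer appended to each packet

def cells_needed(packet_bytes):
    return math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

def wire_bytes(packet_bytes):
    return cells_needed(packet_bytes) * CELL_TOTAL

pkt = 64
print(f"{pkt}-byte packet -> {cells_needed(pkt)} cells, "
      f"{wire_bytes(pkt)} bytes on the wire, "
      f"{wire_bytes(pkt) - pkt} bytes of header/pad overhead")
# 64-byte packet -> 2 cells, 106 bytes on the wire, 42 bytes of overhead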
This was my preliminary guess. I expect that it'll be mid next year before we start playing with ATM internally, if that soon. Once I get it on a testbed, I'll know for sure where the issues lie. Is there a good place to dig up this stuff, or am I doomed to sniffers and diagnostic code?
That shouldn't significantly affect latency, but it does waste bandwidth. With a 5-byte header per ATM cell, you already waste 9% of the line rate to overhead, and then you have AAL5/etc. headers on top of that.

Nobody is saying that ATM is the best solution for all things, but you do get something for the extra overhead -- the ability to mix all types of traffic over a single network, and for the allocation of bandwidth to these types of traffic to be done dynamically in a stat-mux fashion. If you have enough traffic of the various types that you can justify multiple circuits for each type, then there is less justification for using ATM.

There was another comment about wanting to use a larger MTU at a NAP, which confused me. What benefit is gained by having a large MTU at the NAP if the MTU along the way (such as at the endpoints) is lower, typically 1500?

John Tamplin                          Traveller Information Services
jat@Traveller.COM                     2104 West Ferry Way
205/883-4233x7007                     Huntsville, AL 35801
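To put rough numbers on the cell tax and the MTU question, here is an illustrative sketch using the same AAL5 assumptions as above (8-byte trailer, 53-byte cells). The 9180 entry is included only because it is the classic IP-over-ATM default MTU, not because any particular NAP uses it.

import math

CELL_PAYLOAD, CELL_TOTAL, AAL5_TRAILER = 48, 53, 8

def wire_bytes(packet_bytes):
    """Bytes sent on the line once the packet is carved into AAL5 cells."""
    return math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD) * CELL_TOTAL

for size in (64, 576, 1500, 9180):
    print(f"{size:5d}-byte packets: {size / wire_bytes(size):.1%} of the line rate is payload")
# Roughly: 64 -> 60%, 576 -> 84%, 1500 -> 88%, 9180 -> 90%.
# A larger MTU at the exchange only buys that extra efficiency if the
# end-to-end path MTU (typically 1500) lets the larger packets through.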