At 08:33 PM 11/5/97 -0500, you wrote:
Paul,
I have not spoken with you before, so I do not know if your posting below is meant in a literal, nonfacetious manner.
With all of the problems with MAE-EAST.....
Any plans from anyone to create an ATM exchange point in the DC area?
For what it's worth, I do understand that there is a plan to create an ATM exchange point in the DC area, at speeds exceeding those currently available.
Given the latency we've seen over some ATM backbones,
The latency incurred in the switched portions of a network is generally held (by all but the zealots) to be less than that of comparable layer-three data-moving topologies.
The latency induced by several providers claiming an ATM backbone is generally attributable to an omission: they leave off one important word -- shared. The latency about which I assume you speak is caused by large amounts of queuing, and that queuing is the product of network oversubscription. The latency introduced by oversubscription is consistent with any oversold network.
Perhaps he is referring to latencies that some believe are incurred by ATM 'packet shredding' when it is applied to the typical data distributions encountered on the Internet, which fall between the 53-byte ATM cell size and any even multiple thereof? Some reports that I have seen show a direct disadvantage for traffic in which a large portion of 64-byte TCP ACKs, etc. are inefficiently split across two 53-byte ATM cells, wasting a considerable amount of 'available' bandwidth -- i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42 bytes of space. If a large portion of Internet traffic followed this model, ATM may not be a good solution. I am not an authority on this position, so feel free to dispute it, but perhaps that is one latency factor to which he was referring?
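A minimal sketch of that cell-count arithmetic, in Python, for anyone who wants to check it. The 8-byte AAL5 trailer and the optional LLC/SNAP header are standard framing sizes assumed here, not figures from the post, and the helper name is illustrative:

    import math

    ATM_CELL = 53        # total cell size in bytes
    ATM_PAYLOAD = 48     # payload bytes per cell
    AAL5_TRAILER = 8     # AAL5 trailer added before segmentation (assumed)

    def atm_cells(packet_bytes, llc_snap=0):
        """Return (cells, wire_bytes, overhead_bytes) for one packet over AAL5.

        llc_snap is the optional encapsulation header (8 bytes for RFC 1483
        LLC/SNAP, 0 for VC-multiplexed); both values are assumptions.
        """
        pdu = packet_bytes + llc_snap + AAL5_TRAILER
        cells = math.ceil(pdu / ATM_PAYLOAD)
        wire = cells * ATM_CELL
        return cells, wire, wire - packet_bytes

    # The 64-byte example above: two cells, 106 bytes on the wire,
    # so 42 of the transmitted bytes carry no user data.
    print(atm_cells(64))   # -> (2, 106, 42)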
For what it's worth, I do understand that there is a plan to create an ATM exchange point in the DC area, at speeds exceeding those currently available.
Given the latency we've seen over some ATM backbones,
The latency incurred in the switched portions of a network is generally held (by all but the zealots) to be less than that of comparable layer-three data-moving topologies.
The latency induced by several providers claiming an ATM backbone is generally attributable to an omission: they leave off one important word -- shared. The latency about which I assume you speak is caused by large amounts of queuing, and that queuing is the product of network oversubscription. The latency introduced by oversubscription is consistent with any oversold network.
Perhaps he is referring to latencies that some believe are incurred by ATM 'packet shredding' when it is applied to the typical data distributions encountered on the Internet, which fall between the 53-byte ATM cell size and any even multiple thereof?
Some reports that I have seen show a direct disadvantage for traffic in which a large portion of 64-byte TCP ACKs, etc. are inefficiently split across two 53-byte ATM cells, wasting a considerable amount of 'available' bandwidth -- i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42 bytes of space. If a large portion of Internet traffic followed this model, ATM may not be a good solution.
Just an observation: most of the super-fast L3 routers and frame relay switches chop their packets into cells before transporting them across the switching backplane. ATM cells are only chopped once, at the ingress into the ATM cloud, but the "L3 packets" have more chances to be chopped inside the non-ATM cloud. So which one induces more latency?
I am not an authority on this position, so feel free to dispute it, but perhaps that is one latency factor to which he was referring?
-------------------------------------------------------------------
Naiming Shen                                  MCI
MCI Internet Engineering                      2100 Reston Parkway
+1 703-715-7056  fax: 703-715-7066  v272-7056 Reston, VA 20191
On Wed, 5 Nov 1997, Al Roethlisberger wrote:
Paul,
I have not spoken with you before, so I do not know if your posting below is meant in a literal, nonfacetious manner.
It was; however, not having spoken to me would have given you a better chance of getting that right ;)
With all of the problems with MAE-EAST.....
Any plans from anyone to create an ATM exchange point in the DC area?
For what it's worth, I do understand that there is a plan to create an ATM exchange point in the DC area, at speeds exceeding those currently available.
Given the latency we've seen over some ATM backbones,
The latency incurred in the switched portions of a network is generally held (by all but the zealots) to be less than that of comparable layer-three data-moving topologies.
The latency induced by several providers claiming an ATM backbone is generally attributable to an omission: they leave off one important word -- shared. The latency about which I assume you speak is caused by large amounts of queuing, and that queuing is the product of network oversubscription. The latency introduced by oversubscription is consistent with any oversold network.
I'm sure that's part of it; I initially saw a lot of dropped packets through a couple of ATM clouds. I'm seeing some improvements from some of the providers; however, given the trumpeting of ATM (magic-bullet syndrome), it seems that it's just not something which happens correctly by default. Before we go off on the 'nothing happens correctly by default' tangent, it's just been my general observation that whenever my packets have transited ATM, my latency has been less than ideal. I would have figured that oversubscription would result more in lost packets and timed-out connections (which were also seen, but are more easily screamed about) than in latency, but I guess that's a factor of how oversubscribed the line is.
Perhaps he is referring to latencies that some believe are incurred by ATM 'packet shredding' when it is applied to the typical data distributions encountered on the Internet, which fall between the 53-byte ATM cell size and any even multiple thereof?
Some reports that I have seen show a direct disadvantage for traffic in which a large portion of 64-byte TCP ACKs, etc. are inefficiently split across two 53-byte ATM cells, wasting a considerable amount of 'available' bandwidth -- i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42 bytes of space. If a large portion of Internet traffic followed this model, ATM may not be a good solution.
This was my preliminary guess. I expect that it'll be mid next year before we start playing with ATM internally, if that soon. Once I get it on a testbed, I'll know for sure where the issues lie. Is there a good place to dig up this stuff, or am I doomed to sniffers and diagnostic code? It'll be a couple of months until I start gathering latency stats again.

Paul
-------------------------------------------------------------------------
Paul D. Robertson
gatekeeper@gannett.com
On Thu, 6 Nov 1997, Paul D. Robertson wrote:
I'm sure that's part of it; I initially saw a lot of dropped packets through a couple of ATM clouds. I'm seeing some improvements from some of the providers; however, given the trumpeting of ATM (magic-bullet syndrome), it seems that it's just not something which happens correctly by default. Before we go off on the 'nothing happens correctly by default' tangent, it's just been my general observation that whenever my packets have transited ATM, my latency has been less than ideal. I would have figured that oversubscription would result more in lost packets and timed-out connections (which were also seen, but are more easily screamed about) than in latency, but I guess that's a factor of how oversubscribed the line is.
We run ATM between POPs over our own DS3, simply because it gives us the ability to flexibly divide it into multiple logical channels. Right now we don't need all of it, so I'm not concerned about only getting 34 Mbps of payload data across the DS3. When we get closer to that, we may need to investigate other solutions. What we see on a 250-mile DS3 running ATM is an 8 ms RTT (ICMP echoes), never varying. I don't have a similar-mileage circuit running HDLC or PPP over DS3 to compare with, but assuming a propagation speed of 0.7c, the round-trip time just to cover the distance is 3.8 ms. Adding the various repeaters and mux equipment along the way, then going through our ATM switches on each end and to a router on each end and the processing there, that doesn't sound bad to me. We may also add voice circuits across the link at some point.
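A rough check of that 3.8 ms figure, as a Python sketch. The 0.7c velocity factor and the 250-mile path come from the message above; the conversion constants and function name are just illustrative:

    MILES_TO_KM = 1.609344
    LIGHT_KM_PER_MS = 299792.458 / 1000.0   # speed of light, km per millisecond

    def fiber_rtt_ms(path_miles, velocity_factor=0.7):
        """Round-trip propagation delay only -- ignores serialization,
        switching, repeaters, and mux equipment along the path."""
        one_way = (path_miles * MILES_TO_KM) / (LIGHT_KM_PER_MS * velocity_factor)
        return 2.0 * one_way

    print(round(fiber_rtt_ms(250), 1))   # ~3.8 ms, leaving ~4 ms of the 8 ms RTT for equipment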
Perhaps he is referring to latencies that some believe are incurred by ATM 'packet shredding' when it is applied to the typical data distributions encountered on the Internet, which fall between the 53-byte ATM cell size and any even multiple thereof?
Some reports that I have seen show a direct disadvantage for traffic in which a large portion of 64-byte TCP ACKs, etc. are inefficiently split across two 53-byte ATM cells, wasting a considerable amount of 'available' bandwidth -- i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42 bytes of space. If a large portion of Internet traffic followed this model, ATM may not be a good solution.
This was my preliminary guess. I expect that it'll be mid next year before we start playing with ATM internally, if that soon. Once I get it on a testbed, I'll know for sure where the issues lie. Is there a good place to dig up this stuff, or am I doomed to sniffers and diagnostic code?
That shouldn't significantly affect latency, but it does waste bandwidth. With a 5-byte header per ATM cell, you already waste 9% of the line rate to overhead, and then you have AAL5/etc. headers on top of that. Nobody is saying that ATM is the best solution for all things, but you do get something for the extra overhead: the ability to mix all types of traffic over a single network, with the allocation of bandwidth to these types of traffic done dynamically, in a stat-mux fashion. If you have enough traffic of the various types that you can justify multiple circuits for each type, then there is less justification for using ATM.

There was another comment about wanting to use a larger MTU at a NAP, which confused me. What benefit is gained by having a large MTU at the NAP if the MTU along the way (such as at the endpoints) is lower, typically 1500?

John Tamplin                          Traveller Information Services
jat@Traveller.COM                     2104 West Ferry Way
205/883-4233 x7007                    Huntsville, AL 35801
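For reference, the 9% cell-header tax John cites, plus the per-packet framing, works out as in this Python sketch. The 8-byte AAL5 trailer and 8-byte LLC/SNAP header are standard values assumed here, not numbers from his message:

    import math

    CELL, PAYLOAD = 53, 48

    # 5 bytes of every 53-byte cell are header: the ~9% line-rate tax.
    print("cell header overhead: %.1f%%" % (100.0 * (CELL - PAYLOAD) / CELL))   # 9.4%

    # Adding AAL5 (8-byte trailer) and LLC/SNAP (8 bytes, assumed) framing,
    # plus padding of the final cell, the efficiency for a 1500-byte datagram:
    packet, llc_snap, aal5 = 1500, 8, 8
    cells = math.ceil((packet + llc_snap + aal5) / PAYLOAD)
    print("%d-byte packet: %d cells, %.1f%% efficient"
          % (packet, cells, 100.0 * packet / (cells * CELL)))    # 32 cells, ~88%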
At 06:17 PM 11/5/97 -0800, Al Roethlisberger wrote:
Perhaps he is referring to latencies that some believe are incurred by ATM 'packet shredding' when it is applied to the typical data distributions encountered on the Internet, which fall between the 53-byte ATM cell size and any even multiple thereof?
I'm going to rant a little. Sorry Al, but you repeating something allegedly BAD about ATM that ATM promoters once claimed was GOOD is just too funny and too ironic to pass up.

One of the advantages of ATM touted by ATM bigots in the early days was "cell interleaving". When two "packets" meet at an intermediate ATM node, their cells interleave as they are switched through. This reduces the per-hop latency of an ATM network, compared to a frame network, by something on the order of microseconds for large packets -- an idiotic, marketing-initiated "advantage" that I used to make fun of when ATM marketers would trot it out.

Now you tell me that ATM segmentation probably increases latency because the modulo-48-byte payload causes the extra padding bytes on some packets to "take a long time" to be forwarded? On the order of picoseconds. An idiotic "what else can we think of that's wrong with ATM" engineering-initiated disadvantage.

And if we could remember what we were actually talking about -- an ATM switch for an exchange point, not an ATM network -- we can see that none of this matters, except to show how we know that ATM is Just Bad and we would never do that.
Some reports that I have seen show a direct disadvantage for traffic in which a large portion of 64-byte TCP ACKs, etc. are inefficiently split across two 53-byte ATM cells, wasting a considerable amount of 'available' bandwidth -- i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42 bytes of space. If a large portion of Internet traffic followed this model, ATM may not be a good solution.
The TCP ACKs are 40 bytes long, and if you aren't trying to solve too many problems at once, you can use an encapsulation that will fit a 40-byte TCP ACK in a single cell. There isn't a way to stuff a 64-byte packet into a 48-byte payload. Is that a problem!? Only if you have a lot of 64-byte datagrams, which you don't, because the ACKs are 40 bytes long. I have actually looked at some Internet traffic distributions to see how big a problem this isn't.

There is no point agreeing with the Big Backbone Network Engineers that the MAEs suck. It is in their best interest that the MAEs suck, that the CIX is crippled, that you aren't bugging them to plug into a high-perf exchange, and that you, the little ISP, go out of business soon. THEY have private interconnects which you can't join. Find a co-lo where you can cross-connect without being robbed, or build your own NAP; just don't use DEC-designed Gigaswitches and FDDI. Use a full-duplex 100 Mbps Ethernet switch or find an old Fore switch cheap.

--Kent

Kent W. England                         President and CEO
Six Sigma Networks                      Experienced Internet Consulting
1655 Landquist Drive, Suite 100         Voice/Fax: 760.632.8400
Encinitas, CA 92024                     mailto:kwe@6SigmaNets.com
PGP Key-> http://keys.pgp.com:11371/pks/lookup?op=get&search=0x6C0CDE69
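A back-of-the-envelope check of Kent's encapsulation point, as a Python sketch. The VC-multiplexed (no extra header) versus LLC/SNAP (8 bytes) framing sizes, and the 8-byte AAL5 trailer, are assumptions about what "an encapsulation that will fit" means, not details from his message:

    import math

    PAYLOAD, AAL5_TRAILER, LLC_SNAP = 48, 8, 8   # framing sizes are assumptions

    def cells_needed(packet_bytes, encap_bytes):
        """Cells required for one packet over AAL5 with the given encapsulation."""
        return math.ceil((packet_bytes + encap_bytes + AAL5_TRAILER) / PAYLOAD)

    print(cells_needed(40, 0))         # 40-byte ACK, VC-multiplexed: 48 bytes -> 1 cell
    print(cells_needed(40, LLC_SNAP))  # same ACK with LLC/SNAP: 56 bytes -> 2 cells
    print(cells_needed(64, 0))         # a 64-byte datagram never fits in one cell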
On Sat, Nov 08, 1997 at 06:58:56AM -0800, Kent W. England wrote:
There is no point agreeing with the Big Backbone Network Engineers that the MAEs suck. It is in their best interest that the MAEs suck, that the CIX is crippled, that you aren't bugging them to plug into a high-perf exchange, and that you, the little ISP, go out of business soon. THEY have private interconnects which you can't join. Find a co-lo where you can cross-connect without being robbed, or build your own NAP; just don't use DEC-designed Gigaswitches and FDDI. Use a full-duplex 100 Mbps Ethernet switch or find an old Fore switch cheap.
At the risk of litigation, Kent makes a good point here: how many of the problems we see are engineering-based, and how many are (let's say it softly) political? This is right on the edge of topic for the list; construct your replies carefully, or cross-post to nodlist, per reply-to.

Cheers,
-- jra

Jay R. Ashworth                                          jra@baylink.com
Member of the Technical Staff        Unsolicited Commercial Emailers Sued
The Suncoast Freenet                 "Pedantry. It's not just a job, it's an
Tampa Bay, Florida                     adventure." -- someone on AFU
+1 813 790 7592
participants (6):
- Al Roethlisberger
- Jay R. Ashworth
- John A. Tamplin
- Kent W. England
- Naiming Shen
- Paul D. Robertson