Hm, I think Peter was too brief to be understood by all. Let me try to expand on his major point (buffering requirements). First, however, to this: Jeremy Porter <jerry@fc.net> wrote:
Since large amounts of traffic on the Net originate from modems, which are typically plugged into terminal servers, virtually all of which have Ethernet interfaces, a very large share of Internet traffic is carried with MTUs no larger than 1500 bytes.
[Continues argument in the line of "if little traffic uses more than 1500 bytes MTU, ethernet will be better/cheaper/etc."]
I would claim that the average packet size doesn't really matter much -- it is usually on the order of 200-300 bytes anyway. However, restricting the MTU of an IX to 1500 bytes *will* matter for those fortunate enough to have FDDI and DS3 (or better) equipment all the way, forcing them to use smaller packets than they otherwise could. Some hosts get noticeably higher performance when they are able to use FDDI-sized packets rather than Ethernet-sized packets, and capping the packet size at 1500 bytes puts a limit on the maximum performance those people will see. In some cases it is important to cater to these needs.

The claim that switched, full-duplex fast Ethernet will perform better than switched, full-duplex FDDI for small packets doesn't really make sense -- not to me, at least. It's not as if FDDI doesn't use variable-sized packets too...

Now, over to the rather important point Peter made. In some common cases, what really matters is the behaviour of these boxes under high load or congestion. The Digital GigaSwitch is reportedly able to "steal" the token on one of the access ports if that port sends too much traffic to another port that is currently congested. This causes the router on the port where the token was stolen to buffer the packets it has to send until it sees the token again. Thus, the total buffering capacity of the system is the sum of the buffering internal to the switch and the buffering in each connected router. I have a hard time seeing how a similar effect could be achieved with Ethernet-type switches. (If I'm not badly mistaken, this is a variant of one of the architectural problems with the current ATM-based IXes as well.)
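To put some rough numbers behind the MTU point above, here is a quick back-of-envelope sketch. The 4352-byte figure is the IP MTU over FDDI from RFC 1390; the 10 MB transfer size is just an illustrative assumption:

```python
# Compare how many TCP segments (and how many header bytes) it takes
# to move the same payload with Ethernet-sized vs FDDI-sized packets.
# MTU 1500 = Ethernet (RFC 894), MTU 4352 = IP over FDDI (RFC 1390).

IP_TCP_HEADERS = 40          # bytes: IPv4 (20) + TCP (20), no options

def packets_needed(total_bytes, mtu):
    """Number of TCP segments needed to carry total_bytes at a given IP MTU."""
    mss = mtu - IP_TCP_HEADERS       # payload per segment
    return -(-total_bytes // mss)    # ceiling division

transfer = 10 * 1024 * 1024          # an assumed 10 MB transfer
for mtu in (1500, 4352):
    n = packets_needed(transfer, mtu)
    print(f"MTU {mtu}: {n} packets, {n * IP_TCP_HEADERS} header bytes")
```

The larger MTU cuts the packet count (and hence per-packet processing in every router and host along the path) to roughly a third, which is where the performance difference for bulk transfers comes from.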
Thanks to Curtis Villamizar it should be fairly well known by now what insufficient buffering can do to your effective utilization under high offered load (it's not pretty), and that the buffering requirement at a bottleneck scales approximately with the (end-to-end) bandwidth x delay product of the traffic you transport through that bottleneck.

So, there you have it: if you foresee pushing the technology to its limits, switched Ethernet (fast or full-duplex) as part of a "total solution" for an IX seems to be at a disadvantage compared to switched FDDI as currently implemented in the Digital GigaSwitch. This doesn't render switched Ethernet unusable in all circumstances, of course.

Regards,

- Havard