Re: NAP/ISP Saturation WAS: Re: Exchanges that matter...
> * ICMP packets are dropped by busy routers
>
> Many routers drop ICMP packets (ping, traceroute) when busy, or drop
> alternate ICMP packets. I know that this behavior occurs when the
> packets are directed to the specific router; I am not sure if this
> ever occurs for packets passing through. The standby tool, ping, needs
> a more reliable replacement for testing end-to-end packet loss.

There seems to be a great deal of (understandable) confusion on this
issue. Let's set it straight:

Packets which are _successfully_ forwarded through a (high-end) cisco
router are not (by default) prioritized by protocol type. Packets which
are not forwarded require more work and are effectively rate limited
(and consume large amounts of CPU time). Some effects:

- Pinging a cisco is not a valid measure of packet loss. It's closer to
  a CPU load measure than anything else.

- Pinging _thru_ a cisco is reasonable (see the sketch below).

- Traceroute to a cisco is rate limited to one reply per second, so it
  will almost always miss the middle reply.

- Traceroute _thru_ a cisco may show many drops which would NOT be seen
  by normal "thru" traffic. Replies generated by the cisco when the TTL
  expires again go thru the CPU. So you may well traceroute thru a
  cisco which does not reply at all. However, you can clearly see the
  route after that router.

> * Head of queue blocking in the Gigaswitch
>
> Even though the Gigaswitch has input and output queues, your output
> queue will block until the other provider's input queue is free.

My (admittedly second-hand) understanding is that the Gigaswitch/FDDI
actually has minimal amounts of buffering. During a congestion event,
it simply withholds the token, resulting in buffering in the routers.
Queues there eventually overflow, and ...

If this is incorrect, I would greatly appreciate pointers to the truth.

Tony
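To make the measurement point above concrete: test loss against the far
endpoint, not the intermediate routers. A minimal sketch in Python (my
illustration, not anything from the thread; the hostnames are
placeholders, and the -c/-W flags assume a Linux-style iputils ping):

    #!/usr/bin/env python3
    # End-to-end loss check.  Hostnames below are placeholders; the
    # flags assume a Linux-style iputils ping(8).
    import re
    import subprocess

    def loss_percent(host, count=100):
        """Return the packet-loss percentage reported by ping(8)."""
        out = subprocess.run(["ping", "-c", str(count), "-W", "1", host],
                             capture_output=True, text=True).stdout
        m = re.search(r"([\d.]+)% packet loss", out)
        return float(m.group(1)) if m else 100.0

    # Loss "to" an intermediate router mostly reflects how busy its CPU
    # is with ICMP; loss to the far end reflects the forwarding path.
    print("to the router :", loss_percent("router.example.net"), "%")
    print("end to end    :", loss_percent("endhost.example.net"), "%")

The first number can look terrible while the second is clean, which is
exactly the effect described above.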
> > * Head of queue blocking in the Gigaswitch
> >
> > Even though the Gigaswitch has input and output queues, your output
> > queue will block until the other provider's input queue is free.

> My (admittedly second-hand) understanding is that the Gigaswitch/FDDI
> actually has minimal amounts of buffering. During a congestion event,
> it simply withholds the token, resulting in buffering in the routers.
> Queues there eventually overflow, and ...
This matches my understanding, though I think it understates the
problem. Gigaswitches are essentially input-queued. When their teeny
tiny buffers fill, they flow-control everyone to slow them down. What
this means is that a single congested output port will cause all
inputs, including packets to the other `n' uncongested output ports,
to be pushed back, so the overall throughput drops (a toy simulation
of this follows below).

And there appear to be additional problems caused when two gigaswitches
are connected together, on the link between them, I assume because flow
control sucks even worse when neither of the guys on the ends of the
link has buffers.

In any case, the head-of-line blocking means these switches only work
really well when they're unloaded, something I've always suspected.
Providing suitable buffering in switches is both necessary and
sufficient. The evil that flow control attempts to hide is still evil;
you just have to work harder to see it.

Dennis Ferguson
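The throughput claim is easy to reproduce with a toy model. The sketch
below (mine, not a model of the actual Gigaswitch internals; all
parameters are arbitrary) simulates a slotted crossbar with one hot
output and tiny per-input FIFOs, then re-runs it letting inputs bypass
a blocked head packet:

    #!/usr/bin/env python3
    # Toy slotted-crossbar simulation of head-of-line blocking.  One
    # output port is "hot"; inputs keep small FIFO buffers.  Parameters
    # are arbitrary and the model is deliberately crude.
    import random

    N = 8          # input/output ports
    CAP = 16       # tiny per-input buffer
    SLOTS = 20000  # time slots simulated
    HOT = 0        # the single congested output

    def run(bypass_blocked_head):
        random.seed(1)
        queues = [[] for _ in range(N)]
        delivered = 0
        for _ in range(SLOTS):
            # One arrival per input per slot; half of all traffic aims
            # at the hot output.  Arrivals to a full buffer are lost.
            for q in queues:
                if len(q) < CAP:
                    q.append(HOT if random.random() < 0.5
                             else random.randrange(1, N))
            claimed = set()  # outputs already matched this slot
            for q in queues:
                if not q:
                    continue
                if bypass_blocked_head:
                    # Idealized: send the first queued packet whose
                    # output is still free this slot.
                    for i, dst in enumerate(q):
                        if dst not in claimed:
                            claimed.add(q.pop(i))
                            delivered += 1
                            break
                elif q[0] not in claimed:
                    # Strict FIFO: if the head's output is busy, every
                    # packet behind it -- even ones bound for idle
                    # outputs -- waits too.
                    claimed.add(q.pop(0))
                    delivered += 1
        return delivered / (SLOTS * N)

    print("strict FIFO (head-of-line blocking):", round(run(False), 3))
    print("blocked heads bypassed             :", round(run(True), 3))

The strict-FIFO run delivers a visibly smaller fraction of the offered
load even though only one output is congested, which is the pushback
effect described above.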
I'm quite curious how they handle full-duplex FDDI, where withholding
the token doesn't seem to be an option. Do they simply drop packets
when traffic gets bursty? Ironically, I'd prefer they drop the packet
bound for a busy port rather than stop all incoming traffic from a
port until the busy port frees (a toy model of that policy follows
this message).

If anyone has experience with NetStar's GigaRouters, especially in
comparison to the GigaSwitches, I'd love to hear about it. You can
reach me at davids@wiznet.net.

DS

On Tue, 17 Dec 1996, Dennis Ferguson wrote:
> > My (admittedly second-hand) understanding is that the
> > Gigaswitch/FDDI actually has minimal amounts of buffering. During a
> > congestion event, it simply withholds the token, resulting in
> > buffering in the routers. Queues there eventually overflow, and ...

> This matches my understanding, though I think it understates the
> problem. Gigaswitches are essentially input-queued. When their teeny
> tiny buffers fill they flow-control everyone to slow them down. What
> this means is that
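The behavior preferred above, dropping only the packet bound for the
busy port, corresponds to per-output queueing with tail drop. A
companion toy to the simulation earlier (same caveats: my sketch,
arbitrary parameters, not DEC's or NetStar's actual design):

    #!/usr/bin/env python3
    # Per-output queues with tail drop: a packet for a backlogged port
    # is discarded at that port, and no other port is flow-controlled.
    import random

    N, CAP, SLOTS, HOT = 8, 16, 20000, 0

    random.seed(1)
    out_q = [0] * N            # per-output queue depths
    delivered = 0
    drops = [0] * N
    for _ in range(SLOTS):
        for _ in range(N):     # one arrival per input per slot
            dst = HOT if random.random() < 0.5 else random.randrange(1, N)
            if out_q[dst] < CAP:
                out_q[dst] += 1
            else:
                drops[dst] += 1   # only the busy port's traffic suffers
        for dst in range(N):   # each output drains one packet per slot
            if out_q[dst]:
                out_q[dst] -= 1
                delivered += 1

    print("delivered fraction:", round(delivered / (SLOTS * N), 3))
    print("drops per output  :", drops)

Here the drop counters pile up on the hot port alone while the other
outputs run at their full offered load, the isolation being asked for.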