backbone routers' priority settings for ICMP & UDP
Greetings,

In my travels I have heard numerous times that it is a popular practice to give ICMP and even UDP traffic a lower priority on backbone routers in times of congestion.

Are there any providers that document this policy? How is this policy carried out on the actual hardware? Any pointers would be appreciated.

Thanks.

-Marko
At 2:17 PM -0500 2/3/98, Marko B. wrote:
Greetings,
In my travels I have heard numerous times that it is a popular practice to give ICMP and even UDP traffic a lower priority on backbone routers in times of congestion.
ICMP in general is or should be given higher priority, since it is necessary for congestion control. Echo requests (pings) could be thrown away, but the rest is necessary.

--Dean

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Plain Aviation, Inc                  dean@av8.com
LAN/WAN/UNIX/NT/TCPIP               http://www.av8.com
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
On Tue, 3 Feb 1998, Dean Anderson wrote:
At 2:17 PM -0500 2/3/98, Marko B. wrote:
Greetings,
In my travels I have heard numerous times that it is a popular practice to give ICMP and even UDP traffic a lower priority on backbone routers in times of congestion.
When a lot of people talk about that, what they are really referring to is that an ICMP ping to a router can take longer to get a response than a packet going through the router and back, because packets addressed to the router have to be handled by the often-puny CPU, while packets passing through normally don't.
ICMP in general is or should be given higher priority, since it is necessary for congestion control. Echo requests (pings) could be thrown
Please, tell me of this magic ICMP that is used for congestion control. Obsolete things that are now recommended against don't count.
away, but the rest is necessary.
ICMP in general is or should be given higher priority, since it is necessary for congestion control. Echo requests (pings) could be thrown
Please, tell me of this magic ICMP that is used for congestion control. Obsolete things that are now recommended against don't count.
Where is ICMP made obsolete or recommended against? I don't see it.
From rfc1812:
4.3 INTERNET CONTROL MESSAGE PROTOCOL - ICMP
    4.3.1  INTRODUCTION
    4.3.2  GENERAL ISSUES
        4.3.2.1  Unknown Message Types
        4.3.2.2  ICMP Message TTL
        4.3.2.3  Original Message Header
        4.3.2.4  ICMP Message Source Address
        4.3.2.5  TOS and Precedence
        4.3.2.6  Source Route
        4.3.2.7  When Not to Send ICMP Errors
        4.3.2.8  Rate Limiting
    4.3.3  SPECIFIC ISSUES
        4.3.3.1  Destination Unreachable
        4.3.3.2  Redirect
        4.3.3.3  Source Quench
        4.3.3.4  Time Exceeded
        4.3.3.5  Parameter Problem
        4.3.3.6  Echo Request/Reply
        4.3.3.7  Information Request/Reply
        4.3.3.8  Timestamp and Timestamp Reply
        4.3.3.9  Address Mask Request/Reply
        4.3.3.10 Router Advertisement and Solicitations
From rfc1812:
ICMP and IGMP are considered integral parts of IP, although they are architecturally layered upon IP. ICMP provides error reporting, flow control, first-hop router redirection, and other maintenance and control functions.
On Tue, 3 Feb 1998, Marc Slemko wrote:
On Tue, 3 Feb 1998, Dean Anderson wrote:
ICMP in general is or should be given higher priority, since it is necessary for congestion control. Echo requests (pings) could be thrown
Please, tell me of this magic ICMP that is used for congestion control. Obsolete things that are now recommended against don't count.
Since so many people are telling me all about the wonders of ICMP source quench messages, may I remind them of section 4.3.3.3 of RFC 1812, which says:

   A router SHOULD NOT originate ICMP Source Quench messages.  As
   specified in Section [4.3.2], a router that does originate Source
   Quench messages MUST be able to limit the rate at which they are
   generated.

Sure, hosts can generate them, but that is seldom of much utility (especially in the discussion here about network congestion control as opposed to host/processing congestion control) because the network is more often the backlog than the host, and the host can't generate messages saying the network is congested.

ICMP messages are not a commonly used or particularly useful method of congestion control on the Internet today.
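As an aside on the MUST in that RFC 1812 excerpt: rate-limiting the origination of ICMP errors is commonly done with a token bucket. The sketch below is a minimal illustration of that idea in Python, not any vendor's implementation; the class name, rates, and the injectable clock are all invented for the example.

```python
import time

class TokenBucket:
    """Allow at most `rate` ICMP errors per second, with bursts up to `burst`.
    (Illustrative only; real routers implement this in the forwarding code.)"""
    def __init__(self, rate, burst, now=time.monotonic):
        self.rate = rate        # tokens (permitted errors) added per second
        self.burst = burst      # bucket capacity
        self.tokens = burst     # start with a full bucket
        self.now = now          # clock, injectable for testing
        self.last = now()

    def allow(self):
        """Return True if an ICMP error may be originated now."""
        t = self.now()
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False            # over the rate limit: suppress this error

# Hypothetical use: consult the bucket before originating each ICMP error.
bucket = TokenBucket(rate=2, burst=5)
sent = sum(bucket.allow() for _ in range(100))  # a burst of 100 candidate errors
```

A burst of candidate errors drains the bucket quickly, after which errors leak out at the configured steady rate, which is exactly the behavior the rate-limiting requirement is after.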
Marc,

I'd have to agree; ICMP is more for flow control than congestion control. A source quench is to slow a fast machine from overrunning a slow machine, not to prevent all flows from going through one link. One then wonders how well Win95 implements source quench, if at all.

One (weak) metaphor is that traffic lights at an intersection are for flow control, while the traffic lights to get onto the freeway (common here in California) are for congestion control...

On Tue, 3 Feb 1998, Marc Slemko wrote:
Sure, hosts can generate them, but that is seldom of much utility (especially in the discussion here about network congestion control as opposed to host/processing congestion control) because the network is more often the backlog than the host, and the host can't generate messages saying the network is congested.
ICMP messages are not a commonly used or particularily useful method of congestion control on the Internet today.
-------------------------------------------------------------------
Scott Whyte  408.527.5713        |Any opinions expressed herein are
Network Supported Accounts (NSA) |mine and not cisco's...
CCIE 3340                        |
                                 |"Eschew Obfuscation"
Marc, I'd have to agree, ICMP is more for flow control than congestion control. A source quench is to slow a fast machine from overrunning a slow machine, not preventing all flows from going through one link.
One (weak) metaphor is that traffic lights at an intersection are for flow control, while the traffic lights to get onto the freeway (common here in California) are for congestion control...
Extremely weak metaphor, since a source quench indicates there weren't enough buffers available to send your packets.

Now, if the freeway was full, and cars started dropping out of the space/time continuum, that'd be more like a source quench. ;-) The freeway would call your wife at home and say "sorry, but your husband didn't make it to work because the freeways were too full." If the wife runs a correct TCP implementation, she would know to initiate "slow start" and would send out her husbands at a slower rate until she gets a feel for how bad the traffic is.
One then wonders how well Win95 implements source quench, if at all.
Which side of the implementation do you mean? As a client, or as a gateway?

I suppose it doesn't really matter. Since source quenches are not supposed to be used on routers anymore, the expectation of receiving a source quench on a large network (like the Internet) is a bad one, so the TCP implementations have to implement congestion control through other means anyhow.

TCP/IP Illustrated, Vol. I by W. Richard Stevens has a pretty good explanation of what source quenches are.

Dave

--
Dave Siegel                          dave@rtd.net
Network Engineer                     dave@pager.rtd.com (alpha pager)
(520)579-0450 (home office)          http://www.rtd.com/~dsiegel/
Since source quenches are not supposed to be used on routers anymore, the expectation of receiving a source quench on a large network (like the Internet) is a bad one, so the TCP implementations have to implement congestion controls through other means anyhow.
As is (reasonably) well known, TCP has its own congestion control built in, to an extent. However, if your network is UDP-heavy (for instance) on a protocol which has no higher-level congestion control, why are source quenches from routers worse than nothing? If they aren't, then wouldn't ignoring source quench on the client for TCP have been a better strategy? I'm thinking about this in a WAN context, where theoretically you have more control over the clients, as well as an Internet context.

Or is Source Quench really broken by design?

--
Alex Bligh
GX Networks (formerly Xara Networks)
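For illustration of what Alex is asking about: if a quench ever did reach a UDP application, the natural reaction would be the same additive-increase/multiplicative-decrease idea TCP uses. The sketch below is purely hypothetical; the class, its rates, and the notion of a stack delivering quenches to an application are all invented for the example, and no common implementation actually does this.

```python
class QuenchingSender:
    """Hypothetical UDP application that backs off multiplicatively on a
    source-quench notification and recovers additively (AIMD), the same
    shape of control loop TCP's congestion avoidance uses."""
    def __init__(self, rate_pps=1000, floor_pps=10):
        self.rate = rate_pps      # current send rate, packets per second
        self.floor = floor_pps    # never back off below this rate

    def on_source_quench(self):
        # Multiplicative decrease: halve the send rate on each quench.
        self.rate = max(self.floor, self.rate / 2)

    def on_quiet_interval(self):
        # Additive increase: creep back up while no quenches arrive.
        self.rate += 10

s = QuenchingSender()
s.on_source_quench()   # rate drops from 1000 to 500.0
```

Whether this would be "worse than nothing" is exactly the open question: the quenches themselves consume bandwidth on the congested path, which is part of why RFC 1812 deprecated them.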
As is (reasonably) well known, TCP has its own congestion control built in to an extent. However, if your network is UDP heavy (for instance) on a protocol which has no higher level congestion control, why are source quenches from routers worse than nothing? If they aren't, then
They work well in a situation like the one W. Richard Stevens supplies, such as a local workstation shoveling packets into a SLIP link that runs out of buffers. The usefulness breaks down in a larger Internet with more serious congestion problems. It could be the guy on the end of a PPP connection receiving source quenches from a big router out somewhere.

Stevens writes:

   "Although RFC 1009 [Braden and Postel 1987] requires a router to
   generate source quenches when it runs out of buffers, the New Router
   Requirements RFC [Almquist 1993] changes this and says that a router
   must not originate source quench errors.  The current feeling is to
   deprecate the source quench error, since it consumes network
   bandwidth and is an ineffective and unfair fix for congestion."
Or is Source Quench really broken by design?
I wouldn't say that it's broken; it just isn't a desirable thing to use anymore.
On Wed, 4 Feb 1998, Dave Siegel wrote:
Extremely weak metaphore, since a source quench indicates there weren't enough buffers available to send your packets.
Now, if the freeway was full, and cars started dropping out of the space/time continuum, that'd be more like a source quench. ;-) The freeway would call your wife at home and say "sorry, but your husband didn't make it to work because the freeways were too full." If wife runs correct a correct TCP implementation, she would know to initiate "slow start" and would send out her husbands at a slower rate until she gets a feel for how bad the traffic is.
ROFL, nice extension. But this is not true because, as you say below, gateways don't source quench anymore.
One then wonders how well Win95 implements source quench, if at all.
Which side of the implementation do you mean? as a client, or as a gateway? I suppose it doesn't really matter. Since source quenches are not supposed to be used on routers anymore, the expectation of receiving a source quench on a large network (like the Internet) is a bad one, so the TCP implementations have to implement congestion controls through other means anyhow.
As a client, of course, since end-to-end source quench is the only alternative available. And consider the near future scenario where a user with a cable-modem connected via Ethernet to their nice new NC (with cheapest bus design possible to contain costs) has requested a URL from a <insert whomping fast server here> connected via OC-3 to the Internet. It seems likely a source quench will come in handy to provide flow control.
TCP/IP Illus. Vol. I by W. Richard Stevens has a pretty good explanation of what source quenches are.
Don't have Mr. Stevens handy, but from RFC 777 (1981!), when both types of source quench were defined:

   ...A destination host may also send a source quench message if
   datagrams arrive too fast to be processed.  The source quench
   message is a request to the host to cut back the rate at which it
   is sending traffic to the internet destination.

This is what I was getting at. Flow control versus congestion control.
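For reference, the message being discussed has a concrete wire format in RFC 792: type 4, code 0, a standard Internet checksum, 32 unused bits, then the offending datagram's IP header plus its first 8 data octets. A small Python sketch of building one follows; it assumes a 20-byte IP header with no options, and is for illustration, not a real stack's code.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Standard Internet checksum (RFC 1071): one's-complement sum of
    16-bit big-endian words, folded and complemented."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_source_quench(original_datagram: bytes) -> bytes:
    """ICMP Source Quench (RFC 792): type 4, code 0, checksum, 32 unused
    bits, then the original IP header plus the first 8 data octets.
    Assumes a 20-byte IP header (no options) in the original datagram."""
    payload = original_datagram[:28]           # 20-byte header + 8 octets
    header = struct.pack("!BBHI", 4, 0, 0, 0)  # checksum placeholder of 0
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHI", 4, 0, csum, 0) + payload

# Hypothetical 28-byte datagram prefix, just to exercise the builder.
msg = build_source_quench(b"\x45" + b"\x00" * 27)
```

A correctly checksummed ICMP message re-sums to zero, which is the usual way receivers validate it.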
Which side of the implementation do you mean? as a client, or as a gateway? I suppose it doesn't really matter. Since source quenches are not supposed to be used on routers anymore, the expectation of receiving a source quench on a large network (like the Internet) is a bad one, so the TCP implementations have to implement congestion controls through other means anyhow.
As a client, of course, since end-to-end source quench is the only alternative available. And consider the near future scenario where a user with a cable-modem connected via Ethernet to their nice new NC (with cheapest bus design possible to contain costs) has requested a URL from a <insert whomping fast server here> connected via OC-3 to the Internet. It seems likely a source quench will come in handy to provide flow control.
Yes, but it seems more likely that the <whomping fast server> would be receiving such source quenches, if they were provided by <whomping fast caching agent> in the cable network.
TCP/IP Illus. Vol. I by W. Richard Stevens has a pretty good explanation of what source quenches are.
Don't have Mr. Stevens handy, but from RFC777 (1981!), when both types of source quench were defined:
...A destination host may also send a source quench message if datagrams arrive too fast to be processed. The source quench message is a request to the host to cut back the rate at which it is sending traffic to the internet destination.
This is what I was getting at. Flow control versus congestion control.
When it comes to IP, it's sometimes hard to distinguish between the two. Since you've already lost packets, is it really flow control, or congestion control? Flow control usually assumes a "hold that thought" characteristic.

For example, if you are using TCP, the initialization period when packets are lost is definitely considered congestion control. Once that period of time is over, the negotiated TCP rate, and associated buffers, are considered flow control.

When talking about UDP, though, most implementations ignore SQ for UDP altogether, so UDP does not really implement congestion or flow control. If congestion/flow control is to be done with UDP, it has to be done at a higher layer (the application).

Dave
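The "initialization period" described above is TCP slow start: the congestion window doubles each round trip until a loss (or the slow-start threshold) is hit, after which growth turns linear. A toy Python trace of that behavior, with invented parameters rather than any particular TCP's defaults:

```python
def cwnd_trace(rtts, loss_at=None, mss=1, ssthresh=64):
    """Toy trace of a TCP congestion window (in segments) per RTT:
    exponential growth in slow start, linear growth in congestion
    avoidance, and a Tahoe-style collapse on a simulated loss."""
    cwnd, trace = mss, []
    for rtt in range(rtts):
        if rtt == loss_at:
            ssthresh = max(2, cwnd // 2)   # remember half the window...
            cwnd = mss                     # ...and restart from one segment
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

cwnd_trace(6)             # → [1, 2, 4, 8, 16, 32]
cwnd_trace(6, loss_at=4)  # → [1, 2, 4, 8, 1, 2]
```

The point for this thread: all of this runs end to end on loss signals alone, with no source quench from the network required.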
This is not typically a policy that is carried out by the providers. It is just how some router vendors have developed their implementations. They don't give a lower priority to UDP or ICMP unless that traffic is destined for the router itself. Some providers do choose to block traceroutes, but that's a different story.

--Mark Kayser
MCI Network Operations
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Marko B.
Sent: Tuesday, February 03, 1998 2:17 PM
To: nanog@merit.edu
Cc: marko@notwork.net
Subject: backbone routers' priority settings for ICMP & UDP
Greetings,
In my travels I have heard numerous times that it is a popular practice to give ICMP and even UDP traffic a lower priority on backbone routers in times of congestion.
Are there any providers that document this policy? How is this policy carried out on the actual hardware? Any pointers would be appreciated.
Thanks.
-Marko
This is not typically a policy that is carried out by the providers. It is just how some router vendors have developed their implementations. They don't give a lower priority to UDP or ICMP unless that traffic is destined for the router itself.
I think that this is insufficiently clear, though correct :-)

Non-optioned traffic *through* a cisco router running IOS is always treated the same. Traffic destined *to* one of the addresses on a router is usually switched with a different switching mode (i.e. "process switching"). Process switching is a separate set of queues on the router, and therefore a separate set of delays.

Despite various assertions you might hear people make, process switching is not likely to drop packets more frequently. It is likely to introduce higher delay.

--jhawk
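The point above — that the process-switched path mostly adds latency rather than loss, as long as its queue keeps draining — can be illustrated with a toy FIFO queue model. This is a sketch with invented service times, not a model of any real router:

```python
def fifo_delay(arrival_gap, service_time, n=1000, queue_limit=100):
    """Deterministic FIFO queue: packets arrive every `arrival_gap` time
    units and are served one at a time in `service_time` units each.
    Returns (mean per-packet delay, number of drops).  As long as the
    queue is stable (service_time < arrival_gap), a slower server raises
    delay without causing any drops."""
    free_at, delays, drops = 0.0, [], 0
    for i in range(n):
        arrive = i * arrival_gap
        backlog = max(0.0, free_at - arrive)
        if backlog > queue_limit * service_time:
            drops += 1                     # queue full: drop the packet
            continue
        start = max(arrive, free_at)       # wait for the server if busy
        free_at = start + service_time
        delays.append(free_at - arrive)    # queueing + service delay
    return sum(delays) / len(delays), drops

fast = fifo_delay(arrival_gap=1.0, service_time=0.1)  # hardware fast path
slow = fifo_delay(arrival_gap=1.0, service_time=0.9)  # punted to the CPU
```

With these (made-up) numbers, both paths drop nothing, but the "process-switched" path shows roughly nine times the per-packet delay, which matches what people observe when they ping a router versus pinging through it.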
participants (8)

- Alex Bligh
- Dave Siegel
- Dean Anderson
- John Hawkinson
- Marc Slemko
- Mark Kayser
- Marko B.
- Scott Whyte