As is (reasonably) well known, TCP has its own congestion control built in, to an extent. However, if your network is UDP-heavy (for instance), on a protocol which has no higher-level congestion control, why are source quenches from routers worse than nothing? And if they aren't worse than nothing, why are they being deprecated?
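For concreteness, a source quench is just an ICMP type 4, code 0 message that echoes the dropped datagram's IP header plus its first 8 data bytes back to the sender (RFC 792). Here is a minimal sketch in C of what a router would have to build; the helper names (build_source_quench, icmp_checksum) are mine, not from any real stack, and it is only an illustration of the message layout.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ICMP_TYPE_SOURCE_QUENCH 4   /* RFC 792: type 4, code 0 */

/* Standard Internet checksum over the ICMP message. */
static uint16_t icmp_checksum(const void *data, size_t len)
{
    const uint16_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {
        sum += *p++;
        len -= 2;
    }
    if (len)                          /* odd trailing byte */
        sum += *(const uint8_t *)p;
    while (sum >> 16)                 /* fold carries */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/*
 * Build a source quench in 'buf'.  'orig' points at the dropped
 * datagram, 'orig_ip_hlen' is its IP header length in bytes.
 * Returns the length of the ICMP message.
 */
static size_t build_source_quench(uint8_t *buf,
                                  const uint8_t *orig, size_t orig_ip_hlen)
{
    size_t copy = orig_ip_hlen + 8;   /* IP header + first 8 data bytes */
    size_t len  = 8 + copy;           /* 8-byte ICMP header + payload */

    memset(buf, 0, 8);                /* checksum and unused bytes zeroed */
    buf[0] = ICMP_TYPE_SOURCE_QUENCH; /* type 4 */
    buf[1] = 0;                       /* code 0 */
    memcpy(buf + 8, orig, copy);

    uint16_t ck = icmp_checksum(buf, len);
    memcpy(buf + 2, &ck, sizeof ck);
    return len;
}

int main(void)
{
    /* Fake 20-byte IP header plus 8 bytes of payload standing in for
     * the dropped datagram, just to exercise the builder. */
    uint8_t dropped[28] = { 0x45 };   /* version 4, IHL 5 */
    uint8_t msg[64];

    size_t n = build_source_quench(msg, dropped, 20);
    printf("source quench message is %zu bytes (type %u, code %u)\n",
           n, (unsigned)msg[0], (unsigned)msg[1]);
    return 0;
}

Note the only "content" is the quoted header and 8 bytes of the offending datagram; the message itself carries no rate or queue information, which is part of why the question of its usefulness comes up at all.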
They work well in a situation like the one W. Richard Stevens supplies, where a local workstation shoveling packets into a SLIP link causes the router to run out of buffers. The usefulness breaks down in a larger Internet with more serious congestion problems; it could be some guy on the end of a PPP connection receiving source quenches from a big router out somewhere. Stevens quotes: "Although RFC 1009 [Braden and Postel 1987] requires a router to generate source quenches when it runs out of buffers, the new Router Requirements RFC [Almquist 1993] changes this and says that a router must not originate source quench errors. The current feeling is to deprecate the source quench error, since it consumes network bandwidth and is an ineffective and unfair fix for congestion."
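Stevens also describes what the classic 4.4BSD host did when a quench reached a TCP connection: the congestion window falls to one segment and slow start ramps it back up, so the sender is only slowed briefly, and a UDP source with no such hook isn't slowed at all. A rough sketch of that reaction, using a simplified connection structure of my own rather than the real kernel's, and only as an illustration of the behavior:

#include <stdio.h>

/* Simplified stand-in for a TCP control block (not the BSD tcpcb). */
struct tcp_conn {
    unsigned long snd_cwnd;  /* congestion window, in bytes */
    unsigned int  maxseg;    /* maximum segment size, in bytes */
};

/* Called when an ICMP source quench arrives for this connection. */
static void handle_source_quench(struct tcp_conn *tc)
{
    /* Classic behavior: cwnd collapses to one segment; slow start
     * then grows it back, so the effect on the sender is short-lived. */
    tc->snd_cwnd = tc->maxseg;
}

int main(void)
{
    struct tcp_conn tc = { .snd_cwnd = 8 * 1460, .maxseg = 1460 };
    handle_source_quench(&tc);
    printf("cwnd after quench: %lu bytes\n", tc.snd_cwnd);
    return 0;
}

A quench therefore punishes whichever well-behaved TCP happens to receive it while doing nothing about an unresponsive UDP flood, which is roughly what "ineffective and unfair" means in the quote above.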
Or is Source Quench really broken by design?
I wouldn't say that it's broken; it just isn't a desirable thing to use anymore.

--
Dave Siegel                        dave@rtd.net
Network Engineer                   dave@pager.rtd.com (alpha pager)
(520)579-0450 (home office)        http://www.rtd.com/~dsiegel/