Since routers are no longer supposed to send source quenches, expecting to receive one on a large network (like the Internet) is unrealistic, so TCP implementations have to implement congestion control through other means anyway.
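To make the UDP case below concrete: the quench message does carry enough of the offending datagram for a host to identify and slow the right flow. Here is a minimal sketch of what a host-side reaction could look like, assuming IPv4, a raw ICMP socket (which needs root on a Linux-ish system), and a hypothetical throttle() callback supplied by the application; illustration only, not production code:

    import socket
    import struct

    ICMP_SOURCE_QUENCH = 4  # ICMP type 4; routers shouldn't send these anymore

    def watch_for_source_quench(throttle):
        # Raw ICMP socket; assumes IPv4 and root privileges
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                          socket.getprotobyname("icmp"))
        while True:
            pkt, (gateway, _) = s.recvfrom(65535)
            ihl = (pkt[0] & 0x0F) * 4          # outer IP header length
            if pkt[ihl] != ICMP_SOURCE_QUENCH:
                continue
            # ICMP payload echoes the IP header plus the first 8 bytes
            # of the quenched datagram: enough to identify the flow
            inner = pkt[ihl + 8:]
            inner_ihl = (inner[0] & 0x0F) * 4
            proto = inner[9]
            dst = socket.inet_ntoa(inner[16:20])
            if proto == socket.IPPROTO_UDP:
                sport, dport = struct.unpack("!HH",
                                             inner[inner_ihl:inner_ihl + 4])
                throttle(dst, dport)           # caller slows the offending flow

So the information is there for a UDP sender to act on; the question is whether acting on it does more harm than good.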
As is (reasonably) well known, TCP has its own congestion control built in, to an extent. However, if your network is UDP-heavy (for instance) on a protocol with no higher-level congestion control, why are source quenches from routers worse than nothing? If they aren't, wouldn't ignoring source quench on the client for TCP alone have been a better strategy? I'm thinking about this both in a WAN context, where you theoretically have more control over the clients, and in an Internet context. Or is Source Quench really broken by design?

--
Alex Bligh
GX Networks (formerly Xara Networks)