Well, so far nobody has provided a valid explanation for the necessity of buffering in routers (or any other stochastically multiplexing devices). The real reason for having buffers is that information about congestion takes some time to propagate. (In TCP/IP, congestion is detected by seeing lost packets.) If buffers are not sufficient to hold packets until TCP speakers see the congestion and slow down, the system becomes unstable. Buffers are, essentially, inertial elements in a delayed negative-feedback control loop. Insufficient inertia causes oscillations in such systems, which is sometimes useful, but in the case of networks it leads to decreased throughput because the wire is fully utilized only on upswings and underutilized on downswings (collapsed TCP windows aggravate the effect further).

Consequently, the holding capacity of the buffers should be sufficient to bring the inertia of the system up to the reaction time of the negative feedback (this is a simplification, of course). That reaction time is about one round-trip time for end-to-end packets. In real networks, RTTs differ from path to path, so some "characteristic" RTT is used. So, to hold packets until the TCPs slow down, a router needs cRTT * BW bits of buffer memory (where BW is the speed of the interface). This rule can be somewhat relaxed if the congestion-control loop is engaged proactively, before congestion actually occurs (by using Random Early Detection), but not by much.

Even with properly sized buffers, sessions with longer RTTs suffer from congestion disproportionately, because the TCPs on the ends never get enough time to recover fully (i.e. to bring their windows to a large enough size to maintain a steady stream of packets), while small-RTT sessions recover quickly and therefore get a bigger share of the bandwidth. But I'm digressing :) --vadim
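
P.S. For concreteness, a tiny back-of-the-envelope sketch of the cRTT * BW rule in Python. The 100 ms characteristic RTT and the 10 Gbit/s interface speed below are made-up illustrative numbers, not recommendations:

    # Bandwidth-delay-product buffer sizing: buffer = cRTT * BW.
    # A sketch of the rule of thumb above, not a tuning guide.

    def buffer_bits(c_rtt_seconds, bw_bits_per_second):
        # One feedback delay's worth of traffic: cRTT * BW bits.
        return c_rtt_seconds * bw_bits_per_second

    rtt = 0.100   # assumed "characteristic" RTT, in seconds
    bw  = 10e9    # assumed interface speed, in bits per second

    bits = buffer_bits(rtt, bw)
    print("buffer: %d bits = %d megabytes" % (bits, bits / 8 / 1e6))
    # -> buffer: 1000000000 bits = 125 megabytes

Note how the requirement scales linearly with both the feedback delay and the interface speed: halving the characteristic RTT (or engaging the control loop earlier, as RED tries to) halves the memory needed.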