Hmmmm. You're right. I lost sight of the original thread... GigE inter-switch trunking at PAIX. In that case, congestion _should_ be low, and there shouldn't be much queue depth.
indeed, this is the case. we keep a lot of headroom on those trunks.
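as a rough intuition for why headroom keeps queues shallow, the textbook M/M/1 result puts mean occupancy at rho/(1-rho) for utilization rho. real switch traffic is burstier than Poisson, so treat this as a lower bound, but the shape of the curve is the point:

# M/M/1 mean occupancy L = rho / (1 - rho); a simplifying model,
# real exchange-point traffic is burstier than this assumes.
for rho in (0.3, 0.5, 0.8, 0.95):
    print("util %.0f%% -> mean occupancy %.1f" % (rho * 100, rho / (1 - rho)))

at 30% utilization the queue is essentially empty; at 95% it averages ~19 in system. hence the headroom.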
But this _does_ bank on current "real world" behavior. If endpoints ever approach GigE speeds (which of course requires low enough latency and big enough TCP windows, since the window has to cover the bandwidth-delay product)...
Then again, the last mile is slow enough that we're probably a long way from that happening.
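To put a number on "big enough": a sender can only keep bandwidth * RTT bits in flight, so the required window scales with both. A minimal sketch (the RTT figures are illustrative assumptions, not measured paths):

# Bandwidth-delay product: TCP window needed to keep a link full.
# RTT values are illustrative assumptions, not measurements.
def window_bytes(link_bps, rtt_s):
    return link_bps * rtt_s / 8  # bits in flight -> bytes of window

for rtt_ms in (10, 50, 70):
    w = window_bytes(1e9, rtt_ms / 1000.0)
    print("1 Gb/s at %d ms RTT needs a ~%.1f MiB window" % (rtt_ms, w / 2**20))

Even the 10 ms case needs ~1.2 MiB, far past the classic 64 KB limit, so window scaling is table stakes.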
my expectation is that when the last mile goes to 622 Mb/s or 1000 Mb/s, exchange points will all be operating at 10 Gb/s, and inter-switch trunks at exchange points will be multiples of 10 Gb/s.
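to make that sizing concrete, here's the back-of-envelope i'd run; every figure below is an assumed example, not a description of any real exchange point:

# trunk-sizing sketch; all numbers are assumptions for illustration.
ports = 20                # member ports on one switch (assumed)
port_bps = 10e9           # 10 Gb/s member ports
cross_fraction = 0.05     # assumed share of peak traffic crossing the trunk
trunk_bps = 4 * 10e9      # 4 x 10 Gb/s inter-switch trunk (assumed)

offered = ports * port_bps * cross_fraction
print("peak offered load on trunk: %.0f Gb/s" % (offered / 1e9))
print("headroom factor: %.1fx" % (trunk_bps / offered))

with those assumptions the trunk runs at a quarter of capacity at peak, which is the kind of margin i mean.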
Of course, I'd hope that individual heavy pairs would establish private interconnects instead of using the public switch fabric, but I know that's not always { an option | done | ... }.
individual heavy pairs do this, but as a long-term response to growth, not as a short-term response to congestion. in the short term, the exchange-point switch can't be allowed to present congestion; it's just not on the table at all.