On Thu, 25 Oct 2007, michael.dillon@bt.com wrote:
Where has it been proven that adding capacity won't solve the P2P bandwidth problem? I'm aware that some studies have shown that P2P demand increases when capacity is added, but I am not aware that anyone has attempted to see if there is an upper limit for that appetite.
The upper limit is where packet switching turns into circuit (lambda, etc.) switching, with a fixed amount of bandwidth between each pair of end-points. As long as packet switch capacity is less than the aggregate demand, you will have a bottleneck and statistical multiplexing. TCP shares capacity per-flow, but a P2P host may open hundreds of independent flows; those flows share fairly with each other while congesting the bottleneck and crowding out single-flow network users. As long as you have a shared bottleneck in the network, it will be a problem. The only way more bandwidth solves this is a circuit (lambda, etc.) switched network with no shared bandwidth between flows. And even then you may get "All Circuits Are Busy, Please Try Your Call Later." Of course, then the network cost will be similar to circuit networks instead of packet networks.
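To put a number on the crowding-out effect: under an idealized model of TCP fairness (equal RTTs, equal shares per flow, which real networks only approximate), a host's slice of the bottleneck is proportional to how many flows it opens. A quick sketch:

```python
# Assumed model: ideal per-flow TCP fairness at one shared bottleneck.
# Link capacity and host names are illustrative, not from any real network.
BOTTLENECK_MBPS = 100.0

def per_host_share(flows_per_host):
    """Each flow gets an equal share, so a host's bandwidth is
    proportional to the number of flows it opens."""
    total_flows = sum(flows_per_host)
    return [BOTTLENECK_MBPS * n / total_flows for n in flows_per_host]

# One single-flow web user vs. one P2P user with 100 flows:
shares = per_host_share([1, 100])
print(shares)  # the single-flow user ends up with under 1% of the link
```

Per-flow fairness looks fair to TCP, but per-host it is anything but, which is exactly the crowding-out problem described above.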
That leaves us with the technology of sharing, and as others have pointed out, using DSCP bits to deploy a Scavenger service would resolve the P2P bandwidth crunch, provided operators work together with P2P software authors.
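The P2P side of that cooperation is mechanically simple: mark the traffic so routers can deprioritize it. A minimal sketch, assuming a cooperating client and a QoS-enabled path (intermediate networks may re-mark or ignore the bits), using the standard socket API to set DSCP CS1, the usual Scavenger code point:

```python
import socket

# DSCP CS1 ("Scavenger" / lower-effort class) is code point 8.
# DSCP occupies the top 6 bits of the IP TOS byte, so shift left by 2.
DSCP_CS1 = 8
TOS_VALUE = DSCP_CS1 << 2  # 0x20

def make_scavenger_socket():
    """Return a TCP socket whose packets carry the Scavenger marking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    return s

sock = make_scavenger_socket()
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 32
```

If P2P clients marked their bulk transfers this way, operators could let them soak up idle capacity and back them off only when interactive traffic needs the link.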
Comcast's network is QoS DSCP enabled, as are many other large provider networks. Enterprise customers use QoS DSCP all the time. However, last year's net neutrality battles made it politically impossible for providers to say they use QoS in their consumer networks. Until P2P applications figure out how to play nicely with non-P2P uses of the network, it's going to be a network wreck.