On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:
Why artificially keep access link speeds low just to prevent upstream network congestion? Why can't you have big access links?
You're the one that says that statistical overbooking doesn't work, not anyone else.
If you had performed a simple Google search, you would have discovered that many universities around the world are having similar problems. The university network engineers are saying that adding capacity alone isn't solving their problems.
Since I know people who offer 100/100 to residential users, uplink this with GE/10GE in their networks, and are happy with it, I don't agree with you about the problem description.
Since I know people who offer 100/100 to university dorms, and they are having problems with GE and even 10GE depending on the size of the dorms, if you did a Google search you would find the problem.
For statistical overbooking to work, a good rule of thumb is that the upstream can never be more than half full under normal conditions, and no single customer can have an access speed greater than 1/10 of the upstream capacity.
So for example, you can have a large number of people with 100/100, uplinked with GigE, as long as that gig ring doesn't carry more than approximately 500 Mbps peak (5-minute average), and it'll work just fine.
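(A minimal sketch, not from the original posts: a small Python check of the two rule-of-thumb conditions above, using the 1 Gbps / 100 Mbps / 500 Mbps numbers from the example. The function name and the exact thresholds are just my illustration of the stated rule.)

    def overbooking_ok(uplink_mbps, access_mbps, peak_5min_avg_mbps):
        """True if both rule-of-thumb conditions hold:
        1. the uplink is normally no more than half full (peak 5-minute average), and
        2. no customer's access speed exceeds 1/10 of the uplink capacity."""
        headroom_ok = peak_5min_avg_mbps <= uplink_mbps / 2
        ratio_ok = access_mbps <= uplink_mbps / 10
        return headroom_ok and ratio_ok

    # The example above: 100/100 customers on a 1 Gbps ring peaking around 500 Mbps.
    print(overbooking_ok(uplink_mbps=1000, access_mbps=100, peak_5min_avg_mbps=500))  # True
    # The same ring once demand climbs to 99% of capacity.
    print(overbooking_ok(uplink_mbps=1000, access_mbps=100, peak_5min_avg_mbps=990))  # False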
1. You are assuming traffic mixes don't change. 2. You are assuming traffic mixes on every network are the same.

If you restrict demand, statistical multiplexing works. The problem is how to restrict demand. What happens when 10 x 100/100 users drive demand on your GigE ring to 99%? What happens when P2P becomes popular and 30% of your subscribers use P2P? What happens when 80% of your subscribers use P2P? What happens when 100% of your subscribers use P2P?

TCP "friendly" flows voluntarily restrict demand by backing off when they detect congestion. The problem is that TCP assumes single flows, not the grouped flows used by some applications.
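(Again only my own back-of-the-envelope sketch, not from the thread: it assumes the bottleneck divides capacity roughly equally per flow rather than per user, so a user running k of the N competing flows gets about k/N of the link. That's the sense in which per-flow TCP fairness breaks down when one application opens many flows.)

    def per_user_share(link_mbps, flows_per_user):
        """flows_per_user maps a user label to how many parallel TCP flows they run.
        Returns each user's approximate share of the bottleneck link in Mbps,
        assuming capacity is split evenly per flow."""
        total_flows = sum(flows_per_user.values())
        return {user: link_mbps * flows / total_flows
                for user, flows in flows_per_user.items()}

    # One single-flow web user vs. one P2P user with 30 parallel flows on a
    # 100 Mbps bottleneck: the P2P user ends up with roughly 97 Mbps of it.
    print(per_user_share(100, {"web_user": 1, "p2p_user": 30}))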