How about a system where I tell my customers that for a given plan X at price Y they get U bytes of "high priority" upload per month (or day, or whatever), and after that all their traffic is low priority until the next cycle starts. Now here's the fun part: they can mark the priority on the packets they send (diffserv/TOS) and decide for themselves what gets treated as high priority and what as not-so-high priority.

If I'm a low-usage customer with no p2p applications, maybe I can mark ALL my traffic high priority all month long and never run over my limit. If I run p2p, I can set my p2p software to send all its traffic marked low priority and save my high-priority quota for more important stuff. Maybe the default should be high priority, so that customers who do nothing but are light users get the best service. Low-priority upstream traffic gets dropped in favor of high priority, but users decide what's important to them. And if I want all my stuff to be high priority, maybe there's a metered plan I can sign up for, with no hard cap on high-priority traffic but an extra charge past a certain amount.

This seems reasonable and fair, and p2p wouldn't have to be singled out. Any thoughts?
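For what it's worth, applications can already set these bits themselves on most stacks. A minimal sketch in Python, assuming a Linux host where the IP_TOS socket option is honored (the particular code points, CS1 for low and AF21 for high, are just examples I picked):

    import socket

    # The TOS byte carries the 6-bit DSCP in its upper bits, so the
    # value handed to IP_TOS is the code point shifted left by two.
    DSCP_LOW  = 8 << 2    # CS1, a common "scavenger"/low-priority class
    DSCP_HIGH = 18 << 2   # AF21, picked here as the high-priority mark

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # A p2p app that wants to preserve its owner's high-priority quota
    # would mark its own outbound traffic down like this:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_LOW)

On 10/22/07, Joe Greco <jgreco@ns.sol.net> wrote: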
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and a separate queue that allowed P2P to flow uninhibited for an extra $5/month. ISPs could then purchase cheaper bandwidth for that queue.
But perhaps at the end of the day Andrew O. is right, and we're best off having a single queue and throwing more bandwidth at the problem.
A system that wasn't P2P-centric could be interesting, though making it P2P-centric would be easier, I'm sure. ;-)
The idea that some Internet data flows would simply stop, rather than just slow down, probably doesn't work out well for the average user.
What about a system that would /guarantee/ a low amount of data on a low priority queue, but would also provide access to whatever excess capacity was currently available (if any)?
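To make that concrete, a toy model of such a scheduler might look like this (Python; the per-round packet budget and the two-class split are invented for illustration, and real gear would do this per-interface in hardware):

    from collections import deque

    class TwoClassScheduler:
        # Guarantees the low-priority queue a small floor each round,
        # then serves high priority, then gives any leftover capacity
        # back to low priority -- i.e., it's work-conserving.
        def __init__(self, pkts_per_round, low_floor):
            self.high = deque()
            self.low = deque()
            self.budget = pkts_per_round
            self.floor = low_floor

        def run_round(self):
            sent, left = [], self.budget
            # 1) honor the guaranteed minimum for low priority
            for _ in range(min(self.floor, len(self.low), left)):
                sent.append(self.low.popleft())
                left -= 1
            # 2) drain high priority next
            while left > 0 and self.high:
                sent.append(self.high.popleft())
                left -= 1
            # 3) excess capacity (if any) spills back to low priority
            while left > 0 and self.low:
                sent.append(self.low.popleft())
                left -= 1
            return sent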
We've already seen service providers such as Virgin UK implement schemes that essentially try to do this: during primetime they'll limit the largest consumers of bandwidth for four hours. The method is completely different, but the end result looks somewhat similar. The recent discussion of AU service providers also mentions providing a baseline service once you've exceeded your quota, which is a simplified version of this.
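The quota-then-baseline accounting behind both schemes is simple enough to sketch (hypothetical names and byte-granularity accounting; a real billing system obviously tracks much more):

    class PriorityQuota:
        # Per-subscriber budget of high-priority bytes per cycle; once
        # it's spent, everything is classed low until the cycle resets.
        def __init__(self, quota_bytes):
            self.quota = quota_bytes
            self.used = 0

        def classify(self, pkt_bytes, marked_high):
            if marked_high and self.used + pkt_bytes <= self.quota:
                self.used += pkt_bytes
                return "high"
            return "low"

        def reset_cycle(self):
            self.used = 0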
Would it be better for networks to focus on separating data classes and providing a product that's actually capable of quality-of-service style attributes?
Would it be beneficial to be able to do this on an end-to-end basis (which implies being able to QoS across ASNs)?
The real problem with the "throw more bandwidth" solution is that at some point you simply cannot do it: the available capacity on your last mile isn't sufficient for the numbers you're selling, even if you can buy cheaper upstream bandwidth for it.
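The arithmetic is blunt; with invented but plausible numbers:

    # All numbers invented for illustration. A 1 Gb/s last-mile uplink
    # shared by 500 subscribers sold "8 Mb/s" each is oversubscribed
    # 4:1 -- and cheaper transit upstream does nothing about that link.
    subscribers, sold_mbps, uplink_mbps = 500, 8, 1000
    ratio = subscribers * sold_mbps / uplink_mbps
    print(f"{ratio:.0f}:1 oversubscription")   # -> 4:1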
Perhaps that's just an argument to fix the last mile.
... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then
I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.