Can P2P applications learn to play fair on networks?
Much of the same content is available through NNTP, HTTP and P2P. The content part gets a lot of attention and outrage, but network engineers seem to be responding to something else.

If it's not the content, why are network engineers at many university networks, enterprise networks, and public networks concerned about the impact particular P2P protocols have on network operations? If it was just a single network, maybe they are evil. But when many different networks all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications cooperate and fairly share network resources. NNTP is usually considered a very well-behaved network protocol. Big bandwidth, but sharing network resources. HTTP is a little less behaved, but still roughly seems to share network resources equally with other users. P2P applications seem to be extremely disruptive to other users of shared networks, and cause problems for other "polite" network applications.

While it may seem trivial from an academic perspective to do some things, for network engineers the tools are much more limited. User/programmer/etc. education doesn't seem to work well. Unless the network enforces a behavior, the rules are often ignored. End users generally can't change how their applications work today even if they wanted to.

Putting something in-line across a national/international backbone is extremely difficult. Besides, network engineers don't like additional in-line devices, no matter how much the sales people claim they're fail-safe.

Sampling is easier than monitoring a full network feed. Using NetFlow sampling or even SPAN port sampling is good enough to detect major issues. For the same reason, asymmetric sampling is easier than requiring symmetric (or synchronized) sampling. But it also means there will be a limit on the information available to make good and bad decisions. Out-of-band detection limits what controls network engineers can implement on the traffic.

USENET has a long history of generating third-party cancel messages. IPS systems and even "passive" taps have long used third-party packets to respond to traffic. DNS servers have been used to re-direct subscribers to walled gardens. If applications responded to ICMP Source Quench or other administrative network messages, that might be better; but they don't.
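As an illustration of the sampling-based, out-of-band detection described above, here is a minimal sketch in Python of aggregating sampled flow records to find the subscribers responsible for a disproportionate share of upstream bytes. The record format, sampling rate and threshold are hypothetical, not taken from any particular collector.

from collections import defaultdict

SAMPLING_RATE = 1000        # assumed 1-in-N packet sampling
HEAVY_SHARE = 0.05          # flag subscribers above 5% of estimated upstream bytes

def top_talkers(flow_records):
    """flow_records: iterable of (subscriber_ip, sampled_bytes) tuples
    taken from NetFlow or SPAN-port samples in the upstream direction."""
    per_sub = defaultdict(int)
    total = 0
    for subscriber_ip, sampled_bytes in flow_records:
        estimate = sampled_bytes * SAMPLING_RATE   # rough scale-up to real volume
        per_sub[subscriber_ip] += estimate
        total += estimate
    if total == 0:
        return []
    shares = ((ip, vol / total) for ip, vol in per_sub.items())
    return sorted((s for s in shares if s[1] > HEAVY_SHARE),
                  key=lambda s: s[1], reverse=True)

# Made-up sample data: one subscriber dominates the sampled upstream bytes.
samples = [("10.0.0.5", 1500), ("10.0.0.5", 1500), ("10.0.0.9", 400), ("10.0.0.7", 200)]
print(top_talkers(samples))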
* Sean Donelan:
If it's not the content, why are network engineers at many university networks, enterprise networks, and public networks concerned about the impact particular P2P protocols have on network operations? If it was just a single network, maybe they are evil. But when many different networks all start responding, then maybe something else is the problem.
Uhm, what about civil liability? It's not necessarily a technical issue that motivates them, I think.
The traditional assumption is that all end hosts and applications cooperate and fairly share network resources. NNTP is usually considered a very well-behaved network protocol. Big bandwidth, but sharing network resources. HTTP is a little less behaved, but still roughly seems to share network resources equally with other users. P2P applications seem to be extremely disruptive to other users of shared networks, and cause problems for other "polite" network applications.
So is Sun RPC. I don't think the original implementation performs exponential back-off. If there is a technical reason, it's mostly that the network as deployed is not sufficient to meet user demands. Instead of providing more resources, lack of funds may force some operators to discriminate against certain traffic classes. In such a scenario, it doesn't even matter much that the targeted traffic class transports content of questionable legality. It's more important that the measures applied to it have actual impact (Amdahl's law dictates that you target popular traffic), and that you can get away with it (this is where the legality comes into play).
Sean Donelan wrote:
Much of the same content is available through NNTP, HTTP and P2P. The content part gets a lot of attention and outrage, but network engineers seem to be responding to something else.
If it's not the content, why are network engineers at many university networks, enterprise networks, and public networks concerned about the impact particular P2P protocols have on network operations? If it was just a single network, maybe they are evil. But when many different networks all start responding, then maybe something else is the problem.
The traditional assumption is that all end hosts and applications cooperate and fairly share network resources. NNTP is usually considered a very well-behaved network protocol. Big bandwidth, but sharing network resources. HTTP is a little less behaved, but still roughly seems to share network resources equally with other users. P2P applications seem to be extremely disruptive to other users of shared networks, and cause problems for other "polite" network applications.
What exactly is it that P2P applications do that is impolite? AFAIK they are mostly TCP based, so it can't be that they don't have any congestion avoidance; it's just that they utilise multiple TCP flows? Or is it the view that the need for TCP congestion avoidance to kick in is bad in itself (i.e. raw bandwidth consumption)?

It seems to me that the problem is more general than just P2P applications, and there are two possible solutions:

1) Some kind of magical quality is given to the network to allow it to do congestion avoidance on an IP basis, rather than on a TCP flow basis. As previously discussed on nanog there are many problems with this approach, not least the fact that the core ends up tracking a lot of flow information.

2) A QoS scavenger class is implemented so that users get a guaranteed minimum, with everything above this marked to be dropped first in the event of congestion. Of course, the QoS markings aren't carried inter-provider, but I assume that most of the congestion this thread talks about is occurring in the first AS?

Sam
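As a back-of-the-envelope illustration of the multiple-flows point, a tiny sketch (idealized: it assumes the bottleneck divides capacity equally per TCP flow, which real FIFO/RED queues only approximate):

def per_host_share(flows_per_host, capacity_mbps=100.0):
    """Idealized model: a bottleneck of capacity_mbps shared equally
    across all active TCP flows; returns estimated Mbps per host."""
    total_flows = sum(flows_per_host.values())
    return {host: capacity_mbps * n / total_flows
            for host, n in flows_per_host.items()}

# One P2P host with 40 flows sharing a link with nine web users at 2 flows each:
hosts = {"p2p-user": 40}
hosts.update({"web-user-%d" % i: 2 for i in range(1, 10)})
print(per_host_share(hosts))   # the single P2P host gets roughly 69% of the link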
Sean,

I don't think this is an issue of "fairness." There are two issues at play here:

1) Legal Liability due to the content being swapped. This is not a technical matter IMHO.

2) The breakdown of network engineering assumptions that are made when network operators are designing networks.

I think network operators that are using boxes like the Sandvine box are doing this due to (2). This is because P2P traffic hits them where it hurts, aka the pocketbook. I am sure there are some altruistic network operators out there, but I would be sincerely surprised if anyone else was concerned about "fairness".

Regards,
Bora
On Mon, 22 Oct 2007, Bora Akyol wrote:
I think network operators that are using boxes like the Sandvine box are doing this due to (2). This is because P2P traffic hits them where it hurts, aka the pocketbook. I am sure there are some altruistic network operators out there, but I would be sincerely surprised if anyone else was concerned about "fairness"
The problem with words is all the good ones are taken. The word "Fairness" has some excess baggage; nevertheless it is the word used.

Network operators probably aren't operating from altruistic principles, but for most network operators when the pain isn't spread equally across the customer base it represents a "fairness" issue. If 490 customers are complaining about bad network performance and the cause is traced to what 10 customers are doing, the reaction is to hammer the nails sticking out.

Whose traffic is more "important?" World of Warcraft lagged or P2P throttled? The network operator makes P2P a little worse and makes WoW a little better, and in the end do they end up somewhat "fairly" using the same network resources? Or do we just put two extremely vocal groups, the gamers and the p2ps, in a locked room and let the death match decide the winner?
I see your point. The main problem I see with the traffic shaping (or worse) boxes is that Comcast/AT&T/... sells a particular bandwidth to the customer. Clearly, they don't provision their network as Number_Customers*Data_Rate; they provision it to a data rate capability that is much less than the maximum possible demand. This is where the friction in traffic that you mention below happens. I have to go check on my broadband service contract to see how they word the bandwidth clause.

Bora

On 10/22/07 9:12 AM, "Sean Donelan" <sean@donelan.com> wrote:
On Mon, 22 Oct 2007, Bora Akyol wrote:
I think network operators that are using boxes like the Sandvine box are doing this due to (2). This is because P2P traffic hits them where it hurts, aka the pocketbook. I am sure there are some altruistic network operators out there, but I would be sincerely surprised if anyone else was concerned about "fairness"
The problem with words is all the good ones are taken. The word "Fairness" has some excess baggage, nevertheless it is the word used.
Network operators probably aren't operating from altruistic principles, but for most network operators when the pain isn't spread equally across the customer base it represents a "fairness" issue. If 490 customers are complaining about bad network performance and the cause is traced to what 10 customers are doing, the reaction is to hammer the nails sticking out.
Whose traffic is more "important?" World of Warcraft lagged or P2P throttled? The network operator makes P2P a little worse and makes WoW a little better, and in the end do they end up somewhat "fairly" using the same network resources? Or do we just put two extremely vocal groups, the gamers and the p2ps, in a locked room and let the death match decide the winner?
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic principles, but for most network operators when the pain isn't spread equally across the customer base it represents a "fairness" issue. If 490 customers are complaining about bad network performance and the cause is traced to what 10 customers are doing, the reaction is to hammer the nails sticking out.
The problem here is that they seem to be using a sledge hammer: BitTorrent is essentially left dead in the water. And they deny doing anything, to boot.

A reasonable approach would be to throttle the offending applications to make them fit inside the maximum reasonable traffic envelope.

What I would like is a system where there are two diffserv traffic classes: normal and scavenger-like. When a user trips some predefined traffic limit within a certain period, all their traffic is put in the scavenger bucket which takes a back seat to normal traffic. P2P users can then voluntarily choose to classify their traffic in the lower service class where it doesn't get in the way of interactive applications (both theirs and their neighbor's). I believe Azureus can already do this today. It would even be somewhat reasonable to require heavy users to buy a new modem that can implement this.
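As a toy illustration of the "back seat" idea (not any particular vendor's queueing implementation), a strict-priority two-class scheduler that only serves the scavenger queue when the normal queue is empty:

from collections import deque

class TwoClassScheduler:
    """Toy strict-priority scheduler: scavenger packets are only
    dequeued when no normal-class packet is waiting."""
    def __init__(self):
        self.normal = deque()
        self.scavenger = deque()

    def enqueue(self, packet, scavenger=False):
        (self.scavenger if scavenger else self.normal).append(packet)

    def dequeue(self):
        if self.normal:
            return self.normal.popleft()
        if self.scavenger:
            return self.scavenger.popleft()
        return None

sched = TwoClassScheduler()
sched.enqueue("p2p-1", scavenger=True)
sched.enqueue("voip-1")
sched.enqueue("p2p-2", scavenger=True)
print([sched.dequeue() for _ in range(3)])   # ['voip-1', 'p2p-1', 'p2p-2']

In practice a scavenger class is usually given a small guaranteed minimum rather than pure strict priority, so it isn't starved completely.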
On Oct 23, 2007, at 7:18 AM, Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic principles, but for most network operators when the pain isn't spread equally across the customer base it represents a "fairness" issue. If 490 customers are complaining about bad network performance and the cause is traced to what 10 customers are doing, the reaction is to hammer the nails sticking out.
The problem here is that they seem to be using a sledge hammer: BitTorrent is essentially left dead in the water. And they deny doing anything, to boot.
A reasonable approach would be to throttle the offending applications to make them fit inside the maximum reasonable traffic envelope.
What I would like is a system where there are two diffserv traffic classes: normal and scavenger-like. When a user trips some predefined traffic limit within a certain period, all their traffic is put in the scavenger bucket which takes a back seat to normal traffic. P2P users can then voluntarily choose to classify their traffic in the lower service class where it doesn't get in the way of interactive applications (both theirs and their neighbor's). I believe Azureus can already do this today. It would even be somewhat reasonable to require heavy users to buy a new modem that can implement this.
I also would like to see a UDP scavenger service, for those applications that generate lots of bits but can tolerate fairly high packet losses without replacement. (VLBI, for example, can in principle live with 10% packet loss without much pain.) Drop it if you need to; if you have the resources, let it through. Congestion control is not an issue because, if there is congestion, it gets dropped.

In this case, I suspect that a "worst effort" TOS class would be honored across domains. I also suspect that BitTorrent could live with this TOS quite nicely.

Regards
Marshall
On 23-okt-2007, at 14:52, Marshall Eubanks wrote:
I also would like to see a UDP scavenger service, for those applications that generate lots of bits but can tolerate fairly high packet losses without replacement. (VLBI, for example, can in principle live with 10% packet loss without much pain.)
Note that this is slightly different from what I've been talking about: if a user trips the traffic volume limit and is put in the lower-than-normal traffic class, that user would still be using TCP apps so very high packet loss rates would be problematic here. So I guess this makes three traffic classes.
In this case, I suspect that a "worst effort" TOS class would be honored across domains.
If not always by choice. :-)
On Oct 23, 2007, at 9:07 AM, Iljitsch van Beijnum wrote:
On 23-okt-2007, at 14:52, Marshall Eubanks wrote:
I also would like to see a UDP scavenger service, for those applications that generate lots of bits but can tolerate fairly high packet losses without replacement. (VLBI, for example, can in principle live with 10% packet loss without much pain.)
Note that this is slightly different from what I've been talking about: if a user trips the traffic volume limit and is put in the lower-than-normal traffic class, that user would still be using TCP apps so very high packet loss rates would be problematic here.
So I guess this makes three traffic classes.
In this case, I suspect that a "worst effort" TOS class would be honored across domains.
If not always by choice. :-)
Comcast has come out with a little more detail on what they were doing:

http://bits.blogs.nytimes.com/2007/10/22/comcast-were-delaying-not-blocking-bittorrent-traffic/

Speaking on background in a phone interview earlier today, a Comcast Internet executive admitted that reality was a little more complex. The company uses data management technologies to conserve bandwidth and allow customers to experience the Internet without delays. As part of that management process, he said, the company occasionally – but not always – delays some peer-to-peer file transfers that eat into Internet speeds for other users on the network.

-----

(My understanding is that this traffic shaping is only applied to P2P traffic transiting the Comcast network, not to connections within that network.)

Regards
Marshall
Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic principles, but for most network operators when the pain isn't spread equally across the customer base it represents a "fairness" issue. If 490 customers are complaining about bad network performance and the cause is traced to what 10 customers are doing, the reaction is to hammer the nails sticking out.
The problem here is that they seem to be using a sledge hammer: BitTorrent is essentially left dead in the water. And they deny doing anything, to boot.
A reasonable approach would be to throttle the offending applications to make them fit inside the maximum reasonable traffic envelope.
What I would like is a system where there are two diffserv traffic classes: normal and scavenger-like. When a user trips some predefined traffic limit within a certain period, all their traffic is put in the scavenger bucket which takes a back seat to normal traffic. P2P users can then voluntarily choose to classify their traffic in the lower service class where it doesn't get in the way of interactive applications (both theirs and their neighbor's). I believe Azureus can already do this today. It would even be somewhat reasonable to require heavy users to buy a new modem that can implement this.
Surely you would only want to set traffic that falls outside the limit as scavenger, rather than all of it? S
On 23-okt-2007, at 15:43, Sam Stickland wrote:
What I would like is a system where there are two diffserv traffic classes: normal and scavenger-like. When a user trips some predefined traffic limit within a certain period, all their traffic is put in the scavenger bucket which takes a back seat to normal traffic. P2P users can then voluntarily choose to classify their traffic in the lower service class where it doesn't get in the way of interactive applications (both theirs and their neighbor's).
Surely you would only want to set traffic that falls outside the limit as scavenger, rather than all of it?
If the ISP gives you (say) 1 GB a month upload capacity and on the 3rd you've used that up, then you'd be in the "worse effort" traffic class for ALL your traffic the rest of the month. But if you voluntarily give your P2P stuff the worse effort traffic class, this means you get to upload all the time (although probably not as fast) without having to worry about hurting your other traffic. This is both good in the short term, because your VoIP stuff still works when an upload is happening, and in the long term, because you get to do video conferencing throughout the month, which didn't work before after you went over 1 GB.
Iljitsch van Beijnum wrote:
On 23-okt-2007, at 15:43, Sam Stickland wrote:
What I would like is a system where there are two diffserv traffic classes: normal and scavenger-like. When a user trips some predefined traffic limit within a certain period, all their traffic is put in the scavenger bucket which takes a back seat to normal traffic. P2P users can then voluntarily choose to classify their traffic in the lower service class where it doesn't get in the way of interactive applications (both theirs and their neighbor's).
Surely you would only want to set traffic that falls outside the limit as scavenger, rather than all of it?
If the ISP gives you (say) 1 GB a month upload capacity and on the 3rd you've used that up, then you'd be in the "worse effort" traffic class for ALL your traffic the rest of the month. But if you voluntarily give your P2P stuff the worse effort traffic class, this means you get to upload all the time (although probably not as fast) without having to worry about hurting your other traffic. This is both good in the short term, because your VoIP stuff still works when an upload is happening, and in the long term, because you get to do video conferencing throughout the month, which didn't work before after you went over 1 GB.

Oh, you mean to do this based on traffic volume, and not current traffic rate? I suppose an external monitoring/billing tool would need to track this and reprogram the necessary router/switch, but it's the sort of infrastructure most ISPs would need to have anyway. I was thinking more along the lines of: everything above 512 kbps (that isn't already marked worse-effort) gets marked worse effort, all of the time.

Sam
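A minimal sketch of the kind of external monitoring/billing logic described above: track per-subscriber upload volume and report who should be flipped into the worse-effort class. The names, the 1 GB cap and the reclassification hook are all made up for the example; in practice the result would be pushed as a policy change to the access router or CMTS.

MONTHLY_CAP_BYTES = 1 * 1000**3        # e.g. 1 GB of upload per month (assumed)

class VolumeTracker:
    """Accumulates per-subscriber upload volume and reports which
    subscribers have gone over the cap for this billing period."""
    def __init__(self, cap=MONTHLY_CAP_BYTES):
        self.cap = cap
        self.used = {}

    def account(self, subscriber, uploaded_bytes):
        self.used[subscriber] = self.used.get(subscriber, 0) + uploaded_bytes
        return self.used[subscriber] > self.cap    # True -> move to worse-effort class

    def new_month(self):
        self.used.clear()                          # everyone back to the normal class

tracker = VolumeTracker()
print(tracker.account("subscriber-42", 800 * 1000**2))   # False, still under the cap
print(tracker.account("subscriber-42", 300 * 1000**2))   # True, reclassify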
On Tue, Oct 23, 2007 at 01:18:01PM +0200, Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic principles, but for most network operators when the pain isn't spread equally across the the customer base it represents a "fairness" issue. If 490 customers are complaining about bad network performance and the cause is traced to what 10 customers are doing, the reaction is to hammer the nails sticking out.
The problem here is that they seem to be using a sledge hammer: BitTorrent is essentially left dead in the water.
Wrong - seeding from scratch, that is uploading without any download component, is being clobbered. Seeding back into the swarm works while one is still taking chunks down, then closes. Essentially, this turns all clients into a client similar to BitTyrant and focuses on, as Charlie put it earlier, customers downloading stuff.

From the perspective of the protocol designers, unfair sharing is indeed "dead", but to state it in a way that indicates customers cannot *use* BT for some function is bogus.

Part of the reason why caching, provider-based, etc. schemes seem to be unpopular is that private trackers appear to operate much in the way that old BBS download/uploads used to... you get credits for contributing and can only pull down so much based on such credits. Not just bragging rights, but users need to take part in the transactions to actually use the service. A provider-hosted solution which managed to transparently handle this across multiple clients and trackers would likely be popular with the end users.

Cheers,

Joe

--
RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
On 10/23/07, Joe Provo <nanog-post@rsuc.gweep.net> wrote:
On Tue, Oct 23, 2007 at 01:18:01PM +0200, Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
The problem here is that they seem to be using a sledge hammer: BitTorrent is essentially left dead in the water.
Wrong - seeding from scratch, that is uploading without any download component, is being clobbered. Seeding back into the swarm works while one is still taking chunks down, then closes. Essentially, this turns all clients into a client similar to BitTyrant and focuses on, as Charlie put it earlier, customers downloading stuff.
Joe
If seeding from scratch is detected by an ISP/NSP and terminated, what happens when the BitTorrent clients evolve to detect this behavior and continue downloading even after the total transfer is complete (in order to permit themselves to seed)? Would this unnecessary "dummy" downloading cause a not-insignificant amount of network traffic?

-brandon
Joe Provo wrote:
A provider-hosted solution which managed to transparently handle this across multiple clients and trackers would likely be popular with the end users.
but not with the rights holders... J -- COO Entanet International T: 0870 770 9580 W: http://www.enta.net/ L: http://tinyurl.com/3bxqez
On Tue, 23 Oct 2007, Iljitsch van Beijnum wrote:
The problem here is that they seem to be using a sledge hammer: BitTorrent is essentially left dead in the water. And they deny doing anything, to boot.
A reasonable approach would be to throttle the offending applications to make them fit inside the maximum reasonable traffic envelope.
There are many "reasonable" things providers could do. However, in the US last year we had folks testifying to Congress that QOS will never work, providers must never treat any traffic differently, DPI is evil, and the answer to all our problems is just more bandwidth. Unfortunately, it's currently not considered acceptable for commercial ISPs to do the same things that universities are already doing to manage traffic on their networks. The result is network engineering by politician, and many reasonable things can no longer be done.

Fair usage policies
QOS scavenger/background class of service
Tiered data caps billing
Upstream/downstream billing

Changing some of the billing methods could encourage US providers to offer "uncapped" line rates, but "capped" data usage. So you could have a 20Mbps/50Mbps/100Mbps line rate, but because the upstream network utilization could be controlled at the data layer instead of the line rate, effective prices may be lower. But I don't know if the blogosphere is ready for that yet in the US.
On 23-okt-2007, at 19:43, Sean Donelan wrote:
The problem here is that they seem to be using a sledge hammer: BitTorrent is essentially left dead in the water. And they deny doing anything, to boot.
A reasonable approach would be to throttle the offending applications to make them fit inside the maximum reasonable traffic envelope.
There are many "reasonable" things providers could do.
So then why do you stick up for Comcast when they do something unreasonable? Although yesterday there was a little more info and it seems they only stop the affected protocols temporarily, the uploads should complete later. If that's true, I'd say that's reasonable for a protocol like BitTorrent that automatically retries, but it's hard to know if it's true, and Comcast is still to blame for saying one thing and doing something else.
However, in the US last year we had folks testifying to Congress that QOS will never work, providers must never treat any traffic differently,
So what? Just because someone testified to something before the US congress doesn't make it true. Or law.
DPI is evil,
It is.
and the answer to all our problems is just more bandwidth.
That's pretty stupid. Remove one bottleneck, create another. But it's not to say that some ISPs can't stand to up their bandwidth.
The result is network engineering by politician, and many reasonable things can no longer be done.
I don't see that.
Changing some of the billing methods could encourage US providers to offer "uncapped" line rates, but "capped" data usage. So you could have a 20Mbps/50Mbps/100Mbps line rate, but because the upstream network utilization could be controlled at the data layer instead of the line rate, effective prices may be lower.
But I don't know if the blogosphere is ready for that yet in the US.
Buying wholesale metered and reselling unmetered is just not a sustainable business model; you're always at the mercy of your customers' usage patterns. Most of the blogosphere will be able to understand that, as long as ISPs make sure that 98% of all users don't have to worry about hitting traffic limits and/or having to pay extra.

Remember that it's in the ISP's interest that users use a lot of traffic, because otherwise they don't need to buy fatter lines. So ISPs should work hard to give users as much traffic as they can reasonably give them. (Something my new ISP should take to heart - I moved into a new apartment more than a week ago and I'm still waiting to hear from my new DSL provider.)
On Wed, 24 Oct 2007, Iljitsch van Beijnum wrote:
There are many "reasonable" things providers could do.
So then why do you stick up for Comcast when they do something unreasonable?
Although yesterday there was a little more info and it seems they only stop the affected protocols temporarily, the uploads should complete later. If that's true, I'd say that's reasonable for a protocol like BitTorrent that automatically retries, but it's hard to know if it's true, and Comcast is still to blame for saying one thing and doing something else.
Because, unlike some of the bloggers, I can actually understand what Comcast is doing and know the limitations providers work under. Most of the misinformation and hyperbole is being generated by others. Although Comcast's PR people don't explain technical things very well, they have been pretty consistent in what they've said since the beginning, which then gets filtered through reporters and bloggers.

Now that you understand it a bit more, you're also saying it may be a reasonable approach. Nothing is perfect, and within the known limitations, Comcast is trying something interesting. Just like Cox Communications tried one reasonable response to Bots, Qwest Communications tried one reasonable response to malware, AOL tried one reasonable response to Spam, and so on and so on. The reality is no matter what any large provider tries or doesn't try, they will be criticized.
The result is network engineering by politician, and many reasonable things can no longer be done.
I don't see that.
You may have missed what's been happening for the last few years in the US.
On Wed, 24 Oct 2007, Iljitsch van Beijnum wrote:
The result is network engineering by politician, and many reasonable things can no longer be done.
I don't see that.
Here come the Congresspeople. After ICANN, next come legislative IETF standards for what is acceptable network management.

http://www.news.com/8301-10784_3-9804158-7.html

Rep. Boucher's solution: more capacity, even though it has been demonstrated many times more capacity doesn't actually solve this particular problem.

Is there something in humans that makes it difficult to understand the difference between circuit-switch networks, which allocate a fixed amount of bandwidth during a session, and packet-switched networks, which vary the available bandwidth depending on overall demand throughout a session?

Packet switch networks are darn cheap because you share capacity with lots of other uses; Circuit switch networks are more expensive because you get dedicated capacity for your sole use. If people think it's unfair to expect them to share the packet switch network, why not return to circuit switch networks and circuit switch pricing?
Rep. Boucher's solution: more capacity, even though it has been demonstrated many times more capacity doesn't actually solve this particular problem.
Where has it been proven that adding capacity won't solve the P2P bandwidth problem? I'm aware that some studies have shown that P2P demand increases when capacity is added, but I am not aware that anyone has attempted to see if there is an upper limit for that appetite. In any case, politicians can often be convinced that a different action is better (or at least good enough) if they can see action being taken.
Packet switch networks are darn cheap because you share capacity with lots of other uses; Circuit switch networks are more expensive because you get dedicated capacity for your sole use.
That leaves us with the technology of sharing, and as others have pointed out, use of DSCP bits to deploy a Scavenger service would resolve the P2P bandwidth crunch, if operators work together with P2P software authors. Since BitTorrent is open source, and written in Python which is generally quite easy to figure out, how soon before an operator runs a trial with a customized version of BitTorrent on their network? --Michael Dillon
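For what it's worth, marking traffic from a Python client is close to a one-liner on most platforms; a hedged sketch (CS1, the usual scavenger code point, is DSCP 8, which is 0x20 in the TOS byte; whether the OS permits it and the network honors it is another matter):

import socket

SCAVENGER_TOS = 0x20   # DSCP CS1 (8) shifted into the IP TOS byte

def scavenger_socket():
    """Create a TCP socket whose outgoing packets carry the CS1 marking.
    Works on Linux/BSD; some platforms ignore or restrict IP_TOS."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, SCAVENGER_TOS)
    return s

# A peer connection opened this way would ride in the scavenger class,
# assuming routers along the path preserve and act on the DSCP.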
On Oct 25, 2007, at 12:24 PM, <michael.dillon@bt.com> wrote:
Rep. Boucher's solution: more capacity, even though it has been demonstrated many times more capacity doesn't actually solve this particular problem.
Where has it been proven that adding capacity won't solve the P2P bandwidth problem?
I don't think it has.
I'm aware that some studies have shown that P2P demand increases when capacity is added, but I am not aware that anyone has attempted to see if there is an upper limit for that appetite.
I have raised this issue with P2P promoters, and they all feel that the limit will be at about the limit of what people can watch (i.e., full rate video for whatever duration they want to watch, at somewhere between 1 and 10 Mbps). In that regard, it's not too different from the limit _without_ P2P, which is, after all, a transport mechanism, not a promotional one.

Regards
Marshall
In any case, politicians can often be convinced that a different action is better (or at least good enough) if they can see action being taken.
Packet switch networks are darn cheap because you share capacity with lots of other uses; Circuit switch networks are more expensive because you get dedicated capacity for your sole use.
That leaves us with the technology of sharing, and as others have pointed out, use of DSCP bits to deploy a Scavenger service would resolve the P2P bandwidth crunch, if operators work together with P2P software authors. Since BitTorrent is open source, and written in Python which is generally quite easy to figure out, how soon before an operator runs a trial with a customized version of BitTorrent on their network?
--Michael Dillon
On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I have raised this issue with P2P promoters, and they all feel that the limit will be at about the limit of what people can watch (i.e., full rate video for whatever duration they want to watch, at somewhere between 1 and 10 Mbps). In that regard, it's not too different from the limit _without_ P2P, which is, after all, a transport mechanism, not a promotional one.
Wrong direction.

In the downstream the limit is how much they watch. The limit on how much they upload is how much everyone else in the world wants.

With today's bottlenecks, the upstream utilization can easily be 3-10 times greater than the downstream. And that's with massively asymmetric upstream capacity limits.

When you increase the upstream bandwidth, it doesn't change the downstream demand. But the upstream demand continues to increase to consume the increased capacity. However big you make the upstream, the world-wide demand is always greater.

And that demand doesn't seem to be constrained by anything a human might watch, read, listen, etc.

And despite the belief P2P is "local," very little of the traffic is local, particularly in the upstream direction.

But again, it's not an issue with any particular protocol. It's how does a network manage any and all unbehaved protocols so all the users of the network, not just the few using one particular protocol, receive a fair share of the network resources?

If 5% of the P2P users only used 5% of the network resources, I doubt any network engineer would care.
On Oct 25, 2007, at 1:09 PM, Sean Donelan wrote:
On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I have raised this issue with P2P promoters, and they all feel that the limit will be at about the limit of what people can watch (i.e., full rate video for whatever duration they want to watch, at somewhere between 1 and 10 Mbps). In that regard, it's not too different from the limit _without_ P2P, which is, after all, a transport mechanism, not a promotional one.
Wrong direction.
In the downstream the limit is how much they watch. The limit on how much they upload is how much everyone else in the world wants.
With today's bottlenecks, the upstream utilization can easily be 3-10 times greater than the downstream. And that's with massively asymmetric upstream capacity limits.
When you increase the upstream bandwidth, it doesn't change the downstream demand. But the upstream demand continues to increase to consume the increased capacity. However big you make the upstream, the world-wide demand is always greater.
I don't follow this, on a statistical average. This is P2P, right? So if I send you a piece of a file, this will go out my door once, and in your door once, after a certain (& finite!) number of hops (i.e., transmissions to and from other peers).

So if usage is limited to each customer, isn't upstream and downstream demand also going to be limited, roughly to no more than the usage times the number of hops? This may be large, but it won't be unlimited.

Regards
Marshall
And that demand doesn't seem to be constrained by anything a human might watch, read, listen, etc.
And despite the belief P2P is "local," very little of the traffic is local particularly in the upstream direction.
But again, it's not an issue with any particular protocol. It's how does a network manage any and all unbehaved protocols so all the users of the network, not just the few using one particular protocol, receive a fair share of the network resources?
If 5% of the P2P users only used 5% of the network resources, I doubt any network engineer would care.
On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I don't follow this, on a statistical average. This is P2P, right? So if I send you a piece of a file, this will go out my door once, and in your door once, after a certain (& finite!) number of hops (i.e., transmissions to and from other peers).
So if usage is limited to each customer, isn't upstream and downstream demand also going to be limited, roughly to no more than the usage times the number of hops ? This may be large, but it won't be unlimited.
Is the size of a USENET feed limited by how fast people can read? If there isn't a reason for people/computers to be efficient, they don't seem to be very efficient. There seem to be a lot of repetitious transfers, and transfers much larger than any human could view, listen to or read in a lifetime.

But again, that isn't the problem. Network operators like people who pay to do stuff they don't need. The problem is sharing network capacity between all the users of the network, so a few users/applications don't greatly impact all the other users/applications.

I still doubt any network operator would care if 5% of the users consumed 5% of the network capacity 24x7x365. Network operators don't care as much even when 5% of the users consume 100% of the network capacity when there is no other demand for network capacity. Network operators get concerned when 5% of the users consume 95% of the network capacity and the other 95% of the users complain about long delays, timeouts, and stuff not working.

When 5% of the users don't play nicely with the rest of the 95% of the users, how can network operators manage the network so every user receives a fair share of the network capacity?
On Fri, 26 Oct 2007, Sean Donelan wrote:
When 5% of the users don't play nicely with the rest of the 95% of the users, how can network operators manage the network so every user receives a fair share of the network capacity?
By making sure that the 5% of users' upstream capacity doesn't cause the distribution and core to be full. If the 5% causes 90% of the traffic and at peak the core is 98% full, the 95% of the users that cause 10% of the traffic couldn't tell the difference from if the core/distribution was only used at 10%.

If your access media doesn't support what's needed (it might be a shared media like cable), then your original bad engineering decision of choosing a shared media without fairness implemented from the beginning is something you have to live with, and you have to keep making bad decisions and implementations to patch what's already broken to begin with.

You can't rely on end user applications to play fair when it comes to the ISP network being full, and if they don't play fair and it's filling up the end user access, then it's that single end user that gets affected by it, not their neighbors.

--
Mikael Abrahamsson email: swmike@swm.pp.se
Sean Donelan wrote:

When 5% of the users don't play nicely with the rest of the 95% of the users, how can network operators manage the network so every user receives a fair share of the network capacity?

This question keeps getting asked in this thread. What is there about a scavenger class (based either on monthly volume or actual traffic rate) that doesn't solve this?

Sam
On Thu, 25 Oct 2007, michael.dillon@bt.com wrote:
Where has it been proven that adding capacity won't solve the P2P bandwidth problem? I'm aware that some studies have shown that P2P demand increases when capacity is added, but I am not aware that anyone has attempted to see if there is an upper limit for that appetite.
The upper limit is where packet switching turns into circuit (lambda, etc.) switching with a fixed amount of bandwidth between each end-point. As long as the packet switch capacity is less, then you will have a bottleneck and statistical multiplexing.

TCP does per-flow sharing, but P2P may have hundreds of independent flows sharing with each other, tending to congest the bottleneck and crowding out single-flow network users. As long as you have a shared bottleneck in the network, it will be a problem.

The only way more bandwidth solves this problem is using a circuit (lambda, etc.) switched network without shared bandwidth between flows. And even then you may get "All Circuits Are Busy, Please Try Your Call Later." Of course, then the network cost will be similar to circuit networks instead of packet networks.
That leaves us with the technology of sharing, and as others have pointed out, use of DSCP bits to deploy a Scavenger service would resolve the P2P bandwidth crunch, if operators work together with P2P software authors.
Comcast's network is QOS DSCP enabled, as are many other large provider networks. Enterprise customers use QOS DSCP all the time. However, the net neutrality battles last year made it politically impossible for providers to say they use QOS in their consumer networks.

Until P2P applications figure out how to play nicely with non-P2P network uses, it's going to be a network wreck.
On 25-okt-2007, at 18:50, Sean Donelan wrote:
Comcast's network is QOS DSCP enabled, as are many other large provider networks. Enterprise customers use QOS DSCP all the time. However, the net neutrality battles last year made it politically impossible for providers to say they use QOS in their consumer networks.
And generating packets with false address information is more acceptable? I don't buy it.

The problem is that ISPs work under the assumption that users only use a certain percentage of their available bandwidth, while (some) users work under the assumption that they get to use all their available bandwidth 24/7 if they choose to do so. Obviously the two are fundamentally incompatible, which becomes apparent if the number of high usage users starts to fill up available capacity to the detriment of other users.

I don't see any way around instituting some kind of traffic limit. Obviously that can't be a peak bandwidth limit, because that way ISPs would have to go back to selling 56k connections. (Still enough to generate 15 GB or so per month in one direction.) So it has to be a traffic limit. But then what happens when a customer goes over the limit? I think in the mobile broadband business such customers are harassed to leave. That's a good business practice if you can get away with it, but the Verizon case shows that you probably can't in the long run.

So after a customer goes over the traffic limit, you still need to give them SOME service, but it must be a reduced one for some time so the customer doesn't keep using up more than their share of available bandwidth. One approach is to limit bandwidth. The other is dumping that user in a lower traffic class. If there is a reasonable amount of bandwidth available for that traffic class, then the user still gets to burst (a little), so this gives them a better service level. I don't see how this logic violates net neutrality principles.
Until P2P applications figure out how to play nicely with non-P2P network uses, it's going to be a network wreck.
And how exactly do you propose that they do that? My answer is: set a different DSCP. As I said before, at least one popular BitTorrent client can already do that. And if ISPs like Comcast already have diffserv-enabled networks, this seems like a no-brainer to me. Don't forget that the first victim of an overloaded last mile link is the user of that link themselves: if they let their torrents rip at max speed, they get in the way of their own interactive traffic.
The problem is that ISPs work under the assumption that users only use a certain percentage of their available bandwidth, while (some) users work under the assumption that they get to use all their available bandwidth 24/7 if they choose to do so.
My home DSL is 6Mb/384k, so what exactly is the true cost of a dedicated 384k of bandwidth? I mean, what you say would be true if we were talking download, but for most DSL the up speed is so insignificant compared to the down speed that I have trouble believing the true cost for 24x7 isn't being paid. It's just that some of the cable services are offering more up speed (1Mb plus) and so are getting a disproportionate amount of fileshare upload traffic (if a download takes X minutes, more of it is uploaded by a source on a 1Mb upload pipe than by one on a 384k upload pipe, so the upload totals are greater for the cable ISP).

Geo.

George Roettger
Netlink Services
On Fri, 26 Oct 2007, Iljitsch van Beijnum wrote:
And generating packets with false address information is more acceptable? I don't buy it.
When a network is congested, someone is going to be upset about any possible response. Within the limitations the network operator has, using a TCP RST to cause applications to back off network use is an interesting "hack" (in the original sense of the word: a quick, elaborate and/or "jerry-rigged" solution). Using a TCP RST is probably more "transparent" than using some other clever active queue management technique to drop particular packets from the network. Comcast's publicity problem seems to be that they used a more "visible" technique instead of a harder-to-detect technique to respond to network congestion.

If Comcast had used Sandvine's other capabilities to inspect and drop particular packets, would that have been more acceptable? Please re-read my first post about some of the alternatives, and people griping about all of them.

Dropping random packets (i.e. FIFO queue, RED, not good on multiple flows)
Dropping particular packets (i.e. AQM, WRED, etc, difficult for multiple flows)
Dropping DSCP marked packets first (i.e. scavenger class requires voluntary marking)
Dropping particular protocols (i.e. ACLs, difficult for dynamic protocols)
Sending an ICMP Source Quench (i.e. ignored by many IP stacks)
Sending a TCP RST (i.e. most application protocols respond, easy for out-of-band devices)
Changing IP headers (i.e. ECN bits, not implemented widely, requires inline device)
Changing TCP headers (i.e. decrease window size, requires inline device)
Changing access speed (i.e. dropping user down to 64Kbps, crushes every application)
Charging for overuse (i.e. more than X GB of data transferred per time period, complaints about extra charges)
Terminate customers using too much capacity (i.e. move the problem to a different provider)

and of course

Do nothing (i.e. let the applications grab whatever they can, even if that results in incredibly bad performance for many users)
Add more capacity (i.e. what do you do in the mean time, people want something now)
Raise prices (i.e. discourage additional use)

People are going to gripe no matter what. One week they are griping about ISPs not doing anything, the next week they are griping about ISPs doing something.
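For illustration, the "Changing access speed" option in the list above is essentially a policer; a toy token-bucket sketch that would clamp a heavy user to 64 Kbps (parameters invented for the example; real platforms implement this in hardware or in the CMTS/DSLAM configuration):

import time

class TokenBucket:
    """Toy policer: allow up to rate_bps bits per second with a small burst;
    packets that don't fit are dropped (they could be queued or remarked instead)."""
    def __init__(self, rate_bps=64000, burst_bits=16000):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True      # forward the packet
        return False         # police (drop) it

bucket = TokenBucket()
print([bucket.allow(12000) for _ in range(4)])   # initial burst allowed, then policed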
On Fri, 26 Oct 2007, Sean Donelan wrote:
If Comcast had used Sandvine's other capabilities to inspect and drop particular packets, would that have been more acceptable?
Yes, definitely.
Dropping random packets (i.e. FIFO queue, RED, not good on multiple flows)
Dropping particular packets (i.e. AQM, WRED, etc, difficult for multiple flows)
Dropping DSCP marked packets first (i.e. scavenger class requires voluntary marking)
Dropping particular protocols (i.e. ACLs, difficult for dynamic protocols)
Dropping a limited ratio of the packets is acceptable at least to me.
Sending a TCP RST (i.e. most application protocols respond, easy for out-of-band devices)
... but terminating the connection is not. Spoofing packets is not something an ISP should do. Ever. Dropping and/or delaying packets, yes, spoofing, no.
Changing IP headers (i.e. ECN bits, not implemented widely, requires inline device)
Changing TCP headers (i.e. decrease window size, requires inline device)
Changing access speed (i.e. dropping user down to 64Kbps, crushes every application)
Charging for overuse (i.e. more than X GB of data transferred per time period, complaints about extra charges)
Terminate customers using too much capacity (i.e. move the problem to a different provider)
These are all acceptable, though I think adjusting the MSS is bordering on intrusion into customer traffic. An ISP should be in the market of forwarding packets, not changing them.

--
Mikael Abrahamsson email: swmike@swm.pp.se
On Fri, 26 Oct 2007, Mikael Abrahamsson wrote:
If Comcast had used Sandvine's other capabilities to inspect and drop particular packets, would that have been more acceptable?
Yes, definitely.
So another in-line device is better than an out-of-band device.
... but terminating the connection is not. Spoofing packets is not something an ISP should do. Ever. Dropping and/or delaying packets, yes, spoofing, no.
So ISPs should not do any NAT, transparent accelerators, transparent web caches, walled gardens for infected computers, etc. We seem to agree that ISPs can "interfere" with network traffic; the debate is only how they do it.
On 26 okt 2007, at 18:29, Sean Donelan wrote:
And generating packets with false address information is more acceptable? I don't buy it.
When a network is congested, someone is going to be upset about any possible response.
That doesn't mean all possible responses are equally acceptable. There are three reasons why what Comcast does is worse than some other things they could do:

1. They're not clearly saying what they're doing
2. They inject packets that pretend to come from someone else
3. There is nothing the user can do to work within the system
Using a TCP RST is probably more "transparent" than using some other clever active queue management technique to drop particular packets from the network.
With shaping/policing I still get to transmit a certain amount of data. With sending RSTs in some cases and not others, there's nothing I can do to use the service, even at a moderate level, if I'm unlucky.

Oh, and let me add:

4. It won't work in the long run, it just means people will have to use IPsec with their peer-to-peer apps to sniff out the fake RSTs
If Comcast had used Sandvine's other capabilities to inspect and drop particular packets, would that have been more acceptable?
Depends. But it all has to start with them making public what service level users can expect.
Add more capacity (i.e. what do you do in the mean time, people want something now)
Since you can't know on which path the capacity is needed, it's impossible to build enough of it to cover all possible eventualities. So even though Comcast probably needs to increase capacity, that doesn't solve the fundamental problem.
Raise prices (i.e. discourage additional use)
Higher flat fee pricing doesn't discourage additional use. I'd say it encourages it: if I have to pay this much, I'll make sure I get my money's worth!
People are going to gripe no matter what. One week they are griping about ISPs not doing anything, the next week they are griping about ISPs doing something.
Guess what: sometimes the gripes are legitimate. On 26 okt 2007, at 17:24, Sean Donelan wrote:
The problem is not bandwidth, it's shared congestion points.
While that is A problem, it's not THE problem. THE problem is that Comcast can't deliver the service that customers think they're buying.
However, I think a better idea instead of trying to eliminate all shared congestion points everywhere in a packet network would be for the TCP protocol magicians to develop a TCP-multi-flow congestion avoidance which would share the available capacity better between all of the demand at the various shared congestion points in the network.
The problem is not with TCP: TCP will try to get the most out of the available bandwidth that it sees, which is the only reasonable behavior for such a protocol. You can easily get a bunch of TCP streams to stay within a desired bandwidth envelope by dropping the requisite number of packets. Techniques such as RED will create a reasonable level of fairness between high and low bandwidth flows.

What you can't easily do by dropping packets without looking inside of them is favoring certain applications or making sure that low-volume users get a better service level than high-volume users. Those are issues that I don't think can reasonably be shoehorned into TCP congestion management. However, we do have a technique that was created for exactly this purpose: diffserv.

Yes, it's unfortunate that diffserv is the same technology that would power a non-neutral internet, but that doesn't mean that ANY use of diffserv is automatically at odds with net neutrality principles. Diffserv is just a tool; like all tools, it can be used in different ways. For good and evil, if you will.
Isn't the Internet supposed to be a "dumb" network with "smart" hosts? If the hosts act dumb, is the network forced to act smart?
It's not the intelligence that's the problem, but the incentive structure. Iljitsch
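As a footnote to the RED point above, a minimal textbook-style sketch (not any specific router's implementation): the drop probability ramps up as the average queue depth moves between two thresholds, so high-bandwidth flows, which have more packets arriving, see proportionally more drops.

import random

MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1   # queue thresholds (packets) and max drop probability

def red_drop(avg_queue_len):
    """Return True if this simplified RED decides to drop the arriving packet."""
    if avg_queue_len < MIN_TH:
        return False
    if avg_queue_len >= MAX_TH:
        return True
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

# At an average queue depth of 50 packets, roughly 5% of arrivals are dropped:
drops = sum(red_drop(50) for _ in range(10000))
print(drops / 10000.0)

Real RED also smooths the queue length with a moving average and adjusts the probability between drops, which this sketch omits.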
On Thu, 25 Oct 2007 12:50:32 -0400 (EDT) Sean Donelan <sean@donelan.com> wrote:
Comcast's network is QOS DSCP enabled, as are many other large provider networks. Enterprise customers use QOS DSCP all the time. However, the net neutrality battles last year made it politically impossible for providers to say they use QOS in their consumer networks.
re: <http://www.merit.edu/mail.archives/nanog/2005-12/msg00334.html>

This came up before and I'll ask again: what do you mean by QoS? And what precisely does QoS DSCP really mean here? It's important to know what queueing, dropping, limiting, etc. policies and hardware/buffering capabilities go with the DSCP settings. Otherwise it's just a buzzword on a checklist that might not even actually do anything. I'd also like to hear about what monitoring and management capabilities are deployed; that was a real problem last time I checked.

How much has really changed? Do you (or if someone on these big nets wants to own up offlist) have pointers to indicate that deployments are significantly different now than they were a couple years ago? Even better, perhaps someone can do a preso at a future meeting on their recent deployment experience? I did one a couple years ago and I haven't heard of things improving markedly since then, but then I am still recovering from having drunk from that jug of kool-aid. :-)

John
On Mon, 29 Oct 2007, John Kristoff wrote:
How much has really changed? Do you (or if someone on these big nets wants to own up offlist) have pointers to indicate that deployments are significantly different now than they were a couple years ago? Even better, perhaps someone can do a preso at a future meeting on their recent deployment experience? I did one a couple years ago and I haven't heard of things improving markedly since then, but then I am still recovering from having drunk from that jug of kool-aid. :-)
Once you get past the religious debates, DSCP can be very useful to large, complicated networks with many entry and exit points. Think about how large networks use tools such as BGP Communities to manage routing policies across many different types of interconnections. You may want to consider how networks use similar tools such as DSCP to mark packets entering networks from internal, external, source-address-validated, management, etc. interfaces. There are limited code-points so you can't be too clever, but even knowing on the other side of the network that a packet entered the network through a spoofable/non-spoofable network interface may be very useful.
Rep. Boucher's solution: more capacity, even though it has been demonstrated many times more capacity doesn't actually solve this particular problem.
That would seem to be an inaccurate statement.
Is there something in humans that makes it difficult to understand the difference between circuit-switch networks, which allocate a fixed amount of bandwidth during a session, and packet-switched networks, which vary the available bandwidth depending on overall demand throughout a session?
Packet switch networks are darn cheap because you share capacity with lots of other uses; Circuit switch networks are more expensive because you get dedicated capacity for your sole use.
So, what happens when you add sufficient capacity to the packet switch network that it is able to deliver committed bandwidth to all users?

Answer: by adding capacity, you've created a packet switched network where you actually get dedicated capacity for your sole use.

If you're on a packet network with a finite amount of shared capacity, there *IS* an ultimate amount of capacity that you can add to eliminate any bottlenecks. Period! At that point, it behaves (more or less) like a circuit switched network.

The reasons not to build your packet switched network with that much capacity are more financial and technical than they are "impossible." We "know" that the average user will not use all their bandwidth. It's also more expensive to install more equipment; it is nice when you can fit more subscribers on the same amount of equipment.

However, at the point where capacity becomes a problem, you actually do have several choices:

1) Block certain types of traffic,
2) Limit {certain types of, all} traffic,
3) Change user behaviours, or
4) Add some more capacity

come to mind as being the major available options. ALL of these can be effective. EACH of them has specific downsides.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On Fri, 26 Oct 2007, Joe Greco wrote:
So, what happens when you add sufficient capacity to the packet-switched network that it is able to deliver committed bandwidth to all users?

Answer: by adding capacity, you've created a packet-switched network where you actually get dedicated capacity for your sole use.
Changing the capacity at different points in the network merely moves the congestion points around the network. There will still be congestion points in any packet network. The problem is not bandwidth, it's shared congestion points.

No shared congestion points: bandwidth irrelevant. Shared congestion points: bandwidth irrelevant.

A 56Kbps network with no shared congestion points: not a problem.
A 1,000 Terabit network with shared congestion points: a problem.

The difference is whether there are shared congestion points, not the bandwidth. If you think adjusting capacity is the solution, and hosts don't voluntarily adjust their demand on their own, then you should be *REDUCING* your access capacity, which moves the congestion point closer to the host.

However, I think a better idea than trying to eliminate every shared congestion point in a packet network would be for the TCP protocol magicians to develop a multi-flow congestion avoidance which would share the available capacity better between all of the demand at the various shared congestion points in the network.

Isn't the Internet supposed to be a "dumb" network with "smart" hosts? If the hosts act dumb, is the network forced to act smart?
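A toy model of that last point, with invented numbers (an illustration of the fairness problem, not a proposal from the thread): per-flow fairness at a shared congestion point rewards whoever opens the most flows, while a hypothetical per-host, multi-flow-aware rule would not.

def per_flow_share(capacity_mbps, flows_per_host):
    """TCP-style fairness: every flow gets an equal share of the bottleneck."""
    total_flows = sum(flows_per_host.values())
    return {h: capacity_mbps * n / total_flows for h, n in flows_per_host.items()}

def per_host_share(capacity_mbps, flows_per_host):
    """Hypothetical multi-flow-aware fairness: every host gets an equal share."""
    return {h: capacity_mbps / len(flows_per_host) for h in flows_per_host}

demand = {"web_user": 2, "p2p_user": 40}    # concurrent flows at one bottleneck
print(per_flow_share(100, demand))           # p2p_user gets ~95 of 100 Mbps
print(per_host_share(100, demand))           # each host gets 50 Mbps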
Bora Akyol wrote:
1) Legal Liability due to the content being swapped. This is not a technical matter IMHO.
Instead of sending an ICMP host unreachable, they are closing the connection via spoofing. I think it's kinder than just dropping the packets altogether.
2) The breakdown of network engineering assumptions that are made when network operators are designing networks.
I think network operators that are using boxes like the Sandvine box are doing this due to (2). This is because P2P traffic hits them where it hurts, aka the pocketbook. I am sure there are some altruistic network operators out there, but I would be sincerely surprised if anyone else was concerned about "fairness".
As has been pointed out a few times, there are issues with CMTS systems, including the maximum upstream bandwidth allotted versus the maximum downstream bandwidth. I agree that there is an engineering problem, but it is not on the part of network operators.

DSL fits in its own little world, but until VDSL2 was designed, there were hard caps on downstream versus upstream speed. This is how many last-mile systems were designed, even on shared-bandwidth mediums: more downstream capacity will be needed than upstream. As traffic patterns have changed, the equipment and the standards it is built upon have become antiquated. As a tactical response, many companies do not support the operation of servers on the last mile, which has been defined to include p2p seeding. This is their right, and it allows them to protect the precious upstream bandwidth until technology can adapt to high-capacity upstream as well as downstream for the last mile.

Currently I show an average 2.5:1-4:1 downstream/upstream ratio at each of my pops. Luckily, I run a DSL network; I waste a lot of upstream bandwidth on my backbone. Most downstream/upstream ratios I see in last-mile standards and the equipment derived from them aren't even close to 4:1. I'd expect such ratios if I filtered out the p2p traffic on my network. If I ran a shared-bandwidth last-mile system, I'd definitely be filtering unless my overall customer base was small enough not to care about maximums on the CMTS.

Fixed downstream/upstream ratios must die in all standards and implementations. It seems a few newer CMTSes are moving in that direction (though I note one I quickly found mentions its flexible ratio as beyond DOCSIS 3.0 features, which implies the standard is still fixed ratio), but I suspect it will be years before networks can adapt.

Jack Bates
Here are a few downstream/upstream numbers and ratios:

- ADSL2+: 24/1.5 = 16:1 (sans Annex M)
- DOCSIS 1.1: 38/9 = 4.2:1 (best-case upstream and downstream modulations and carrier widths)
- BPON: 622/155 = 4:1
- GPON: 2488/1244 = 2:1

Only the first is non-shared, so that even though the ratio is poor, a person can fill their upstream pipe without impacting their neighbors.

It's an interesting question to ask how much engineering decisions have led to the point where we are today with bandwidth-throttling products, or whether that would have happened in an entirely symmetrical environment. DOCSIS 2.0 adds support for higher levels of modulation on the upstream, plus wider channel bandwidth (http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif), but still not enough to compensate for the higher downstreams possible with channel bonding in DOCSIS 3.0.

Frank
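To tie those ratios back to the upstream problem, a small hedged calculation (the per-subscriber seeding rate is an assumption, not a figure from this thread):

# How few steadily seeding subscribers it takes to fill a shared DOCSIS 1.1
# upstream channel, using the best-case numbers listed above.
upstream_mbps = 9.0
downstream_mbps = 38.0
seed_rate_mbps = 0.384      # assumed sustained upload per seeding subscriber

print(f"~{upstream_mbps / seed_rate_mbps:.0f} seeders saturate the upstream")
print(f"downstream/upstream ratio: {downstream_mbps / upstream_mbps:.1f}:1")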
I'm a bit late to this conversation but I wanted to throw out a few bits of info not covered.

A company called Oversi makes a very interesting solution for caching Torrent and some Kad-based overlay networks, all done through some cool strategically placed taps and prefetching. This way you could "cache out" at whatever rates you want and mark traffic how you wish as well. This moves a statistically significant amount of traffic off of the upstream and onto a gigabit-ethernet (or similar) attached cache server, solving a large part of the HFC problem. I am a fan of this method as it does not require a large footprint of inline devices, rather a smaller footprint of statistics-gathering sniffers and caches distributed in places that make sense.

Also, the people at BitTorrent Inc have a cache discovery protocol so that their clients have the ability to find cache servers with their hashes on them.

I am told these methods are in fact covered by the DMCA, but remember I am no lawyer.

Feel free to reply direct if you want contacts.

Rich
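For illustration only, a rough sketch of the general cache-discovery idea: a client asks whether a nearby ISP-operated cache already holds content for a given info-hash before pulling it from distant peers. This is not BitTorrent Inc's actual protocol or Oversi's implementation; the URL, JSON fields, and function name are invented for the example.

import json
import urllib.request

def find_local_cache(info_hash, discovery_url="http://cache.example.isp/lookup"):
    """Return the address of a cache claiming to hold this info-hash, or None."""
    req = urllib.request.Request(f"{discovery_url}?info_hash={info_hash}")
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            answer = json.load(resp)
    except OSError:
        return None                     # no discovery service reachable
    return answer.get("cache_peer")     # e.g. "10.1.2.3:6881", or None

# A client preferring the returned cache peer over distant peers keeps the
# bulk of the transfer off the congested upstream/transit path.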
I don't see how this Oversi caching solution will work with today's HFC deployments -- the demodulation happens in the CMTS, not in the field. And if we're talking about de-coupling the RF from the CMTS, which is what is happening with M-CMTSes (http://broadband.motorola.com/ips/modular_CMTS.html), you're really changing an MSO's architecture. Not that I'm dissing it, as that may be what's necessary to deal with the upstream bandwidth constraint, but that's a future vision, not a current reality.

Frank
Hey Rich. We discussed the technology before, but the actual mental click here is important -- thank you.

BTW, I *think* it was Randy Bush who said "today's leechers are tomorrow's cachers". His quote was longer but I can't remember it.

Gadi.
Frank,

The problem caching solves in this situation is much less complex than what you are speaking of. Caching toward your client base brings down your transit costs (if you have any), or lowers congestion in congested areas if the solution is installed in the proper place. Caching toward the rest of the world gives you a way to relieve stress on the upstream, for sure.

Now of course it is a bit outside of the box to think that providers would want to cache not only for their internal customers but also for users of the open internet. But realistically that is what they are doing now with any of these peer-to-peer overlay networks; they just aren't managing the boxes that house the data. Getting it under control and off of problem areas of the network should be the first (and not just a future) solution.

There are both negative and positive methods of controlling this traffic. We've seen the negative, of course; perhaps the positive is to give the users what they want -- just on the provider's terms.

my 2 cents

Rich
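A quick illustration of the transit-cost point, with every figure assumed rather than taken from this thread:

# Rough arithmetic: transit avoided by caching P2P traffic toward your own
# client base.
p2p_share = 0.6             # assumed fraction of downstream transit that is P2P
cache_hit_ratio = 0.5       # assumed fraction of that traffic served from cache
transit_gbps = 2.0
cost_per_mbps = 10.0        # assumed $/Mbps/month

saved_mbps = transit_gbps * 1000 * p2p_share * cache_hit_ratio
print(f"transit avoided: {saved_mbps:.0f} Mbps, "
      f"roughly ${saved_mbps * cost_per_mbps:,.0f}/month at the assumed price")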
My apologies if I wasn't clear -- my point was that caching toward the client base changes installed architectures, an expensive proposition. If caching is to find any success it needs to be at the lowest possible price point, which means collocating where access and transport meet, not in the field. I have little reason to believe that providers are going to cache for the internet at large to solve their last-mile upstream challenges.

Frank
participants (18)
- Bora Akyol
- Brandon Galbraith
- Florian Weimer
- Frank Bulk
- Gadi Evron
- Geo.
- Iljitsch van Beijnum
- Jack Bates
- James Blessing
- Joe Greco
- Joe Provo
- John Kristoff
- Marshall Eubanks
- michael.dillon@bt.com
- Mikael Abrahamsson
- Rich Groves
- Sam Stickland
- Sean Donelan