Re: Can P2P applications learn to play fair on networks?
On Sun, 21 Oct 2007, Florian Weimer wrote:
If it's not the content, why are network engineers at many university networks, enterprise networks, and public networks concerned about the impact particular P2P protocols have on network operations? If it was just a single network, maybe they are evil. But when many different networks all start responding, then maybe something else is the problem.
Uhm, what about civil liability? It's not necessarily a technical issue that motivates them, I think.
If it was civil liability, why are they responding to the protocol being used instead of the content?
So is Sun RPC. I don't think the original implementation performs exponential back-off.
If lots of people were still using Sun RPC, causing other subscribers to complain, then I suspect you would see similar attempts to throttle it.
If there is a technical reason, it's mostly that the network as deployed is not sufficient to meet user demands. Instead of providing more resources, lack of funds may force some operators to discriminate against certain traffic classes. In such a scenario, it doesn't even matter much that the targeted traffic class transports content of questionable legality. It's more important that the measures applied to it have actual impact (Amdahl's law dictates that you target popular traffic), and that you can get away with it (this is where the legality comes into play).
Sandvine, Packeteer, etc. boxes aren't cheap either. The problem is that giving P2P more resources just means P2P consumes more resources; it doesn't solve the problem of sharing those resources with other users. Increasing network resources only makes sense if P2P shares those resources well with other applications.
On Sun, 21 Oct 2007, Sean Donelan wrote:
Sandvine, Packeteer, etc. boxes aren't cheap either. The problem is that giving P2P more resources just means P2P consumes more resources; it doesn't solve the problem of sharing those resources with other users. Increasing network resources only makes sense if P2P shares those resources well with other applications.
If your network cannot handle the traffic, don't offer the services. It all boils down to the fact that the only thing that end users really have to give us as ISPs is their source address (which we usually assign to them), the destination address of the packet they want transported, and we can implicitly look at the size of the packet and get that information. That's the ONLY thing they have to give us. Forget looking at L4 or the like; that will be encrypted as soon as ISPs start to discriminate on it. Users have enough computing power available to encrypt everything. So any device that looks inside packets to decide what to do with them is going to fail in the long run and is thus a stop-gap measure before you can figure out anything better. The next step for these devices is to start doing statistical analysis of traffic to find patterns, such as "you're sending traffic to hundreds of different IPs simultaneously, you must be filesharing" or the like. A lot of the box manufacturers are already looking into this. So, trench warfare again; I can see countermeasures to this as well. The long-term solution is of course to make sure that you can handle the traffic that the customer wants to send (because that's what they can control), perhaps by charging for it under some scheme that doesn't involve a flat fee. Saying "p2p doesn't play nice with the rest of the network" and blaming p2p only means you're congesting due to insufficient resources, and that p2p uses a lot of simultaneous TCP sessions: individually they play nice, but together they don't when compared to web surfing. The solution is not to try to change p2p; the solution is to fix the network or the business model so your network is not congesting. -- Mikael Abrahamsson email: swmike@swm.pp.se
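To make the kind of statistical analysis Mikael describes a bit more concrete, here is a minimal sketch of a fan-out heuristic over flow records; the record format, threshold, and function names are invented for illustration and are not taken from any vendor's box:

```python
from collections import defaultdict

# Hypothetical flow records: (source_ip, destination_ip) tuples gathered over
# some measurement window, e.g. exported via NetFlow/IPFIX.
FANOUT_THRESHOLD = 200  # arbitrary example value, not an operational recommendation

def suspected_filesharers(flows):
    """Return source IPs talking to an unusually large number of distinct peers."""
    peers_per_src = defaultdict(set)
    for src, dst in flows:
        peers_per_src[src].add(dst)
    return {src for src, dsts in peers_per_src.items() if len(dsts) >= FANOUT_THRESHOLD}

# Example:
#   flows = [("10.0.0.5", "192.0.2.1"), ("10.0.0.5", "192.0.2.2"), ...]
#   print(suspected_filesharers(flows))
```

As Mikael notes, a check like this is exactly the sort of thing that invites countermeasures, so at best it is another round of the trench war.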
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved? A better idea might be for the application protocol designers to improve those particular applications. In the meantime, universities, enterprises and ISPs have a lot of other users to serve.
On Sun, 21 Oct 2007, Sean Donelan wrote:
So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?
They should stop offering flat-rate ones anyway. Or do general per-user ratelimiting that is protocol/application agnostic. There are many ways to solve the problem generally instead of per application that will also work 10 years from now, when the next couple of killer apps have arrived and passed away again.
A better idea might be for the application protocol designers to improve those particular applications.
Good luck with that. -- Mikael Abrahamsson email: swmike@swm.pp.se
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?
They should stop offering flat-rate ones anyway.
Comcast's management has publicly stated anyone who doesn't like the network management controls on its flat rate service can upgrade to Comcast's business class service. Problem solved? Or would some P2P folks complain about having to pay more money?
Or do general per-user ratelimiting that is protocol/application agnostic.
As I mentioned previously regarding the issues with additional in-line devices in networks, imposing per-user network management and billing is a much more complicated task. If only a few protocol/applications are causing a problem, why do you need an overly complex response? Why not target the few things that are causing problems?
A better idea might be for the application protocol designers to improve those particular applications.
Good luck with that.
It took a while, but it worked with the UDP audio/video protocol folks who used to stress networks. Eventually those protocol designers learned to control their applications and make them play nicely on the network.
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?
They should stop offering flat-rate ones anyway.
Comcast's management has publicly stated anyone who doesn't like the network management controls on its flat rate service can upgrade to Comcast's business class service.
Problem solved?
Assuming a "business class" service that's reasonably priced and featured? Absolutely. I'm not sure I've seen that to be the case, however. Last time I checked with a local cable company for T1-like service, they wanted something like $800/mo, which was about $300-$400/mo more than several of the CLEC's. However, that was awhile ago, and it isn't clear that the service offerings would be the same. I don't class cable service as being as reliable as a T1, however. We've witnessed that the cable network fails shortly after any regional power outage here, and it has somewhat regular burps in the service anyways. I'll note that I can get unlimited business-class DSL (2M/512k ADSL) for about $60/mo (24m), and that was explicitly spelled out to be unlimited- use as part of the RFP. By way of comparison, our local residential RR service is now 8M/512k for about $45/mo (as of just a month or two ago). I think I'd have to conclude that I'd certainly see a premium above and beyond the cost of a residential plan to be reasonable, but I don't expect it to be many multiples of the resi service price, given that DSL plans will promise the bandwidth at just a slightly higher cost.
Or would some P2P folks complain about having to pay more money?
Of course they will.
Or do general per-user ratelimiting that is protocol/application agnostic.
As I mentioned previously regarding the issues with additional in-line devices in networks, imposing per-user network management and billing is a much more complicated task.
If only a few protocol/applications are causing a problem, why do you need an overly complex response? Why not target the few things that are causing problems?
Well, because when you promise someone an Internet connection, they usually expect it to work. Is it reasonable for Comcast to unilaterally decide that my P2P filesharing of my family photos and video clips is bad?
A better idea might be for the application protocol designers to improve those particular applications.
Good luck with that.
It took a while, but it worked with the UDP audio/video protocol folks who used to stress networks. Eventually those protocol designers learned to control their applications and make them play nicely on the network.
:-) ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Sun, 21 Oct 2007, Joe Greco wrote:
If only a few protocol/applications are causing a problem, why do you need an overly complex response? Why not target the few things that are causing problems?
Well, because when you promise someone an Internet connection, they usually expect it to work. Is it reasonable for Comcast to unilaterally decide that my P2P filesharing of my family photos and video clips is bad?
So what about the other 490 people on the node expecting it to work? Do you tell them sorry, but 10 of your neighbors are using badly behaved applications so everything you are trying to use it for is having problems. Maybe Comcast should just tell the other 490 neighbors the 10 names and addresses of poorly behaved P2P users and let the neighborhood solve the problem. Is it reasonable for your filesharing of your family photos and video clips to cause problems for all the other users of the network? Is that fair or just greedy?
Sean Donelan wrote:
So what about the other 490 people on the node expecting it to work? Do you tell them sorry, but 10 of your neighbors are using badly behaved applications so everything you are trying to use it for is having problems.
Maybe Comcast should fix their broken network architecture if 10 users sending their own data using TCP (or something else with TCP-like congestion control) can break the 490 other people on a node. Or get on their vendor to fix it, if they can't. If that means traffic shaping at the CPE or very near the customer, then perhaps that's what it means, but installing a 3rd-party box that sniffs away and then sends forged RSTs in order to live up to its advertised claims is clearly at the "wrong" end of the spectrum of possible solutions.
Maybe Comcast should just tell the other 490 neighbors the 10 names and addresses of poorly behaved P2P users and let the neighborhood solve the problem.
Maybe Comcast's behavior will cause all 500 neighbors to find an ISP that isn't broken. We can only hope.
Is it reasonable for your filesharing of your family photos and video clips to cause problems for all the other users of the network? Is that fair or just greedy?
It isn't fair or greedy, it is a bug that it does so. Greedy would be if you were using a non-congestion-controlled protocol like most naive RTP-based VoIP apps do. Matthew Kaufman matthew@eeph.com
Matthew Kaufman wrote:
Maybe Comcast should fix their broken network architecture if 10 users sending their own data using TCP (or something else with TCP-like congestion control) can break the 490 other people on a node.
That's somewhat like saying you should fix your debt problem by acquiring more money. Clearly there are things that need to be improved in broadband networks as a whole, but the path to that solution isn't nearly as simple as you make it sound.
Or get on their vendor to fix it, if they can't.
They have. Enter DOCSIS 3.0. The problem is that the benefits of DOCSIS 3.0 will only come after they've allocated more frequency space, upgraded their CMTS hardware, upgraded their HFC node hardware where necessary, and replaced subscriber modems with DOCSIS 3.0 capable versions. On an optimistic timeline that's at least 18-24 months before things are going to be better; the problem is things are broken _today_.
If that means traffic shaping at the CPE or very near the customer, then perhaps that's what it means, but installing a 3rd-party box that sniffs away and then sends forged RSTs in order to live up to its advertised claims is clearly at the "wrong" end of the spectrum of possible solutions.
On a philosophical level I would agree with you, but we also live in a world of compromise. Sure, Comcast could drop their upstream sync rate to 64kbps, but why should they punish everyone on the node for the actions of a few? From the perspective of practical network engineering, as long as the impact can be contained to just seeding activities from P2P applications, I don't think injected resets are as evil as people make them out to be. You don't see people getting up in arms about the spoofed TCP ACKs that satellite internet providers use to overcome high-latency effects on TCP transfer rates. In both cases the ISP is generating traffic on your behalf; the only difference is the outcome. In Comcast's case I believe the net effect of their solution is the same: by limiting the number of seeding connections they are essentially rate limiting P2P traffic. It just happens that reset injection is by far the easiest option to implement.
Maybe Comcast's behavior will cause all 500 neighbors to find an ISP that isn't broken. We can only hope.
Broken is a relative term. If Comcast's behavior causes their heavy P2P users to find another ISP then those who remain will not have broken service. For $40/mo you can't expect the service to be all things to all people, and given the shared nature of the service I find little moral disagreement with a utilitarian approach to network management. -Eric
On Sun, 21 Oct 2007, Eric Spaeth wrote:
They have. Enter DOCSIS 3.0. The problem is that the benefits of DOCSIS 3.0 will only come after they've allocated more frequency space, upgraded their CMTS hardware, upgraded their HFC node hardware where necessary, and replaced subscriber modems with DOCSIS 3.0 capable versions. On an optimistic timeline that's at least 18-24 months before things are going to be better; the problem is things are broken _today_.
Could someone who knows DOCSIS 3.0 (perhaps these are general DOCSIS questions) enlighten me (and others?) by responding to a few things I have been thinking about. Let's say a cable provider is worried about aggregate upstream capacity for each HFC node that might have a few hundred users. Do the modems support schemes such as "everybody is guaranteed 128 kilobit/s; if there is anything to spare, people can use it, but it's marked differently in IP PRECEDENCE and treated accordingly at the HFC node", and then carry that into the IP aggregation layer, where packets could also be treated differently depending on IP PREC? This is in my mind a much better scheme (guarantee subscribers a certain percentage of their total upstream capacity, mark their packets differently if they burst above this), as this is general and not protocol specific. It could of course also differentiate on packet sizes and a lot of other factors. The bad part is that it gives the user an incentive to "hack" their CPE to allow them to send their traffic at higher speed with high priority, thus hurting their neighbors. -- Mikael Abrahamsson email: swmike@swm.pp.se
With PCMM (PacketCable Multimedia, http://www.cedmagazine.com/out-of-the-lab-into-the-wild.aspx) support it's possible to dynamically adjust service flows, as has been done with Comcast's "Powerboost". There also appears to be support for flow prioritization. Regards, Frank
On 10/22/07 2:01 AM, "Mikael Abrahamsson" <swmike@swm.pp.se> wrote:
Could someone who knows DOCSIS 3.0 (perhaps these are general DOCSIS questions) enlighten me (and others?) by responding to a few things I have been thinking about.
Let's say a cable provider is worried about aggregate upstream capacity for each HFC node that might have a few hundred users. Do the modems support schemes such as "everybody is guaranteed 128 kilobit/s; if there is anything to spare, people can use it, but it's marked differently in IP PRECEDENCE and treated accordingly at the HFC node", and then carry that into the IP aggregation layer, where packets could also be treated differently depending on IP PREC?
This is in my mind a much better scheme (guarantee subscribers a certain percentage of their total upstream capacity, mark their packets differently if they burst above this), as this is general and not protocol specific. It could of course also differentiate on packet sizes and a lot of other factors. The bad part is that it gives the user an incentive to "hack" their CPE to allow them to send their traffic at higher speed with high priority, thus hurting their neighbors.
Yes, as part of the DOCSIS specification (waiting for D3.0 is not required); however, implementations vary on the CMTS end of the equation. Having this capability ubiquitously on the CMTS equipment simplifies the problem space greatly (plus removes that hacked-CPE risk). -ron
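As a rough illustration of the guarantee-then-remark scheme Mikael describes (independent of how DOCSIS or PCMM would actually express it), here is a sketch of the per-subscriber marking logic; the rate, burst size, and precedence values are arbitrary examples:

```python
import time

GUARANTEED_BPS = 128_000   # per-subscriber guarantee from the example above
IN_CONTRACT_PREC = 0b010   # example IP precedence for in-contract traffic
EXCESS_PREC = 0b000        # traffic above the guarantee is remarked, not dropped

class SubscriberMarker:
    """Token bucket per subscriber: packets within the guaranteed rate keep a
    higher precedence; excess packets are remarked so the HFC node and the IP
    aggregation layer can deprioritize them under congestion."""

    def __init__(self, rate_bps=GUARANTEED_BPS, burst_bytes=16_000):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def mark(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return IN_CONTRACT_PREC     # within the guarantee
        return EXCESS_PREC              # bursting above it
```

The same logic could live in the CMTS rather than the CPE, which, as Ron notes, also removes the incentive to hack the modem.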
Is it reasonable for your filesharing of your family photos and video clips to cause problems for all the other users of the network? Is that fair or just greedy?
It's damn well fair, is what it is. Is it somehow better for me to go and e-mail the photos and movies around? What if I really don't want to involve the ISP's servers, because they've proven to be unreliable, or I don't want them capturing backup copies, or whatever? My choice of technology for distributing my pictures, in this case, would probably result in *lower* overall bandwidth consumption by the ISP, since some bandwidth might be offloaded to Uncle Fred in Topeka, and Grandma Jones in Detroit, and Brother Tom in Florida who happens to live on a much higher capacity service. If filesharing my family photos with friends and family is sufficient to cause my ISP to buckle, there's something very wrong. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Joe Greco wrote:
Well, because when you promise someone an Internet connection, they usually expect it to work. Is it reasonable for Comcast to unilaterally decide that my P2P filesharing of my family photos and video clips is bad?
Comcast is currently providing 1GB of web hosting space per e-mail address associated with each account; one could argue that's a significantly more efficient method of distributing that type of content and it still doesn't cost you anything extra. The use case you describe isn't the problem though, it's the gluttonous "kid in the candy store" reaction that people tend to have when they're presented with all of the content available via P2P networks. This type of behavior has been around forever, be it in people tagging up thousands of Usenet articles, or setting themselves up on several DCC queues on IRC. Certainly innovations like newsreaders capable of using NZB files have made retrieval of content easier on Usenet, but nothing has lowered the barrier to content access more than P2P software. It's to the point now where people will download anything and everything via P2P whether they want it or not. For the AP article they were attempting to seed the Project Gutenberg version of the King James Bible -- a work that is readily available with a 3-second Google search and a clicked hyperlink straight to the eBook. Even with that being the case, the folks doing the testing still saw connection attempts against their machine to retrieve the content. Much of this is due to a disturbing trend in users subscribing to RSS feeds for new torrent content, with clients automatically joining in the distribution of any new content presented to the tracker regardless of what it is. Again, flat-rate pricing does little to discourage this type of behavior. -Eric
Joe Greco wrote:
Well, because when you promise someone an Internet connection, they usually expect it to work. Is it reasonable for Comcast to unilaterally decide that my P2P filesharing of my family photos and video clips is bad?
Comcast is currently providing 1GB of web hosting space per e-mail address associated with each account; one could argue that's a significantly more efficient method of distributing that type of content and it still doesn't cost you anything extra.
Wow, that's incredibly ...small. I've easily got ten times that online with just one class of photos. There's a lot of benefit to just letting people yank stuff right off the old hard drive. (I don't /actually/ use P2P for sharing photos, we have a ton of webserver space for it, but I know people who do use P2P for it)
The use case you describe isn't the problem though,
Of course it's not, but the point I'm making is that they're using a shotgun to solve the problem. [major snip]
Again, flat-rate pricing does little to discourage this type of behavior.
I certainly agree with that. Despite that, the way that Comcast has reportedly chosen to deal with this is problematic, because it means that they're not really providing true full Internet access. I don't expect an ISP to actually forge packets when I'm attempting to communicate with some third party. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
At 01:59 PM 10/21/2007, Sean Donelan wrote:
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?
They should stop offering flat-rate ones anyway.
Comcast's management has publicly stated anyone who doesn't like the network management controls on its flat rate service can upgrade to Comcast's business class service.
I have Comcast business service in my office, and residential service at home. I use CentOS for some stuff, and so tried to pull a set of ISOs over BitTorrent. The first few came through OK; now I can't get BitTorrent to do much of anything. I made the files I obtained available for others, but noted the streams quickly stop. This is on my office (business) service, served over cable. It's promised as 6Mbps/768K and costs $100/month. I can (and will) solve this by just setting up a machine in my data center for the purpose of running BT, and shape the traffic so it only gets a couple of Mbps (then pull the files over VPN to my office). But no, their business service is being stomped in the same fashion. So if they did say somewhere (and I haven't seen such a statement) that their business service is not affected by their efforts to squash BitTorrent, then it appears they're not being truthful.
Problem solved?
Or would some P2P folks complain about having to pay more money?
Or do general per-user ratelimiting that is protocol/application agnostic.
As I mentioned previously regarding the issues with additional in-line devices in networks, imposing per-user network management and billing is a much more complicated task.
If only a few protocol/applications are causing a problem, why do you need an overly complex response? Why not target the few things that are causing problems?
Ask the same question about the spam problem. We spend plenty of dollars and manpower to filter out an ever-increasing volume of noise. The actual traffic rate of desired email to and from our customers has not appreciably changed (typical emails per customer per day) in several years.
A better idea might be for the application protocol designers to improve those particular applications.
Good luck with that.
It took a while, but it worked with the UDP audio/video protocol folks who used to stress networks. Eventually those protocol designers learned to control their applications and make them play nicely on the network.
If BitTorrent and similar applications care to improve their image, they'll need to work with others to ensure they respect networks and don't flatten them. Otherwise, this will become yet another arms race (as if it hasn't already) between ISPs and questionable use.
On Sun, 2007-10-21 at 17:10 -0400, Daniel Senie wrote:
I have Comcast business service in my office, and residential service at home. I use CentOS for some stuff, and so tried to pull a set of ISOs over BitTorrent. First few came through OK, now I can't get BitTorrent to do much of anything. I made the files I obtained available for others, but noted the streams quickly stop.
I have Comcast residential service and I've been pulling down torrents all weekend (Ubuntu v7.10, etc.), with no problems. I don't think that Comcast is blocking torrent downloads, I think they are blocking a zillion Comcast customers from serving torrents to the rest of the world. It's a network operations thing... why should Comcast provide a fat pipe for the rest of the world to benefit from? Just my $.02. -Jim P.
It's a network operations thing... why should Comcast provide a fat pipe for the rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe? --Michael Dillon
On 10/22/07, michael.dillon@bt.com <michael.dillon@bt.com> wrote:
It's a network operations thing... why should Comcast provide a fat pipe for the rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
You are correct, customers pay Comcast to provide a fat pipe for THEIR use (MSOs typically understand this as eyeball-heavy content retrieval, not content generation). They do not provide that pipe for somebody on another network to use, I mean abuse. Comcast's SLA is with their user, not the remote user. Also, it's a long-standing policy on most "broadband" type networks that they do not support user-offered services, which this clearly falls into. charles
It's a network operations thing... why should Comcast provide a fat pipe for the rest of the world to benefit from? Just my $.02.
Because their customers PAY them to provide that fat pipe?
You are correct, customers pay Comcast to provide a fat pipe for THEIR use (MSOs typically understand this as eyeball-heavy content retrieval, not content generation). They do not provide that pipe for somebody on another network to use, I mean abuse. Comcast's SLA is with their user, not the remote user.
Comcast is cutting off their user's communication session with a remote user. Since every session on a network involves communications between two customers, only one of whom is usually local, this is the same as randomly killing http sessions or IM sessions or disconnecting voice calls.
Also, it's a long-standing policy on most "broadband" type networks that they do not support user-offered services, which this clearly falls into.
I agree that there is a big truth-in-advertising problem here. Cable providers claim to offer Internet access but instead only deliver a Chinese version of the Internet. If you are not sure why I used the term "Chinese", you should do some research on the Great Firewall of China. Ever since the beginning of the commercial Internet, the killer application has been the same. End users want to communicate with other end users. That is what motivates them to pay a monthly fee to an ISP. Any operational measure that interferes with communication is ultimately non-profitable. Currently, it seems that traffic shaping is the least invasive way of limiting the negative impacts. There clearly is demand for P2P file transfer services and there are hundreds of protocols and protocol variations available to do this. We just need to find the right way that meets the needs of both ISPs and end users. To begin with, it helps if ISPs document the technical reasons why P2P protocols impact their networks negatively. Not all networks are built the same. --Michael Dillon
* Sean Donelan:
On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.
So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?
I think a lot of companies implement OOB controls to curb P2P traffic, and those controls remain in place even without congestion on the network. It's like making sure that nobody uses the company plotter to print posters. In my experience, a permanently congested network isn't fun to work with, even if most of the flows are long-living and TCP-compatible. The lack of proper congestion control is kind of a red herring, IMHO.
On Sun, 21 Oct 2007, Florian Weimer wrote:
In my experience, a permanently congested network isn't fun to work with, even if most of the flows are long-living and TCP-compatible. The lack of proper congestion control is kind of a red herring, IMHO.
Why do you think so many network operators of all types are implementing controls on that traffic? http://www.azureuswiki.com/index.php/Bad_ISPs It's not just the greedy commercial ISPs, it's also universities, non-profits, government, co-op, etc. networks. It doesn't seem to matter if the network has 100Mbps user connections or 128Kbps user connections, they all seem to be having problems with these particular applications.
On Sun, 21 Oct 2007, Sean Donelan wrote:
It's not just the greedy commercial ISPs, it's also universities, non-profits, government, co-op, etc. networks. It doesn't seem to matter if the network has 100Mbps user connections or 128Kbps user connections, they all seem to be having problems with these particular applications.
I'm going to call bullshit here. The problem is that the customers are using too much traffic for what is provisioned. If those same customers were doing the same amount of traffic via NNTP, HTTP or FTP downloads then you would still be seeing the same problem and whining as much [1]. In this part of the world we learnt (the hard way) that your income has to match your costs for bandwidth. A percentage [2] of your customers are *always* going to move as much traffic as they can on a 24x7 basis. If you are losing money or your network is not up to that then you are doing something wrong; it is *your fault* for not building your network and pricing it correctly. Napster was launched 8 years ago so you can't claim this is a new thing. So stop whinging about how BitTorrent broke your happy Internet. Stop putting in traffic shaping boxes that break TCP and then complaining that p2p programmes don't follow the specs, and adjust your pricing and service to match your costs. [1] See "SSL and ISP traffic shaping?" at http://www.usenet.com/ssl.htm [2] That percentage is always at least 10%. If you are launching a new "flat rate, uncapped" service at a reasonable price it might be closer to 80%. -- Simon J. Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT.
On Mon, Oct 22, 2007, Simon Lyall wrote:
So stop whinging about how BitTorrent broke your happy Internet. Stop putting in traffic shaping boxes that break TCP and then complaining that p2p programmes don't follow the specs, and adjust your pricing and service to match your costs.
So which ISPs have contributed towards more intelligent p2p content routing and distribution; stuff which'd play better with their networks? Or are you all busy being purely reactive? Surely one ISP out there has to have investigated ways that p2p could co-exist with their network.. Adrian
On Mon, Oct 22, 2007 at 08:08:47AM +0800, Adrian Chadd wrote: [snip]
So which ISPs have contributed towards more intelligent p2p content routing and distribution; stuff which'd play better with their networks? Or are you all busy being purely reactive?
A quick google search found the one I spotted last time I was looking around http://he.net/faq/bittorrent.html ...and last time I talked to any HE folks, they didn't get much uptick for the service. -- RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
Surely one ISP out there has to have investigated ways that p2p could co-exist with their network..
Some ideas from one small ISP. First, fileshare networks drive the need for bandwidth, and since an ISP sells bandwidth that should be viewed as good for business because you aren't going to sell many 6mb dsl lines to home users if they just want to do email and browse. Second, the more people on your network running fileshare network software and sharing, the less backbone bandwidth your users are going to use when downloading from a fileshare network because those on your network are going to supply full bandwidth to them. This means that while your internal network may see the traffic your expensive backbone connections won't (at least for the download). Blocking the uploading is a stupid idea because now all downloading has to come across your backbone connection. Uploads from your users are good, this is the traffic that everyone looks for when looking for peering partners. Ok now all that said, the users are going to do what they are going to do. If it takes them 20 minutes or 3 days to download a file they are still going to download that file. So it's like the way people thought back in the old dialup days when everyone said you can't build megabit pipes on the last mile because the network won't support it. People download what they want then the bandwidth sits idle. Nothing you do is going to stop them from using the internet as they see fit so either they get it fast or they get it slow but the bandwidth usage is still going to be there and as an ISP your job is to make sure supply meets demand. If you expect them to pay for 6mb pipes, they better see it run faster than it does on a 1.5mb pipe or they are going to head to your competition. Geo. George Roettger Netlink Services
Surely one ISP out there has to have investigated ways that p2p could co-exist with their network..
Some ideas from one small ISP.
First, fileshare networks drive the need for bandwidth, and since an ISP sells bandwidth that should be viewed as good for business because you aren't going to sell many 6mb dsl lines to home users if they just want to do email and browse.
One of the things to remember is that many customers are simply looking for Internet access, but couldn't tell a megabit from a mackerel. Given that they don't really have any true concept, many users will look at the numbers, just as they look at numbers for other things they purchase, and they'll assume that the one with better numbers is a better product. It's kind of hard to test drive an Internet connection, anyways. This has often given cable here in the US a bit of an advantage, and I've noticed that the general practice of cable providers is to try to maintain a set of numbers that's more attractive than those you typically land with DSL. [snip a bunch of stuff that sounds good in theory, may not map in practice]
If you expect them to pay for 6mb pipes, they better see it run faster than it does on a 1.5mb pipe or they are going to head to your competition.
A small number of them, perhaps. Here's an interesting issue. I recently learned that the local RR affiliate has changed its service offerings. They now offer 7M/512k resi for $45/mo, or 14M/1M for $50/mo (or thereabouts, prices not exact). Now, does anybody really think that the additional capacity that they're offering for just a few bucks more is real, or are they just playing the numbers for advertising purposes? I have no doubt that you'll be able to burst higher, but I'm a bit skeptical about continuous use. Noticed about two months ago that AT&T started putting kiosks for U-verse at local malls and movie theatres. Coincidence? ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
[note that this post also relates to the thread Re: Comcast blocking p2p uploads] While both discussions started out as operational, most of the mail traffic is about things that are not very much related to technology or operations. To clarify, things like these are on-topic:
* Whether p2p protocols are "well-behaved", and how we can help make them behave.
* Filtering "non-behaving" applications, whether these are worms or p2p applications.
* Helping p2p authors write protocols that are topology- and congestion-aware.
These are on-topic, but all arguments for and against have already been made. Unless you have something new and insightful to say, please avoid continuing conversations about these subjects:
* ISPs should[n't] have enough capacity to accommodate any application, no matter how well or badly behaved
* ISPs should[n't] charge per byte
* ISPs should[n't] have bandwidth caps
* Legality of blocking and filtering
These are clearly off-topic:
* End-user comments about their particular MSO/ISP, pricing, etc.
* Morality of blocking and filtering
As a guideline, if you can expect a presentation at a nanog conference about something, it belongs on the list. If you can't, it doesn't. It is a clear distinction. In addition, keep in mind that this is the "network operators" mailing list, *not* the end-user mailing list. Marty Hannigan (MLC member) already made a post on the "Comcast blocking p2p uploads" thread asking to stick to the operational content (vs. politics and morality of blocking p2p applications), but people still continue to make non-technical comments. Accordingly, to increase signal/noise (as applied to network operations), MLC (that's us, the team who moderate this mailing list) won't hesitate to warn posters who ignore the limits set by the AUP and the guidance set up by MLC. If you want to discuss this moderation request, please do so on nanog-futures. -alex [mlc chair]
actually, it would be really helpful to the masses of us who are being liberal with our delete keys if someone would summarize the two threads, comcast p2p management and 240/4. randy
On Mon, 22 Oct 2007, Randy Bush wrote:
actually, it would be really helpful to the masses of us who are being liberal with our delete keys if someone would summarize the two threads, comcast p2p management and 240/4.
240/4 has been summarized before: Look for email with "MLC Note" in subject. However, in future, MLC emails will contain "[admin]" in the subject.
Interestingly, the content for the p2p threads boils down to:
a) Original post by Sean Donelan: Allegation that p2p software "does not play well" with the rest of the network users - unlike TCP-based protocols, which result in more or less fair bandwidth allocation, p2p software will monopolize upstream or downstream bandwidth unfairly, resulting in attempts by network operators to control such traffic. Followup by Steve Bellovin noting that if p2p software (like BT) uses TCP-based protocols, then due to the use of multiple TCP streams, fairness is achieved *between* BT clients, while being unfair to the rest of the network. No relevant discussion of this subject has commenced, which is troubling, as it is, without doubt, very important for network operations.
b) Discussion started by Adrian Chadd about whether p2p software is aware of network topology or congestion - without apparent answer, which leads me to guess that the answer is "no".
c) Offtopic whining about filtering liability, MSO pricing, fairness, equality, end-user complaints about MSOs, filesharing of family photos, disk space provided by MSOs for web hosting.
Note: if you find yourself to have posted something that was tossed into category c) - please reconsider your posting habits. As usual, I apologise if I skipped over your post in this summary. -alex
One of the things to remember is that many customers are simply looking for Internet access, but couldn't tell a megabit from a mackerel.
That may have been true 5 years ago; it's not true today. People learn.
Here's an interesting issue. I recently learned that the local RR affiliate has changed its service offerings. They now offer 7M/512k resi for $45/mo, or 14M/1M for $50/mo (or thereabouts, prices not exact).
Now, does anybody really think that the additional capacity that they're offering for just a few bucks more is real, or are they just playing the numbers for advertising purposes?
Windstream offers 6m/384k for $29.95 and 6m/768k for $100; does that answer your question? What is Comcast's upspeed? Is it this low, or is Comcast's real problem that they offer 1m or more of upspeed for too cheap a price? Hmmm.. perhaps it's not the customers who don't know a megabit from a mackerel, but instead perhaps it's Comcast who thinks customers are stupid, and as a result they've ended up with the people who want upspeed? Geo. George Roettger Netlink Services
On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
Second, the more people on your network running fileshare network software and sharing, the less backbone bandwidth your users are going to use when downloading from a fileshare network because those on your network are going to supply full bandwidth to them.
Hmmmm... me wonders how you know this for fact? Last time I took the time to snoop a running torrent, I didn't get the impression it was pulling packets from the same country as I, let alone my network neighbors. -Jim P.
Jim Popovitch wrote:
On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
Second, the more people on your network running fileshare network software and sharing, the less backbone bandwidth your users are going to use when downloading from a fileshare network because those on your network are going to supply full bandwidth to them.
Hmmmm... me wonders how you know this for fact? Last time I took the time to snoop a running torrent, I didn't get the impression it was pulling packets from the same country as I, let alone my network neighbors.
-Jim P.
http://www.bittorrent.org/protocol.html The peer selection algorithm is based on which peers have the blocks, and their willingness to serve them. You will note that peers that allow you to download from them are treated preferentially as far as uploads relative to those which do not (which is a problem from the perspective of Comcast customers). It's unclear to me from the outset how many peers for a given torrent would be required before one could place a preference on topological locality over availability of blocks and willingness to serve. The principal motivator here is, after all, displacing the costs of downloads onto a cooperative set of peers where it's assumed to be a marginal incremental cost. Reciprocity is a plausible basis for a social contract, or at least that's what I learned in Montessori school.
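For readers who haven't looked at the protocol document, here is a simplified sketch of the reciprocity-based selection joel describes (roughly BitTorrent's choke/unchoke logic), with a hypothetical on-net tie-breaker added to show where topological locality could slot in; the peer fields are invented for the example:

```python
import random

def choose_unchoked(peers, slots=4):
    """peers: list of dicts like {"id": ..., "rate_from_peer": bytes_per_s,
    "interested": bool, "on_net": bool} (fields invented for this sketch).
    Reciprocity first: unchoke the interested peers we download fastest from,
    breaking ties in favour of on-net peers, and keep one slot for a random
    optimistic unchoke so new peers get a chance to prove themselves."""
    interested = [p for p in peers if p["interested"]]
    ranked = sorted(interested,
                    key=lambda p: (p["rate_from_peer"], p["on_net"]),
                    reverse=True)
    unchoked = ranked[:slots - 1]
    leftovers = [p for p in interested if p not in unchoked]
    if leftovers:
        unchoked.append(random.choice(leftovers))  # optimistic unchoke
    return unchoked
```

Because the primary sort key is observed download rate, locality only wins when two peers perform identically, which is joel's point about how many peers it takes before locality can matter at all.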
Hmmmm... me wonders how you know this for fact? Last time I took the time to snoop a running torrent, I didn't get the impression it was pulling packets from the same country as I, let alone my network neighbors.
That would be totally dependent on what tracker you use. Geo.
On Sun, Oct 21, 2007 at 10:45:49PM -0400, Geo. wrote: [snip]
Second, the more people on your network running fileshare network software and sharing, the less backbone bandwidth your users are going to use when downloading from a fileshare network because those on your network are going to supply full bandwidth to them. This means that while your internal network may see the traffic your expensive backbone connections won't (at least for the download). Blocking the uploading is a stupid idea because now all downloading has to come across your backbone connection.
As stated in several previous threads on the topic, the clump of p2p protocols in themselves do not provide any topology or locality awareness. At least some of the policing middleboxes have worked with network operators to address the need and bring topology-awareness into various p2p clouds by eating a BGP feed to redirect traffic on-net (or to non-transit, or same region, or latency class or ...) when possible. Of course the on-net has less long-haul costs, but the last-mile node congestion is killer; at least lower-latency on-net to on-net transfers should complete quickly if the network isn't completely hosed. One then can create a token scheme for all the remaining traffic and prioritize, say, the customers actually downloading over those seeding from scratch. -- RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
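A toy version of the on-net/off-net split such a middlebox (or a topology-aware client) might apply, assuming the operator has already boiled its BGP feed down to a list of customer-facing prefixes; the prefixes here are documentation examples, and a real implementation would use a radix tree rather than a linear scan:

```python
import ipaddress

# Hypothetical customer-facing prefixes distilled from the operator's own
# routing data (documentation prefixes used here).
ON_NET_PREFIXES = [ipaddress.ip_network(p)
                   for p in ("198.51.100.0/24", "203.0.113.0/24")]

def classify_peer(ip_str):
    """Label a candidate peer so a cache or policer can prefer on-net transfers."""
    ip = ipaddress.ip_address(ip_str)
    return "on-net" if any(ip in net for net in ON_NET_PREFIXES) else "off-net"

# Example: hand out connection slots to on-net candidates first.
#   candidates.sort(key=lambda ip: classify_peer(ip) != "on-net")
```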
So which ISPs have contributed towards more intelligent p2p content routing and distribution; stuff which'd play better with their networks? Or are you all busy being purely reactive?
Surely one ISP out there has to have investigated ways that p2p could co-exist with their network..
I can imagine a middlebox that would interrupt multiple flows of the same file, shut off all but one, and then masquerade as the source of the other flows so that everyone still gets their file. If P2P protocols were more transparent, i.e. not port-hopping, this kind of thing would be easier to implement. This would make a good graduate research project, I would imagine. --Michael Dillon
* Adrian Chadd:
So which ISPs have contributed towards more intelligent p2p content routing and distribution; stuff which'd play better with their networks?
Perhaps Internet2, with its DC++ hubs? 8-P I think the problem is that better "routing" (BitTorrent content is *not* routed by the protocol AFAIK) inevitably requires software changes. For BitTorrent, you could do something on the tracker side: you serve peer lists which mostly contain nodes that are topologically close to the requesting IP address. The clients could remain unchanged. (If there's some kind of AS database, you could even mark some nodes as local, so that they only get advertised to nodes within the same AS.) However, there's little incentive for others to use your tracker software. What's worse, it's even less convenient to use because it would need a BGP feed. It's not even obvious whether this is going to fix problems. If upload-related congestion on the shared media to the customer is the issue (could be, I don't know), it's unlikely to help to prefer local nodes. It could make things even worse, because customers in one area are somewhat likely to be interested in the same data at the same time (for instance, after watching a movie trailer on local TV).
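A minimal sketch of that tracker-side idea, assuming an IP-to-ASN lookup already exists somewhere (e.g. built from a routing table dump); the split ratio and function names are invented, and a real tracker would need to cap how hard it biases so small swarms don't partition:

```python
import random

def announce_peers(requester_asn, known_peers, want=50, local_share=0.8):
    """known_peers: list of (ip, port, asn) tuples the tracker has registered.
    Return a peer list biased toward the requester's AS, padded with random
    remote peers so the swarm never fully partitions."""
    local = [p for p in known_peers if p[2] == requester_asn]
    remote = [p for p in known_peers if p[2] != requester_asn]
    n_local = min(len(local), int(want * local_share))
    chosen = random.sample(local, n_local)
    chosen += random.sample(remote, min(len(remote), want - n_local))
    return [(ip, port) for ip, port, _asn in chosen]
```

As Florian notes, keeping clients unchanged is the attraction here; the cost is that only trackers with access to routing data can do it.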
On Mon, 22 Oct 2007, Simon Lyall wrote:
So stop whinging about how BitTorrent broke your happy Internet. Stop putting in traffic shaping boxes that break TCP and then complaining that p2p programmes don't follow the specs, and adjust your pricing and service to match your costs.
Folks in New Zealand seem to also whine about data caps and "fair usage policies"; I doubt changing US pricing and service is going to stop the whining. Those seem to discourage people from donating their bandwidth for P2P applications. Are there really only two extremes? Don't use it and abuse it? Will P2P applications really never learn to play nicely on the network?
On 10/21/07, Sean Donelan <sean@donelan.com> wrote:
On Mon, 22 Oct 2007, Simon Lyall wrote:
So stop whinging about how BitTorrent broke your happy Internet. Stop putting in traffic shaping boxes that break TCP and then complaining that p2p programmes don't follow the specs, and adjust your pricing and service to match your costs.
Folks in New Zealand seem to also whine about data caps and "fair usage policies"; I doubt changing US pricing and service is going to stop the whining.
Those seem to discourage people from donating their bandwidth for P2P applications.
Are there really only two extremes? Don't use it and abuse it? Will P2P applications really never learn to play nicely on the network?
Can last-mile providers play nicely with their customers and not continue to offer "Unlimited" (but we really mean only as much as we say, but we're not going to tell you the limit until you reach it) false advertising? It skews the playing field, as well as ticks off the customer. The P2P applications are already playing nicely. They're only using the bandwidth that has been allocated to the customer. -brandon
On Oct 22, 2007, at 7:50 AM, Sean Donelan wrote:
Will P2P applications really never learn to play nicely on the network?
Here are some more specific questions: Is some of the difficulty perhaps related to the seemingly unconstrained number of potential distribution points in systems of this type, along with 'fairness' issues in terms of bandwidth consumption of each individual node for upload purposes, and are there programmatic ways of altering this behavior in order to reduce the number, severity, and duration of 'hot-spots' in the physical network topology? Is there some mechanism by which these applications could potentially leverage some of the CDNs out there today? Have SPs who've deployed P2P-aware content-caching solutions on their own networks observed any benefits for this class of application? Would it make sense for SPs to determine how many P2P 'heavy-hitters' they could afford to service in a given region of the topology and make a limited number of higher-cost accounts available to those willing to pay for the privilege of participating in these systems? Would moving heavy P2P users over to metered accounts help resolve some of the problems, assuming that even those metered accounts would have some QoS-type constraints in order to ensure they don't consume all available bandwidth? ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice I don't sound like nobody. -- Elvis Presley
Will P2P applications really never learn to play nicely on the network?
So from an operations perspective, how should P2P protocols be designed? It appears that the current solution at the moment is for ISPs to put up barriers to P2P usage (like Comcast's spoofed RSTs), and thus P2P clients are trying harder and harder to hide in order to work around these barriers. Would having a way to proxy p2p downloads via an ISP proxy be used by ISPs and not abused as an additional way to shut down and limit p2p usage? If so, how would clients discover these proxies, or should they be manually configured? Would stronger topological sharing be beneficial? If so, how do you suggest end-user software get access to the information required to make these decisions in an informed manner? Should p2p clients be participating in some kind of weird IGP? Should they participate in BGP? How can the p2p software understand your TE decisions? At the moment p2p clients upload to a limited number of people; every so often they discard the slowest person and choose someone else. This in theory means that they avoid slow/congested paths for faster ones. Another easy metric they can probably get at is RTT; is RTT a good metric of where operators want traffic to flow? p2p clients can also perhaps do similarity matches based on the remote IP and try to choose people with similar IPs; presumably that isn't going to work well for many people, but would it be enough to help significantly? What else should clients be using as metrics for selecting their peers that works in an ISP-friendly manner? If p2p clients started using multicast to stream pieces out to peers, would ISPs make sure that multicast worked (at least within their AS)? Would this save enough bandwidth for ISPs to care? Can enough ISPs make use of multicast, or would it end up with them hauling the same data multiple times across their network anyway? Are there any other obvious ways of getting the bits to the user without them passing needlessly across the ISP's network several times (often in alternating directions)? Should p2p clients set ToS/DSCP/whatever-they're-called-this-week bits to state that these are bulk transfers? Would ISPs use these sensibly, or will they just use these hints to add additional barriers into the network? Should p2p clients avoid TCP entirely because of its "fairness between flows" and try to implement their own congestion control algorithms on top of UDP that attempt to treat all p2p connections as one single "congestion entity"? What happens if this is buggy in the first implementation? Should p2p clients be attempting to mark all their packets as coming from a single application so that ISPs can QoS them as one single entity (e.g. by setting the IPv6 flow ID to the same value for all p2p flows)? What incentive can the ISP provide the end user doing this to keep them from just turning these features off and going back to the current way things are done? Software is easy to fix, and thanks to automatic updates much of the p2p network can see a global improvement very quickly. So what other ideas do operations people have for how these things could be fixed from the p2p software point of view?
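On the ToS/DSCP question specifically, the client-side half is trivial on platforms that expose IP_TOS; whether any ISP honours or abuses the marking is the open part. A sketch, using DSCP CS1 (the common low-priority/"scavenger" code point) purely as an example:

```python
import socket

CS1_TOS = 0x20  # DSCP CS1 (low-priority/bulk) shifted into the IPv4 ToS byte

def bulk_transfer_socket():
    """Open a TCP socket whose packets are marked as background/bulk traffic.
    This only signals intent; whether the network honours or penalises the
    marking is exactly the question raised above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CS1_TOS)
    return s
```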
On Tue, Oct 23, 2007, Perry Lorier wrote:
Would having a way to proxy p2p downloads via an ISP proxy be used by ISPs and not abused as an additional way to shut down and limit p2p usage? If so, how would clients discover these proxies, or should they be manually configured?
http://www.azureuswiki.com/index.php/ProxySupport http://www.azureuswiki.com/index.php/JPC Although JPC is now marked "Discontinued due to lack of ISP support." I guess no one wanted to buy their boxes. Would anyone like to see open source JPC-aware P2P caches to build actual meshes inside and between ISPs? Are people even thinking it's a good or bad idea? Here's the real question: what if an open source protocol for p2p content routing and distribution appeared? The last time I spoke to a few ISPs about it they claimed they didn't want to do it due to possible legal obligations.
Would stronger topological sharing be beneficial? If so, how do you suggest end-user software get access to the information required to make these decisions in an informed manner? Should p2p clients be participating in some kind of weird IGP? Should they participate in
[snip] As you noted, topological information isn't enough; you need to know about the TE stuff - link capacity, performance, etc. The ISP knows about their network and its current performance much, much more than any edge application would. Unless you're pulling tricks like Cisco OER..
If p2p clients started using multicast to stream pieces out to peers, would ISP's make sure that multicast worked (at least within their AS?). Would this save enough bandwidth for ISP's to care? Can enough ISP's make use of multicast or would it end up with them hauling the same data multiple times across their network anyway? Are there any other obvious ways of getting the bits to the user without them passing needlessly across the ISP's network several times (often in alternating directions)?
ISPs properly doing multicast pushed from clients? Ahaha.
Should p2p clients set ToS/DSCP/whatever-they're-called-this-week-bits to state that this is bulk transfers? Would ISP's use these sensibly or will they just use these hints to add additional barriers into the network?
People who write clients, and the most annoying users of those clients, will do whatever they can to maximise their throughput over all others. If this means opening up 50 TCP connections to one host to get the top possible speed and screw the rest of the link, they will. It looks somewhat like GIH's graphs from his multi-gige-over-LFN publication... :)

Adrian
Adrian Chadd wrote: [..]
Here's the real question: what if an open source protocol for p2p content routing and distribution appeared?
It is called NNTP; it exists and is heavily used for doing exactly what most people use P2P for: warezing around without legal problems. NNTP is of course "nice" to the network, as people generally only download, not upload.

I don't see the point, though; traffic is traffic, and somewhere somebody pays for that traffic, so from an ISP point of view there is no difference between p2p and NNTP. NNTP has quite a bit of overhead (as it is 7-bit in general, after all, and people then need to encode those 4 GB DVDs ;), but clearly ISPs exist solely for the purpose of providing access to content on NNTP, and they are ready to invest lots of money in infrastructure and especially in storage space.

I did notice in a recent news article (hardcopy, 20min.ch) that the RIAA has finally found NNTP and is suing Usenet.com... I wonder what they will do with all those ISPs who are simply selling "NNTP access", who still claim they don't know what they actually need "those big caches" (NNTP servers) for, and that they don't know that there is this alt.bin.dvd-r.* stuff on them :) Going to be fun times, I guess...

Greets,
Jeroen
Would stronger topological sharing be beneficial? If so, how do you suggest end users software get access to the information required to make these decisions in an informed manner?
I would think simply looking at the TTL of packets from its peers should be sufficient to decide who is close and who is far away. The problem comes in the trade-off: do you pick someone who is 2 hops away but only has 12K of upload, or someone 20 hops away who has 1M of upload? Obviously, from the point of view of a file sharer, it's speed, not location, that is important.

Geo.

George Roettger
Netlink Services
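[Editor's note: to make the TTL heuristic concrete, here is a minimal Python sketch. The initial-TTL guesses and the per-hop weighting are assumptions added for illustration; Geo didn't specify either.]

    # A client only sees the TTL on arrival, so it has to guess the sender's
    # initial TTL from the common OS defaults (64, 128, 255) and subtract.
    # The per-hop "penalty" below is an invented example weight.

    COMMON_INITIAL_TTLS = (64, 128, 255)

    def estimated_hops(received_ttl):
        # Smallest common initial TTL that is >= the received value.
        initial = min(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
        return initial - received_ttl

    def peer_value(received_ttl, upload_kbps, kbps_per_hop=50):
        # Trade distance off against observed upload rate.
        return upload_kbps - kbps_per_hop * estimated_hops(received_ttl)

    # Geo's example: ~2 hops with 12 kbps of upload vs. ~20 hops with 1 Mbps.
    near_slow = peer_value(received_ttl=62, upload_kbps=12)    # 12 - 100   = -88
    far_fast  = peer_value(received_ttl=44, upload_kbps=1000)  # 1000 - 1000 = 0
    # With these made-up weights the far, fast peer still wins.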
On Mon, 2007-10-22 at 12:55 +1300, Simon Lyall wrote:
The problem is that the customers are using too much traffic for what is provisioned.
Nope. Not sure where you got that from. With P2P, it's others outside the Comcast network that are oversaturating the Comcast customers' bandwidth. It's basically an ebb and flow problem, 'cept there is more of one than the other. ;-) Btw, is Comcast in NZ? -Jim P.
On Mon, Oct 22, 2007 at 12:55:08PM +1300, Simon Lyall wrote:
On Sun, 21 Oct 2007, Sean Donelan wrote:
Its not just the greedy commercial ISPs, its also universities, non-profits, government, co-op, etc networks. It doesn't seem to matter if the network has 100Mbps user connections or 128Kbps user connection, they all seem to be having problems with these particular applications.
I'm going to call bullshit here.
The problem is that the customers are using too much traffic for what is provisioned. If those same customers were doing the same amount of traffic via NNTP, HTTP or FTP downloads then you would still be seeing the same problem and whining as much [1] .
There are significant protocol behavior differences between BT and FTP. Hint: downloads are not the problem.

--
RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
Mikael Abrahamsson wrote:
If your network cannot handle the traffic, don't offer the services.

In network access for the masses, downstream bandwidth has always been easier to deliver than upstream. It's been that way since modem manufacturers found they could leverage a single digital/analog conversion in the PSTN to deliver 56 kbps downstream data rates over phone lines. This is still true today in nearly every residential access technology: DSL, cable, wireless (mobile 3G/EVDO), and satellite all have asymmetrical upstream/downstream data rates, with downstream favored in some cases by a ratio of 20:1. Of that group, only DSL doesn't have a common upstream bottleneck between the subscriber and the head-end. For each of the other broadband technologies, the overall user experience will continue to diminish as the number of subscribers saturating their upstream network path grows.

Transmission technology issues aside, how do you create enough network capacity for a technology that is designed to use every last bit of transport capacity available? P2P more closely resembles denial-of-service traffic patterns than "standard" Internet traffic.
The long term solution is of course to make sure that you can handle the traffic that the customer wants to send (because that's what they can control), perhaps by charging for it by some scheme that involves not offering flat-fee.

I agree with the differential billing proposal. There are definitely two sides to the coin when it comes to the Internet access available to most of the US: on one side, open and unrestricted access allows for the growth of new ideas and services no matter how unrealistic (e.g., unicast IP TV for the masses), but on the other side it sets up a "tragedy of the commons" situation where there is no incentive _not_ to abuse the "unlimited" network resources. Even with web hosting as insanely cheap as it has become, people are still electing to use P2P for content distribution over $4/mo hosting accounts because it's "cheaper"; the higher network costs of P2P distribution go ignored because the end user never sees them.

The problem in converting to a usage-based billing system is that there's a huge potential to simultaneously lose both market share and public perception of your brand. I'm sure every broadband provider would love to go to a system of usage-based billing, but none of them wants to be the first.
-Eric
* Eric Spaeth:
Of that group, only DSL doesn't have a common upstream bottleneck between the subscriber and head-end.
DSL has got that, too, but it's much more statically allocated and oversubscription results in different symptoms. If you've got a cable with 50 wire pairs, and you can run ADSL2+ at 16 Mbps downstream on one pair, you can't expect to get full 800 Mbps across the whole cable, at least not with run-of-the-mill ADSL2+. (Actual numbers may be different, but there's a significant problem with interference when you get closer to theoretical channel limits.)
Possible scenario... Subscriber bandwidth caps are in theory too high if the ISP can't support them -- but if the ISP were to lower them, the competition's service would look better by advertising the larger supposed data rate, and the cap reduction would hurt polite users too.

In the absence of the P2P applications the limits were fine, so hurting the P2P application may be preferable to charging everyone more to support the excessive bandwidth usage of the 2-3% of subscribers who use P2P applications, or to dropping that 6m bandwidth cap to a 256 kilobit cap just to be able to guarantee that everyone can use it all at the same time.

Many ISP customers might thank them for blocking P2P if it keeps their subscription costs low. In the absence of sufficient customer demand for P2P, it will be throttled or filtered; if a 1.5m connection (not a 6m) that blocks P2P costs half the price of a normal 1.5m connection, many customers might like to make that tradeoff.
That's the ONLY thing they have to give us. Forget looking at L4 or alike, that will be encrypted as soon as ISPs start to discriminate on it. Users have enough computing power available to encrypt everything.
I'm afraid the response could then be for providers that limit P2P to begin treating everything encrypted as suspicious. The source and destination address are enough to do a lot in theory... If the first packet exchanged between two hosts was sourced from a subscriber, then the ISP's monitoring mechanism can record a session -- "session started from inside to outside" -- just like a stateful firewall. The ratio of bytes a customer sends to an address versus the number of bytes they receive from that address can be used: anything above 1.0 is an upload, anything below 1.0 is a download; a high ratio means a reduced bandwidth cap. Very poor treatment could be given to sessions started from outside to inside.

An address that only one or two subscribers exchange traffic with is probably a P2P app. An address that many subscribers try to exchange traffic with is probably an e-commerce site. Thus the whitelists could be built through automated means, just by counting the number of distinct inside sources per outside destination (if 1000 different customer source addresses send encrypted port 443 to one host, then that host could be automatically listed as "probably not a P2P host").

A second possibility is that the ISP could examine the SSL certificate of the remote destination: if a site has gone through the trouble of having a high-grade X.509 certificate signed by a for-fee official CA, then it's probably not a P2P peer. If a user tries to connect to a site that has no certificate signed by a recognized CA, then it's probably either a phisher or a P2P peer -- these could in theory be blocked as a "stop phishing" measure. A "security" measure, naturally.
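[Editor's note: purely to make those two signals concrete -- the per-subscriber upstream/downstream byte ratio and the popularity-based whitelist built by counting distinct inside sources per outside destination -- here is a hedged Python sketch. The thresholds and the flow-record shape are invented for illustration; no vendor box is claimed to work this way.]

    from collections import defaultdict

    POPULAR_DST_THRESHOLD = 1000   # distinct inside sources before a destination is "whitelisted"
    UPLOAD_RATIO_THRESHOLD = 1.0   # bytes_out / bytes_in above this looks like seeding

    def classify(flows):
        # flows: iterable of (inside_src, outside_dst, bytes_out, bytes_in) tuples.
        dst_popularity = defaultdict(set)          # outside_dst -> set of inside sources
        per_sub = defaultdict(lambda: [0, 0])      # inside_src  -> [bytes_out, bytes_in]

        for src, dst, b_out, b_in in flows:
            dst_popularity[dst].add(src)
            per_sub[src][0] += b_out
            per_sub[src][1] += b_in

        # Destinations many subscribers talk to are "probably not P2P hosts".
        whitelist = {d for d, srcs in dst_popularity.items()
                     if len(srcs) >= POPULAR_DST_THRESHOLD}

        # Subscribers sending more than they receive look like uploaders.
        suspects = set()
        for src, (b_out, b_in) in per_sub.items():
            ratio = b_out / b_in if b_in else float("inf")
            if ratio > UPLOAD_RATIO_THRESHOLD:
                suspects.add(src)
        return whitelist, suspects

Whether such crude counters can actually tell P2P seeding apart from, say, offsite backups or video calls is of course the weak point of the whole approach.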
Only if P2P shared network resources with other applications well does increasing network resources make more sense. If your network cannot handle the traffic, don't offer the services.
Exactly what they would seem to be doing. By blocking or throttling P2P uploads, they are choosing not to offer full P2P access. Some ISPs may block P2P and be very quiet about it, which is unfortunate, as customers would want to know about extra restrictions on the use of their X-megs connection. Generally, warnings that excessive-bandwidth applications may be limited will be mentioned in ISPs' existing acceptable use policies; they're probably just not outright saying "we block Xyz".

P2P applications seem to be a valuable tool; however, it is an ISP's choice to refuse to carry them -- or to require P2P users to pay extra, in proportion to the additional network usage the service requires -- when P2P usage is a burden on their network. Their network, their rules.

The bigger issue, I would say, is that in many areas provider monopolies exist on affordable residential access services. If "Provider A" happens to be the cable company in a local area that owns all the infrastructure and the rights to hang cable, there's no opportunity for a "Provider B" to satisfy the demand if they can't get a wire between themselves and their would-be customer... No competition and no cost-effective alternative access path gives "Provider A" too much free rein: free rein to limit consumer choice and force customers to accept substandard or partial services, while shiny advertising tricks them into thinking they are buying high-grade, fully featured services.

--
-J
In the absence of the P2P applications, the limits were fine, so hurting the P2P application may be a preferable solution to the ISP charging everyone more to support the excessive bandwidth usage of the 2-3% of subscribers who use P2P applications,
I'd like to know where you got the 2-3% number. I mean, the RIAA is suing single mothers and children, so it sure looks to me like it's likely way more than 2-3% of users running P2P apps at least some of the time.

Geo.
MSO's typically understand this as eyeball heavy content retrieval, not content generation
I was under the impression Comcast advertised Internet access, which is read/write. Clearly I was mistaken... Really, the heart of the matter is that in doing this they are not being honest with their customers, the wider public, or other networks. If they want to do this they should say so. Arguably, they shouldn't do it anyway; I'd be delighted to pay some more for the assurance that my ISP will, uh, provide Internet service if that turns out to be necessary. Perhaps it's time to recognise that we've reached a pricing floor.
On 10/22/07, Alexander Harrowell <a.harrowell@gmail.com> wrote:
MSO's typically understand this as eyeball heavy content retrieval, not content generation
I was under the impression Comcast advertised Internet access, which is read/write. Clearly I was mistaken...
This is correct. MSOs offer residential user services for ~$50/month (or whatever the package special would be). They also offer commercial-grade services which are built on the same fiber but bypass the HFC plant, much like the other MSOs in the industry. Just don't expect that service to be comparable in price to the residential offering.

charles
* Sean Donelan:
On Sun, 21 Oct 2007, Florian Weimer wrote:
If its not the content, why are network engineers at many university networks, enterprise networks, public networks concerned about the impact particular P2P protocols have on network operations? If it was just a single network, maybe they are evil. But when many different networks all start responding, then maybe something else is the problem.
Uhm, what about civil liability? It's not necessarily a technical issue that motivates them, I think.
If it was civil liability, why are they responding to the protocol being used instead of the content?
Because the protocol is detectable, and correlates (read: is perceived to correlate) well enough with the content?
If there is a technical reason, it's mostly that the network as deployed is not sufficient to meet user demands. Instead of providing more resources, lack of funds may force some operators to discriminate against certain traffic classes. In such a scenario, it doesn't even matter much that the targeted traffic class transports content of questionable legaility. It's more important that the measures applied to it have actual impact (Amdahl's law dictates that you target popular traffic), and that you can get away with it (this is where the legality comes into play).
Sandvine, packeteer, etc boxes aren't cheap either.
But they try to make things better for end users. If your goal is to save money, you'll use different products (even ngrep-with-tcpkill will do in some cases).
The problem is giving P2P more resources just means P2P consumes more resources, it doesn't solve the problem of sharing those resources with other users.
I don't see the problem. Obviously, there's demand for that kind of traffic. ISPs should count themselves lucky because they're selling bandwidth, so it's just more business for them. I can see two different problems with resource sharing:

You've got congestion not in the access network, but in your core or on some uplinks. This is just poor capacity planning. Tough luck; you need to figure that one out or you'll have trouble staying in business (if you strike the wrong balance, your network will cost much more to maintain than what the competition pays for their own, or it will be inadequate, leading to poor service).

The other issue is ridiculously oversubscribed shared-media networks on the last mile. This only works if there's a close-knit user community that can police itself. ISPs who are in this situation need to figure out how they ended up there, especially if there isn't cut-throat competition. In the end, it's probably a question of how you market your products ("up to 25 Mbps of bandwidth" and stuff like that).
In my experience, a permanently congested network isn't fun to work with, even if most of the flows are long-lived and TCP-compatible. The lack of proper congestion control is kind of a red herring, IMHO.
Why do you think so many network operators of all types are implementing controls on that traffic?
Because their users demand more bandwidth from the network than is actually available, and non-user-specific congestion occurs to a significant degree. (Is there a better term for that? What I mean is that it's not just the private link to the customer that is saturated, but something that is not under his or her direct control, so changing your own behavior doesn't benefit you instantly; see self-policing above.) Selectively degrading traffic means that you can still market your service as "unmetered 25 Mbps" instead of "unmetered 1 Mbps".

One reason for degrading P2P traffic I haven't mentioned so far: P2P applications have the nice property that they are inherently asynchronous, so cutting the speed to a fraction doesn't fatally impact users. (In that sense, there isn't strong user demand for additional network capacity.) But guess what happens if there's finally more demand for streamed high-entropy content. Then you won't have much choice; you need to build a network with the necessary capacity.
participants (25)
- Adrian Chadd
- Alex Pilosov
- Alexander Harrowell
- Brandon Galbraith
- Charles Gucker
- Daniel Senie
- Eric Spaeth
- Florian Weimer
- Frank Bulk
- Geo.
- James Hess
- Jeroen Massar
- Jim Popovitch
- Joe Greco
- Joe Provo
- Joel Jaeggli
- Matthew Kaufman
- michael.dillon@bt.com
- Mikael Abrahamsson
- Perry Lorier
- Randy Bush
- Roland Dobbins
- Ron da Silva
- Sean Donelan
- Simon Lyall