ISPs slowing P2P traffic...
http://www.dslreports.com/shownews/TenFold-Jump-In-Encrypted-BitTorrent-Traf...
http://www.dslreports.com/shownews/Comcast-Traffic-Shaping-Impacts-Gnutella-...
http://www.dslreports.com/shownews/Verizon-Net-Neutrality-iOverblowni-73225

If I am mistakenly being duped by some crazy fascists, please let me know. However, my question is simply.. for ISPs promising broadband service, isn't it simpler to just announce a bandwidth quota/cap that your "good" users won't hit and your bad ones will? This chasing of the lump under the rug (slowing encrypted traffic, then VPN traffic, and so on) seems like the exact opposite of progress to me -- progressively nastier filters, impeding the traffic your network was built to move, etc.

Especially when there is no real reason this P2P traffic can't masquerade as something really interesting... like email or web (HTTPS, hello!) or SSH or gamer traffic. I personally expect a day when there is a torrent "encryption" module that converts everything to look like a plain-text email conversation or IRC or whatever.

When you start slowing encrypted or VPN traffic, you start setting yourself up to interfere with all of the bread-and-butter applications (business, telecommuters, what have you).

I remember Bill Norton's peering forum regarding P2P traffic and how the majority of it is between cable and other broadband providers... Operationally, why not just lash a few additional 10GE cross-connects and let these *paying customers* communicate as they will?

All of these "traffic shaping" and "traffic prioritization" techniques make it seem as if the providers that pushed for ubiquitous broadband because they liked the margins don't want to deal with a world where those users have figured out ways to use these amazing networks to do things... whatever they are. If they want to develop incremental revenue, they should do it by making clear what their caps/usage profiles are and moving ahead... or at least transparently share what shaping they are doing and when.

I don't see how operators could possibly debug connection/throughput problems when increasingly draconian methods are used to manage traffic flows with seemingly random behaviors. This seems a lot like the evil transparent caching we were concerned about years ago.

So, to keep this from turning into a holy war or a non-operational policy debate, let's assume you agree that providers of consumer connectivity shouldn't employ transparent traffic shaping, because it screws the savvy customers and business customers. ;)

What can be done operationally?

For legitimate applications:

Encouraging "encryption" of more protocols is an interesting way to discourage this kind of shaping. Using IPv6-based IPs instead of ports would also help by obfuscating protocol and behavior. Even IP rotation through /64s (cough, 1 IP per half-connection, anyone?).

For illegitimate applications:

Port knocking and pre-determined stream hopping (send 50 Kbytes on this port/IP pairing, then jump to the next, etc., etc.) -- a sketch of this follows at the end of this message.

My caffeine hasn't hit, so I can't think of anything else. Is this something the market will address by itself?

DJ
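A concrete sketch of the pre-determined stream hopping idea above -- purely illustrative, with invented names and parameters, not any real client's scheme: both endpoints derive the same port schedule from a shared seed and hop after every ~50 KB chunk, leaving nothing in-band for a middlebox to key on.

```python
import hashlib

CHUNK_BYTES = 50 * 1024  # hop to the next port after ~50 KB, per the scheme above

def hop_schedule(shared_seed: bytes, hops: int) -> list:
    """Derive a deterministic port sequence that both peers can compute."""
    ports, state = [], shared_seed
    for _ in range(hops):
        state = hashlib.sha256(state).digest()
        # map the digest into the unprivileged port range 1024-65534
        ports.append(1024 + int.from_bytes(state[:2], "big") % (65535 - 1024))
    return ports

# both sides compute the same schedule from the same shared seed;
# after CHUNK_BYTES of payload on one port, both move to the next
print(hop_schedule(b"example-shared-seed", 5))
```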
On Wed, 09 Jan 2008 15:04:37 EST, Deepak Jain said:
Encouraging "encryption" of more protocols is an interesting way to discourage this kind of shaping.
Dave Dittrich, on another list yesterday:
They're not the only ones getting ready. There are at least 5 anonymous P2P file sharing networks that use RSA or Diffie-Hellman key exchange to seed AES/Rijndael encryption at up to 256 bits. See:
You can only filter that which you can see, and there are many ways to make it hard to see what's going over the wire.
Bottom line - "they" can probably deploy the countermeasures faster than "we" can deploy the shaping....
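For reference, the pattern Dittrich describes -- an asymmetric key exchange seeding a symmetric cipher -- looks roughly like the sketch below. This is not the actual protocol of any of those networks; it assumes the third-party Python `cryptography` package and substitutes X25519 and AES-GCM for the RSA/DH and AES modes he mentions.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# each peer generates an ephemeral keypair and trades public keys
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared = alice.exchange(bob.public_key())
assert shared == bob.exchange(alice.public_key())  # both sides agree

# stretch the shared secret into a 256-bit AES key
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"p2p-session").derive(shared)

# encrypt a chunk; on the wire the payload looks like random bytes
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"chunk of file data", None)
print(len(ciphertext))
```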
I'm certain of this. First adopters are always ahead of the curve. The question is what happens when "quality of service" (little q) -- the purported "improving the surfing experience for the rest of our users" -- is the stated reason. They (whatever provider is taking a position) should transparently state their policies and enforcement mechanisms. They shouldn't be selectively prioritizing traffic based on their perception of its purpose.

The standard of reasonableness would be whether the net functions better -- such as dropping ICMP or attack traffic in favor of traffic with a higher signal-to-noise ratio (e.g. TCP) -- as opposed to "whose traffic can we drop that is least likely to result in a complaint or cancellation?" The reason I consider the latter invalid is that it's a kissing cousin to "whose traffic can we penalize that we can later charge access to as a /premium service/?"

I'm sure I'm preaching to the choir here, but basically, if everyone got the 10 Mb/s service they believe they got when they ordered their connection, there would be no place to pay for "higher priority" service to YouTube or what have you -- except when you want more than 10 Mb/s service.

I think the upcoming trial of DirecTV's VoD service over the Internet is going to be an awesome test case of this in real life. It may save them from me cancelling my DirecTV subscription just to see how Verizon FiOS handles the video streams. :)

DJ
Semi-related article: http://ap.google.com/article/ALeqM5gyYIyHWl3sEg1ZktvVRLdlmQ5hpwD8U1UOFO0

-Matt
On Wed, 09 Jan 2008 15:36:50 EST, Matt Landers said:
Semi-related article:
http://ap.google.com/article/ALeqM5gyYIyHWl3sEg1ZktvVRLdlmQ5hpwD8U1UOFO0
Odd, I saw *another* article that said that while the FCC is moving to investigate unfair behavior by Comcast, Congress is moving to investigate unfair behavior in the FCC. http://www.reuters.com/article/industryNews/idUSN0852153620080109 This will probably get.... interesting.
On Wed, Jan 09, 2008 at 03:58:13PM -0500, Valdis.Kletnieks@vt.edu wrote:
The FCC isn't just a small pool of people; like any gov't agency there are a *lot* of people behind all this stuff, from public safety to CALEA to broadcast, PSTN, etc. The FCC was quick to step in when some ISP was blocking Vonage. This doesn't seem to be as big an impact IMHO (i.e. it won't obviously block your access to a PSAP/911), but it still needs to be addressed. We'll see what happens, and how the 160 Mb/s DOCSIS 3.0 connections and the infrastructure to support them pan out on the Comcast side..

- Jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
On 9 Jan 2008, at 20:04, Deepak Jain wrote:
I remember Bill Norton's peering forum regarding P2P traffic and how the majority of it is between cable and other broadband providers... Operationally, why not just lash a few additional 10GE cross-connects and let these *paying customers* communicate as they will?
This does nothing to affect last-mile costs, and these costs could be the reason that you need to cap at all (certainly this is the case in the UK).
On Wed, Jan 09, 2008 at 03:04:37PM -0500, Deepak Jain wrote: [snip]
However, my question is simply.. for ISPs promising broadband service, isn't it simpler to just announce a bandwidth quota/cap that your "good" users won't hit and your bad ones will?
Simple bandwidth is not the issue. This is about traffic models that use statistical multiplexing, making assumptions regarding the humans at the helm, and those models directing the capital investment in facilities and hardware. You will likely see P2P throttling where you also see "residential customers must not host servers" policies. Demand curves for P2P usage do not match any stat-mux model where broadband is sold for less than it costs to maintain and upgrade the physical plant.
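A toy version of the stat-mux point, with invented numbers: the model (and the price) works while subscribers are idle most of the time, and collapses when always-on P2P pushes the activity factor toward 1.

```python
# Toy statistical-multiplexing model -- every figure here is invented.
SOLD_RATE_MBPS = 6.0
SUBSCRIBERS = 500

def backhaul_needed_mbps(activity_factor: float) -> float:
    """Mean offered load if each user is active this fraction of the time."""
    return SUBSCRIBERS * SOLD_RATE_MBPS * activity_factor

print(backhaul_needed_mbps(0.05))  # bursty web/email users: 150 Mb/s
print(backhaul_needed_mbps(0.80))  # always-on P2P seeders: 2400 Mb/s
```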
Especially when there is no real reason this P2P traffic can't masquerade as something really interesting... like Email or Web (https, hello!) or SSH or gamer traffic. I personally expect a day when there is a torrent "encryption" module that converts everything to look like a plain-text email conversation or IRC or whatever.
The "problem" with P2P traffic is how it behaves, which will not be hidden by ports or encryption. If the *behavior* of the protocol[s] changes such that they no longer look like digital fountains and more like "email conversation or IRC or whatever", then their impact is mitigated and they would not *be* a problem to be shaped/throttled/managed. [snip]
I remember Bill Norton's peering forum regarding P2P traffic and how the majority of it is between cable and other broadband providers... Operationally, why not just lash a few additional 10GE cross-connects and let these *paying customers* communicate as they will?
Peering happens between broadband companies all the time. That does not resolve regional, city, or neighborhood congestion in one network. [snip]
Encouraging "encryption" of more protocols is an interesting way to discourage this kind of shaping.
This does nothing but reduce the pool of remote P2P nodes to those running encryption-capable clients. This is why people think they "get away" with using encryption: they are no longer the tallest nail to be hammered down, and often enough they fit within their buckets. [snip]
My caffeine hasn't hit, so I can't think of anything else. Is this something the market will address by itself?
Likely. Some networks abandon standards and will tie customers to gear that looks more like dedicated pipes (Narad, etc). Some will have the 800-lb-gorilla-tude to accelerate vendors' deployment of DOCSIS 3.0. Folks with the appropriate war chests can (and have) rolled out PON and been somewhat generous... of course, the dedicated and mandatory ONT & CPE looks a lot like voice pre-Carterfone.

Joe, not promoting/supporting any position, just trying to provide facts about running last-mile networks.

--
RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
On Jan 9, 2008 3:04 PM, Deepak Jain <deepak@ai.net> wrote:
However, my question is simply.. for ISPs promising broadband service, isn't it simpler to just announce a bandwidth quota/cap that your "good" users won't hit and your bad ones will?
Deepak,

No, it isn't. The bandwidth cap generally ends up being set at some multiple of the cost to service the account. Someone running at only half the cap is already a "bad" user; he's just not bad enough that you're willing to raise a ruckus about the way he's using his "unlimited" account.

Let me put it to you another way: it's the old 80-20 rule. You can usually select a set of users responsible for 20% of your revenue who account for 80% of your cost. If you could somehow shed only that 20% of your customer base without fouling the cost factors, you'd have a slightly smaller but much healthier business.

The purpose of the bandwidth cap isn't to keep usage within a reasonable cost or convince folks to upgrade their service... Its purpose is to induce the most costly users to close their accounts with you and go spend your competitors' money instead.

'Course, sometimes the competitor figures out a way to service those customers for less money, and the departing folks each take their 20 friends with them. It's a double-edged sword, which is why it rarely targets more than the hogs of the worst 1%.

Regards,
Bill Herrin

--
William D. Herrin herrin@dirtside.com bill@herrin.us
3005 Crane Dr. Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
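Herrin's 80-20 arithmetic, worked through with invented numbers: shedding the costliest slice of the customer base lowers revenue but raises margin.

```python
# Invented figures, only to illustrate the 80-20 argument above.
revenue_total, cost_total = 100_000.0, 80_000.0

# the "bad" 20% of revenue carries 80% of the cost
rev_heavy, cost_heavy = 0.20 * revenue_total, 0.80 * cost_total

margin_before = revenue_total - cost_total
margin_after = (revenue_total - rev_heavy) - (cost_total - cost_heavy)
print(margin_before, margin_after)  # 20000.0 before vs 64000.0 after
```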
On Jan 9, 2008, at 9:04 PM, Deepak Jain wrote:
Hi all, 1st post for me here, but I just couldn't help it.

We've been noticing this for quite a few years in France now (around the same time Cisco bought P-Cube, anyone remember?). What happened is that someday, some major ISP here decided customers were to be offered 24 Mb/s DSL down, unlimited, plus TV, plus VoIP towards hundreds of free destinations... all that for around €30/month.

Just make a simple calculation with the amount of bandwidth in terms of transit. Say you're a French ISP: transit price per meg could vary between €10 and €20 (which is already cheap, isn't it?). Multiply this by 24 Mb/s, set it against the €30 you charge, and you feel like you'd better do everything possible to limit traffic going towards other ASes. Certainly sounds like you've screwed up your business plan. Let's be honest, still: dumping prices on Internet access also brought the country among the leading Internet countries, with a rather positive effect on competition.

Another side of the story is that once upon a time, ISPs had a naturally outbound traffic profile, which supposedly wasn't a good enough ratio to negotiate peerings. Thanks to peer-to-peer, their ratios are now balanced, meaning ISPs are now in a dominant position for negotiating peerings. Eventually the question is: why is it that you guys fight P2P while at the same time benefiting from it? It doesn't quite make sense, does it?

In France, the Internet got broken the very first day ISPs told people it was cheap. It definitely isn't, but there is no turning back now...

Greg VILLAIN
Independent Network & Telco Architecture Consultant
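Greg's back-of-the-envelope, written out with the figures he quotes: a customer who actually fills the 24 Mb/s pipe with transit-bound traffic costs several times the EUR 30/month charged, even at the cheap end of the transit range.

```python
# Back-of-the-envelope from the figures quoted above (EUR per month).
price_per_mbps = 10.0      # low end of the 10-20 EUR transit range
sold_rate_mbps = 24.0
monthly_revenue = 30.0

worst_case_transit_cost = price_per_mbps * sold_rate_mbps  # 240 EUR/month
print(worst_case_transit_cost / monthly_revenue)           # 8x the revenue
```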
The vast majority of our last-mile connections are fixed wireless. The design of the system is essentially half-duplex with an adjustable ratio between download/upload traffic. P2P heavily stresses the upload channel and, left unchecked, results in poor performance for other customers.

Bandwidth quotas don't help much, since they just move the problem to the 'start' of the quota time. Hard limits on upload bandwidth help considerably but do not solve the problem, since only a few dozen customers running a steady 256k upload stream can saturate the channel. We still need a way to shape the upload traffic.

It's easy to say "put up more access points, sectors, etc." but there are constraints due to RF spectrum, tower space, etc. Unfortunately there are no easy answers here. The network (at least ours) is designed to provide broadband download speeds to rural customers. It's not designed for, and is not capable of, being a CDN for the rest of the world.

I would be much happier creating a torrent server at the data center level that customers could seed/upload from, rather than doing it over the last mile. I don't see this working from a legal standpoint, though.

--
Mark Radabaugh
Amplex
419.837.5015 x21
mark@amplex.net
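Rough numbers for Mark's saturation point -- the AP capacity and download/upload split here are assumptions, not Amplex's real figures:

```python
# Illustrative only: assumed AP capacity and ratio, steady 256k uploaders.
ap_capacity_mbps = 30.0
download_share = 0.8                     # the adjustable down/up ratio
upload_capacity_kbps = ap_capacity_mbps * (1 - download_share) * 1000

steady_upload_kbps = 256
print(upload_capacity_kbps / steady_upload_kbps)  # ~23 uploaders saturate it
```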
The vast majority of our last-mile connections are fixed wireless. The design of the system is essentially half-duplex with an adjustable ratio between download/upload traffic. P2P heavily stresses the upload channel and, left unchecked, results in poor performance for other customers.
There are lots of things that could heavily stress your upload channel. Things I've seen would include:

1) Sending a bunch of full-size pictures to all your friends and family, which might not seem too bad until it's a gig worth of 8-megapixel photos and 30 recipients, and you send to each recipient separately,
2) Having your corporate laptop get backed up to the company's backup server,
3) Many general-purpose VPN tasks (file copying, etc),
4) Online gaming (capable of creating a vast PPS load, along with fairly steady but low-volume traffic),

etc. P2P is only one example of things that could be stressful.
Bandwidth quotas don't help much since it just moves the problem to the 'start' of the quota time.
Hard limits on upload bandwidth help considerably but do not solve the problem since only a few dozen customers running a steady 256k upload stream can saturate the channel. We still need a way to shape the upload traffic.
It's easy to say "put up more access points, sectors, etc." but there are constraints due to RF spectrum, tower space, etc.
Sure, okay, and you know, there's certainly some truth to that. We know that the cellular carriers and the wireless carriers have some significant challenges in this department, and even the traditional DSL/cable providers do too. However, as a consumer, I expect that I'm buying an Internet connection. What I'm buying that Internet connection for is, quite frankly, none of your darn business. I may want to use it for any of the items above. I may want to use my GPRS radio as emergency access to KVM-over-IP-reachable servers. I may want to use it to push videoconferencing from my desktop. There are all these wonderful and wildly differing things that one can do with IP connectivity.
Unfortunately there are no easy answers here. The network (at least ours) is designed to provide broadband download speeds to rural customers. It's not designed and is not capable of being a CDN for the rest of the world.
I'd consider that a bad attitude, however. Your network isn't being used as "a CDN for the rest of the world," even if that's where the content might happen to be going. That's an Ed Whitacre type attitude. You have a paying customer who has paid you to move packets for them. Your network is being used for heavy data transmission by one of your customers. You do not have a contract with "the rest of the world." Unless you are providing access to a walled garden, you have got to expect that your customers are going to be sending and receiving data from "the rest of the world." Your issue is mainly with the volume at which that is happening, and shouldn't be with the destination or purpose of that traffic.

The questions boil down to things like:

1) Given that you are unable to provide unlimited upstream bandwidth to your end users, what amount of upstream bandwidth /can/ you afford to provide?
2) Are there any design flaws within your network that are making the overall problem worse?
3) What have you promised customers?
I would be much happier creating a torrent server at the data center level that customers could seed/upload from rather than doing it over the last mile. I don't see this working from a legal standpoint though.
Why not? There's plenty of perfectly legal P2P content out there. Anyways, let's look at a typical example. There's a little wireless ISP called Amplex down in Ohio, and looking at http://www.amplex.net/wireless/wireless.htm they say:
Connection Speeds
Our residential service is rated at 384kbps download and 256kbps up; business service is 768kbps (equal down and up). The network normally provides speeds well over those listed (up to 10 Mbps), but speed is dependent on network load and the quality of the wireless connection.

Connection speed is nearly always faster than most DSL connections and equivalent to (or faster than) many cable modems.

Our competitors list maximum burst speeds with no guaranteed minimum speed. We guarantee our speeds will be as good or better than we specify in the service package you choose.
And then much further down:
What Amplex won't do...
Provide high burst speed if you insist on running peer-to-peer file sharing on a regular basis. Occasional use is not a problem. Peer-to-peer networks generate large amounts of upload traffic. This continuous traffic reduces the bandwidth available to other customers - and Amplex will rate limit your connection to the minimum rated speed if we feel there is a problem.
So, the way I would read this, as a customer, is that my P2P traffic would most likely eventually wind up being limited to 256kbps up, unless I am on the business service, where it'd be 768kbps up. This seems quite fair and equitable. It's clearly and unambiguously disclosed, it's still guaranteeing delivery of the minimum class of service being purchased, etc.

If such an ISP were unable to meet the commitment that it's made to customers, then there's a problem - and it isn't the customer's problem, it's the ISP's. This ISP has said "We guarantee our speeds will be as good or better than we specify" - which is fairly clear. You might want to check to see if you've made any guarantees about the level of service that you'll provide to your customers. If you've made promises, then you're simply in the unenviable position of needing to make good on those.

Operating an IP network with a basic SLA like this can be a bit of a challenge. You have to be prepared to actually make good on it. If you are unable to provide the service, then either there is a failure at the network design level or at the business plan level. One solution is to stop accepting new customers where a tower is already operating at a level which is effectively rendering it "full."

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Joe Greco wrote,
There are lots of things that could heavily stress your upload channel. Things I've seen would include:
1) Sending a bunch of full-size pictures to all your friends and family, which might not seem too bad until it's a gig worth of 8-megapixel photos and 30 recipients, and you send to each recipient separately,
2) Having your corporate laptop get backed up to the company's backup server,
3) Many general-purpose VPN tasks (file copying, etc),
4) Online gaming (capable of creating a vast PPS load, along with fairly steady but low-volume traffic),
etc. P2P is only one example of things that could be stressful.
These things all happen - but they simply don't happen 24 hours a day, 7 days a week. A P2P client often does. <snip for brevity>
The questions boil down to things like:
1) Given that you are unable to provide unlimited upstream bandwidth to your end users, what amount of upstream bandwidth /can/ you afford to provide?
Again - it depends. I could tell everyone they can have 56k upload continuous and there would be no problem from a network standpoint - but it would suck to be a customer with that restriction. It's a balance between providing good service to most customers while leaving us options.
What Amplex won't do...
Provide high burst speed if you insist on running peer-to-peer file sharing on a regular basis. Occasional use is not a problem. Peer-to-peer networks generate large amounts of upload traffic. This continuous traffic reduces the bandwidth available to other customers - and Amplex will rate limit your connection to the minimum rated speed if we feel there is a problem.
So, the way I would read this, as a customer, is that my P2P traffic would most likely eventually wind up being limited to 256kbps up, unless I am on the business service, where it'd be 768kbps up.

That depends on your catching our attention. As a 'smart' consumer you might choose to set the upload limit on your torrent client to 200k, and the odds are pretty high we would never notice you.
For those who play nicely we don't restrict upload bandwidth but leave it at the capacity of the equipment (somewhere between 768k and 1.5M). Yep - that's a rather subjective criterion. Sorry.
This seems quite fair and equitable. It's clearly and unambiguously disclosed, it's still guaranteeing delivery of the minimum class of service being purchased, etc.
If such an ISP were unable to meet the commitment that it's made to customers, then there's a problem - and it isn't the customer's problem, it's the ISP's. This ISP has said "We guarantee our speeds will be as good or better than we specify" - which is fairly clear.
We try to do the right thing - but taking the high road costs us when our competitors don't. I would like to think that consumers are smart enough to see the difference but I'm becoming more and more jaded as time goes on....
One solution is to stop accepting new customers where a tower is already operating at a level which is effectively rendering it "full."
Unfortunately "full" is an ambiguous definition. Is it when:

a) Number of Customers * 256k up = access point limit?
b) Number of Customers * 768k down = access point limit?
c) Peak upload traffic = access point limit?
d) Peak download traffic = access point limit?
e) Average ping times start to increase?

History shows (a) and (b) occur well before the AP is particularly loaded and would be wasteful of resources. (c) occurs quickly with a relatively small number of P2P clients. (e) Ping time variations occur slightly before (d) and are our usual signal to add capacity to a tower. We have not yet run into the situation where we cannot reduce sector size (beamwidth, change polarity, add frequencies, etc.), but that day will come, and P2P accelerates that process without contributing the revenue to pay for additional capacity.

As a small provider there is a much closer connection between revenue and cost. 100 'regular' customers pay the bills. 10 customers running P2P unchecked don't (and make 90 others unhappy). Were upload costs insignificant I wouldn't have a problem with P2P - but that unfortunately is not the case.

Mark
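One way to see the ambiguity in Mark's list is to compute the customer count at which each criterion trips; every figure below is invented.

```python
# Invented capacities and per-customer rates, for illustration only.
ap_up_kbps, ap_down_kbps = 6000, 24000
rated_up_kbps, rated_down_kbps = 256, 768
peak_up_kbps, peak_down_kbps = 40, 200   # measured per-customer peaks

limits = {
    "a) rated upload":   ap_up_kbps / rated_up_kbps,
    "b) rated download": ap_down_kbps / rated_down_kbps,
    "c) peak upload":    ap_up_kbps / peak_up_kbps,
    "d) peak download":  ap_down_kbps / peak_down_kbps,
}
for rule, n in limits.items():
    print(f"{rule}: 'full' at ~{n:.0f} customers")
# a) and b) trip at 23 and 31 customers, long before c) at 150 or d) at 120.
```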
Joe Greco wrote,
There are lots of things that could heavily stress your upload channel. Things I've seen would include:
1) Sending a bunch of full-size pictures to all your friends and family, which might not seem too bad until it's a gig worth of 8-megapixel photos and 30 recipients, and you send to each recipient separately,
2) Having your corporate laptop get backed up to the company's backup server,
3) Many general-purpose VPN tasks (file copying, etc),
4) Online gaming (capable of creating a vast PPS load, along with fairly steady but low-volume traffic),
etc. P2P is only one example of things that could be stressful.
These things all happen - but they simply don't happen 24 hours a day, 7 days a week. A P2P client often does.
It may. Some of those other things will, too. I picked 1) and 2) as examples where things could actually get busy for long stretches of time. In this business, you have to realize that the average bandwidth use of a residential Internet connection is going to grow with time, as new and wonderful things are introduced. In 1995, the average 14.4 modem speed was perfectly fine for everyone's Internet needs. Go try loading web pages now on a 14.4 modem... even web pages are bigger.
<snip for brevity>
The questions boil down to things like:
1) Given that you are unable to provide unlimited upstream bandwidth to your end users, what amount of upstream bandwidth /can/ you afford to provide?
Again - it depends. I could tell everyone they can have 56k upload continuous and there would be no problem from a network standpoint - but it would suck to be a customer with that restriction.
If that's the reality, though, why not be honest about it?
It's a balance between providing good service to most customers while leaving us options.
The question is a lot more complex than that. Even assuming that you have unlimited bandwidth available to you at your main POP, you are likely to be using RF to get to those remote tower sites, which may mean that there are some specific limits within your network, which in turn implies other things.
What Amplex won't do...
Provide high burst speed if you insist on running peer-to-peer file sharing on a regular basis. Occasional use is not a problem. Peer-to-peer networks generate large amounts of upload traffic. This continuous traffic reduces the bandwidth available to other customers - and Amplex will rate limit your connection to the minimum rated speed if we feel there is a problem.
So, the way I would read this, as a customer, is that my P2P traffic would most likely eventually wind up being limited to 256kbps up, unless I am on the business service, where it'd be 768kbps up.
Depends on your catching our attention. As a 'smart' consumer you might choose to set the upload limit on your torrent client to 200k and the odds are pretty high we would never notice you.
... "today." And since 200k is less than 256k, I would certainly expect that to be true tomorrow, too. However, it might not be, because your network may not grow easily to accommodate more customers, and you may perceive it as easier to go after the high bandwidth users, yes?
For those who play nicely we don't restrict upload bandwidth but leave it at the capacity of the equipment (somewhere between 768k and 1.5M).
Yep - that's a rather subjective criterion. Sorry.
This seems quite fair and equitable. It's clearly and unambiguously disclosed, it's still guaranteeing delivery of the minimum class of service being purchased, etc.
If such an ISP were unable to meet the commitment that it's made to customers, then there's a problem - and it isn't the customer's problem, it's the ISP's. This ISP has said "We guarantee our speeds will be as good or better than we specify" - which is fairly clear.
We try to do the right thing - but taking the high road costs us when our competitors don't. I would like to think that consumers are smart enough to see the difference but I'm becoming more and more jaded as time goes on....
You've picked a business where many customers aren't technically sophisticated. That doesn't necessarily make it right to rip them off - even if your competitors do.
One solution is to stop accepting new customers where a tower is already operating at a level which is effectively rendering it "full."
Unfortunately "full" is an ambiguous definition. Is it when:
a) Number of Customers * 256k up = access point limit?
b) Number of Customers * 768k down = access point limit?
c) Peak upload traffic = access point limit?
d) Peak download traffic = access point limit?
e) Average ping times start to increase?
History shows (a) and (b) occur well before the AP is particularly loaded and would be wasteful of resources.
Certainly, but it's the only way to actually be able to guarantee the service you've promised. If you were to choose to do that, the remainder of this becomes irrelevant, because you're able to deliver your promised service level.
(c) occurs quickly with a relatively small number of P2P clients.
(c) is probably the (current) hot spot for determining this, in your particular situation.
(e) Ping time variations occur slightly before (d) and are our usual signal to add capacity to a tower. We have not yet run into the situation where we cannot reduce sector size (beamwidth, change polarity, add frequencies, etc.), but that day will come, and P2P accelerates that process without contributing the revenue to pay for additional capacity.
Then you've effectively made poor choices, in that you're using a limited technology, Internet bandwidth demands continue to increase (and will continue to increase, for reasons above and beyond P2P), and you've promised customers a level of service that you cannot deliver without relying on a level of overcommit that will apparently be untenable in the future. Am I missing anything?

Oh, yes. The next question is one of ethics. What do you do next? Do you propose to silently rip them off by rate limiting above and beyond what you've guaranteed? Do you force a change in their terms of service upon them, knowing that they're locked into a term with you? Do you change the promises that you're making to new customers?
As a small provider there is a much closer connection between revenue and cost. 100 'regular' customers pay the bills. 10 customers running P2P unchecked don't (and make 90 others unhappy).
Were upload costs insignificant I wouldn't have a problem with P2P - but that unfortunately is not the case.
Then what you are promising customers is directly in conflict with what you are able to deliver. This is a problem with your business plan.

Ultimately, I believe that service providers will need to establish what the minimum service level is that they'll be able to provide to customers. In my opinion, this is best done reflecting what you can deliver based on every customer running full blast, because really, we don't know what the next killer app will be. I suspect it will be something devastating (to network operators), such as TiVo downloading content over the Internet. That isn't saying that you must build your network, /today/, to support it - but you need to make sure that you can build your network up fairly quickly to do so, and that you can afford to remain in business once you have done so.

As an example, if you have a max 1.5M uplink from your tower, can't expand, and you've got 20 customers on that tower, you may be able to deliver 256kbps upstream to the 5 full-time P2P'ers in that customer base today, but you can't actually deliver 256kbps upstream to 20. Promise 64kbps, and you can deliver that, with room to spare. Marketing doesn't like that? Then tackle it from a technical angle. Promise the 256, and /find/ a way to deliver it, if and when it becomes an issue. <other possible solutions omitted, there are none where everybody is happy>

Otherwise, admit defeat, just be honest and say you're not going to honor the promises you've made to your customers, and then limit them. The high bandwidth users either decide to put up with it, or they go elsewhere. As time passes, more and more customers who want to be in the high bandwidth boat go elsewhere (i.e. to a competitor with a better network). Maybe your business eventually folds. Maybe not. Nobody said it was easy. :-)

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
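Joe's closing example, in numbers:

```python
# The 1.5M tower uplink and 20 customers from the example above.
uplink_kbps, customers = 1536, 20

print(customers * 256 / uplink_kbps)  # 3.33 -- a 256k promise is 3x overcommitted
print(customers * 64 / uplink_kbps)   # 0.83 -- a 64k promise fits with room to spare
```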
I would be much happier creating a torrent server at the data center level that customers could seed/upload from rather than doing it over the last mile. I don't see this working from a legal standpoint though.
Why not? There's plenty of perfectly legal P2P content out there.
Hum... maybe there is an idea here. I believe the BitTorrent protocol rewards uploading users with faster downloading. Moving the upload content to a more appropriate point on the network (a central torrent server) breaks this model. How would a client get faster download speeds based on the uploads it made to a central server? To solve the inevitable legal issues there would also need to be a way to track how content ended up on the server. Are there any torrent clients that do this?

Mark
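For reference, the reward Mark is thinking of is BitTorrent's tit-for-tat choking: each peer periodically unchokes the handful of peers currently uploading to it fastest, plus one optimistic slot. A simplified sketch, not the real client logic:

```python
import random

def choose_unchoked(upload_rate_from_peer: dict, slots: int = 4) -> list:
    """Unchoke the peers feeding us fastest, plus one optimistic pick."""
    by_rate = sorted(upload_rate_from_peer,
                     key=upload_rate_from_peer.get, reverse=True)
    unchoked = by_rate[:slots - 1]
    remaining = by_rate[slots - 1:]
    if remaining:  # the optimistic unchoke lets new peers prove themselves
        unchoked.append(random.choice(remaining))
    return unchoked

print(choose_unchoked({"A": 90, "B": 10, "C": 55, "D": 70, "E": 5}))
```

A data-center seed uploading on a customer's behalf breaks that feedback loop: the swarm rewards the server's address, not the customer's, which is exactly the accounting problem Mark raises.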
The vast majority of our last-mile connections are fixed wireless. The design of the system is essentially half-duplex with an adjustable ratio between download/upload traffic.
This in a nutshell is the problem: the ratio between upload and download should be 1:1, and if it were, there would be no problems. Folks need to stop pretending they aren't part of the Internet. Setting a ratio where upload:download is not 1:1 makes you a leech. It's a cheat designed to allow technology companies to claim their devices provide more bandwidth than they actually do. Bandwidth is two-way; you should give as much as you get.

Making the last mile an 18x unbalanced pipe (i.e. 6Mb down and 384K up) is what has created this problem -- not file sharing, not running backups, not any of the things that require up speed. For the entire Internet, up speed must equal down speed or it can't work. You can't leech and expect everyone else to pay for your unbalanced approach.

Geo.
Geo. wrote:
You're back to the 'last mile access' problem. Most cable, DSL, and wireless is asymmetric, and for good reason: it makes efficient use of limited overall bandwidth and provides customers the high download speeds they demand. You can posit that the Internet should be symmetric, but it will take major financial and engineering investment to change that. Given that there is no incentive for network operators to assist 3rd-party CDNs by increasing upload speeds, I don't see this happening in the near future. I am not even remotely surprised that network operators would be interested in disrupting this traffic.

Mark
Geo:

That's an over-simplification. Some access technologies have different modulations for downstream and upstream, so an asymmetric split carries more total bits: if one split is a:b with a=b and another is c:d with c>d, then a+b < c+d.

In other words, you're denying the reality that people download 3 to 4 times more than they upload, and penalizing everyone in trying to attain a 1:1 ratio.

Frank
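Frank's inequality holds only because downstream spectrum can carry more bits per hertz than upstream; a toy calculation with invented modulation figures:

```python
# Invented spectral efficiencies, just to make the a+b < c+d point.
spectrum_mhz = 10.0
down_bits_per_hz, up_bits_per_hz = 6.0, 2.0  # e.g. 64-QAM down, QPSK up

# symmetric rates: split spectrum so down rate == up rate
#   x * 6 == (10 - x) * 2  ->  x = 2.5 MHz down, 7.5 MHz up
sym_total = 2.5 * down_bits_per_hz + (spectrum_mhz - 2.5) * up_bits_per_hz    # 30 Mb/s

# asymmetric split favoring the efficient downstream direction
asym_total = 8.0 * down_bits_per_hz + (spectrum_mhz - 8.0) * up_bits_per_hz   # 52 Mb/s
print(sym_total, asym_total)
```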
On Mon, 14 Jan 2008, Frank Bulk wrote:
In other words, you're denying the reality that people download 3 to 4 times more than they upload, and penalizing everyone in trying to attain a 1:1 ratio.
That might be your reality. My reality is that people with 8/1 ADSL download twice as much as they upload, people with 10/10 upload twice as much as they download. -- Mikael Abrahamsson email: swmike@swm.pp.se
Interesting, because we have a whole college attached, with 10/100/1000 users, and they still have a 3:1 ratio of downloading to uploading. Of course, that might be because the school is rate-limiting P2P traffic. That further confirms that P2P, generally illegal in content, is the source of what I would call disproportionate ratios.

Frank
On Mon, 14 Jan 2008, Frank Bulk wrote:
Interesting, because we have a whole college attached, with 10/100/1000 users, and they still have a 3:1 ratio of downloading to uploading. Of course, that might be because the school is rate-limiting P2P traffic. That further confirms that P2P, generally illegal in content, is the source of what I would call disproportionate ratios.
You're not delivering "Full Internet IP connectivity", you're delivering some degraded pseudo-Internet connectivity. If you take away one of the major reasons for people to upload (ie P2P) then of course they'll use less upstream bw. And what you call disproportionate ratio is just an idea of "users should be consumers" and "we want to make money at both ends by selling download capacity to users and upload capacity to webhosting" instead of the Internet idea that you're fully part of the internet as soon as you're connected to it. -- Mikael Abrahamsson email: swmike@swm.pp.se
We're delivering full IP connectivity; it's the school that's deciding to rate-limit based on application type.

Frank
Mikael Abrahamsson wrote:
On Mon, 14 Jan 2008, Frank Bulk wrote:
In other words, you're denying the reality that people download 3 to 4 times more than they upload, and penalizing everyone in trying to attain a 1:1 ratio.
That might be your reality.
My reality is that people with 8/1 ADSL download twice as much as they upload, people with 10/10 upload twice as much as they download.
I'm a photographer. When I shoot a large event and have hundreds or thousands of photos to upload to the fulfillment servers, to the event websites, etc., it can take 12 hours or more over my slow ADSL uplink. When my contract is up, I'll be changing to a different service with symmetrical service and faster upload speeds. The faster-upload service costs more -- ISPs charge more for 2 reasons: 1) because they can (because the market will bear it) and 2) because the average customer who buys this service uses more bandwidth. Do you really find it surprising that people who upload a lot of data are the ones who would pay extra for the service plan that includes a faster upload speed? Why "penalize" the customers who pay extra?

I predicted this billing and usage problem back in the early days of DSL. Just as no webhost can afford to give customers "unlimited usage" on their web servers, no ISP can afford to give customers "unlimited usage" on their access plans. You hope that you don't get too many of the users who use your "unlimited" service -- but you are afraid to change your service plans to a realistic plan that actually meets customer needs. You are terrified of dropping that term "unlimited" and having your competitors use this against you in advertising. So you try to "limit" the "unlimited" service without having to drop the term "unlimited" from your service plans.

Some features of an ideal Internet access service plan for home users include (a sketch of items 1-3 follows below):

1) Reasonable bandwidth usage allotment per month
2) Proactive monitoring and notification from the ISP if the daily usage indicates they will exceed the plan's monthly bandwidth limit
3) A grace period, so the customer can change user behavior or change plans before being hit with an unexpected bill for "excess use"
4) Spam filtering that Just Works
5) Botnet detection and proactive notifications when botnet activity is detected from end-user computers. Help them keep their computer running without viruses and botnets and they will love you forever!

If you add the value-adds (#4 and #5), customers will gladly accept reasonable bandwidth caps as *part* of the total *service* package you provide. If all you want is to provide a pipe, no service, and whine about those who use "too much" of the "unlimited" service you sell, well, then you create an adversarial relationship with your customers (starting with your lie about "unlimited"), and it's not surprising that you have problems.

jc
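A minimal sketch of items 1-3 in JC's list above, with invented thresholds: project month-end usage from the daily run rate and warn the customer before the cap is hit.

```python
# Illustrative cap monitor for items 1-3 above; cap and figures are invented.
def projected_month_gb(used_gb: float, day: int, days_in_month: int = 30) -> float:
    """Project month-end usage from the average daily run rate so far."""
    return used_gb / day * days_in_month

CAP_GB = 250.0
used_gb, today = 120.0, 12
projection = projected_month_gb(used_gb, today)
if projection > CAP_GB:
    print(f"Heads-up: on pace for {projection:.0f} GB against a {CAP_GB:.0f} GB cap")
```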
Geo:
That's an over-simplification. Some access technologies have different modulations for downstream and upstream, so an asymmetric split carries more total bits: if one split is a:b with a=b and another is c:d with c>d, then a+b < c+d.
In other words, you're denying the reality that people download 3 to 4 times more than they upload, and penalizing everyone in trying to attain a 1:1 ratio.
So, is that actually true as a constant, or might there be some cause->effect mixed in there? For example, I know I'm not transferring any more than I absolutely must if I'm connected via GPRS radio. Drawing any sort of conclusions about my normal Internet usage from my GPRS stats would be ... skewed ... at best. Trying to use that "reality" as proof would yield you an exceedingly misleading picture.

During those early years of the retail Internet scene, it was fairly easy for users to migrate to usage patterns where they were mostly downloading content; uploading content on a 14.4K modem would have been unreasonable. There was a natural tendency towards eyeball networks and content networks. However, these days, more people have "always on" Internet access, and may be interested in downloading larger things, such as services that might eventually allow users to download a DVD and burn it.

http://www.engadget.com/2007/09/21/dvd-group-approves-restrictive-download-t...

This means that they're leaving their PC on, and maybe they even have other gizmos or gadgets besides a PC that are Internet-aware. To remain doggedly fixated on the concept that an end-user is going to download more than they upload ... well, sure, it's nice, and makes certain things easier, but it doesn't necessarily meet up with some of the realities. Verizon recently began offering a 20M symmetrical FiOS product. There must be some people who feel differently.

So, do the "modulations" of your "access technologies" dictate what your users are going to want to do with their Internet in the future, or is it possible that you'll have to change things to accommodate different realities?

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
You're right, I shouldn't let the access technologies define the services I offer, but I have to deal with the equipment I have today. Although that equipment doesn't easily support a 1:1 product offering, I can tell you that all the decisions we're making in regard to upgrades and replacements are moving toward that goal. In the meantime, it is what it is and we need to deal with it.

Frank
I would be much happier creating a torrent server at the data center level that customers could seed/upload from rather than doing it over the last mile. I don't see this working from a legal standpoint though.
From a technical point of view, if your BitTorrent protocol seeder does not have a copy of the file on its hard drive, but pulls it in from the customer's computer, you would only be caching the file in RAM, and there is some legal precedent going back into the pre-Internet era that exempts such copies from legislation.

Seriously, I would discuss this with some lawyers who have experience in the Internet area before coming to a conclusion on this. The law is as complex as the Internet itself. In particular, there is a technical reason for setting up such torrent seeding servers in a data center, and that technical reason is not that different from setting up a web-caching server (either in or out) in a data center. Or setting up a web server for customers in your data center. As long as you process takedown notices for illegal torrents in the same way that you process takedown notices for illegal web content, you may be able to make this work.

Go to Google and read a half-dozen articles about "sideloading" to compare it to what you want to do. In fact, sideload.com may have done some of the initial legal legwork for you. It's worth discussing this with a lawyer to find out the limits in which you can work and still be legal.

--Michael Dillon
participants (15)

- Andy Davidson
- Deepak Jain
- Frank Bulk
- Geo.
- Greg VILLAIN
- Jared Mauch
- JC Dill
- Joe Greco
- Joe Provo
- Mark Radabaugh
- Matt Landers
- michael.dillon@bt.com
- Mikael Abrahamsson
- Valdis.Kletnieks@vt.edu
- William Herrin