Did Internet Founders Actually Anticipate Paid, Prioritized Traffic?
Is it remotely relevant what the founders anticipated? I doubt they anticipated Amazon, eBay, and Google either. On 9/13/10, Hank Nussbacher <hank@efes.iucc.ac.il> wrote:
http://www.wired.com/epicenter/2010/09/paid-prioritized-traffic
-Hank
--
Sent from my mobile device

William McCall, CCIE #25044
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.

Sent from my iPhone 4.

On Sep 13, 2010, at 3:22 AM, Hank Nussbacher <hank@efes.iucc.ac.il> wrote:
http://www.wired.com/epicenter/2010/09/paid-prioritized-traffic
-Hank
On Mon, 13 Sep 2010 09:28:09 -0400, Rodrick Brown <rodrick.brown@gmail.com> wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Because of net neutrality?
Why not? We (collectively) already pay for peering, either directly or indirectly through restrictive peering policies.

Jeff

On Mon, Sep 13, 2010 at 6:10 PM, Julien Gormotte <julien@gormotte.info> wrote:
On Mon, 13 Sep 2010 09:28:09 -0400, Rodrick Brown <rodrick.brown@gmail.com> wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Because of net neutrality?
--
Jeffrey Lyon, Leadership Team
jeffrey.lyon@blacklotus.net | http://www.blacklotus.net
Black Lotus Communications - AS32421
First and Leading in DDoS Protection Solutions
On Mon, Sep 13, 2010 at 01:40:10PM +0000, Julien Gormotte wrote:
On Mon, 13 Sep 2010 09:28:09 -0400, Rodrick Brown <rodrick.brown@gmail.com> wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Because of net neutrality?
Much like Santa Claus or the Tooth Fairy, that's one of the many "reassurance" myths that one eventually abandons in the course of maturing.

[cue endless thread of knee-jerk responses; can we just Godwin it now please?]

--
RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
On Sep 13, 2010, at 8:50 AM, Joe Provo wrote:
On Mon, Sep 13, 2010 at 01:40:10PM +0000, Julien Gormotte wrote:
On Mon, 13 Sep 2010 09:28:09 -0400, Rodrick Brown <rodrick.brown@gmail.com> wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Because of net neutrality?
Much like Santa Claus or the Tooth Fairy, that's one of the many "reassurance" myths that one eventually abandons in the course of maturing.
Well, speaking as a web-based service provider, I oppose it because I quite simply can't afford to operate otherwise. If we start having to pay for access based on geographical area or customer type or some other arbitrary classification, we won't be able to stay open. We have enough problems as it is; I don't need my ISP telling me I have to pay for transit on a per-destination basis. If this becomes reality, we are going out of business, and I bet we aren't alone in this.
In a message written on Mon, Sep 13, 2010 at 09:50:21AM -0400, Joe Provo wrote:
[cue endless thread of knee-jerk responses; can we just Godwin it now please?]
Of course Hitler was the first to propose pay-to-play internet traffic. :)

Consumers are more in need of regulatory protection than business customers. At $19.95 a month they are viewed as expendable by many of the companies that offer consumer services, and are often served by a monopoly or duopoly, often at the encouragement of government. They can't vote with their dollars, as we like to say, and need some protection.

However, the proposed "remedies" of banning all filtering ever, or requiring free peering to everyone (taking both to the extreme, of course) don't match the operational real world. Many of those who are pushing for network neutrality are pushing for an ideal that the network simply cannot deliver, no matter what.

Rather than network neutrality, I'd simply like to see truth in advertising applied. If my provider advertises "8 Mbps" service then I should be able to get 8 Mbps from Google, or Yahoo, or you, or anyone else on the network, provided of course they have also purchased an 8 Mbps or higher plan from their provider. I don't care if it is done with transit, peering, paid prioritization, or any other mechanism; those are back-end details that will change over time. I don't care if it is Google building their own network, or you buying 8 Mbps service from your local monopoly ISP.

--
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
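Bicknell's truth-in-advertising proposal boils down to a measurable check: did the subscriber actually achieve something close to the advertised rate? Here is a minimal sketch of that check in Python; the 90% tolerance and the sample transfer numbers are illustrative assumptions, not part of his proposal.

```python
def achieved_mbps(payload_bytes: int, seconds: float) -> float:
    """Convert a timed transfer into megabits per second."""
    return payload_bytes * 8 / seconds / 1_000_000


def meets_advertised(rate_mbps: float, advertised_mbps: float,
                     tolerance: float = 0.9) -> bool:
    """True if the measured rate is within an assumed 90% tolerance
    of the advertised plan speed."""
    return rate_mbps >= advertised_mbps * tolerance


# e.g. 10 MB transferred in 10 seconds on an "8 Mbps" plan:
rate = achieved_mbps(10_000_000, 10.0)   # 8.0 Mbps
print(meets_advertised(rate, 8.0))       # True
```

In practice the interesting part is the measurement methodology (where the test servers sit, how many samples, at what times of day), which is exactly what an industry specification would have to pin down.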
-----Original Message-----
From: Leo Bicknell [mailto:bicknell@ufp.org]
Sent: Monday, September 13, 2010 9:32 AM
To: nanog@nanog.org
Subject: Re: Did Internet Founders Actually Anticipate Paid, Prioritized Traffic?
In a message written on Mon, Sep 13, 2010 at 09:50:21AM -0400, Joe Provo wrote:
[cue endless thread of knee-jerk responses; can we just Godwin it now please?]
Of course Hitler was the first to propose pay-to-play internet traffic. :)
Well done. :)
Consumers are more in need of regulatory protection than business customers, at $19.95 a month they are viewed as expendable by many of the companies that offer consumer services, and are often served by a monopoly or duopoly, often at the encouragement of government. They can't vote with their dollars as we like to say, and need some protection.
OK... so doesn't this speak to the commoditization of service providers? I'm against more regulation and for competition.
However, the proposed "remedies" of banning all filtering ever, or requiring free peering to everyone (taking both to the extreme, of course) don't match the operational real world. Many of those who are pushing for network neutrality are pushing for an ideal that the network simply cannot deliver, no matter what.
Agreed. The bulk of the "Net Neutrality" crowd lives in a dream world. Most (maybe some, maybe a few, depending on your view) approach filtering as a solution to a technical problem, not as a money-making proposition. I have always espoused it only as a fix to technical (security, abuse and the like) issues.
Rather than network neutrality, I'd simply like to see truth in advertising applied. If my provider advertises "8 Mbps" service then I should be able to get 8 Mbps from Google, or Yahoo, or you, or anyone else on the network, provided of course they have also purchased an 8 Mbps or higher plan from their provider. I don't care if it is done with transit, peering, paid prioritization, or any other mechanism; those are back-end details that will change over time. I don't care if it is Google building their own network, or you buying 8 Mbps service from your local monopoly ISP.
Explain how the provider of access is supposed to be able to control all of the systems outside its control to get a specific speed from a content provider. If you are espousing contracts with each content provider, then you will quickly be destroying the Internet. We advertise a rate and ensure we have no congestion on our Internet connection, to ensure that all demands for traffic are met on our side. I cannot ensure that site X will not be flooded or have other restrictions on its bandwidth that will prevent your full utilization of the bandwidth.

- Brian J.
In a message written on Mon, Sep 13, 2010 at 09:44:40AM -0500, Brian Johnson wrote:
OK... so doesn't this speak to the commoditization of service providers? I'm against more regulation and for competition.
Competition would be wonderful, but is simply not practical in many cases. Most people and companies don't want to hear this, but from a consumer perspective the Internet is a utility, and very closely resembles water/sewer/electric/gas service. That is, having 20 people run fiber past your home when you're only going to buy from one of them makes no economic sense. Indeed, we probably wouldn't have both cable and DSL service if those wires weren't already in the home for other reasons.
Explain how the provider of access is supposed to be able to control all of the systems outside its control to get a specific speed from a content provider. If you are espousing contracts with each content provider, then you will quickly be destroying the Internet.
That's not exactly what I am proposing; rather, I'm proposing we (the industry) develop a set of technical specifications and testing where we can generally demonstrate this to be the case. Of course, things may happen at any time; this isn't about individual machines or flash mobs.
On Mon, 13 Sep 2010 08:06:03 -0700 Leo Bicknell <bicknell@ufp.org> wrote:
In a message written on Mon, Sep 13, 2010 at 09:44:40AM -0500, Brian Johnson wrote:
OK... so doesn't this speak to the commoditization of service providers? I'm against more regulation and for competition.
Competition would be wonderful, but is simply not practical in many cases. Most people and companies don't want to hear this, but from a consumer perspective the Internet is a utility, and very closely resembles water/sewer/electric/gas service. That is, having 20 people run fiber past your home when you're only going to buy from one of them makes no economic sense. Indeed, we probably wouldn't have both cable and DSL service if those wires weren't already in the home for other reasons.
Explain how the provider of access is supposed to be able to control all of the systems outside its control to get a specific speed from a content provider. If you are espousing contracts with each content provider, then you will quickly be destroying the Internet.
That's not exactly what I am proposing; rather, I'm proposing we (the industry) develop a set of technical specifications and testing where we can generally demonstrate this to be the case. Of course, things may happen at any time; this isn't about individual machines or flash mobs.
That's why there isn't much value in them. You can't predict when these sorts of events are going to happen, so why would you want to make any sort of illusory statement about assurances of service? The Internet is a best-effort, not perfect-effort, network. It does its best with what is available at the time.

There seems to be a fair bit of confusion between access rate and committed rate; some customers think access rate is committed rate. As mentioned earlier, because an ISP doesn't control the Internet, they can't make any committed-rate assurances. What they can control is the access rate, and they can try to ensure that the access rate, which is dependent on what the customer is paying, marginally exceeds the common rate they can deliver to the customer, so that most of the time the customer sees the value in the bandwidth they've purchased. If there is too big a gap (i.e., the customer never sees their link fully utilised, rather than occasionally and hopefully quite often), they'll feel they've been sold something that isn't being delivered.
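The access-rate/committed-rate split described above is typically enforced with token buckets: the access-rate bucket caps the line, while a smaller committed-rate bucket bounds what the ISP actually assures. A toy illustration in Python (the class and the numbers are hypothetical; real policers run per packet in the forwarding path):

```python
class TokenBucket:
    """Toy policer: tokens refill at rate_bps and cap at burst_bits."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps
        self.burst = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now: float, size_bits: float) -> bool:
        # Refill for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            return True
        return False


# A 1 Mbps committed-rate bucket: a 100-kbit burst passes immediately,
# but a second burst 50 ms later exceeds the committed rate and would be
# treated as best effort rather than assured traffic.
committed = TokenBucket(rate_bps=1_000_000, burst_bits=100_000)
print(committed.allow(0.00, 100_000))  # True  (within the burst)
print(committed.allow(0.05, 100_000))  # False (exceeds committed rate)
```

An access-rate bucket works the same way with a larger rate; the customer-visible "speed" is the access bucket, while only the committed bucket is anything like an assurance.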
* Leo Bicknell:
Rather than network neutrality, I'd simply like to see truth in advertising applied. If my provider advertises "8 Mbps" service then I should be able to get 8 Mbps from Google, or Yahoo, or you, or anyone else on the network, provided of course they have also purchased an 8 Mbps or higher plan from their provider.
The interesting question is not so much bandwidth, but whether the traffic counts against your monthly usage cap. For IPTV offered by your ISP, it won't. Likewise for Internet video platforms where your ISP has obtained an ad-sharing deal.

--
Florian Weimer <fweimer@bfk.de>
BFK edv-consulting GmbH       http://www.bfk.de/
Kriegsstraße 100              tel: +49-721-96201-1
D-76133 Karlsruhe             fax: +49-721-96201-99
I was thinking more along the lines of the fact that I pay for access at home, my employer pays for access here at work, and Google, Apple, etc. pay for access (unless they've moved into the DFZ, which only happens when it's beneficial for all players that you're there). Why should we pay extra for what we're already supposed to be getting? If the ISPs can't deliver what we're already paying for, they're broken.

Jamie

-----Original Message-----
From: Julien Gormotte [mailto:julien@gormotte.info]
Sent: Monday, September 13, 2010 9:40 AM
To: Rodrick Brown
Cc: nanog@nanog.org
Subject: Re: Did Internet Founders Actually Anticipate Paid, Prioritized Traffic?

On Mon, 13 Sep 2010 09:28:09 -0400, Rodrick Brown <rodrick.brown@gmail.com> wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Because of net neutrality?
On 9/13/2010 9:15 AM, Jamie Bowden wrote:
I was thinking more along the lines of the fact that I pay for access at home, my employer pays for access here at work, and Google, Apple, etc. pay for access (unless they've moved into the DFZ, which only happens when it's beneficial for all players that you're there). Why should we pay extra for what we're already supposed to be getting? If the ISPs can't deliver what we're already paying for, they're broken.
It gets more confusing. See media licensing such as ESPN3, which is provider-based. Unfortunately, they treat it the same as they do the cable channel, so if you don't run video services, it puts you in a really bad position. It also doesn't scale. Sure, with just ESPN3, we might be able to do some billing stuffers, but what about the next 50 video streaming sites that decide they want to do provider-based licensing? How many stuffers can you put in with a bill?

Jack
On Mon, Sep 13, 2010 at 10:15:02AM -0400, Jamie Bowden wrote:
I was thinking more along the lines of the fact that I pay for access at home, my employer pays for access here at work, and Google, Apple, etc. pay for access (unless they've moved into the DFZ, which only happens when it's beneficial for all players that you're there).
Moving into the DFZ is different from not paying for access. Many enterprises and providers take full BGP routes and have no default, but they're still paying for connectivity.
Why should we pay extra for what we're already supposed to be getting? If the ISPs can't deliver what we're already paying for, they're broken.
The little secret (for some values of secret) that no one in this thread is talking about is that consumer Internet access is a low-margin, cutthroat business. Consumers demand ever-increasing amounts of bandwidth and don't want to pay more for it. Providers figure out a way to deliver or lose the business to another provider who figures out a way. Of course they're going to try to monetize the other end, so they can charge the customer less and keep his business, and of course they're going to do things that the purists object to and that are harmful, because most of the customers won't care and they'll like the low price.

It's the same reason we have NAT boxes in everyone's homes. It saves money, and consumers are heavily cost driven, and they don't know or don't care what they are losing when they buy purely, or almost purely, on price.

There's no NAT in my house, and I'll switch to commercial-grade Internet service (and pay the appropriate price) if residential service drops to an unacceptable level of quality for me. (Right now, I can opt out of their attempts to monetize the other end -- for example, I run my own DNS server rather than use my provider's that redirects typos somewhere that gets them money.) But my costs -- for more than one IP address, for a real router rather than a consumer-grade toy -- are considerably higher than what most people are willing to pay.

Companies of any significant size probably aren't going to fall prey to net-non-neutrality ... but they're going to pay business prices for Internet, and that's going to cover the costs of providing the service and a reasonable profit. If that's what you want at home, then pay that price and you can get that. But most people at home will choose to pay less for their service and let their provider monetize both ends of the connection.

To be clear, I'm not staking out a philosophical position here.
I'm a purist -- see above, I don't NAT and I'll pay for a better connection if my consumer connections become insufficiently neutral -- but most people won't and there is and will be a real market in providing cheap, less pure, bandwidth. -- Brett
On Mon, Sep 13, 2010 at 09:28:09AM -0400, Rodrick Brown wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Because I pay my ISP for Internet access, not for access to Google and their pre-approved list of websites.
On 9/13/2010 9:28 AM, Rodrick Brown wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Who is paying for access to the Internet? I thought it was the end user / customer who was paying, and that this payment pays for the ISP to gain access to the rest of the net. I would hope that my monthly Internet charges would keep me from being a set of "eyeballs" to be "monetized" by my ISP. Otherwise, give me the service for free...

--Patrick
On Mon, 13 Sep 2010 09:28:09 EDT, Rodrick Brown said:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
Sure - I would have to pay $$/mo if I wanted satellite radio. But failure to do so doesn't interfere in the slightest with my ability to receive local free-air stations, or impact my neighbor's radio. If 15 of my neighbors pay extra each month to watch HBO or other premium content, I still get a reasonable level of performance watching MSNBC in the basic-cable package. That's the way it works for many other outlets now - you pay extra, you get extra, but if you don't, other people's choices don't affect you.

But it's *not* how it works for the Internet. Think about it for a moment - if the net is uncongested, then paying for priority doesn't make economic sense. If it *is* congested, then the only way to give priority to some traffic is to screw the non-paid traffic. That's the dirty little secret of QoS.

For the sake of argument, let's call TCP's current implementation of window management and congestion avoidance "the fairest and most equal we know how to build". I don't mind fighting for bandwidth with 30 (or whatever it is) neighbors on my cable feed on that sort of an equal basis. Yes, I recognize that I'm actually sharing resources upstream, so my "6M" pipe may get sluggish because I'm sharing with 15 people watching some live pay-per-view event. I'm OK with that.

What I'm *NOT* OK with is some media conglomerate literally coming along and buying 4M of that bandwidth (that *I* *already* *paid* *for*, remember?) out from under me, and using it for that pay-per-view event. It's the difference between how mad you get at the supermarket when the person in front of you has a full basket and was already in line when you got there, and a person with a full basket slipping the cashier a $20 to cut in line in front of your half-full basket.

Does that explain it better?
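The uncongested/congested asymmetry described above can be shown with a two-line model of strict-priority scheduling. The link size and demand figures are made-up numbers purely for illustration:

```python
def strict_priority(link_mbps: float, paid_mbps: float, other_mbps: float):
    """Serve paid traffic first; best effort gets whatever capacity is left."""
    paid_served = min(paid_mbps, link_mbps)
    other_served = min(other_mbps, link_mbps - paid_served)
    return paid_served, other_served


# Uncongested 10 Mbps link: priority changes nothing, everyone is served.
print(strict_priority(10, 2, 3))  # (2, 3)

# Congested link: the paid 4 Mbps stream is served in full, and the only
# place that bandwidth can come from is the unpaid traffic's share.
print(strict_priority(10, 4, 8))  # (4, 6)
```

When demand fits the link, the priority flag is economically worthless; when it doesn't, priority is exactly a transfer of capacity from the non-paying flows.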
On 09/13/2010 06:28 AM, Rodrick Brown wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
This genie has long since escaped the bottle, hasn't it? I remember the VoIP wars of the late '90s, where there was lots and lots and lots of hand-wringing about QoS... but how much penetration does RSVP have, say? Approximately zero? And that's because, in reality, voice is a tiny fraction of net traffic, and all of the visions of RSVP and AAL2 and TCRTP and all of the rest of the crazy things have basically come to naught. Does Skype care about QoS? It doesn't even care about RTP.

So the new bête noire is video, and it's easier to be seduced because the traffic volumes are potentially horrific. But I'll place my money on the bet that by the time any scheme to wring money out of that volume could be implemented, the pipes transporting it will be asking what all the hand-wringing is about. Just like voice. The human and technological complications of grafting QoS/settlement onto the net are huge in comparison to stuffing more bits into glass.

Mike, brute force and ignorance always wins
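Hosts can already ask for priority; part of why RSVP-style schemes went nowhere is that nothing obliges the networks in the path to honor the request. As a small illustration (assuming a Linux host), a Python socket can set a DSCP mark (EF, the conventional voice class) on outgoing packets; whether any router gives those packets priority is entirely up to the operators along the way:

```python
import socket

EF_DSCP = 46              # expedited forwarding, the usual voice marking
tos = EF_DSCP << 2        # DSCP sits in the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Works on Linux; other platforms may ignore or reject IP_TOS.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Datagrams sent on `sock` now carry DSCP 46, but the mark is only a
# request: each network in the path decides whether to act on it.
sock.close()
```

That gap between marking and honoring is the whole settlement problem in miniature: the bits to signal priority have existed for decades, and the inter-provider agreements to pay for it mostly haven't.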
On Mon, Sep 13, 2010 at 3:22 AM, Hank Nussbacher <hank@efes.iucc.ac.il> wrote:
http://www.wired.com/epicenter/2010/09/paid-prioritized-traffic
No, the founders anticipated source-declared priorities for unpaid military and government traffic. Commercial Internet really wasn't on their radar. On Mon, Sep 13, 2010 at 9:28 AM, Rodrick Brown <rodrick.brown@gmail.com> wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
It's a question of double-billing. I've already paid you to send and receive packets on my behalf. Detuning my packets because a second party hasn't also paid you is cheating, maybe fraudulent.

It'd be like the post office treating first class mail like bulk mail unless the recipient pays a first class mailbox fee in addition to the sender paying for first class delivery.

On Mon, Sep 13, 2010 at 10:31 AM, Leo Bicknell <bicknell@ufp.org> wrote:
However, the proposed "remedies" of banning all filtering ever, or requiring free peering to everyone (taking both to the extreme, of course) don't match the operational real world. Many of those who are pushing for network neutrality are pushing for an ideal that the network simply cannot deliver, no matter what.
The network could deliver "cost-reimbursable" peering, in which any service provider above a particular size is by regulation compelled to provide peering at the cost of the basic connection in at least one location in each state in which they operate Internet infrastructure. As a matter of simple fairness, someone else has already paid them to move the packets. Why should you have to pay them more than the cost of the port? A small number of transit-frees would resent it, but it would damage them only in that it levels the playing field for small businesses, enhancing the small businesses' capabilities without enhancing their own.
Rather than network neutrality, I'd simply like to see truth in advertising applied.
Now you're talking about something that truly can't happen. You can't sell a service that, on paper, delivers less than the other guy's. Advertising is a constant race to the bottom because that's the behavior consumers reward.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
-----Original Message-----
From: William Herrin [mailto:bill@herrin.us]
Sent: Monday, September 13, 2010 11:05 AM
To: Hank Nussbacher
Cc: nanog@nanog.org
Subject: Re: Did Internet Founders Actually Anticipate Paid, Prioritized Traffic?
<SNIP>
On Mon, Sep 13, 2010 at 9:28 AM, Rodrick Brown <rodrick.brown@gmail.com> wrote:
It's unrealistic to believe payment for priority access isn't going to happen; this model is used for many other outlets today. I'm not sure why so many are against it when it comes to net access.
It's a question of double-billing. I've already paid you to send and receive packets on my behalf. Detuning my packets because a second party hasn't also paid you is cheating, maybe fraudulent.
It'd be like the post office treating first class mail like bulk mail unless the recipient pays a first class mailbox fee in addition to the sender paying for first class delivery.
This is a pretty clunky analogy. First class mail is treated differently (better?) than bulk mail in the USPS. There is no double payment for this service.
On Mon, Sep 13, 2010 at 10:31 AM, Leo Bicknell <bicknell@ufp.org> wrote:
However, the proposed "remedies" of banning all filtering ever, or requiring free peering to everyone (taking both to the extreme, of course) don't match the operational real world. Many of those who are pushing for network neutrality are pushing for an ideal that the network simply cannot deliver, no matter what.
The network could deliver "cost-reimbursable" peering, in which any service provider above a particular size is by regulation compelled to provide peering at the cost of the basic connection in at least one location in each state in which they operate Internet infrastructure. As a matter of simple fairness, someone else has already paid them to move the packets. Why should you have to pay them more than the cost of the port?
So for clarity... who pays for the peering?
A small number of transit-frees would resent it, but it would damage them only in that it levels the playing field for small businesses, enhancing the small businesses' capabilities without enhancing their own.
HUH? Inanimate objects (transit-frees) do not have the ability to resent. Providers being forced to do something do not resent it (unless they are personally invested), but they do have to recover their costs, and as such would have to raise rates given nothing else changes.
Rather than network neutrality, I'd simply like to see truth in advertising applied.
Now you're talking about something that truly can't happen. You can't sell a service that, on paper, delivers less than the other guy's. Advertising is a constant race to the bottom because that's the behavior consumers reward.
I'm with you here. Keep in mind that it is the CONSUMER'S RESPONSIBILITY to understand what they are buying. I've seen tons of people buy something, not understanding it, and then, realizing their mistake, blame the supplier. I have also seen providers blatantly use wordsmithing (is that a word?) to "trick" people into buying their snake oil, and then hold people to contracts entered under suspicious circumstances. It's all so frustrating.

- Brian J.
On Mon, Sep 13, 2010 at 1:01 PM, Brian Johnson <bjohnson@drtel.com> wrote:
The network could deliver "cost-reimbursable" peering, in which any service provider above a particular size is by regulation compelled to provide peering at the cost of the basic connection in at least one location in each state in which they operate Internet infrastructure. As a matter of simple fairness, someone else has already paid them to move the packets. Why should you have to pay them more than the cost of the port?
So for clarity... who pays for the peering?
Hi Brian,

Whichever party forces the other to accept peering under the regs.

Of course, that's not what would happen. People being people, what would happen is that having been forced that close to balance, most of the companies would go ahead and offer settlement-free peering to whoever showed up at locations where they peer with anyone else. Ethernet ports are relatively cheap, even on big iron, and their "generosity" positions them at the next regulatory challenge to say, "See, fairness doesn't require us to unbundle our fiber services because we already have open third-party access here." And unlike open peering, unbundling really is expensive and difficult.
A small number of transit-frees would resent it, but it would damage them only in that it levels the playing field for small businesses, enhancing the small businesses' capabilities without enhancing their own.
HUH? Inanimate objects (transit-frees) do not have the ability to resent.
Providers being forced to do something do not resent it (unless they are personally Invested), but they do have to recover their costs and as such would have to raise rates given nothing else changes.
By stating "resent," I suppose I'm personifying a process in which a large company warns of dire consequences for the consumer should it be forced to accept reasonable regulation, after which the consequences either don't materialize at all or show up in some other way significantly less destructive than the obstructed behavior.

Regards,
Bill Herrin
On 9/13/2010 12:05 PM, William Herrin wrote:
It's a question of double-billing. I've already paid you to send and receive packets on my behalf. Detuning my packets because a second party hasn't also paid you is cheating, maybe fraudulent.
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis? It would look something like my Road Runner PowerBoost(tm) service, only it never cuts off when the consumer is accessing a particular content provider's service. That would allow Netflix/Hulu/OnLive/whoever to offer me a streaming service that requires a 15 Mbps connection even though I'm not willing to upgrade my 10 down/1 up ISP connection to get it.

-- Dave
On Tue, Sep 14, 2010 at 11:47 AM, Dave Sparro <dsparro@gmail.com> wrote:
On 9/13/2010 12:05 PM, William Herrin wrote:
It's a question of double-billing. I've already paid you to send and receive packets on my behalf. Detuning my packets because a second party hasn't also paid you is cheating, maybe fraudulent.
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis?
It would look something like my Road Runner PowerBoost(tm) service, only it never cuts off when the consumer is accessing a particular content provider's service.
That would allow Netflix/Hulu/OnLive/whoever to offer me a streaming service that requires a 15 Mbps connection even though I'm not willing to upgrade my 10 down/1 up ISP connection to get it.
Hi Dave,

That depends. Can I trust you to handle packets in such a way that my 10/1 circuit consistently gets 10 megs to the hosts that haven't paid you for a boost?

That was a rhetorical question. The answer, of course, is "no," I can't trust you to do that. You don't always stay ahead of the upgrade curve now. You allow some internal and upstream links to reach capacity, where congestion control slows everybody down. We all do it. Some products would cost more than we can recover if we didn't. In order to sell Netflix the ability to dynamically upgrade my circuit, you'd also have to give their packets priority on the congested links, beating out packets to the systems that are the most important to me.

You *might* be able to convince me that it would be OK to let *me* pay for port bursting to designated sites. 10/1 plus the video package which unlimits the link to netflix/hulu/etc. But that's a different story: someone else isn't paying you to mess with my link, *I'm* paying you to mess with my link. Even then, I'd want to see you prohibited from prioritizing their packets anywhere except my immediate link to you. You want my extra money, I expect you to keep the total system capacity high enough to handle the total demand.

Regards,
Bill Herrin
On Tue, 14 Sep 2010 11:47:38 EDT, Dave Sparro said:
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis?
It would look something like my Road Runner PowerBoost(tm) service, only it never cuts off when the consumer is accessing a particular content provider's service.
That would allow Netflix/Hulu/OnLive/whoever to offer me a streaming service that requires a 15Mbps connection even though I'm not willing to upgrade my 10 down/1 up ISP connection to get it.
What happens to your neighbor's 10M while you're doing this?
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis?
Yes - and the reason is extremely simple. There are a lot of ISPs and a lot of plans. If I'm an entrepreneur looking to build Hulu from the ground up in a pre-Hulu world, am I really going to find EVERY ISP who supports this, and then raise the funding to pay them to allow their paying customers to get to my servers? The answer is no. Implementing this kind of system would stunt innovative new ideas that require a level playing field.

The more disturbing effect, though, is this: what if I'm a content provider that your ISP doesn't like? I'm out of luck because you won't take my money to deliver my content at the rate I need to your customers, even though you will take my competitors'?

I don't see how anyone wins. Innovators lose for want of being able to execute. Content providers lose due to having to manage, maintain, and pay out a fee structure that's almost as complex as the routing table. Customers lose as a result of inconsistent and unpredictable usability. ISPs lose as a function of customers losing confidence in their ability to provide service (to them, it "just doesn't work for everything").

Yes, I would object.

Nathan
On Sep 14, 2010, at 8:47 AM, Dave Sparro wrote:
On 9/13/2010 12:05 PM, William Herrin wrote:
It's a question of double-billing. I've already paid you to send and receive packets on my behalf. Detuning my packets because a second party hasn't also paid you is cheating, maybe fraudulent.
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis?
Yes... Because the reality is that it wouldn't be an upgrade. It would be a euphemism for downgrading the subscriber's experience with other content providers.
It would look something like my Road Runner PowerBoost(tm) service, only it never cuts off when the consumer is accessing a particular content provider's service.
Except that PowerBoost(tm) provides a burstable service where the capacity is already available and using it would not negatively impact other subscribers. This, on the other hand, would create an SLA requiring your ISP to either build out quite a bit of additional capacity (not so likely) or to negatively impact their other subscribers in order to deliver content to the subscriber using this enhanced service.
That would allow Netflix/Hulu/OnLive/whoever to offer me a streaming service that requires a 15Mbps connection even though I'm not willing to upgrade my 10 down/1 up ISP connection to get it.
There's little difference in my mind between this model and a model where service provider X is in bed with content provider Y (perhaps they share common ownership) and subscribers to provider X are given a dramatically better user experience to content Y than to other content of a similar nature. Owen
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis?
Yes... Because the reality is that it wouldn't be an upgrade. It would be a euphemism for downgrading the subscriber's experience with other content providers.
A lot of people hear the term "quality of service" and think that it refers to some mechanism that makes some packets go faster, like a JATO rocket pack made late-40's, early-50's airplanes go faster. But that is not how the mechanism works.

QoS mechanisms are based on making some packets go slower, either by delaying them or deleting them so that they have to be resent. This creates the illusion of speed for the remaining untouched packets if the QoS is successful in preventing congestion at network bottlenecks. QoS does not always prevent congestion; it just reduces the likelihood that congestion will occur.

If the delayed/deleted packets belong to the same organization as the so-called boosted packets, then this works OK, because that organization will have its own reasons for preferring that certain packets be delayed/deleted. The problem begins when the delayed/deleted packets belong to a different organization than the boosted ones. That is not net neutrality, even if the packets have different diffserv markings. It is even worse when the network operator selectively remarks packets from one organization to cause them to be delayed/deleted. In this second scenario both organizations inject packets into the network with the same IETF diffserv markings, but another network operator degrades the service for one organization.

--Michael Dillon
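Michael's point can be illustrated with a toy single-link scheduler. This is a hedged sketch, not any vendor's QoS implementation: the two class names and the one-unit service time are arbitrary, but under strict priority the "gold" class's shorter waits are paid for, packet for packet, by the best-effort class's longer ones.

```python
# Minimal sketch of the point above: a strict-priority scheduler on one
# link doesn't speed anything up, it reorders the queue, so "gold"
# packets jump ahead by making "best"-effort packets wait longer.
from collections import deque

def average_waits(arrivals, service_time=1.0):
    """arrivals: list of (arrival_time, klass), klass in {"gold", "best"}.
    Single server, strict priority: gold is always served before best.
    Returns the mean queueing wait per class."""
    pending = sorted(arrivals)
    gold, best = deque(), deque()
    waits = {"gold": [], "best": []}
    clock, i = 0.0, 0
    while i < len(pending) or gold or best:
        # Admit every packet that has arrived by now.
        while i < len(pending) and pending[i][0] <= clock:
            (gold if pending[i][1] == "gold" else best).append(pending[i][0])
            i += 1
        if not gold and not best:
            clock = pending[i][0]  # idle until the next arrival
            continue
        q = gold if gold else best  # strict priority
        arrived = q.popleft()
        waits["gold" if q is gold else "best"].append(clock - arrived)
        clock += service_time
    return {k: sum(v) / len(v) for k, v in waits.items() if v}

# A burst of 10 packets, half gold, half best-effort, all at t=0.
burst = [(0.0, "gold" if i % 2 == 0 else "best") for i in range(10)]
print(average_waits(burst))  # {'gold': 2.0, 'best': 7.0}
```

With no prioritization, every packet in that burst would average a 4.5-unit wait; with it, gold averages 2.0 and best-effort 7.0. Nothing got faster overall; the delay just moved.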
On 9/14/2010 1:08 PM, Owen DeLong wrote:
On Sep 14, 2010, at 8:47 AM, Dave Sparro wrote:
On 9/13/2010 12:05 PM, William Herrin wrote:
It's a question of double-billing. I've already paid you to send and receive packets on my behalf. Detuning my packets because a second party hasn't also paid you is cheating, maybe fraudulent.
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis?
Yes... Because the reality is that it wouldn't be an upgrade. It would be a euphemism for downgrading the subscriber's experience with other content providers.
So it's not fair for an ISP to limit a consumer's circuit to the speed they paid for, if there's excess capacity in the network? I.e., if the ISP has capacity to offer 15Mbps down, that's what they should provide to a customer that has paid for 10Mbps. Where's the cut-off?
It would look something like my Road Runner PowerBoost(tm) service, only it never cuts off when the consumer is accessing a particular content provider's service.
Except that PowerBoost(tm) provides a burstable service where the capacity is already available and using it would not negatively impact other subscribers. This, on the other hand, would create an SLA requiring your ISP to either build out quite a bit of additional capacity (not so likely) or to negatively impact their other subscribers in order to deliver content to the subscriber using this enhanced service.
I would think that the content provider's bag of cash is what would provide the incentive to add to capacity where needed.
That would allow Netflix/Hulu/OnLive/whoever to offer me a streaming service that requires a 15Mbps connection even though I'm not willing to upgrade my 10 down/1 up ISP connection to get it.
There's little difference in my mind between this model and a model where service provider X is in bed with content provider Y (perhaps they share common ownership) and subscribers to provider X are given a dramatically better user experience to content Y than to other content of a similar nature.
I just don't see a way to get past the current impasse. The consumers are saying, "I want faster, as long as I don't have to pay more." Content providers are saying, "If consumers had faster, I'd be able to invent 'Killer App'. I sure wish the ISPs would upgrade their networks." ISPs are saying, "Why should we upgrade our networks? Nobody is willing to pay us to do so."
Dave Sparro wrote:
I just don't see a way to get past the current impasse. The consumers are saying, "I want faster, as long as I don't have to pay more." Content providers are saying, "If consumers had faster, I'd be able to invent 'Killer App'. I sure wish the ISPs would upgrade their networks." ISPs are saying, "Why should we upgrade our networks? Nobody is willing to pay us to do so."
I predict a future where major content providers (MCPs) (such as Google, or Yahoo or MSN) offer customers access for free[1]. When you enroll in free MCP internet access, MCP content and apps will stream to you as fast as their network can possibly get them to you. What MCPs will do with content that comes from other networks is another question.

One reason traffic shaping hasn't caught on yet is that it hasn't been beneficial to the people selling access to slow down traffic (which is the only way to shape traffic). Their bean counters keep thinking there's a market behind this service, but each time they try to create a market, someone else simply says "here, I'll get *everything* to you as fast as possible[2]; why get your access from that other guy?". This was the Above.net model.

So, when one of the MCPs (e.g. Google) comes out with their own access product (which will happen sooner or later), will they find that there's a market to give consumers faster access to Google's content? Or will Yahoo or MSN or Facebook decide to give customers a product where they get everyone's content as fast as possible, thus negating the value of the free service from Google (or whoever) that is only as fast as possible for that company's content?

My bet is on the Above.net model: as soon as someone puts up a service with different speeds depending on where the content comes from, someone else will come out with a service that is everything, as fast as possible, and that second offering will win. The technology to stream everything as fast as possible will not be that much more expensive than the technology to provide different speeds for different sources, and customers will flock to the "everything as fast as possible" offerings.

jc

[1] And then once everyone is getting their consumer access for free, will they start paying consumers to sign up with their service? We are getting quite close to this in other areas, such as "fill out this form, get a free coupon"....
[2] Sonic.net is offering "as fast as we can get it to you" now (for one price, no more tiered service for tiered pricing) to home customers in the SF Bay Area.
The consumers are saying, "I want faster, as long as I don't have to pay more." Content providers are saying, "If consumers had faster, I'd be able to invent 'Killer App'. I sure wish the ISPs would upgrade their networks." ISPs are saying, "Why should we upgrade our networks? Nobody is willing to pay us to do so."
Find me an ISP that is asking why they should upgrade their network if no one is going to pay them to do so. From a business perspective, this is a ludicrous claim. The answer is simple: because your competitors are upgrading their networks RIGHT NOW, and your customers will use them instead if you make them wait too long. There's no deadlock. Content providers that truly have a next generation product that modern broadband isn't good enough for are stuck, like anyone else who invents something that existing infrastructure can't support. Inventing a bizarre service prioritization model doesn't solve the infrastructure problem.
My bet is on the above.net model - as soon as someone puts up a service with different speeds depending on where the content comes from, someone else will come out with a service that is everything, as fast as possible, and that second offering will win. The technology to stream everything as fast as possible will not be that much more expensive than the technology to provide different speeds for different sources, and the customer will flock to the "everything as fast as possible" offerings.
Bingo. Keep it simple, and you win. Make it complex, and you create vulnerabilities through which your market share will be removed. If capacity is an issue, then as they say in Starcraft - you must construct additional pylons. Nathan
On 9/14/2010 4:02 PM, Nathan Eisenberg wrote:
The consumers are saying, "I want faster, as long as I don't have to pay more." Content providers are saying, "If consumers had faster, I'd be able to invent 'Killer App'. I sure wish the ISPs would upgrade their networks." ISPs are saying, "Why should we upgrade our networks? Nobody is willing to pay us to do so."
Find me an ISP that is asking why they should upgrade their network if no one is going to pay them to do so. From a business perspective, this is a ludicrous claim. The answer is simple: because your competitors are upgrading their networks RIGHT NOW, and your customers will use them instead if you make them wait too long.
There's no deadlock. Content providers that truly have a next generation product that modern broadband isn't good enough for are stuck, like anyone else who invents something that existing infrastructure can't support. Inventing a bizarre service prioritization model doesn't solve the infrastructure problem.
I don't see much competition from here. What I am seeing is a bunch of ISPs sitting on their hands waiting for the Feds to unlock the USF for broadband, or some other form of manna from heaven.

--
Dave
On Sep 14, 2010, at 11:57 AM, Dave Sparro wrote:
On 9/14/2010 1:08 PM, Owen DeLong wrote:
On Sep 14, 2010, at 8:47 AM, Dave Sparro wrote:
On 9/13/2010 12:05 PM, William Herrin wrote:
It's a question of double-billing. I've already paid you to send and receive packets on my behalf. Detuning my packets because a second party hasn't also paid you is cheating, maybe fraudulent.
Would you object to an ISP model where a content provider could pay to get an ISP subscriber's package upgraded on a dynamic basis?
Yes... Because the reality is that it wouldn't be an upgrade. It would be a euphemism for downgrading the subscriber's experience with other content providers.
So it's not fair for an ISP to limit a consumer's circuit to the speed they paid for, if there's excess capacity in the network? I.e., if the ISP has capacity to offer 15Mbps down, that's what they should provide to a customer that has paid for 10Mbps. Where's the cut-off?
If they only downgraded things to the capacity I paid for, sure. However, that isn't what happens.
It would look something like my Road Runner PowerBoost(tm) service, only it never cuts off when the consumer is accessing a particular content provider's service.
Except that PowerBoost(tm) provides a burstable service where the capacity is already available and using it would not negatively impact other subscribers. This, on the other hand, would create an SLA requiring your ISP to either build out quite a bit of additional capacity (not so likely) or to negatively impact their other subscribers in order to deliver content to the subscriber using this enhanced service.
I would think that the content provider's bag of cash is what would provide the incentive to add to capacity where needed.
It hasn't worked that way in similar situations I have observed in the past. In my experience, they pocket the cash as a windfall and move on.
That would allow Netflix/Hulu/OnLive/whoever to offer me a streaming service that requires a 15Mbps connection even though I'm not willing to upgrade my 10 down/1 up ISP connection to get it.
There's little difference in my mind between this model and a model where service provider X is in bed with content provider Y (perhaps they share common ownership) and subscribers to provider X are given a dramatically better user experience to content Y than to other content of a similar nature.
I just don't see a way to get past the current impasse. The consumers are saying, "I want faster, as long as I don't have to pay more." Content providers are saying, "If consumers had faster, I'd be able to invent 'Killer App'. I sure wish the ISPs would upgrade their networks." ISPs are saying, "Why should we upgrade our networks? Nobody is willing to pay us to do so."
I'm actually happy with the speed I currently have. For $99/month I get about 30Mbps down and about 8Mbps up. That's adequate for my household needs. I haven't encountered a content provider that has content I want that requires more than that. Where I have trouble is AT&T, where I have paid them and they still haven't upgraded their wireless network. Owen
The article seems to jump around between 1973 and 1998 pretty easily. I guess for some, "10 years ago" will always be "the early internet". That said, the author says AT&T's argument hinges on the use of the word "pricing" in RFC 2475, which is dated December 1998. Founders? Besides, "pricing" is a term of art, like "cost". It could well have been intended to mean money, just like "a big pile" could be referring to money or it could be referring to horse leavings.

THAT SAID, I agree that the only problem is lack of competition. I don't care if someone implements network non-neutrality so long as there is a realistic opportunity for someone else to compete with a neutral network. Right now the net has oligopolized, largely through govt-granted monopolies.

*THAT SAID*, my suspicion is that the whole thing is a bluff and they (for some value of "they") can't implement network non-neutrality. It's some kind of big bluff to accomplish something else, probably just to sell FUD to large customers: ooh, we better get a link to XYZ, otherwise when this non-neutral thing flies we're gonna be out in the cold! Then they're gonna REALLY charge the big bucks to get on their net. Something like that; I could propose other motivations more in the regulatory realm these players live in.

--
-Barry Shein

The World | bzs@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD | Dial-Up: US, PR, Canada
Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
Oh and one more thing...

In the "early internet", let's call that prior to 1990, the hierarchy wasn't price etc, it was:

1. ARPA/ONR (and later NSF) research sites and actual network research
2. Faculty with funding from 1 at major university research sites
3. Faculty with funding from 1 at not-so-major universities
4. Faculty at 2 and 3 w/o actual research grants from 1
5. Students at 2 and 3 (tho less so at 3)
6. Everyone else who managed to sneak onto the net (DEC salesmen etc)

People worried a fair amount about bandwidth on a network with a 56kb backbone. And those thoughts tended to turn to those hierarchies.

I remember when word got out that some UK postal facility had demanded and gotten a set-up so they could sample email traffic on the ARPAnet (circa 1980?) to determine whether or not it was all truly research, or were people using this govt-funded research facility to chit-chat and thereby depriving them of postage. They basically wanted postage on email paid to them and were trying to make their case. Warnings went out; I used the ARPAnet via an acct at MIT at the time, so that must be where I saw the warnings about non-professional use of the ARPAnet.

Anyhow, that was the pecking order. The point being that there was a sense that there were "real" people (i.e., properly funded faculty) who needed to do "real" work, and some of them expressed concern from time to time that they needed priority. Remember that an early motivation for funding the net was so big fast computers could be accessed by researchers who weren't at the same facility where they were located, and that wasn't solved by cries for their own big fast computer.

And that certainly went on in practice. I was involved in writing a $100M proposal for Boston University for a super-computing facility around 1986, and a major requirement was describing how you would get remote researchers to it. It wasn't for BU, per se, it was to be housed at BU.
This was the competition that gave us ETF and the John von Neumann computing center and all that (that is, BU got nothing, which is probably about what they deserved, but it was my job to help w/ their proposal so I did). You also had to figure out where to put 50-100 tons of chilled water and where to get about 1.5MW of electric service, if I remember the numbers right: "a lot".

--
-Barry Shein

The World | bzs@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD | Dial-Up: US, PR, Canada
Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
On Mon, 13 Sep 2010, Barry Shein wrote:
Oh and one more thing...
In the "early internet", let's call that prior to 1990, the hierarchy wasn't price etc, it was:
1. ARPA/ONR (and later NSF) research sites and actual network research
2. Faculty with funding from 1 at major university research sites
3. Faculty with funding from 1 at not-so-major universities
4. Faculty at 2 and 3 w/o actual research grants from 1
5. Students at 2 and 3 (tho less so at 3)
6. Everyone else who managed to sneak onto the net (DEC salesmen etc)
People worried a fair amount about bandwidth on a network with a 56kb backbone. And those thoughts tended to turn to those hierarchies.
And don't forget the research & education network folks almost always charged commercial institutions a "premium" (sometimes called a "donation") to connect to the Internet in the early days. Even in the early 1990's during privatization, ANS charged differentiated pricing, with educational institutions being charged less and commercial institutions being charged more.

During the pre-1990's, I doubt any of the Internet "founders" were thinking of how to pay for networks other than asking for more grant money. ARPA and friends paid the bills, and asked for things like TOS/COS long before DiffServ, because the military likes to prioritize things for all sorts of reasons besides price.
In the "early internet", let's call that prior to 1990, the hierarchy wasn't price etc, it was:
During the pre-1990's, I doubt any of the Internet "founders" were thinking of how to pay for networks other than asking for more grant money. ARPA and friends paid the bills, and asked for things like TOS/COS long before DiffServ because the military likes to prioritize things for all sorts of reasons besides price.
And let's not forget that the article which came up with the title of this thread equates IETF with "Internet Founders" and is talking about the 1990s and the introduction of diffserv. --Michael Dillon
On Sep 14, 2010, at 1:37 AM, Michael Dillon wrote:
And let's not forget that the article which came up with the title of this thread equates IETF with "Internet Founders" and is talking about the 1990s and the introduction of diffserv.
If that's the case, the proceedings of ISOC's INET '98 should be of interest. The speakers were not working in the IETF, but they were very aware of IETF proceedings at the time. Basically, the IETF was formalizing a tool that had been in various products in various forms for a decade or more already, in response to specific requests from operators, and which the operators wanted to have in a generalized fashion from any vendor. These guys were commenting on the expected use of the tool by the operators. http://www.isoc.org/inet98/proceedings/3e/3e_2.htm
Since I am a dinosaur and remember what was going on then (one of many on this list, I am sure):

1) There was no clue that what we have today would develop.
2) General solutions to what were then abstract problems caused a lot of "open" things to be thrown around.

And what does this "appeal to the ancient wisdom" have to do with technology and business today anyway?

Bruce Williams
On September 14, 2010 at 00:49 williams.bruce@gmail.com (Bruce Williams) wrote:
And what does this "appeal to the ancient wisdom" have to do with technology and business today anyway?
The article claimed that AT&T is claiming (to the FCC I think it was) that net non-neutrality was an early design goal of the internet, so they should be allowed to do whatever it is they want to do. Well, of course it was, only big research sites got IMPs with real 56k connections. Little guys like Apple, e.g., had to live on X.25 links from CSNET. BU was hooked up for a while via a 9600bps "cypress" link (a Vax 11/725* later Sun3/50 imp-a-like, via a serial port.) And we won't even talk about who got /8s. AT&T got 2 if I remember right though that company had no relationship to this AT&T which is just a rename of SBC after they bought some AT&T assets which owned the original trademark which is kind of like the old "if my grandmother had wheels they'd call her a trolley car" but I digress. As Jimmy Carter said: Life isn't fair. But that doesn't necessarily implore one to make it *more* unfair. * Never heard of a Vax 11/725? Then you are truly blessed. -- -Barry Shein The World | bzs@TheWorld.com | http://www.TheWorld.com Purveyors to the Trade | Voice: 800-THE-WRLD | Dial-Up: US, PR, Canada Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
On Sep 14, 2010, at 9:30:32 PM, Barry Shein wrote:
On September 14, 2010 at 00:49 williams.bruce@gmail.com (Bruce Williams) wrote:
And what does this "appeal to the ancient wisdom" have to do with technology and business today anyway?
The article claimed that AT&T is claiming (to the FCC I think it was) that net non-neutrality was an early design goal of the internet, so they should be allowed to do whatever it is they want to do.
Well, of course it was, only big research sites got IMPs with real 56k connections. Little guys like Apple, e.g., had to live on X.25 links from CSNET. BU was hooked up for a while via a 9600bps "cypress" link (a Vax 11/725* later Sun3/50 imp-a-like, via a serial port.)
And we won't even talk about who got /8s. AT&T got 2 if I remember right though that company had no relationship to this AT&T which is just a rename of SBC after they bought some AT&T assets which owned the original trademark which is kind of like the old "if my grandmother had wheels they'd call her a trolley car" but I digress.
No, they bought AT&T, which had an ISP business, a long distance business, a private line business, and AT&T Labs, as well as other miscellaneous pieces like the brand name. We can wonder if AT&T would have survived as an independent company, but it was a going concern and not in bankruptcy at the time of the transaction. But yes, SBC is the controlling piece of the new AT&T.

As for the two /8s -- not quite. Back in the 1980s, AT&T got 12/8. We soon learned that we couldn't make good use of it, since multiple levels of subnetting didn't exist. We offered it back to Postel in exchange for 135/8 -- i.e., the equivalent in class B space -- but Postel said to keep 12/8 since no one else could use it, either. This was all long before addresses were tight. When AT&T decided to go into the ISP business, circa 1995, 12/8 was still lying around, unused except for a security experiment I was running.* However, a good chunk of 135/8 went to Lucent (now Alcatel-Lucent) in 1996, though I don't know how much.

--Steve Bellovin, http://www.cs.columbia.edu/~smb

*The early sequence number guessing attack tools required a dead host that would be impersonated by the attacker. By chance, one of the early tools used something in 12/8. I started announcing it from Murray Hill, to catch the back-scatter from the victims. We found some of that; we also found lots of folks who were using 12/8 themselves, probably internally.
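For readers who no longer think in classful terms, the "equivalent in class B space" arithmetic can be checked with Python's standard ipaddress module. A sketch, using the actual prefixes named in the message:

```python
# One class A (12/8) holds exactly as many addresses as all 256
# class-B-sized /16 blocks that make up 135/8.
import ipaddress

class_a = ipaddress.ip_network("12.0.0.0/8")
class_bs = list(ipaddress.ip_network("135.0.0.0/8").subnets(new_prefix=16))

print(class_a.num_addresses)   # 16777216 (2**24)
print(len(class_bs))           # 256
print(class_bs[0])             # 135.0.0.0/16
assert class_a.num_addresses == sum(b.num_addresses for b in class_bs)
```

The same arithmetic underlies the Hyperchannel anecdote later in the thread: classful code saw all 2**24 addresses of 12/8 as one flat network, whether it held three hosts or sixteen million.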
Sorry, fat-fingered something when I was trying to edit. On Fri, Sep 17, 2010 at 2:12 PM, Bill Stewart <nonobvious@gmail.com> wrote:
On Tue, Sep 14, 2010 at 6:51 PM, Steven Bellovin <smb@cs.columbia.edu> wrote:
No, they bought AT&T, which [...] But yes, SBC is the controlling piece of the new AT&T. Most of the wide-area ISP network is the old AT&T, while much of the consumer broadband grew out of the SBC DSL side.
As for the two /8s -- not quite. Back in the 1980s, AT&T got 12/8. We soon learned that we couldn't make good use of it, since multiple levels of subnetting didn't exist. We offered it back to Postel in exchange for 135/8 -- i.e., the equivalent in class B space -- but Postel said to keep 12/8 since no one else could use it, either. This was all long before addresses were tight. When AT&T decided to go into the ISP business, circa 1995, 12/8 was still lying around, unused except for a security experiment I was running.* However, a good chunk of 135/8 went to Lucent (now Alcatel-Lucent) in 1996, though I don't know how much.
The AT&T bits kept some fraction of 135; I don't know how much without dredging through ARIN Whois, but at least 135.63/16 is on my desktop.

If I remember correctly, which is unlikely at this point, 12/8 was the Murray Hill Cray's Hyperchannel network, which I'd heard didn't know how to do subnetting except on classful boundaries, so it could happily handle 16M hosts on its Class A, and in fact only had two or three.

--
Thanks; Bill

Note that this isn't my regular email account - It's still experimental so far. And Google probably logs and indexes everything you send it.
On Sep 17, 2010, at 5:20:46 PM, Bill Stewart wrote:
Sorry, fat-fingered something when I was trying to edit.
On Fri, Sep 17, 2010 at 2:12 PM, Bill Stewart <nonobvious@gmail.com> wrote:
On Tue, Sep 14, 2010 at 6:51 PM, Steven Bellovin <smb@cs.columbia.edu> wrote:
No, they bought AT&T, which [...] But yes, SBC is the controlling piece of the new AT&T. Most of the wide-area ISP network is the old AT&T, while much of the consumer broadband grew out of the SBC DSL side.
Yup.
As for the two /8s -- not quite. Back in the 1980s, AT&T got 12/8. We soon learned that we couldn't make good use of it, since multiple levels of subnetting didn't exist. We offered it back to Postel in exchange for 135/8 -- i.e., the equivalent in class B space -- but Postel said to keep 12/8 since no one else could use it, either. This was all long before addresses were tight. When AT&T decided to go into the ISP business, circa 1995, 12/8 was still lying around, unused except for a security experiment I was running.* However, a good chunk of 135/8 went to Lucent (now Alcatel-Lucent) in 1996, though I don't know how much.
The AT&T bits kept some fraction of 135; I don't know how much without dredging through ARIN Whois, but at least 135.63/16 is on my desktop.
I know -- that's why I wrote "a good chunk", but I sure don't know who got what. (FYI, I'm still a very part-time AT&T employee.)
If I remember correctly, which is unlikely at this point, 12/8 was the Murray Hill Cray's Hyperchannel network, which I'd heard didn't know how to do subnetting except on classful boundaries, so it could happily handle 16M hosts on its Class A, and in fact only had two or three.
Good point. I don't remember what time frame that was true, though. I'm certain about why Mark Horton got 12/8 and 135/8, but I don't remember the years, either. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On 9/13/10 5:39 PM, Sean Donelan wrote:
On Mon, 13 Sep 2010, Barry Shein wrote:
In the "early internet", let's call that prior to 1990, the hierarchy wasn't price etc, it was:
1. ARPA/ONR (and later NSF) research sites and actual network research
2. Faculty with funding from 1 at major university research sites
3. Faculty with funding from 1 at not-so-major universities
4. Faculty at 2 and 3 w/o actual research grants from 1
5. Students at 2 and 3 (tho less so at 3)
6. Everyone else who managed to sneak onto the net (DEC salesmen etc)
People worried a fair amount about bandwidth on a network with a 56kb backbone. And those thoughts tended to turn to those hierarchies.
And don't forget the research & education network folks almost always charged commercial institutions a "premium" (sometimes called a "donation") to connect to the Internet in the early days.
Even in the early 1990s, during privatization, ANS charged differentiated pricing, with educational institutions charged less and commercial institutions charged more.
During the pre-1990's, I doubt any of the Internet "founders" were thinking of how to pay for networks other than asking for more grant money. ARPA and friends paid the bills, and asked for things like TOS/COS long before DiffServ because the military likes to prioritize things for all sorts of reasons besides price.
Another dinosaur speaking. I spent some 8 years in the '80s-'90s looking at pricing for the Michigan House Fiscal Agency, and wrote the legislative boilerplate for funding the Michigan NSFnet contribution. If you think of Cerf et alia as the "fathers" of the Internet, think of me as the midwife....

Barry is correct. Sean is partly correct (we talked about funding beyond grants). AT&T is simply wrong. While we talked *a* *lot* about public-private partnerships, we *never* agreed on pricing per packet. On the contrary, whenever it was discussed, that was shot down. Vigorously! Vociferously! Microeconomists Hal Varian and Jeff MacKie-Mason were *not* Internet founders!

Every so often, I like to brag that for a $5 million annual initial investment, we saved Michigan alone $100 million in telecommunication and computing costs over the first few years. AT&T + Ameritech + CWA *hated* me! (As did some of the department folks who justified their salaries and empire building by the dollar totals that flowed through their department.)

Reminder: when we specified the first few PPP over ISDN products, we assumed bits are bits are bits. Then the "I Smell Dollars Now" incumbents decided "data" bits were more valuable than "voice" bits. We went back to the drawing board and *CHANGED* the specification to require the capability to send PPP over ISDN voice without losing bits to robbed-bit signaling (56Kbps), so that we could provision around the pricing problem.

But there's only so much we can do technically, when they use lawyers and lobbying to outlaw our technical solutions that route around problems. ISPs really need to re-invigorate the old CIX, ISP/C, whatever. Otherwise, you will not survive as NANOG.
The problem I have with the concept is that paid prioritization only really has an impact once there is congestion. If your buffers are empty, then there is no real benefit to priority, because everything is still being sent as it comes in. If you have paid prioritization, there is a financial incentive to have congestion in order to collect a "toll" on the expressway. So if I have a network that is not congested, nobody is going to pay me to ride in a special lane. If I neglect upgrading the network and it becomes congested, I can make money by selling access to an express lane.

I believe a network should be able to sell prioritization at the edge, but not in the core. I have no problem with Y!, for example, paying a network to be prioritized ahead of BitTorrent on the segment to the end user, but I do have a problem with networks selling prioritized access through the core, as that only gives an incentive to congest the network to create revenue.

G
-----Original Message-----
From: Hank Nussbacher [mailto:hank@efes.iucc.ac.il]
Sent: Monday, September 13, 2010 12:22 AM
To: nanog@nanog.org
Subject: Did Internet Founders Actually Anticipate Paid, Prioritized Traffic?
http://www.wired.com/epicenter/2010/09/paid-prioritized-traffic
-Hank
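George's argument above — that priority buys nothing while the buffers are empty, and only starts to matter (and to pay) once a backlog exists — can be sketched with a toy strict-priority scheduler. This is purely illustrative; the class and packet names are invented, not anyone's real implementation.

```python
import collections

class StrictPriorityLink:
    """Toy link: drains one packet per tick, always taking from the
    highest-priority non-empty queue (0 = highest priority)."""
    def __init__(self, num_classes=2):
        self.queues = [collections.deque() for _ in range(num_classes)]

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def tick(self):
        # Scan queues from highest to lowest priority; send one packet.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # link idle: priority is meaningless here

link = StrictPriorityLink()

# Uncongested case: arrivals never outpace the link, so even the
# low-priority packet is sent immediately -- paid priority is a no-op.
link.enqueue(1, "best-effort-1")
print(link.tick())  # best-effort-1 goes out at once

# Congested case: a backlog builds, and now priority dominates.
for i in range(3):
    link.enqueue(1, f"best-effort-{i}")
link.enqueue(0, "paid-prio")
print(link.tick())  # paid-prio jumps the entire backlog
```

Which is exactly the perverse incentive: the "express lane" only has resale value if the other queue is kept full.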
inline... On Wed, 2010-09-15 at 22:15 -0700, George Bonser wrote:
The problem I have with the concept is that paid prioritization only really has an impact once there is congestion. If your buffers are empty, then there is no real benefit to priority because everything is still being sent as it comes in. If you have paid prioritization, there is a financial incentive to have congestion in order to collect "toll" on the expressway. So if I have a network that is not congested, nobody is going to pay me to ride on a special lane.
That's a serious problem that came up verbatim in an overheard (#1) conversation yesterday. The bean-counters (who must, unfortunately, remain nameless) coined the phrase "fill your buffers and fill your boots". I was left with the distinct unsavoury impression that they were drawing up a (contingency) plan for that exact eventuality.
I believe a network should be able to sell prioritization at the edge, but not in the core. I have no problem with Y!, for example, paying a network to be prioritized ahead of BitTorrent on the segment to the end user, but I do have a problem with networks selling prioritized access through the core, as that only gives an incentive to congest the network to create revenue.
+1, because anything other than that Paid-Edge-Prio(#2), to me, smells of theft, fraud, and frankly, B-S. IANAL.

Gord

(#1) on a completely unrelated topic, twisted pairs could possibly make great mike leads, don't you think? <cough>
(#2) you heard it here first. Likewise, Paid-Core-Prio. Hey, I could patent-troll this stuff :)

--
$ cowsay paid-prio
 __________
( rip-off )
 ----------
        o   ^__^
         o  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
On Sep 16, 2010, at 12:15 AM, George Bonser wrote:
I believe a network should be able to sell prioritization at the edge, but not in the core. I have no problem with Y!, for example, paying a network to be prioritized ahead of BitTorrent on the segment to the end user, but I do have a problem with networks selling prioritized access through the core, as that only gives an incentive to congest the network to create revenue.
<end user> I DO have a problem with a content provider paying to get priority access on the last mile. I have no particular interest in any of the content that Yahoo provides, but I do have an interest in downloading my Linux updates via torrents. Should I have to go back and bid against Yahoo just so I can get my packets in a timely fashion? </end user> I understand that the last mile is going to be a congestion point, but the idea of allowing a bidding war for priority access for that capacity seems to be a path to madness. --Chris
On Sep 16, 2010, at 12:15 AM, George Bonser wrote:
I believe a network should be able to sell prioritization at the edge, but not in the core. I have no problem with Y!, for example, paying a network to be prioritized ahead of BitTorrent on the segment to the end user, but I do have a problem with networks selling prioritized access through the core, as that only gives an incentive to congest the network to create revenue.

<end user> I DO have a problem with a content provider paying to get priority access on the last mile. I have no particular interest in any of the content that Yahoo provides, but I do have an interest in downloading my Linux updates via torrents. Should I have to go back and bid against Yahoo just so I can get my packets in a timely fashion? </end user>

I understand that the last mile is going to be a congestion point, but the idea of allowing a bidding war for priority access for that capacity seems to be a path to madness.
Well, that's really the whole problem here. All of the serious "paid priority access" schemes appear to exist to try to squeeze money out of the other end of the connection. You, as a consumer, are already paying your ISP for the privilege of connecting to the Internet. It shouldn't be your ISP's role to determine what your intended use of that connection is. If you're using it for VoIP and videoconferencing, you gain no benefit from Yahoo!(*) paying for "priority" access. If you use it for VPN into your employer's corporate network, there's no advantage to Netflix(*) shoving your packets aside for theirs. If you use it for torrents for your FreeBSD updates, etc... [requoting a bit of the above]
I have no problem with Y!, for example, paying a network to be prioritized ahead of bit torrent on the segment to the end user
I *do* see a problem with prioritizing traffic type A ahead of traffic type B. If I'm your customer and you've sold me a 15M/2M circuit, maybe I plan to use that to access my employer's network via VPN. Now you want to declare my traffic "unworthy" because Yahoo!(*) has paid extra for "priority"? So I get "leftovers"?

On one hand, we all recognize oversubscription as an issue. As an industry, service providers have worked hard to avoid committing to any particular service level, especially for end-user products like cable and DSL. The usual reasoning is that there's no way to guarantee it once it leaves their network. However, on the other hand, no attempt is made to guarantee it even on their network. What prevents a service provider from saying "We're selling you a 15M/2M circuit, and we guarantee that we've got sufficient capacity to consistently deliver at least 4M/512K through our network and to our peers/upstreams"?

If all my neighbors suddenly decide one day to watch YouTube(*) due to some major event, and YouTube(*) has paid for priority access, and my 15M circuit suddenly drops to a few tens of kilobits because everyone else on this leg is competing for bandwidth, and there's more prioritized traffic than the leg can handle, how is that in any way a benefit to the consumer? There are some real risks associated with paid prioritization.

Put it a different way: [restating the above] "I have no problem with ${a major drug manufacturer}, for example, paying my doctor to prescribe their products instead of their competitor's, even where the competitor's product might be a better fit for my medical needs." Now, really, just think about that for a little while.

(*) All companies listed are just used as hypothetical examples, and should not be considered as anything more than that.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. 
Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Thu, 16 Sep 2010 08:59:23 CDT, Joe Greco said:
What prevents a service provider from saying "We're selling you a 15M/2M circuit, and we guarantee that we've got sufficient capacity to consistently deliver at least 4M/512K through our network and to our peers/upstreams?"
Can I have that as "4M guaranteed, burstable to 15M, 95th percentile billing"? I'd rather have that than pay for 15M for the 1 hour a month I actually need it. (And yes, I'm fully aware of what the margin is on the consumer-grade cable I have, and why cookie-cutter installs are required to make even that margin, and why it won't happen unless I jump to the business-class side. Doesn't mean I can't dream... :)
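Valdis's "4M guaranteed, burstable to 15M, 95th percentile billing" is the standard burstable-billing arrangement: sample the port rate every 5 minutes, sort the month's samples, throw away the top 5%, and bill on the highest remaining sample. A minimal sketch, with invented traffic numbers, of why that one busy hour a month would cost nothing:

```python
def percentile_95(samples_mbps):
    """Return the 95th-percentile sample: sort ascending, ignore the
    top 5% of samples, take the highest survivor."""
    ordered = sorted(samples_mbps)
    idx = int(len(ordered) * 0.95) - 1
    return ordered[max(idx, 0)]

# A 30-day month of 5-minute samples = 8640 samples.  Suppose the line
# idles at ~4 Mbps except for one busy hour (12 samples) at 15 Mbps.
samples = [4.0] * 8628 + [15.0] * 12
print(percentile_95(samples))  # 4.0 -- the busy hour falls in the
                               # discarded top 5% and is billed at zero
```

Since 5% of a month is roughly 36 hours, any burst totaling less than that simply disappears from the bill, which is exactly the "1 hour a month I actually need it" case.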
On Thu, Sep 16, 2010 at 12:10 PM, <Valdis.Kletnieks@vt.edu> wrote:
On Thu, 16 Sep 2010 08:59:23 CDT, Joe Greco said:
What prevents a service provider from saying "We're selling you a 15M/2M circuit, and we guarantee that we've got sufficient capacity to consistently deliver at least 4M/512K through our network and to our peers/upstreams?"
Can I have that as "4M guaranteed, burstable to 15M, 95th percentile billing"?
No, you can't. You're buying a consumer-oriented commodity product. Your few words pack a powerful amount of insider knowledge that the overwhelming majority of consumers don't want to have to learn in order to compare vendor offerings. So they aren't asked to. If you want a non-commodity product, you'll have to pay a non-commodity price to a niche vendor.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
----- Original Message -----
From: "Joe Greco" <jgreco@ns.sol.net>
To: "Chris Boyd" <cboyd@gizmopartners.com>
Cc: "NANOG" <nanog@nanog.org>
Sent: Thursday, September 16, 2010 8:59 AM
Subject: Re: Did Internet Founders Actually Anticipate Paid,
On one hand, we all recognize oversubscription as an issue.
The high-level of oversub isn't the issue, it's part of the business model. tv
On one hand, we all recognize oversubscription as an issue.
The high-level of oversub isn't the issue, it's part of the business model.
Of course the high level of oversub is an issue, because service providers are not willing to commit to providing some particular level of service. The business model has kind-of worked up to this point, with the scary boogeyman of evil illegal P2P filesharing used as justification for traffic engineering on shared pipes that are too small to handle the deluge of data that is growing daily.

Consider: the practical reality is that we're seeing more and more gizmos that do more and more network things. We're going to see DVRs downloading content over the Internet, you'll see your nav system downloading map updates over the Internet; these are all "new" devices that didn't exist ~10 years ago in their current form, and they're changing consumer usage patterns.

ISPs developed a "business model" that allowed profitability in an environment where each access line had marginal usage associated with it. The environment continues to evolve. There is no reason to expect that the "business model" will remain useful, or that any component of it, such as massive oversubscription, must necessarily be correct and remain viable in its current form, just because it worked a decade ago.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
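The oversubscription argument running through this exchange is just arithmetic. A back-of-the-envelope sketch, with all numbers invented for illustration, of why a ratio that was comfortable under "marginal usage" stops working as per-subscriber demand grows:

```python
def oversub_ratio(subscribers, sold_mbps, uplink_mbps):
    """Sum of sold access speeds divided by shared uplink capacity."""
    return subscribers * sold_mbps / uplink_mbps

# 2,000 subscribers each sold a 15 Mbps line, sharing a 1 Gbps leg:
# a 30:1 oversub, perfectly comfortable while the average subscriber
# only offers a few hundred kbps of sustained load.
print(oversub_ratio(2000, 15, 1000))   # 30.0

# If streaming DVRs, updates, and video push average sustained demand
# to 2 Mbps per subscriber, offered load is 4 Gbps against the same
# 1 Gbps pipe -- chronic congestion, with no change to the sold tiers.
offered = 2000 * 2        # Mbps of sustained demand
print(offered / 1000)     # 4.0 -- four times the uplink
```

Nothing about the 30:1 ratio is wrong per se; it just encodes an assumption about usage patterns that the gadget-driven traffic growth described above quietly invalidates.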
Of course the high level of oversub is an issue....
We'll disagree then. Oversub makes access affordable.
..with the scary boogeyman of evil illegal P2P filesharing
That just tips the money in the wrong direction. And it's a real threat (amongst others)...not just that deadly clown hiding under your bed.
Consider: the practical reality is that we're seeing more and more gizmos that do more and more network things. We're going to see DVR's downloading content over the Internet, you'll see your nav system downloading map updates over the Internet, these are all "new" devices that didn't exist ~10 years ago in their current form, and they're changing consumer usage patterns.
Yeah, I think we all know and see that stuff. But, unless some technological model changes bit pricing, the premise of oversub still wins. Going 1:1 today (or in the near future) makes no sense unless you layer something on top (advertising, qos, buttercream icing?).
There is no reason to expect that the "business model" will remain useful or that any component of it, such as massive oversubscription, must necessarily be correct and remain viable in its current form, just because it worked a decade ago.
Well, I'm talking 10 years ago up until present. How do you see the sub model turning? 1:1? If so, how? And, still some profit? tv
I 'bookmarked' these folks: http://www.plus.net/?home=hometop on June 18, 2008 because they were one of the few who openly admitted to using DPI to enforce QOS. Two + years later, they're still around and apparently successful. Just glancing through the site, I could no longer find any mention of DPI, but instead they say this: http://www.plus.net/support/broadband/speed_guide/traffic_management.shtml For what it's worth...
Of course the high level of oversub is an issue....
We'll disagree then. Oversub makes access affordable.
We don't disagree. Of course oversub makes access affordable. The point here is that carriers aren't willing to commit to supporting some level of service. Many people have recognized that a lack of net neutrality is an incentive for service providers to either tacitly allow congestion points to evolve in their networks, or, worse, deliberately engineer such a situation, with dollar signs flashing in Ed Whitacre's eyes at the idea of being able to bill a third party. That's pretty much the opposite end of the spectrum from committing to supporting some level of service.
..with the scary boogeyman of evil illegal P2P filesharing
That just tips the money in the wrong direction. And it's a real threat (amongst others)...not just that deadly clown hiding under your bed.
A real threat? Oh, please, get real. A _real_ threat is what happens as cable and satellite providers keep jacking their rates, and more and more of the "next generation" of television viewers stop subscribing to conventional television distribution because they're able to get content over the Internet. That's a real threat. When your HD television comes with Netflix Live On Demand built in, even grandma will be clicking on movies, I'll bet.
Consider: the practical reality is that we're seeing more and more gizmos that do more and more network things. We're going to see DVR's downloading content over the Internet, you'll see your nav system downloading map updates over the Internet, these are all "new" devices that didn't exist ~10 years ago in their current form, and they're changing consumer usage patterns.
Yeah, I think we all know and see that stuff. But, unless some technological model changes bit pricing, the premise of oversub still wins. Going 1:1 today (or in the near future) makes no sense unless you layer something on top (advertising, qos, buttercream icing?).
Why is it that you are talking about 1:1?
There is no reason to expect that the "business model" will remain useful or that any component of it, such as massive oversubscription, must necessarily be correct and remain viable in its current form, just because it worked a decade ago.
Well, I'm talking 10 years ago up until present. How do you see the sub model turning? 1:1? If so, how? And, still some profit?
If you want something interesting to ponder:

In the last ~10 years, wholesale bandwidth costs have fallen, what, from maybe $100/mbit to $1/mbit? I don't even know or care just how accurate that is, but roughly speaking it's true.

In the last ~10 years, DSL and cable prices have stayed pretty much consistent. Our local cable connections have maybe doubled in speed in that time. DSL speeds haven't changed, except for Uverse, which is a bit of an exception for a number of reasons.

Now obviously building the network costs something, but fifteen years after they started providing service, I'm guessing that's been paid for. They don't seem to be dumping lots of funds into increasing their network speeds. That suggests profit. Do you have an alternative explanation?

I'm looking at the current scenario, and what I see are monopolies who are afraid of the future. at&t is already witnessing the destruction of its legacy telephony business, the demise of ridiculous long distance rates, etc. The Comcasts of the world have got to recognize that the ability for customers to avoid paying a monthly cable fee by getting video over the net is bad for business. So you have cable and telco, both telecom businesses with Something To Lose, both of whom incidentally are also the gatekeepers of residential Internet service.

The killer point, though, is when you look at what's happening in other areas of the world. You can see broadband Internet services elsewhere evolving. You can even see rogues here in the US (I'm looking at you, Sonic!) who are pushing the envelope.

The reality is that the world is changing, and subscribers are going to be pushing more and more data, often without even recognizing that fact.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." 
- Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Joe Greco wrote:
In the last ~10 years, wholesale bandwidth costs have fallen, what, from maybe $100/mbit to $1/mbit? I don't even know or care just how accurate that is, but roughly speaking it's true.
In the last ~10 years, DSL and cable prices have stayed pretty much consistent. Our local cable connections have maybe doubled in speed in that time. DSL speeds haven't changed, except for Uverse, which is a bit of an exception for a number of reasons.
Now obviously building the network costs something, but fifteen years after they started providing service, I'm guessing that's been paid for. They don't seem to be dumping lots of funds into increasing their network speeds. That suggests profit. Do you have an alternative explanation?
Physics. The reason consumer connection speeds haven't increased is pure physics: they haven't figured out how to get packets to flow any faster over the last mile on the existing copper network, without spending megabucks to trench fiber to the home.

The telcos are afraid to spend the CapEx to proactively trench in new technology (e.g. fiber), only to find that a newer technology (e.g. 5G or 6G cell service) delivers faster bandwidth over some other path, and whoever trenches in the fiber goes BK before they can recover their costs. Anyone remember Ricochet? They spent a fortune putting in a wireless network in Silicon Valley that was over-run by the cellular networks moving into broadband, providing faster and more ubiquitous service, service that worked while you were in motion (Ricochet didn't work on a bus or train; it wasn't designed to hand off to neighboring cells). Buh-bye, Ricochet.

Meanwhile, consumer utilization of their available last-mile bandwidth has gone up. 10 years ago, how many people were watching downloaded movies, exchanging software with P2P, using Skype video, etc.?

# Feb. 12, 2008. In a net neutrality filing with the FCC, Comcast stated (p. 13, footnote 31) that "[o]n average, each Comcast High-Speed Internet customer uses more than 40% more bandwidth today than one year ago." (Cite: <http://www.dtc.umn.edu/mints/ispreports.html>)

Anyone have handy graphs showing end-user bandwidth consumption on broadband connections over time, say from ~2000-2010?

A big part of the cost in providing service to end consumers is customer support and install costs, not the cost to move bits. Wild-ass speculation: this is why your base cable bill, your base broadband bill, your base POTS phone bill, your base cell phone bill, etc. hovers in the $20-30/month range; it simply costs that much to provide the people network (customer support, truck-roll technical support, etc.) to support the customer, even though the underlying network cost to deliver the actual product is far less. (This is also why many systems dropped per-minute and per-call billing for local and in-country calls: the cost to measure and bill for, and deal with customer complaints about, these metrics isn't worth it - it's cheaper to raise the price slightly and give the user "unlimited" calling.)

Of course, I could be wrong, but I know which way I'd bet on this question - do you want to give me odds? :-)

jc
On Sep 20, 2010, at 7:04 AM, Joe Greco wrote:
Of course the high level of oversub is an issue....
We'll disagree then. Oversub makes access affordable.
We don't disagree. Of course oversub makes access affordable. The point here is that carriers aren't willing to commit to supporting some level of service. Many people have recognized that a lack of net neutrality is an incentive for service providers to either tacitly allow congestion points to evolve in their networks, or, worse, deliberately engineer such a situation, with dollar signs flashing in Ed Whitacre's eyes at the idea of being able to bill a third party. That's pretty much the opposite end of the spectrum from committing to supporting some level of service.
Exactly... Have we learned nothing from the Enron experience in California?
..with the scary boogeyman of evil illegal P2P filesharing
That just tips the money in the wrong direction. And it's a real threat (amongst others)...not just that deadly clown hiding under your bed.
A real threat? Oh, please, get real. A _real_ threat is what happens as cable and satellite providers keep jacking their rates, and more and more of the "next generation" of television viewers stop subscribing to conventional television distribution because they're able to get content over the Internet. That's a real threat. When your HD television comes with Netflix Live On Demand built in, even grandma will be clicking on movies, I'll bet.
You lost me here, Joe. Threat to whom? How is it a bad thing that consumers gain additional choices for sourcing content they want? What is wrong with Grandma enjoying Netflix from her built-in interface in her television?
There is no reason to expect that the "business model" will remain useful or that any component of it, such as massive oversubscription, must necessarily be correct and remain viable in its current form, just because it worked a decade ago.
Well, I'm talking 10 years ago up until present. How do you see the sub model turning? 1:1? If so, how? And, still some profit?
If you want something interesting to ponder:
In the last ~10 years, wholesale bandwidth costs have fallen, what, from maybe $100/mbit to $1/mbit? I don't even know or care just how accurate that is, but roughly speaking it's true.
In the last ~10 years, DSL and cable prices have stayed pretty much consistent. Our local cable connections have maybe doubled in speed in that time. DSL speeds haven't changed, except for Uverse, which is a bit of an exception for a number of reasons.
Now obviously building the network costs something, but fifteen years after they started providing service, I'm guessing that's been paid for. They don't seem to be dumping lots of funds into increasing their network speeds. That suggests profit. Do you have an alternative explanation?
Actually a lot of money goes into evolving technologies on the last-mile side. It's a bit of an arms race. For example, the reason your cable connections have doubled in speed is some pretty massive hardware upgrades to get from DOCSIS2 to DOCSIS3. There's also going to be quite a bit of investment to get the DSL networks ready for IPv6. The last mile remains an expensive place to play with minimal margins. The costs there have little to do with wholesale bandwidth pricing where your statements about once the network is built it costs less to keep it running are much more accurate.
I'm looking at the current scenario, and what I see are monopolies who are afraid of the future. at&t is already witnessing the destruction of its legacy telephony business, the demise of ridiculous long distance rates, etc. The Comcasts of the world have got to recognize that the ability for customers to avoid paying a monthly cable fee by getting video over the net is bad for business. So you have cable and telco, both telecom businesses with Something To Lose, both of whom incidentally are also the gatekeepers of residential Internet service.
Yes and no. To some extent, I think the smarter ones (I won't name names on either side in this message) actually see this as an opportunity to simplify their network and treat IP as a unified delivery platform for all of those traditionally disparate services. Yes, there's got to be some fear, but, a smart and sustainable business turns fear into opportunity.
The killer point, though, is when you look at what's happening in other areas of the world. You can see broadband Internet services elsewhere evolving. You can even see rogues here in the US (I'm looking at you, Sonic!) who are pushing the envelope.
The reality is that the world is changing, and subscribers are going to be pushing more and more data, often without even recognizing that fact.
Yep. Especially when we get the end-to-end model back and subscribers are able to be publishers just as easily as anyone else. That's a good thing. We should seek to embrace it. Owen
A real threat? Oh, please, get real. A _real_ threat is what happens as cable and satellite providers keep jacking their rates, and more and more of the "next generation" of television viewers stop subscribing to conventional television distribution because they're able to get content over the Internet. That's a real threat. When your HD television comes with Netflix Live On Demand built in, even grandma will be clicking on movies, I'll bet.
You lost me here, Joe. Threat to whom? How is it a bad thing that consumers gain additional choices for sourcing content they want? What is wrong with Grandma enjoying Netflix from her built-in interface in her television?
I'm sorry, "threat" in the primary ways that one could mean that: as something that's destined to melt down Internet connections and slag service provider infrastructure, and as a threat to existing revenue.

*I* see it as perfectly reasonable that consumers should gain additional choices for sourcing content that they want, and if you look at the archives of NANOG, you'll find I bring out stuff like this from time to time when talking about the future of consumer Internet access.

Certain service providers, and I'm guessing most notably anyone with a legacy infrastructure, companies such as at&t and Comcast, will view as a threat any model where they are bypassed and used solely as a pipe. Pipe is commodity; pipe is not particularly profitable. It's the content that generates profit, and I'm pretty sure that some executives somewhere have done the math:

* Charge $39.99 a month for an Internet pipe, and no annual increases
* Charge $74.99 a month for basic cable, plus upsell potential for PPV, set-top box/DVR rental, premium channels, etc., and a 5%-10% annual increase (http://money.cnn.com/2010/01/06/news/companies/cable_bill_cost_increase/inde...)

I don't have easily verifiable numbers as to the profit in each of those, but it seems obvious that the one that's a bigger number and has upsell potential is going to seem more attractive to service providers.

Now, the question you have to ask yourself is this: if you have a great revenue stream in the form of cable TV subscribers, and you can slow the adoption of Internet-based TV by controlling and slowing the growth of broadband speeds, would it make sense to do that? My personal feeling is that the legacy providers feel threatened, and are intent on dragging their heels into the modern age.
There is no reason to expect that the "business model" will remain useful or that any component of it, such as massive oversubscription, must necessarily be correct and remain viable in its current form, just because it worked a decade ago.
Well, I'm talking 10 years ago up until present. How do you see the sub model turning? 1:1? If so, how? And, still some profit?
If you want something interesting to ponder:
In the last ~10 years, wholesale bandwidth costs have fallen, what, from maybe $100/mbit to $1/mbit? I don't even know or care just how accurate that is, but roughly speaking it's true.
In the last ~10 years, DSL and cable prices have stayed pretty much consistent. Our local cable connections have maybe doubled in speed in that time. DSL speeds haven't changed, except for Uverse, which is a bit of an exception for a number of reasons.
Now obviously building the network costs something, but fifteen years after they started providing service, I'm guessing that's been paid for. They don't seem to be dumping lots of funds into increasing their network speeds. That suggests profit. Do you have an alternative explanation?
Actually a lot of money goes into evolving technologies on the last-mile side. It's a bit of an arms race. For example, the reason your cable connections have doubled in speed is some pretty massive hardware upgrades to get from DOCSIS2 to DOCSIS3.
Ahhhh, no. DOCSIS2. And the last speed increase was some years ago. But what's a mere doubling? Look at other technology:

In 2000, 100Mbps was "fast" and 1000baseT was bleeding-edge brand new. In 2005, 1000baseT was commonplace and we were working on 10GbaseT. In 2010, 10GbaseT is "fast" and 100GbaseT is bleeding-edge brand new. Approximate factor: 100x.

In 2000, 80GB was a very large hard drive. In 2005, 500GB was a very large hard drive. In 2010, 3TB is a very large hard drive. Approximate factor: 37x.

In 2000, a 1000MHz single-core CPU was a very fast CPU. In 2010, a 2500MHz 8-core CPU is a very fast CPU. Approximate factor: 20x.

But fine, let's pretend for a moment that there's something special and magical about last-mile technology, and just look around at the rest of the world. Sweden: "In Sweden, household broadband is mainly available through cable (in speeds of 128 kbit/s to 100 Mbit/s) and ADSL (256 kbit/s to 60 Mbit/s)" [courtesy Wikipedia]. Boy, that sounds nothing like what's available here.
There's also going to be quite a bit of investment to get the DSL networks ready for IPv6. The last mile remains an expensive place to play, with minimal margins. The costs there have little to do with wholesale bandwidth pricing, where your point that a network costs less to keep running once it's built is much more accurate.
Have you looked at what Sonic is doing?
I'm looking at the current scenario, and what I see are monopolies who are afraid of the future. at&t is already witnessing the destruction of its legacy telephony business, the demise of ridiculous long distance rates, etc. The Comcasts of the world have got to recognize that the ability for customers to avoid paying a monthly cable fee by getting video over the net is bad for business. So you have cable and telco, both telecom businesses with Something To Lose, both of whom incidentally are also the gatekeepers of residential Internet service.
Yes and no. To some extent, I think the smarter ones (I won't name names on either side in this message) actually see this as an opportunity to simplify their network and treat IP as a unified delivery platform for all of those traditionally disparate services. Yes, there's got to be some fear, but, a smart and sustainable business turns fear into opportunity.
It could, but it also appears to have turned into a frenzy of lobbying in support of things that do not favor the consumer.
The killer point, though, is when you look at what's happening in other areas of the world. You can see broadband Internet services elsewhere evolving. You can even see rogues here in the US (I'm looking at you, Sonic!) who are pushing the envelope.
The reality is that the world is changing, and subscribers are going to be pushing more and more data, often without even recognizing that fact.
Yep. Especially when we get the end-to-end model back and subscribers are able to be publishers just as easily as anyone else.
That's a good thing. We should seek to embrace it.
Oh, absolutely. I just don't see that actually happening. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Sat, Sep 18, 2010 at 2:51 PM, Tony Varriale <tvarriale@comcast.net> wrote:
Of course the high level of oversub is an issue....
We'll disagree then. Oversub makes access affordable.
Sure, at 10:1. At 100:1, oversub makes the service perform like crap. With QOS, it still performs like crap. The difference is that the popular stuff is modestly less crappy while all the not-as-popular stuff goes from crappy to non-functional.

In my career I've encountered many QOS implementations. Only one of them did more good than harm: a college customer of mine had a T3's worth of demand but was only willing to pay for a pair of T1s. In other words, the *customer* intentionally chose to operate with a badly saturated pipe. QOS targeted only at peer-to-peer brought the rest of the uses back to a more or less tolerable level of performance. I note that I lost the customer the next year anyway. Tolerable != pleasant. They were unhappy with the service, even if it was their own fault.

I might be more sympathetic to your viewpoint if "pick your oversub level" were part of the signup process, but it isn't. You hide that decision where your customers can't even find out what decision you made.

Regards, Bill Herrin -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
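The 10:1 versus 100:1 comparison is, at bottom, simple division. A sketch of the arithmetic, using an assumed plan speed and uplink size (illustrative numbers, not figures from the thread):

```python
# Why the oversubscription ratio matters: worst-case per-subscriber
# bandwidth when everyone tries to use the shared uplink at once.
# Plan speed and uplink capacity are illustrative assumptions.

plan_mbps = 10.0       # advertised per-subscriber speed
uplink_mbps = 1000.0   # shared upstream capacity

for ratio in (10, 100):
    subscribers = int(uplink_mbps / plan_mbps * ratio)
    worst_case_share = uplink_mbps / subscribers
    print(f"{ratio}:1 -> {subscribers} subs, "
          f"{worst_case_share:.1f} Mbps each if all are active")
# 10:1  -> 1000 subs, 1.0 Mbps each if all are active
# 100:1 -> 10000 subs, 0.1 Mbps each if all are active
```

Real networks rely on most subscribers being idle at any instant, so the worst case rarely occurs at 10:1; at 100:1 even modest simultaneous demand saturates the pipe, which is the "performs like crap" scenario described above.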
On Sep 20, 2010, at 8:59 AM, William Herrin wrote:
On Sat, Sep 18, 2010 at 2:51 PM, Tony Varriale <tvarriale@comcast.net> wrote:
Of course the high level of oversub is an issue....
We'll disagree then. Oversub makes access affordable.
Sure, at 10:1. At 100:1, oversub makes the service perform like crap. With QOS, it still performs like crap. The difference is that the popular stuff is modestly less crappy while all the not-as-popular stuff goes from crappy to non-functional.
Only if the QoS is tilted in favor of the popular stuff. The concern here isn't QoS in favor of the popular stuff... The concern here is QoS in favor of one particular brand of service X vs another (e.g. Netflix vs. Hulu). If QoS favors unpopular but more profitable services, it can make the user experience for those services significantly less crappy than for the competing, more popular services, and actually drive shifts in consumer behavior towards the less popular services.

Of course, as this succeeds, it becomes self-defeating over the long run, but only if your goal is to provide good service to your customers. If your goal is to keep your customers spending $minimal per month and staying attached to your service while using QoS payments from content providers to drive much larger margins, then you can make a circuit through the content providers, watching each one's popularity wax and wane as you screw with their QoS based on the money you get.

This is very bad for the consumer and, IMHO, should not be allowed.
In my career I've encountered many QOS implementations. Only one of them did more good than harm: a college customer of mine had a T3's worth of demand but was only willing to pay for a pair of T1s. In other words, the *customer* intentionally chose to operate with a badly saturated pipe. QOS targetted only at peer to peer brought the rest of the uses back to a more or less tolerable level of performance.
You are still making the mistake of assuming that the ISP is interested primarily in providing good service to their customers. When you move this from customer-oriented good service model to profit-oriented model built around keeping the pain threshold just barely within the consumer's tolerance, it becomes an entirely different game. Owen
-----Original Message----- From: Owen DeLong [mailto:owen@delong.com] Sent: Monday, September 20, 2010 10:43 AM To: William Herrin Cc: NANOG Subject: Re: Did Internet Founders Actually Anticipate Paid,
Devil's Advocate here, What would you say to ISP A that provided similar speeds as ISP B, but B took payments from content providers and then provided the service for free? Gives you the choice, ISP A, which costs, and ISP B, which is free, and most people wouldn't know the difference. ~J
On Mon, Sep 20, 2010 at 2:08 PM, Justin Horstman <justin.horstman@gorillanation.com> wrote:
Devil's Advocate here,
What would you say to ISP A that provided similar speeds as ISP B, but B took payments from content providers and then provided the service for free?
Gives you the choice, ISP A, which costs, and ISP B, which is free, and most people wouldn't know the difference.
Justin, I'd say ISP B was incorrectly described. He doesn't provide service for free; he merely has a different customer. In ISP A, the end user is the customer, but in ISP B, he isn't. Regards, Bill Herrin
I would say that it's an interesting and unprecedented (to my knowledge) model. Could be an interesting business plan. I'm not sure if it's realistically viable, and it's certainly a risky proposition, but it's definitely unusual. Nathan
On 9/20/10 11:38 AM, Nathan Eisenberg wrote:
I would say that it's an interesting and unprecedented (to my knowledge) model. Could be an interesting business plan. I'm not sure if it's realistically viable, and it's certainly a risky proposition, but it's definitely unusual.
It is called NetZero... state-of-the-art 1998 business model... advertisers pay for the right to spew crap at cheapskate modem users.
Nathan
Only if the QoS is tilted in favor of the popular stuff. The concern here isn't QoS in favor of the popular stuff... The concern here is QoS in favor of one particular brand of service X vs another. (e.g. Netflix vs. Hulu).
If QoS favors unpopular but more profitable services, it can make the user experience for those services significantly less crappy than the competing more popular services and actually drive shifts in consumer behavior towards the less popular services.
Of course, as this succeeds, it becomes self-defeating over the long run, but, only if your goal is to provide good service to your customers.
Absolutely agree. This goes back to my original comment on the thread: having a content provider pay for higher priority gives the network a financial incentive to create congestion (or allow such congestion to occur during the course of normal bandwidth consumption increases over time) in order to collect that revenue.

But there is a potential problem here in that content providers are producing applications and content requiring increasing amounts of bandwidth, but are not bearing the cost of delivering that content to the end user. If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user. They then release a new version of something that uses more bandwidth (say, going to HD video and then maybe 3D HD at some point), which puts pressure on the ISP's network resources. Do you then increase prices to the consumer in a highly competitive market and run the risk of driving your customers away? Do you absorb the cost of required upgrades and run at a loss for a while, only to see the applications' bandwidth requirements increase again? Do you try to get the content provider to pay for some of the "shipping" cost?

In a pure transit model, the content provider's expenses would go up if they increased their bandwidth utilization, which gave them a financial incentive to be innovative in delivering higher quality with the lowest possible bandwidth consumption. As more people move to peering over public IX points, the burden falls on the ISP's internal network to deliver the goods, and they have no control at all over the applications themselves. So bandwidth is practically "free" for the content provider and not so free for the eyeball provider.
So where a content provider might be forced to upgrade from GigE to 10GigE links at exchange points (maybe adding a blade to a chassis), a service provider might be faced with congestion on potentially thousands of end-user links and the gear that interconnects the PoPs. In that light I can see where they might want a fee.

But a better way of looking at it is not prioritizing anyone up; look at it the other way. Imagine an ISP says "if you don't pay us, we are going to prioritize your traffic down". So anyone who pays gets their traffic at the normal default priority, and those who don't pay get in the "space available" line. Now a content provider who does not pay the toll sees a drop in users, which equates to a possible drop in ad revenue. George
On Mon, Sep 20, 2010 at 09:01:58PM -0700, George Bonser wrote:
But there is a potential problem here in that content providers are producing applications and content requiring increasing amounts of bandwidth but are not bearing the cost of delivering that content to the end user.
Yes they are -- content providers aren't getting their connections to the Internet for free (and if they are, how can I get me some of that?).
If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user.
Say wha? ISPs don't *have* to peer at an IX; if they think that it's cheaper to buy transit from someone than it is to peer, they're more than capable of doing so.
They then release a new version of something that uses more bandwidth (say, going to HD video and then maybe 3D HD at some point) which puts pressure on the ISPs network resources. Do you then increase prices to the consumer in a highly competitive market and run the risk of driving your customers away, do you absorb the cost of required upgrades and run at a loss for a while only to see the applications increase in bandwidth requirements again?
The customer's requesting this traffic, therefore the customer needs a bigger pipe, therefore the customer pays more.
Do you try to get the content provider to pay for some of the "shipping" cost?
Why? It was your customer who requested the traffic be delivered to them. - Matt
Yes they are -- content providers aren't getting their connections to the Internet for free (and if they are, how can I get me some of that?).
If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user.

Maybe I wasn't clear. Traffic is moving away from "transit" to direct peering at private exchanges in many cases. Since most exchanges are "flat rate" and aren't all that expensive, it is "practically" free. For example, if I have a 10G connection to an exchange (say Equinix IX, or DEIX in Germany, or LINX in the UK, or PARIX in France, or INIX in Ireland, among others) it doesn't cost me any more to send 1G than it does to send 5G of traffic. So if something happens that increases the bandwidth utilization, my monthly cost does not change until I have to change to higher-capacity media, and that is a step change.
Say wha? ISPs don't *have* to peer at an IX; if they think that it's cheaper to buy transit from someone than it is to peer, they're more than capable of doing so.
Transit would have to get extremely cheap to compete with exchange peering. I don't see it getting that low any time soon.
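The transit-versus-peering trade-off being argued here is ultimately a break-even calculation between a flat port fee and a metered rate. A sketch with invented prices (the IX port fee and transit rate below are illustrative assumptions, not quotes from any exchange or carrier):

```python
# Hypothetical break-even between flat-rate IX peering and metered
# transit. Both prices are invented for illustration; real IX port
# and transit pricing varies widely by market and era.

ix_port_monthly = 2000.0   # flat fee for an IX port, $/month (assumed)
transit_per_mbps = 4.0     # blended transit price, $/Mbps/month (assumed)

breakeven_mbps = ix_port_monthly / transit_per_mbps
print(f"Peering wins above ~{breakeven_mbps:.0f} Mbps of sustained traffic")
# Peering wins above ~500 Mbps of sustained traffic
```

Below the break-even level, metered transit is cheaper; above it, the flat IX fee amortizes toward zero per megabit, which is why George calls the marginal traffic "practically free" and why Joe's counterpoint about membership and colocation costs matters for the fixed side of the equation.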
The customer's requesting this traffic, therefore the customer needs a bigger pipe, therefore the customer pays more.
The problem is that maybe the customer is doing nothing different than they have always done. They didn't request more bandwidth. The product they have always used now consumes more bandwidth through no fault of their own. It would be as if you regularly ordered some product every month and the product kept getting heavier and heavier, and the shipping costs went up until the weight was higher than the carrier would ship. You are ordering the same thing you always did; you didn't ask for it to be heavier; the producer decided to make it heavier. But that is not a perfect analogy, because a consumer pays a flat monthly "shipping fee" for Internet traffic.

The problem comes in when the content providers make it "heavier", driving bandwidth utilization higher beyond the control of the customer. Now the customer's pipe is saturated and they aren't doing anything different than what they did before. Or maybe some new product is released that is an out-and-out bandwidth hog.

MOST people using consumer Internet have no idea of things like that, nor should they need to. All they know is that now their Internet performs like crap and their ISP wants more money to make it work better. They might feel they have been ripped off. As time goes by, their Internet performs worse and worse, and they begin to blame their network provider for that, not the content provider who produces a product that consumes increasing amounts of bandwidth as time goes by. Consider, for example, the number of sites that have streaming media of some sort that begins to play as soon as you land on the page.
- Matt
Yes they are -- content providers aren't getting their connections to the Internet for free (and if they are, how can I get me some of that?).
Maybe I wasn't clear. Traffic is moving away from "transit" to direct peering at private exchanges in many cases. Since most exchanges are "flat rate" and aren't all that expensive, it is "practically" free. For example, if I have a 10G connection to an exchange (say Equinix IX, or DEIX in Germany, or LINX in the UK, or PARIX in France, or INIX in Ireland, among others) it doesn't cost me any more to send 1G than it does to send 5G of traffic. So if something happens that increases the bandwidth utilization, my monthly cost does not change until I have to change to higher capacity media and that is a step change.
That only happens if both parties choose to peer. Even though the cost per incremental *bit* might appear to be zero, there's a large cost in terms of exchange membership, colocating equipment, etc.
If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user.

Say wha? ISPs don't *have* to peer at an IX; if they think that it's cheaper to buy transit from someone than it is to peer, they're more than capable of doing so.
Transit would have to get extremely cheap to compete with exchange peering. I don't see it getting that low any time soon.
I thought I heard some folks were bailing on peering because transit was so cheap. Last time I looked, Equinix Exchange wasn't exactly cheap, it was quite a bit cheaper to run private cross-connects.
The customer's requesting this traffic, therefore the customer needs a bigger pipe, therefore the customer pays more.
The problem is that maybe the customer is doing nothing different than they have always done. They didn't request more bandwidth. The product they have always used now consumes more bandwidth through no fault of their own. It would be as if you regularly ordered some product every month and the product keeps getting heavier and heavier and the shipping costs go up until the weight is higher than the carrier will ship. You are ordering the same thing you always did, you didn't ask for it to be heavier, the producer decided to make it heavier. But that is not a perfect analogy because a consumer pays a flat monthly "shipping fee" for Internet traffic. The problem comes in when the content providers make it "heavier" or higher bandwidth utilization beyond the control of the customer. Now the customer's pipe is saturated and they aren't doing anything different than what they did before. Or maybe some new product is released that is an out and out bandwidth hog.
Okay, so it's kind of like the evolution of the modern road. Years ago, we had cars that resembled horse buggies, and dirt or gravel roads. As time passes, cars improve and roads improve. Now we have cars that are capable of 200MPH, and the pavers on the Autobahn use lasers to make sure the road is sufficiently smooth that drivers don't wreck at those speeds. At the same time, we no longer have manual laborers doing most of the laying of the roads by hand; even the Germans figured out really quickly that machines were better at it. So on one hand, machinery drives the cost of laying road down, and on the other, increased automobile use and more roads drive the cost of maintaining our system of roads up.

Mapped back to the world of NANOG, we need to be aware that the computers of today, the storage technology of today, the home entertainment systems of today, etc., are all much faster and more sophisticated than what we had just ten years ago. We've made it very difficult for users to continue to use older computers: the web browsers are hungrier and piggier, and a 233MHz 32MB Win98 machine that was perfectly suitable in 1998 is only good for being sent to India for recycling today. It seems unlikely to me that we're going to slow the evolution of modern computing and modern media consumption.

Perhaps we need to find a better way to connect people. I know that the legacy communications providers are very hesitant to start replacing their "gravel road" coax with faster stuff, but really, that's the point we're at. In the last decade, the thing that's been dragging us down is that last-mile pipe. We already know that it is reasonably economical to arrange for faster access; one just has to look around elsewhere to see what's been done. Other countries are providing speeds of up to 100Mbps to residential subscribers. We're still hearing our telcos argue for definitions of "broadband" that are less than 1Mbps. Talk about dirt road lovers.
MOST people using consumer Internet have no idea of things like that nor should they need to. All they know is that now their Internet performs like crap and their ISP wants more money to make it work better. They might feel they have been ripped off. As time goes by, their Internet performs worse and worse, they begin to blame their network provider for that, not the content provider who produces a product that consumes increasing amounts of bandwidth as time goes by. Consider, for example, the number of sites that have streaming media of some sort that begins to play as soon as you land on the page.
"Kill HTML5! Kill HTML5!" ... Or maybe let's fix those roads. ... JG
On Tue, Sep 21, 2010 at 09:31:07AM -0700, George Bonser wrote:
Yes they are -- content providers aren't getting their connections to the Internet for free (and if they are, how can I get me some of that?).
Maybe I wasn't clear. Traffic is moving away from "transit" to direct peering at private exchanges in many cases.

[Citation needed]
If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user.
Say wha? ISPs don't *have* to peer at an IX; if they think that it's cheaper to buy transit from someone than it is to peer, they're more than capable of doing so.
Transit would have to get extremely cheap to compete with exchange peering. I don't see it getting that low any time soon.
So it *is* cheaper to peer than to buy transit. Take the money you save from not buying transit and put it towards upgrading your core. - Matt -- Generally the folk who love the environment in vague, frilly ways are at odds with folk who love the environment next to the mashed potatoes. -- Anthony de Boer, in a place that does not exist
On Sep 21, 2010, at 11:01 AM, George Bonser wrote:
If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user.
The counterargument is that the end-user has *already paid* the transit fees for said content. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Sell your computer and buy a guitar.
On Tue, Sep 21, 2010 at 12:01 AM, George Bonser <gbonser@seven.com> wrote:
But there is a potential problem here in that content providers are producing applications and content requiring increasing amounts of bandwidth but are not bearing the cost of delivering that content to the end user. If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user.
My friend, that is a straw man. ISPs have complete control over who they peer with, the size of the peering pipe they accept, and whether that peering session is free or paid. If peering with Netflix will cost you more than you gain, you just don't do it. While there may well be advantages to compelling ISPs to accept peering, that's an entirely different discussion. The network neutrality debate is centered on what you do to packets while they're within your network, not who you choose to directly connect to. Regards, Bill Herrin
My friend, that is a straw man. ISPs have complete control over who they peer with, the size of the peering pipe they accept and whether that peering session is free or paid. If peering with Netflix will cost you more than you gain, you just don't do it.
Regards, Bill Herrin
To some extent, yes, it is. There is a certain amount of "devil's advocate" being played on my part to help flesh out the nature of the problem and draw out ideas and insight from other folks. Sometimes you just have to run something up the pole and see who shoots at it and what they shoot.

I suppose what I am saying is that, in general, selling express treatment in the core is probably a bad idea, as it gives the bean counters an incentive not to approve capacity increases, in order to increase revenue from selling "premium" access. In some cases, prioritizing on the customer edge might be a good idea, so your 16yo downloading movies or sharing his porn collection doesn't keep your VoIP phone from working.

There is room for innovation to decrease bandwidth utilization, at least in the core and at the Internet edge of the provider's network. Multicast, for example, has never really reached its potential for streaming live events such as sporting, news, or live entertainment events. Sometimes the investment in innovation must be driven by feeling some pain if things are left as they are. Currently, the ones feeling the pain are not the ones who would have to undertake that investment; there is nothing the ISP or consumer can do to improve the content provider's application. The point I was trying to make is that maybe if those content providers experienced some or more financial consequence of increased bandwidth consumption, they would be more sensitive to it, and we would all benefit as a result. G
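The multicast point can be made with simple arithmetic: unicast delivery of a live event scales linearly with audience size, while multicast (ideally) sends one copy per link regardless of the audience. A sketch with illustrative numbers (the bitrate and viewer count are assumptions, not figures from the thread):

```python
# Why multicast matters for live streaming: unicast load grows with
# the audience, multicast load (in the ideal case) does not.
# Bitrate and audience size are illustrative assumptions.

bitrate_mbps = 5.0   # one HD stream
viewers = 10_000     # simultaneous audience for a live event

unicast_mbps = bitrate_mbps * viewers   # one copy per viewer
multicast_mbps = bitrate_mbps           # one copy, replicated in-network

print(f"unicast: {unicast_mbps:,.0f} Mbps, multicast: {multicast_mbps:.0f} Mbps")
# unicast: 50,000 Mbps, multicast: 5 Mbps
```

In practice multicast still costs one stream per congested link rather than one stream total, but the gap between linear and near-constant scaling is the unrealized potential George is pointing at.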
But there is a potential problem here in that content providers are producing applications and content requiring increasing amounts of bandwidth but are not bearing the cost of delivering that content to the end user. If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user. [...] In that light I can see where they might want a fee. But a better way of looking at it is not in prioritizing anyone up, look at it the other way. Imagine an ISP says "if you don't pay us, we are going to prioritize your traffic down". So anyone who pays gets their traffic at the normal default priority, those who don't pay get in the "space available" line. Now a content provider who does not pay the toll sees a drop in users which equates to a possible drop in ad revenue.
There's a huge risk in this. Service providers have to recognize that their customers have already paid for access; when I pay a provider for an "Internet" connection, I am not paying them to deprioritize the destination I'm trying to reach, and that would be an epic fail of "best effort".

Content providers already pay fees. No content is generated and served entirely for free. Even in a "free peering" model, electricity costs money, cooling costs money, space costs money, servers cost money, and meeting some network's peering requirements generally involves peering at multiple locations. Content authors usually prefer to be paid, net ops people usually prefer to be paid, system admins usually prefer to be paid, etc. Content providers pay a lot.

There are some key bits that people miss in all of this. First off, Internet Service Providers get customers because people want access to all this fantastic content that's out on the Internet. No Yahoo!, no Google, no YouTube, no Facebook, no Netflix, none of that? Can you honestly tell me that customers would keep their subscriptions to a service provider without anywhere to go? Is it the content providers who are getting a free ride? Or is it the Internet Service Providers? Perhaps it's a symbiotic relationship.

Next, content providers are generally already paying their own Service Providers for access to the Internet. That might be a Cogent or a Hurricane or a ServerCentral. This covers most of the "small fry". A large guy like Google may have settlement-free peering with eyeball networks, but then again, they've invested an incredible amount of money in being a destination your customers want to get to... the truth of the matter is that you, your customer, and Google all benefit from it.

Finally, there's a risk that this double-edged sword could slice back at service providers. Content networks often raise funds through advertising.
What happens when one day, some network (*cough ESPN360*), decides that a *SERVICE PROVIDER* should pay for the privilege of getting access to their content? I mean, after all, two can play at the game of holding the ISP's subscribers hostage, and in many areas, subscribers do have a choice between at least two service providers, in case their first choice sucks. I don't think we want this, but it could be a natural backlash. What if Google came to you and said "you will pay us a dollar per sub per month, or we will route all your traffic through a 56k link in Timbuktu"? Would most eyeball networks even have a realistic *choice*? At the end of the day, after stripping away all the distractions, the concept of prioritizing traffic looks to me like something that is ultimately intended to squeeze more revenue out of the network, and this happens by not giving the customer some of what they have already paid for: in other words, this happens at the customer's expense. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On 9/21/2010 8:12 AM, Joe Greco wrote:
Finally, there's a risk that this double-edged sword could slice back at service providers. Content networks often raise funds through advertising. What happens when one day, some network (*cough ESPN360*), decides that a *SERVICE PROVIDER* should pay for the privilege of getting access to their content? I mean, after all, two can play at the game of holding the ISP's subscribers hostage, and in many areas, subscribers do have a choice between at least two service providers, in case their first choice sucks. I don't think we want this, but it could be a natural backlash. What if Google came to you and said "you will pay us a dollar per sub per month, or we will route all your traffic through a 56k link in Timbuktu"? Would most eyeball networks even have a realistic *choice*?
Yeah, wish that were illegal. The size of the provider determines the cost per capable subscriber, along with other perks, so it's a better deal for the AT&Ts and Verizons of the world, and it sucks for the "we have 13 independent ILECs and each has to have a separate deal" crowd. Word is, they designed it with their current cable customers in mind, not the traditional ISP, so it pushes people towards the cable company.

I'd trade any ISP net-neutrality argument for content providers to stop doing that crap and support per-user subscription fees. The whole beauty of Internet video is that people can pick and choose and not have to pay for things they don't use (i.e., for espn3, I have to fold the per-customer charges into my bills and pester my customers with espn3 advertising in their bills even if they hate sports or don't watch video/tv/movies).

Jack
On Sep 21, 2010, at 9:12 AM, Joe Greco wrote:
But there is a potential problem here in that content providers are producing applications and content requiring increasing amounts of bandwidth but are not bearing the cost of delivering that content to the end user. If the ISPs are directly peering with the content provider at some IX, the content provider gets what amounts to a free ride to the end user. [...] In that light I can see where they might want a fee. But a better way of looking at it is not in prioritizing anyone up, look at it the other way. Imagine an ISP says "if you don't pay us, we are going to prioritize your traffic down". So anyone who pays gets their traffic at the normal default priority, those who don't pay get in the "space available" line. Now a content provider who does not pay the toll sees a drop in users which equates to a possible drop in ad revenue.
There's a huge risk in this.
Service providers have to recognize that their customers have already paid for access; when I pay a provider for an "Internet" connection, I am not paying them to deprioritize the destination I'm trying to reach, and that would be an epic fail of "best effort".
Content providers already pay fees. No content is generated and served entirely for free. Even in a "free peering" model, electricity costs money, cooling costs money, space costs money, servers cost money, and meeting some network's peering requirements generally involves peering at multiple locations. Content authors usually prefer to be paid, net ops people usually prefer to be paid, system admins usually prefer to be paid, etc. Content providers pay a lot.
+1
There are some key bits that people miss about all of this.
First off, Internet Service Providers get customers because people want access to all this fantastic content that's out on the Internet. No Yahoo!, no Google, no YouTube, no Facebook, no Netflix, none of that? Can you honestly tell me that customers would keep their subscriptions to a service provider without anywhere to go? Is it the content providers who are getting a free ride? Or is it the Internet Service Providers? Perhaps they're in a symbiotic relationship.
Next, content providers are generally already paying their own Service Providers for access to the Internet. That might be a Cogent or a Hurricane or a ServerCentral. This covers most of the "small fry". A large guy like Google may have settlement-free peering with eyeball networks, but then again, they've invested an incredible amount of money in being a destination your customers want to get to... the truth of the matter is you, your customer, and Google all benefit from it.
Every content provider, large or small, in my experience pays for Internet. Where the checks go may vary (dark fiber vs colo charges vs ISP payments vs ...), but every content provider has (or has service providers that have) some sort of contractual arrangement with the entities they connect to and has spent money based on that. Sure, those contracts can be mutually renegotiated, prices may go up or down, etc., but letting third parties interfere with these arrangements sets a very dangerous precedent that cannot be good for the Internet as a whole (again, IMO).

Regards,
Marshall
Finally, there's a risk that this double-edged sword could slice back at service providers. Content networks often raise funds through advertising. What happens when one day, some network (*cough ESPN360*), decides that a *SERVICE PROVIDER* should pay for the privilege of getting access to their content? I mean, after all, two can play at the game of holding the ISP's subscribers hostage, and in many areas, subscribers do have a choice between at least two service providers, in case their first choice sucks. I don't think we want this, but it could be a natural backlash. What if Google came to you and said "you will pay us a dollar per sub per month, or we will route all your traffic through a 56k link in Timbuktu"? Would most eyeball networks even have a realistic *choice*?
At the end of the day, after stripping away all the distractions, the concept of prioritizing traffic looks to me like something that is ultimately intended to squeeze more revenue out of the network, and this happens by not giving the customer some of what they have already paid for: in other words, this happens at the customer's expense.
... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On 9/16/2010 8:19 AM, Chris Boyd wrote:
<end user> I DO have a problem with a content provider paying to get priority access on the last mile. I have no particular interest in any of the content that Yahoo provides, but I do have an interest in downloading my Linux updates via torrents. Should I have to go back and bid against Yahoo just so I can get my packets in a timely fashion? </end user>
Depends on what constitutes last mile, really. For me, it would be prioritizing traffic over a customer's saturated DSL line (i.e., you paid for and saturated your bandwidth). Although, to be honest, I've been extremely tempted to just prioritize for free. Case in point: if you are streaming video from Yahoo and running a torrent, there is still the presumption that you want the torrent to play nice and let your video stream.

Of course, proper prioritizing really requires both sides working together. I've had more issues with upstreams saturating and killing a video stream than I have with downstreams saturating.
I understand that the last mile is going to be a congestion point, but the idea of allowing a bidding war for priority access for that capacity seems to be a path to madness.
It seems pointless, really. The customer has to request the content for the priority to matter. It only makes sense in a shared pipe, and that's where bottlenecks shouldn't be (i.e., customer A's video shouldn't have precedence over customer B's p2p, which may be perfectly valid: WoW updates, ISO downloads, etc.).

Jack
I DO have a problem with a content provider paying to get priority access on the last mile. I have no particular interest in any of the content that Yahoo provides, but I do have an interest in downloading my Linux updates via torrents. Should I have to go back and bid against Yahoo just so I can get my packets in a timely fashion? </end user>
I understand that the last mile is going to be a congestion point, but the idea of allowing a bidding war for priority access for that capacity seems to be a path to madness.
--Chris
Hi Chris,

Prioritization would come into play ONLY when the link is saturated (congested); without it, nothing is going to work well: not your torrents, not your email, not your browsing. By prioritizing the traffic, the torrents might back off, but they would still continue to flow; they wouldn't be completely blocked, they would just slow down. QoS can be a good thing for allowing your VoIP to work while someone else in the home is watching a streaming movie or something. Without it, everything breaks once the circuit is congested.
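[A toy illustration of the point, not any real scheduler: under strict-priority allocation on a saturated link, the lowest class is squeezed into whatever capacity remains rather than blocked outright. The class names and rates below are made up for the example.]

```python
# Toy model of strict-priority allocation on a congested link.
# Real QoS implementations (WFQ, HTB, etc.) are considerably more
# involved; this only shows that lower classes slow down, not stop.

def allocate(capacity_mbps, demands):
    """Serve traffic classes in priority order; each class gets
    min(its demand, whatever capacity remains on the link)."""
    remaining = capacity_mbps
    granted = {}
    for name, demand in demands:  # list is ordered highest priority first
        granted[name] = min(demand, remaining)
        remaining -= granted[name]
    return granted

# A 20 Mb/s access line with more offered load than capacity:
demands = [("voip", 0.1), ("streaming", 6.0), ("torrents", 50.0)]
print(allocate(20.0, demands))
# torrents get squeezed down to the leftover ~13.9 Mb/s, not blocked
```

With no congestion (say a 100 Mb/s line), the same call grants every class its full demand, which is exactly why prioritization is a no-op on an unsaturated link.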
On Sep 16, 2010, at 10:57:07 AM, George Bonser wrote:
Hi Chris,
Prioritization would come into play ONLY when the link is saturated (congested); without it, nothing is going to work well: not your torrents, not your email, not your browsing. By prioritizing the traffic, the torrents might back off, but they would still continue to flow; they wouldn't be completely blocked, they would just slow down. QoS can be a good thing for allowing your VoIP to work while someone else in the home is watching a streaming movie or something. Without it, everything breaks once the circuit is congested.
Your statement misses the point, which is: *who* gets to decide what traffic is prioritized? And will that prioritization be determined by who is paying my carrier for that prioritization, potentially against my own preferences? For some damn reason, I might *prefer* that my torrent traffic get prioritized over, say, email. Or, I might not appreciate the eventuality that a stream I'm watching on hulu.com stutters because my neighbor's watching a movie on Netflix and it just happens that Netflix has paid my carrier for prioritized traffic.

The other point, as mentioned previously, is that paid prioritization doesn't mean a thing unless there's congestion to be managed. It's not a far stretch to see exec-level types seeing the potential financial benefits to, well, ensuring that such congestion does show up in their network in order to create the practical incentives for paid prioritization.

-C
Your statement misses the point, which is, *who* gets to decide what traffic is prioritized? And will that prioritization be determined by who is paying my carrier for that prioritization, potentially against my own preferences?
I would say that with standard "run of the mill" consumer service, the provider decides. If you want something custom, that would be reasonable to offer, but you should be expected to pay a bit more for that in order to maintain the non-standard configuration. Maintaining a different configuration for each user would be more expensive for the provider than a "cookie-cutter" solution that makes the internet a better experience for say 85% or more of the people out there. G
On Thu, Sep 16, 2010 at 2:44 PM, George Bonser <gbonser@seven.com> wrote:
Your statement misses the point, which is, *who* gets to decide what traffic is prioritized? And will that prioritization be determined by who is paying my carrier for that prioritization, potentially against my own preferences?
I would say that with standard "run of the mill" consumer service, the provider decides.
George, Will the provider unbundle the components so that it's feasible for a niche vendor to sell me custom connection services? No? Then the provider doesn't get to decide. It's about control. As the customer, the guy with the green, I should have it. A combination of decisions on the provider's part which strips me of control is unacceptable. You want prioritization? Give me unbundling. You don't want to unbundle? Don't mess with my packets. Regards, Bill Herrin -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
Will the provider unbundle the components so that it's feasible for a niche vendor to sell me custom connection services?
No?
Then the provider doesn't get to decide.
It's about control. As the customer, the guy with the green, I should have it. A combination of decisions on the provider's part which strips me of control is unacceptable.
You want prioritization? Give me unbundling. You don't want to unbundle? Don't mess with my packets.
If you want control: Don't buy the cheapest commodity product. Steinar Haug, Nethelp consulting, sthaug@nethelp.no
On Thu, Sep 16, 2010 at 3:28 PM, <sthaug@nethelp.no> wrote:
Will the provider unbundle the components so that it's feasible for a niche vendor to sell me custom connection services?
No?
Then the provider doesn't get to decide.
It's about control. As the customer, the guy with the green, I should have it. A combination of decisions on the provider's part which strips me of control is unacceptable.
You want prioritization? Give me unbundling. You don't want to unbundle? Don't mess with my packets.
If you want control: Don't buy the cheapest commodity product.
When you sell 15 megs as a commodity for $50 and 1.5 megs non-commodity for $700, don't pretend that I have a choice. But then you're writing from Norway, so perhaps you're not up to speed on the situation in NANOG territory. Regards, Bill Herrin -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
On 9/16/2010 2:28 PM, sthaug@nethelp.no wrote:
If you want control: Don't buy the cheapest commodity product.
+1

Next we'll be arguing that akamai nodes are evil because they can have better service levels than other sites. The p2p guys are also getting special treatment, as they can grab files faster than the direct download guy. Oh, and provider met google's bandwidth requirements for peering, so their peering with google gives better service to google than yahoo/hotmail; which was unfair to the provider who didn't meet the requirements and has to go the long way around. :P Provider may also have met ll's requirements, so peering accepted there, and here come the better netflix streams. Of course, anywhere a provider has a direct peer, they'll want to prioritize that traffic over any other.

True net-neutrality means no provider can have a better service than another. This totally screws with private peering and the variety of requirements, as well as special services (such as akamai nodes). Many of these cases aren't about saturation, but better connectivity between content provider and ISP. Adding money or QoS to the equation is just icing on the cake.

Jack
True net-neutrality means no provider can have a better service than another.
This statement is not true - or at least, I am not convinced of its truth. True net neutrality means no provider will artificially de-neutralize their service by introducing destination based priority on congested links.
This totally screws with private peering and the variety of requirements, as well as special services (such as akamai nodes). Many of these cases aren't about saturation, but better connectivity between content provider and ISP. Adding money or QOS to the equation is just icing on the cake.
From a false assumption follows false conclusions.
Why do you feel it's true that net-neutrality treads on private (or even public) peering, or content delivery platforms? In my understanding, they are two separate topics: net (non-)neutrality is literally about prioritizing different packets on the *same* wire based on whether the destination or source is from an ACL of IPs. I.e., this link is congested, Netflix sends me a check every month, send their packets before the ones from Hulu and Youtube.

The act of sending traffic down a different link directly to a peer's network does not affect the neutrality of either party one iota; in fact, it works to solve the congested link problem (Look! Adding capacity fixed it!). The ethics of path distances, peering relationships and vector routing, while interesting, are out of scope in a discussion of neutrality. An argument which makes this a larger issue encompassing peering and vector routing is, in my opinion, either a straw man or a red herring (depending on how well it's presented): an attempt to generate a second technoethical issue in order to defeat the first one.

Nathan
On 9/17/2010 4:52 AM, Nathan Eisenberg wrote:
True net-neutrality means no provider can have a better service than another.
This statement is not true - or at least, I am not convinced of its truth. True net neutrality means no provider will artificially de-neutralize their service by introducing destination based priority on congested links.
This is what you want it to mean. If I create a private peer to google, I have de-neutralized their service (destination based priority, even though in both cases it's the source of the packets we care about) by allowing dedicated bandwidth and lower latency to their cloud.

Also, let's not forget that many p2p programs were specifically designed to ignore and bypass congestion controls... i.e., screw other apps, I will take every bit of bandwidth I can get. This type of behavior causes p2p to have higher priority than other apps in a network that has no traffic prioritized. While I agree that traffic type prioritization would be preferred over destination based priorities, it often isn't feasible with hardware. Understanding the amount of traffic between your customers and a content provider helps you decide which content providers might be prioritized to give an overall service increase to your customer base.

The fact that a content provider would even pay an ISP is a strong indicator that the content provider is sending a high load of traffic to the ISP, and that bandwidth constraints are an issue with the service. Video and voice, in particular, should always try to have precedence over p2p, as they completely break and become unusable, whereas p2p will just be forced to move slower.
From a false assumption follows false conclusions.
Not really. It's not a neutral world. Private peering is by no means neutral. The provider that does enough traffic with google to warrant a private peering will have better service levels than the smaller guy who has to take the public paths. You view net neutrality as customers within an ISP, while I view it as a provider within a network of providers. The levels of service and pricing I can maintain as a rural ISP can't be compared to the metropolitan ISPs. A west coast ISP won't have the same level of service as an east coast ISP when dealing with geographical based content. We could take it to the international scale, where countries don't have equal service levels to content.
Why do you feel it's true that net-neutrality treads on private (or even public) peering, or content delivery platforms? In my understanding, they are two separate topics: Net (non)-neutrality is literally about prioritizing different packets on the *same* wire based on whether the destination or source is from an ACL of IPs. IE this link is congested, Netflix sends me a check every month, send their packets before the ones from Hulu and Youtube. The act of sending traffic down a different link directly to a peers' network does not affect the neutrality of either party one iota - in fact, it works to solve the congested link problem (Look! Adding capacity fixed it!).
So you are saying it's perfectly okay to improve one service over another by adding bandwidth directly to that service, but it's unacceptable to prioritize its traffic on congested links (which effectively adds more bandwidth for that service)? It's the same thing, using two different methods. If we consider all bandwidth available between the customer and content (and consider latency as well, as it has an effect on the traffic, especially during congestion), a private peer dedicates bandwidth to content the same as prioritizing its traffic does. If anything, the private peer provides even more bandwidth.

ISP has 2xDS3 available for bandwidth total. Netflix traffic is 20mb/s. Bandwidth is considered saturated.

1) 45mb public + 45mb private = 90mb w/ 45mb prioritized traffic due to private peering

2) 90mb public = 90mb w/ 20mb prioritized traffic via destination prioritization (actual usage)

It appears that the second is a better deal. The fact that netflix got better service levels was an ISP decision. By using prioritization on shared pipes, it actually gave customers more bandwidth than using separate pipes.
The ethics of path distances, peering relationships and vector routing, while interesting, are out of scope in a discussion of neutrality. An argument which makes this a larger issue encompassing peering and vector routing is, in my opinion, either a straw man or a red herring (depending on how well it's presented) attempt to generate a second technoethical issue in order to defeat the first one.
It's a matter of viewpoint. It's convenient to talk about net-neutrality when it's scoped, but not when we widen the scope. Customer A gets better service than Customer B because he went to a site that had prioritization. Never mind that while they fight over the saturated link, Customer C beat both of them because he was on a separate segment that wasn't saturated. All 3 paid the same amount of money. C > A > B, yet C doesn't fall into this net-neutrality discussion, and the provider, who wants to keep customers, has more C customers than A, and more A customers than B, so B is the most expendable.

My viewpoint is that of an ISP, and as such, I think of net-neutrality at a level above some last mile that's saturated at some other ISP.

Jack
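[The 2xDS3 comparison above can be checked with quick arithmetic: DS3 rounded to 45 Mb/s, Netflix's 20 Mb/s offered load taken from the example. The variable names are ours, not anything from the thread.]

```python
# Working Jack's two options numerically. DS3 ~= 45 Mb/s.
DS3 = 45.0
netflix = 20.0  # Netflix's actual offered load in the example

# Option 1: one DS3 public, one DS3 as a dedicated private peer to Netflix.
opt1_other_traffic = DS3        # only 45 Mb/s left for all non-Netflix traffic
opt1_netflix_ceiling = DS3      # private link caps Netflix at 45 Mb/s

# Option 2: both DS3s public, Netflix prioritized at its actual usage.
opt2_other_traffic = 2 * DS3 - netflix  # 70 Mb/s left for everyone else

print(opt1_other_traffic, opt2_other_traffic)  # 45.0 vs 70.0
```

The numbers bear out the claim: prioritizing 20 Mb/s on a shared 90 Mb/s pipe leaves 70 Mb/s for other traffic, versus 45 Mb/s when a whole DS3 is fenced off for one content provider.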
On Sep 17, 2010, at 6:48:02 AM, Jack Bates wrote:
On 9/17/2010 4:52 AM, Nathan Eisenberg wrote:
True net-neutrality means no provider can have a better service than another.
This statement is not true - or at least, I am not convinced of its truth. True net neutrality means no provider will artificially de-neutralize their service by introducing destination based priority on congested links.
This is what you want it to mean. If I create a private peer to google, I have de-neutralized their service (destination based priority, even though in both cases it's the source of the packets we care about) by allowing dedicated bandwidth and lower latency to their cloud.
Practically, this is not the case. These days, most congestion tends to happen at the customer edge - the cable head-end or the DSL DSLAM, not the backbone or peering points. Also, Google, Yahoo, et al tend to base their peering decisions on technical, not business, standards, which makes sense because peering, above all other interconnect types, is mutually beneficial to both parties. More to the point, even the likes of Comcast won't shut down their peers to Yahoo because Google sends them a check.
Also, let's not forget that the design of many p2p programs were specifically designed to ignore and bypass congestion controls... ie, screw other apps, I will take every bit of bandwidth I can get. This type of behavior causes p2p to have higher priority than other apps in a network that has no traffic prioritized.
While I agree that traffic type prioritization would be preferred over destination based priorities, it often isn't feasible with hardware. Understanding the amount of traffic between your customers and a content provider helps you decide which content providers might be prioritized to give an overall service increase to your customer base.
The fact that a content provider would even pay an ISP, is a high indicator that the content provider is sending a high load of traffic to the ISP, and bandwidth constraints are an issue with the service. Video and voice, in particular, should always try and have precedence over p2p, as they completely break and become unusable, where p2p will just be forced to move slower.
From a false assumption follows false conclusions.
Not really. It's not a neutral world. Private peering is by no means neutral. The provider that does enough traffic with google to warrant a private peering will have better service levels than the smaller guy who has to take the public paths. You view net neutrality as customers within an ISP, while I view it as a provider within a network of providers.
It may not be neutral, but it's hardly discriminatory in the ways that I've seen many of the non-net-neutrality schemes play out, which seem to be all about *deliberately* creating congestion (either proactively or by actively deciding not to upgrade capacity) in order to create a financial incentive for content providers to have their traffic prioritized. And I do agree, a private peer is definitely one technical means by which this prioritization could happen, but that's not the practice today.
The levels of service and pricing I can maintain as a rural ISP can't be compared to the metropolitan ISPs. A west coast ISP won't have the same level of service as an east coast ISP when dealing with geographical based content. We could take it to the international scale, where countries don't have equal service levels to content.
Why do you feel it's true that net-neutrality treads on private (or even public) peering, or content delivery platforms? In my understanding, they are two separate topics: Net (non)-neutrality is literally about prioritizing different packets on the *same* wire based on whether the destination or source is from an ACL of IPs. IE this link is congested, Netflix sends me a check every month, send their packets before the ones from Hulu and Youtube. The act of sending traffic down a different link directly to a peers' network does not affect the neutrality of either party one iota - in fact, it works to solve the congested link problem (Look! Adding capacity fixed it!).
So you are saying it's perfectly okay to improve one service over another by adding bandwidth directly to that service, but it's unacceptable to prioritize its traffic on congested links (which effectively adds more bandwidth for that service)? It's the same thing, using two different methods.
If we consider all bandwidth available between the customer and content (and consider latency as well, as it has an effect on the traffic, especially during congestion), a private peer dedicates bandwidth to content the same as prioritizing its traffic does. If anything, the private peer provides even more bandwidth.
ISP has 2xDS3 available for bandwidth total. Netflix traffic is 20mb/s. Bandwidth is considered saturated.
1) 45mb public + 45 mb private = 90mb w/ 45mb prioritized traffic due to private peering
2) 90mb public = 90mb w/ 20mb prioritized traffic via destination prioritization (actual usage)
It appears that the second is a better deal. The fact that netflix got better service levels was an ISP decision. By using prioritization on shared pipes, it actually gave customers more bandwidth than using separate pipes.
The ethics of path distances, peering relationships and vector routing, while interesting, are out of scope in a discussion of neutrality. An argument which makes this a larger issue encompassing peering and vector routing is, in my opinion, either a straw man or a red herring (depending on how well it's presented) attempt to generate a second technoethical issue in order to defeat the first one.
It's a matter of viewpoint. It's convenient to talk about net-neutrality when it's scoped, but not when we widen the scope. Customer A gets better service than Customer B because he went to a site that had prioritization. Never mind that while they fight over the saturated link, Customer C beat both of them because he was on a separate segment that wasn't saturated. All 3 paid the same amount of money. C > A > B, yet C doesn't fall into this net-neutrality discussion, and the provider, who wants to keep customers, has more C customers than A, and more A customers than B, so B is the most expendable.
My viewpoint is that of an ISP, and as such, I think of net-neutrality at a level above some last mile that's saturated at some other ISP.
Jack
On 9/17/2010 10:17 AM, Chris Woodfield wrote:
Also, Google, Yahoo, et al tend to base their peering decisions on technical, not business, standards, which makes sense because peering, above all other interconnect types, is mutually beneficial to both parties. More to the point, even the likes of Comcast won't shut down their peers to Yahoo because Google sends them a check.
I disagree. Minimum throughput for wasting a port on a router is a business reason, not a technical one. Peering is all about business and equal equity. Not to say that technical reasons don't play a part; limitations of throughput require some peering, but there is definitely a business model attached to it to determine the equity of the peers.
And I do agree, a private peer is definitely one technical means by which this prioritization could happen, but that's not the practice today.
A penny saved is a penny earned. Peering is generally cheaper than transit. In addition, it usually provides a higher class of service. Money doesn't have to change hands for there to be value attached to the action. At the same time, when money does change hands, the paying party feels they are getting something of value.

Is it unfair that I pay streaming sites to get more/earlier video feeds over the free users? I still have to deal with advertisements in some cases, which generate the primary revenue for the streaming site. Why shouldn't a content provider be able to pay for a higher class of service, so long as others are equally allowed to pay for it?

Jack
On Sep 17, 2010, at 9:23:09 AM, Jack Bates wrote:
Is it unfair that I pay streaming sites to get more/earlier video feeds over the free users? I still have to deal with advertisements in some cases, which generates the primary revenue for the streaming site. Why shouldn't a content provider be able to pay for a higher class of service, so long as others are equally allowed to pay for it?
No, it is definitely not, because *you* are the one paying for priority access for the content *you* feel is worth paying extra for faster access to. This is not the same thing as a content provider paying the carrier for priority access to your DSL line to the detriment of other sites you are interested in. How would you feel if you paid for priority access to hulu.com via this means, only to see your carrier de-prioritize that traffic because they're getting a check from Netflix?
Jack
How would you feel if you paid for priority access to hulu.com via this means, only to see your carrier de-prioritize that traffic because they're getting a check from Netflix?
Isn't this where "competition/may the best provider win" comes into play? -Drew
On 9/17/2010 11:43 AM, Drew Weaver wrote:
How would you feel if you paid for priority access to hulu.com via this means, only to see your carrier de-prioritize that traffic because they're getting a check from Netflix?
Isn't this where "competition/may the best provider win" comes into play?
That argument may not work, as there are many non-competitive jurisdictions. Of course, the non-compete areas aren't necessarily where a content provider would pay for such a service. A content provider must see value in the higher-class service, which generally means they send a lot of traffic to where they are paying, which implies a lot of customers on the ISP side, which implies a high probability that we are talking about metropolitan areas where there is more competition (or a massive NSP which will have a mix of rural and metro). Jack
On 9/17/2010 11:27 AM, Chris Woodfield wrote:
How would you feel if you paid for priority access to hulu.com via this means, only to see your carrier de-prioritize that traffic because they're getting a check from Netflix?
The same as I'd feel if netflix paid them for pop transit which bypassed the congestion (even if it was via mpls-te or dedicated circuit instead of just priorities on a congested link). Netflix apparently felt that there was value in having a higher class of service and paid for it. Of course, I'd be against congested links in my ISP to begin with. I'd move and get a new ISP if I could. If I was stuck, then I'd be stuck. My distaste for my ISP having congested links wouldn't equate to distaste that a content provider paid to have better class of service due to the ISP having poor overall service. If said class of service completely wiped out the bandwidth and caused all normal traffic to be unusable, then the ISP most likely is in violation of their agreement with me (ie, not providing access, as it is unusable). This would be no different than selling off bandwidth to commercial grade customers to the point that consumer grade didn't work at all. Jack
So you are saying it's perfectly okay to improve one service over another by adding bandwidth directly to that service, but it's unacceptable to prioritize its traffic on congested links (which effectively adds more bandwidth for that service). It's the same thing, using two different methods.
On TCP/IP networks you cannot prioritize a service and you certainly cannot add bandwidth unless you have an underlying ATM or Frame Relay that has bandwidth in reserve. On a TCP/IP network, QOS features work by deprioritising traffic, either by delaying the traffic or by dropping packets. Many ISPs do deprioritise P2P traffic to prevent it from creating congestion, but that is not something that you can productize. At best you can use it as a feature to encourage customers to use your network. Are you suggesting that ISPs who receive protection money from one service provider, should then deprioritise all the other traffic on their network?
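Michael's point - that on a packet network QoS can only delay or drop traffic, never speed it up - can be illustrated with a toy strict-priority scheduler (a sketch with made-up numbers, not any vendor's actual queueing implementation):

```python
from collections import deque

class StrictPriorityLink:
    """Toy model of a link serving one packet per tick, always draining
    the high-priority queue first. Note that QoS here never moves anything
    faster; it only decides whose packets wait or get dropped."""

    def __init__(self, lo_buffer=4):
        self.hi = deque()
        self.lo = deque()
        self.lo_buffer = lo_buffer   # finite buffer -> tail drops
        self.dropped = 0

    def enqueue(self, pkt, priority):
        if priority == "hi":
            self.hi.append(pkt)
        elif len(self.lo) < self.lo_buffer:
            self.lo.append(pkt)
        else:
            self.dropped += 1        # deprioritised packet discarded

    def tick(self):
        if self.hi:
            return self.hi.popleft()
        if self.lo:
            return self.lo.popleft()
        return None

link = StrictPriorityLink()
served = []
# Saturate the link: one "hi" and one "lo" packet arrive every tick,
# but only one packet can be served per tick.
for t in range(10):
    link.enqueue(("hi", t), "hi")
    link.enqueue(("lo", t), "lo")
    served.append(link.tick())

hi_served = sum(1 for p in served if p and p[0] == "hi")
lo_served = sum(1 for p in served if p and p[0] == "lo")
print(hi_served, lo_served, link.dropped)   # -> 10 0 6
```

In the fully saturated case, the priority class gets the whole link while the deprioritised class is first delayed and then, once its buffer fills, tail-dropped - nothing was added, only taken away.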
ISP has 2xDS3 available for bandwidth total. Netflix traffic is 20mb/s. Bandwidth is considered saturated.
Now you are talking about circuit capacities well below what ISPs typically use. In fact, two 45Mbps DS3 circuits are less than the 100Mbps Ethernet broadband service that many consumers now use. --Michael Dillon
On 9/17/2010 10:22 AM, Michael Dillon wrote:
On a TCP/IP network, QOS features work by deprioritising traffic, either by delaying the traffic or by dropping packets. Many ISPs do deprioritise P2P traffic to prevent it from creating congestion, but that is not something that you can productize. At best you can use it as a feature to encourage customers to use your network.
It's not just that. Many p2p apps don't play fair, and their nature causes them to crowd out other applications in cases of congestion. You adjust priorities to give a better overall experience on average.
Are you suggesting that ISPs who receive protection money from one service provider, should then deprioritise all the other traffic on their network?
Is consumer grade bandwidth not deprioritised to business grade bandwidth? The provider is running a reverse business model, charging the content provider as well for a better class of service. It doesn't scale, so it is heavily limited, but so long as the provider offers the same service to anyone (ie, anyone can play in this class of service), it seems to be a fair business practice. What should be illegal is the ability to hurt competitors of services offered by the provider (ie, provider offers voip, so they destroy traffic to other voip carriers). In fact, I think it was considered illegal years ago, though I admit that I didn't follow the case to its conclusion.
ISP has 2xDS3 available for bandwidth total. Netflix traffic is 20mb/s. Bandwidth is considered saturated.
Now you are talking about circuit capacities well below what ISPs typically use. In fact, two 45Mbps DS3 circuits are less than the 100Mbps Ethernet broadband service that many consumers now use.
1) My logic scales, so I saw no reason to use larger numbers. 2) You must live in the city and are making a bad assumption about available capacities. 3) It's easier for those who don't have 100Mb, 1G, or 10G to grasp smaller numbers. Jack
Jack Bates wrote:
Is consumer grade bandwidth not deprioritised to business grade bandwidth?
No. Today a provider doesn't move packets *within their network* faster or slower based on whether the recipient is a consumer or business customer. Today, all providers move all packets as fast as they can be moved on the links each customer has contracted for service on. (If you know of an exception to this practice, today, I'd love to see cites.)

The usual congestion point is the end-user customer's line, and the customer can only receive packets as fast as their line allows, but all packets are allowed over the customer's line with equal priority. There may also be congestion on backbone ingress lines, but again all packets are allowed over each of those lines with equal priority. Rarely, there is congestion within the network - not by design but (usually) due to equipment failure. Even then, all traffic is (usually) allowed thru with equal priority.

I don't know of any networks that intentionally design their networks with interior systems that prioritize traffic thru their network. It doesn't pay. In the long run it's cheaper and easier to simply upgrade capacity than to figure out some way to delay some packets while letting others thru.

Prioritization necessarily involves moving some traffic slower (because you can't move traffic faster) than some link (within the provider's network) allows, to allow "priority" traffic to more fully utilize the link while the other (non-priority) traffic is slowed. It effectively creates congestion points within the provider's network, if none existed prior to implementing the prioritization scheme. "I encourage all my competitors to do that." jc
On 9/17/2010 2:18 PM, JC Dill wrote:
Jack Bates wrote:
Is consumer grade bandwidth not deprioritised to business grade bandwidth?
Prioritization necessarily involves moving some traffic slower (because you can't move traffic faster) than some link (within the provider's network) allows, to allow "priority" traffic to more fully utilize the link while the other (non-priority) traffic is slowed. It effectively creates congestion points within the provider's network, if none existed prior to implementing the prioritization scheme. "I encourage all my competitors to do that."
And yet, I'm pretty sure there are providers that have different pipes for business than they do for consumer, and probably riding some of the same physical medium. This creates saturated and unsaturated pipes, which is just as bad or worse than using QOS. The reason I'm pretty sure about it, is business circuits generally are guaranteed, while consumer are not. Jack
Jack Bates wrote:
And yet, I'm pretty sure there are providers that have different pipes for business than they do for consumer, and probably riding some of the same physical medium. This creates saturated and unsaturated pipes, which is just as bad or worse than using QOS. The reason I'm pretty sure about it, is business circuits generally are guaranteed, while consumer are not.
I'm pretty sure you are mistaken. The reason is, it's adding an additional layer of complexity inside the network for no good reason. The only difference between the guaranteed speed provided to business circuits and the not-guaranteed consumer circuits is whether they get a reduction in their fees if the ISP can't deliver (business customers), AND they notice the outage, AND they complain and ask for a credit, AND the outage is long enough to trigger the contract clause for reducing the fee. The complaint structure is rigged in favor of the ISP.

Further, the easiest way to avoid paying out is simply to have enough capacity across the entire network that there are no capacity-related outages. Most outages are the result of equipment failures, and if there is a separate network for business and consumer customers, it just makes the outage that much worse for whichever network is affected, leading to more complaints and more refunds (to those customers). "I encourage all my competitors to do that." jc
On Sat, Sep 18, 2010 at 2:34 AM, JC Dill <jcdill.lists@gmail.com> wrote:
Jack Bates wrote:
And yet, I'm pretty sure there are providers that have different pipes for business than they do for consumer, and probably riding some of the same physical medium. This creates saturated and unsaturated pipes, which is just as bad or worse than using QOS. The reason I'm pretty sure about it, is business circuits generally are guaranteed, while consumer are not.
I'm pretty sure you are mistaken. The reason is, it's adding an additional layer of complexity inside the network for no good reason.
Real ISPs have all sorts of different layers of complexity, for lots of reasons ranging from equipment performance to Layer 8 differences to mergers & acquisitions to willingness-to-pay to marketing objectives to historical accident. An ISP that's also a telco-ish carrier will typically offer multiple services at Layer 1, Layer 2, MPLS, Layer 3, and other variants on transport.

Copper's different economically from fiber pairs, SONET, Ethernet, CWDM, DWDM; some services get multiplexed by using bundles of copper or fiber, some get multiplexed by using different kinds of wavelength or time division, some get shared by packet-switching, some packet switches are smarter on some transport media than on others, some services will use edge equipment from Brand C or J or A because they were the first or cheapest to get Feature X when it was needed, and some services are designed for Layer 9 problems like different taxes on different kinds of access services. An ISP that isn't an end-to-end vertically integrated provider will be buying stuff from other carriers that influences what services they offer, but the integrated providers often do that too.

There are some kinds of service where the difference between business-grade and consumer-grade is mainly about options for types of billing, or for guarantees around how fast they'll get a truck to your place to fix things - that's especially common in access networks. Most consumer home internet service is running on DSL or cable modems, and that's going to behave differently than T1 access or 10 Gbps WAN-PHY or LAN-PHY gear. Different-priced services may get connected to circuits or boxes that have different amounts of oversubscription. Different protocols give you different feedback mechanisms that affect performance.
Or higher-priced services may have measuring mechanisms built in to them or bolted alongside, so that performance problems can generate a trouble ticket faster or get a refund on the bill, and come with a sales person who doesn't really understand how they work but is being pressured to provide 110% uptime.

A common design these days is to have an MPLS backbone supporting multiple services including private networks and public internet, and the private networks may get dedicated chunks of the trunking, or may get higher MPLS prioritization. But separately from that, the IP edges may support Diffserv, and maybe the backbones do or maybe they don't, or maybe some parts of the trunking are only accessible to the higher-priority services. And maybe the diffserv gets implemented differently on the equipment that's used for different transmission media, or maybe the box that has the better port density doesn't have as many queues as the lower-density box, or maybe it's different between different port cards with the same vendor.

A very common design is that businesses can get diffserv (or the MPLS equivalents) on end-to-end services provided by ISP X, but the peering arrangements with ISP Y don't pass diffserv bits, or pass it but ignore it, or use different sets of bits.

It's very frustrating to me as a consumer, because what I'd really like would be for the main bottleneck point (my downstream connection at home) to either respect the diffserv bits set by the senders, or else to give UDP higher priority and TCP lower priority, and put Bittorrent and its ilk in a scavenger class, so VOIP and real-time video work regardless of my web activity and the web gets more priority than BitTorrent.

--
Thanks; Bill

Note that this isn't my regular email account - it's still experimental so far. And Google probably logs and indexes everything you send it.
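The home-downlink policy Bill wishes for could be sketched as a simple classifier (a hypothetical illustration only; the DSCP values follow common conventions, and the BitTorrent port range is an assumption, since clients can randomize ports):

```python
# Priority classes for the downlink scheduler; lower number served first.
EF, AF, BE, SCAVENGER = 0, 1, 2, 3

# Classic BitTorrent listen ports (assumption; real clients may randomize).
BT_PORTS = range(6881, 6890)

def classify(proto, dst_port, dscp=None):
    """Return a priority class for a packet arriving on the home downlink."""
    if dscp is not None:          # respect diffserv bits set by the sender
        if dscp == 46:            # EF: VoIP / real-time video
            return EF
        if dscp == 8:             # CS1, conventionally "lower effort"
            return SCAVENGER
        if 10 <= dscp <= 38:      # roughly the AF11..AF43 range
            return AF
        return BE
    if dst_port in BT_PORTS:      # unmarked BitTorrent-ish -> scavenger
        return SCAVENGER
    if proto == "udp":            # unmarked UDP: assume interactive
        return AF
    return BE                     # unmarked TCP: plain best effort

print(classify("udp", 5060))              # unmarked SIP-ish UDP -> AF
print(classify("tcp", 6881))              # BitTorrent port -> SCAVENGER
print(classify("udp", 4000, dscp=46))     # sender-marked EF stays EF
```

This only works end-to-end if the upstream networks pass the DSCP bits through, which, as Bill notes, peering arrangements frequently don't.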
Bill Stewart wrote:
A very common design is that businesses can get diffserv (or the MPLS equivalents) on end-to-end services provided by ISP X, but the peering arrangements with ISP Y don't pass diffserv bits, or pass it but ignore it, or use different sets of bits. It's very frustrating to me as a consumer, because what I'd really like would be for the main bottleneck point (my downstream connection at home) to either respect the diffserv bits set by the senders, or else to give UDP higher priority and TCP lower priority, and put Bittorrent and its ilk in a scavenger class, so VOIP and real-time video work regardless of my web activity and the web gets more priority than BitTorrent.
I can understand you wanting this done on YOUR bottleneck, in the connection between the ISP and you. And you want it done to YOUR specifications. That is entirely reasonable. But would you want the ISP doing it elsewhere in the network, and done to their priorities, not yours? (A "one size fits all" congestion prioritization solution.) Further, would you be happy with an ISP that HAS a bottleneck elsewhere in their network - not just in the last mile to your door? IMHO it's stupid for an ISP to intentionally design for and allow bottlenecks to exist within their network. The bottleneck to the end user is currently unavoidable, and users with bandwidth intensive uses might prefer some prioritization (to their own specifications) on that part of the link. Bottlenecks within the ISP network and between ISPs should be avoidable, and should be avoided. Any ISP that fails to mitigate those bottlenecks will quickly find customers streaming to another ISP that will advertise "no network congestion here, no traffic shaping that slows down traffic that might be important to YOU" etc. jc PS. Bill, if you aren't using Sonic, give their Fusion service a look. It's better than Kadu. :-)
<bleeping> $whatever folk. qos is about whose packets to drop. who here is paid to drop packets? if this was $customer-list, i could understand wanting to drop some packets on the link you were too cheap to provision reasonably (which is pretty st00pid in today's pricing environment). but this is a net ops list. randy
IMHO it's stupid for an ISP to intentionally design for and allow bottlenecks to exist within their network. The bottleneck to the end user is currently unavoidable, and users with bandwidth intensive uses might prefer some prioritization (to their own specifications) on that part of the link. Bottlenecks within the ISP network and between ISPs should be avoidable, and should be avoided. Any ISP that fails to mitigate those bottlenecks will quickly find customers streaming to another ISP that will advertise "no network congestion here, no traffic shaping that slows down traffic that might be important to YOU" etc.
jc
I think the extent to which one favors prioritization or not will depend on who they are and what is going on at the moment. If I am an ISP that is not a telecom provider of circuits, I might be more in favor of prioritization. If I am a provider of bandwidth to others, I would be against it, as I want to sell bandwidth to them.

It might also depend on circumstances that vary from time to time. If an application suddenly appears that becomes wildly popular practically overnight and is a bandwidth hog, it might be difficult to move fast enough to accommodate that usage. I seem to remember that when Napster first appeared, it swamped many networks. If a situation occurs such as a disaster of national or global or even local interest, maybe the sudden demand swamps the existing infrastructure.

If I were providing consumer access, I might provide two methods. The first would be no prioritization, just treat everything equally. The second might be a "canned" prioritization profile that a user could elect for application to their connection. This might not prioritize any specific content provider over another so much as prioritize certain protocols over others. So it might prioritize VOIP up, and p2p protocols down, as an example. A "value added" situation might be one that allows a user to specify their own prioritization profile for some additional fee.

In an emergency situation, a provider might possibly want to have some prioritization profiles "on the shelf" ready to apply if needed. This might prioritize traffic to certain government, emergency, and information services up and traffic to some other services and protocols down.

Generally, I would want to see every network have enough bandwidth for every contingency, but that is somewhat unrealistic because we don't have a crystal ball. What would be the demand today in the case of another 9/11/01 type of event? I don't think anyone really knows.
In that case, not having some prioritization plan in place might render a network completely useless. Having one might allow some services to work at the expense of others. I would rather be connected to a network that would allow access to government sites, news and information sites, email, and voice communications at the expense of, say, gaming, streaming content, gambling, and porn for the duration of the emergency. It would also be better, in my opinion, for networks to have their own emergency plans than to put in place a mechanism where government dictates what gets done and when. You can flee a network that does something you don't like for one that has a plan more in line with your priorities, fleeing a government is more difficult.
It's a matter of viewpoint. It's convenient to talk about net-neutrality when it's scoped, but not when we widen the scope. Customer A gets better service than Customer B because he went to a site that had prioritization. Never mind that while they fight over the saturated link, Customer C beat both of them because he was on a separate segment that wasn't saturated. All 3 paid the same amount of money. C > A > B, yet C doesn't fall into this net-neutrality discussion, and the provider, who wants to keep customers, has more C customers than A, and more A customers than B, so B is the most expendable.
It's convenient to talk about NN when we're talking about NN, and not about the ethical implications of peering with Comcast but not with ATT. There are things that NN is, and there are things that it isn't. There are a good deal of ethical and emotional issues involved, and while they're interesting to opine about, they're difficult to successfully argue. However, from a purely technical perspective, your above example illustrates my point. Customer A and B both lose. Why? Because prioritization and destination based discrimination are not real solutions. Capacity is. Customer A and B have saturation and discrimination. Customer C has capacity. Want to keep A and B (and your reputation)? Add capacity.
My viewpoint is that of an ISP, and as such, I think of net-neutrality at a level above some last mile that's saturated at some other ISP.
I have the same point of view but it appears that we disagree anyways. It must be the case that the perspective does not define the opinion. Appreciated the thinly veiled appeal to authority, though. Capacity is cheap. Discriminatory traffic management for-profit is a fantastically expensive way of killing off your customer base in exchange for short-term revenue opportunities. You MUST construct additional pylons, or the guy that does WILL take your customers. Nathan
On Sep 17, 2010, at 6:48 AM, Jack Bates wrote:
On 9/17/2010 4:52 AM, Nathan Eisenberg wrote:
True net-neutrality means no provider can have a better service than another.
This statement is not true - or at least, I am not convinced of its truth. True net neutrality means no provider will artificially de-neutralize their service by introducing destination based priority on congested links.
This is what you want it to mean. If I create a private peer to Google, I have de-neutralized their service (destination-based priority, even though in both cases it's the source of the packets we care about) by allowing dedicated bandwidth and lower latency to their cloud.
No, you have not de-neutralized their service. You have improved access asymmetrically. You haven't de-neutralized their service until you REFUSE to create a private peer with Yahoo on the same terms as Google, even assuming we stick to your rather byzantine definition of neutrality. There is a difference between neutrality and symmetry.
Also, let's not forget that many p2p programs were specifically designed to ignore and bypass congestion controls... ie, screw other apps, I will take every bit of bandwidth I can get. This type of behavior causes p2p to have higher priority than other apps in a network that has no traffic prioritized.
Again, this is not part of the neutrality debate, it is a separate operational concern. Network neutrality is not about making sure every user gets a fair shake from every protocol. It's about making sure that source/destination pairs are not subject to divergent priorities on shared links.
While I agree that traffic type prioritization would be preferred over destination based priorities, it often isn't feasible with hardware. Understanding the amount of traffic between your customers and a content provider helps you decide which content providers might be prioritized to give an overall service increase to your customer base.
You're talking about different kinds of prioritization. Nobody is objecting to the idea of building out capacity and peering to places it makes sense. What people are objecting to is the idea that their upstream provider could take a bribe from a content provider in order to reduce the quality of service to their customers trying to reach other content providers.
The fact that a content provider would even pay an ISP is a high indicator that the content provider is sending a high load of traffic to the ISP, and bandwidth constraints are an issue with the service. Video and voice, in particular, should always try to have precedence over p2p, as they completely break and become unusable, where p2p will just be forced to move slower.
Not necessarily. It might just mean that the traffic they are sending is sufficiently lucrative that it is worth subsidizing. It might mean that the content provider believes they can gain an (anti-)competitive advantage by reducing the quality of the user experience for subscribers that are going to their competitors. You keep coming back to this anti-p2p-centric rant, but, that's got almost nothing to do with the issue everyone else is attempting to discuss.
From a false assumption follows false conclusions.
Not really. It's not a neutral world. Private peering is by no means neutral. The provider that does enough traffic with google to warrant a private peering will have better service levels than the smaller guy who has to take the public paths. You view net neutrality as customers within an ISP, while I view it as a provider within a network of providers.
Private peering is completely neutral IF it is available on identical terms and conditions to all players. It won't be symmetrical, but, it is neutral. Again, there is a difference between symmetry and neutrality. The world is not symmetrical. There is no reason it cannot or should not be neutral. In fact, there is good argument that being non-neutral is a violation of the Sherman anti-trust act.
The levels of service and pricing I can maintain as a rural ISP can't be compared to the metropolitan ISPs. A west coast ISP won't have the same level of service as an east coast ISP when dealing with geographical based content. We could take it to the international scale, where countries don't have equal service levels to content.
Again, you are talking about symmetry and mistaking that for neutrality. Neutrality is about whether or not everyone faces a consistent set of terms and conditions, not identical service or traffic levels. Neutrality is about letting the customer decide which content they want, not the ISP and expecting the ISP to be a fair broker in connecting customers to content.
Why do you feel it's true that net-neutrality treads on private (or even public) peering, or content delivery platforms? In my understanding, they are two separate topics: Net (non)-neutrality is literally about prioritizing different packets on the *same* wire based on whether the destination or source is from an ACL of IPs. E.g., this link is congested, Netflix sends me a check every month, so send their packets before the ones from Hulu and Youtube. The act of sending traffic down a different link directly to a peer's network does not affect the neutrality of either party one iota - in fact, it works to solve the congested link problem (Look! Adding capacity fixed it!).
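The ACL-of-IPs arrangement Nathan describes reduces to a few lines (illustrative only; the prefix standing in for the paying provider is documentation address space, and a real deployment would match the ACL in hardware, not a Python list):

```python
import ipaddress

# Prefixes of the provider that "sends a check every month" - a made-up
# documentation prefix standing in for Netflix's address space.
PAID_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]

def is_paid(src_ip):
    """True if the packet's source falls inside the paying provider's ACL."""
    ip = ipaddress.ip_address(src_ip)
    return any(ip in net for net in PAID_PREFIXES)

# On a congested link, the paying provider's packets jump the queue;
# everyone else's keep their original order behind them.
queue = ["203.0.113.7", "198.51.100.42", "203.0.113.9"]
queue.sort(key=lambda src: 0 if is_paid(src) else 1)
print(queue)   # -> ['198.51.100.42', '203.0.113.7', '203.0.113.9']
```

Note that nothing here adds capacity; it only reorders who waits on the same congested wire, which is exactly the distinction Nathan draws against private peering.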
So you are saying it's perfectly okay to improve one service over another by adding bandwidth directly to that service, but it's unacceptable to prioritize its traffic on congested links (which effectively adds more bandwidth for that service). It's the same thing, using two different methods.
Only so long as you are willing to add bandwidth to the other service(s) on the same terms and conditions as the one service. Yes. The former is adding capacity to meet demand. The latter is not effectively adding bandwidth, it is reducing bandwidth for one to reward the other. In the former case, you are not penalizing other services, you are improving one. In the latter case, you are improving one service at the expense of all others. It's the expense of all others part that people have a problem with.
If we consider all bandwidth available between the customer and content (and consider latency as well, as it has an effect on the traffic, especially during congestion), a private peer dedicates bandwidth to content the same as prioritizing its traffic. If anything, the private peer provides even more bandwidth.
The private peer doesn't do this by reducing the available bandwidth for the other services.
ISP has 2xDS3 available for bandwidth total. Netflix traffic is 20mb/s. Bandwidth is considered saturated.
1) 45mb public + 45mb private = 90mb w/ 45mb prioritized traffic due to private peering
2) 90mb public = 90mb w/ 20mb prioritized traffic via destination prioritization (actual usage)
It appears that the second is a better deal. The fact that netflix got better service levels was an ISP decision. By using prioritization on shared pipes, it actually gave customers more bandwidth than using separate pipes.
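A quick arithmetic check of Jack's two options (a sketch using the numbers from his example, where 20mb/s is Netflix's actual traffic):

```python
ds3 = 45                       # one DS3, in Mb/s
netflix = 20                   # Netflix's actual traffic, Mb/s

# Option 1: split pipes - 45mb public, 45mb private peer for Netflix.
public_1 = ds3                 # everyone else shares 45mb
idle_1 = ds3 - netflix         # 25mb stranded on the private peer

# Option 2: one shared 90mb pipe, Netflix prioritized at actual usage.
public_2 = 2 * ds3 - netflix   # everyone else can use up to 70mb

print(public_1, public_2, idle_1)   # -> 45 70 25
```

Under these assumptions the shared prioritized pipe does leave more headroom for everyone else, which is Jack's point; Owen's objection below is that real deployments don't start from option 1.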
Fiction. The way this would work in the real world (and what people are objecting to) is that the ISP would transition from 1) 90mb public with no prioritization to 2) 90mb public with N mb prioritized via destination, where N is the number of mbps that the destination wanted to pay for. More importantly, it's not the 90mb public circuits where this is the real concern. The real concern is on the shared customer infrastructure side closer to the end-user where it's, say, 45mbps to the DSLAM going from 45mbps public to 45mbps public with 20mbps prioritized for content-provider-A, while users trying to use content-provider-B get a degraded experience compared to A if their neighbors are using A. (Hence my belief that this is already a Sherman Anti-Trust issue.)
The ethics of path distances, peering relationships and vector routing, while interesting, are out of scope in a discussion of neutrality. An argument which makes this a larger issue encompassing peering and vector routing is, in my opinion, either a straw man or a red herring (depending on how well it's presented) attempt to generate a second technoethical issue in order to defeat the first one.
It's a matter of viewpoint. It's convenient to talk about net-neutrality when it's scoped, but not when we widen the scope. Customer A gets better service than Customer B because he went to a site that had prioritization. Never mind that while they fight over the saturated link, Customer C beat both of them because he was on a separate segment that wasn't saturated. All 3 paid the same amount of money. C > A > B, yet C doesn't fall into this net-neutrality discussion, and the provider, who wants to keep customers, has more C customers than A, and more A customers than B, so B is the most expendable.
No, it's more a matter of failing to understand the difference between neutrality and symmetry. Neutrality means everyone faces the same odds and the same terms and conditions. It means that amongst the other customers sharing the same ISP infrastructure we are all treated fairly and consistently. It does not mean symmetry. It means that you are not artificially penalizing access to content providers that are not paying you in order to prioritize access to content providers that are.
My viewpoint is that of an ISP, and as such, I think of net-neutrality at a level above some last mile that's saturated at some other ISP.
Apparently not an ISP that I would subscribe to. Owen
On 9/17/2010 2:08 PM, Owen DeLong wrote:
Again, you are talking about symmetry and mistaking that for neutrality.
Neutrality is about whether or not everyone faces a consistent set of terms and conditions, not identical service or traffic levels.
Charging content providers for higher class service is perfectly neutral by your definition. So long as you offered the same class of service to all content providers who wished to pay.
Neutrality is about letting the customer decide which content they want, not the ISP and expecting the ISP to be a fair broker in connecting customers to content.
Offering better options to content providers would be perfectly acceptable here, as well, so long as you offer it to all.
The former is adding capacity to meet demand. The latter is not effectively adding bandwidth, it is reducing bandwidth for one to reward the other.
Which is fine, so long as you offer that class of service to all.
The way this would work in the real world (and what people are objecting to) is that the ISP would transition from
1) 90mb public with no prioritization
to
2) 90mb public with N mb prioritized via destination where N is the number of mbps that the destination wanted to pay for.
Except my fictional account follows real world saturation experience historically. What you are giving is considered ideal compared to breaking the 90mb up to allow separate throughput for the service, which I guarantee a provider would do for enough money; given restriction of total available bandwidth.
More importantly, it's not the 90mb public circuits where this is the real concern. The real concern is on the shared customer infrastructure side closer to the end-user where it's, say, 45mbps to the DSLAM going from 45mbps public to 45mbps public with 20mbps prioritized for content-provider-A while users trying to use content-provider-B get a degraded experience compared to A if their neighbors are using A. (Hence my belief that this is already a Sherman Anti-Trust issue).
I think that only qualifies if content-provider-B doesn't care to pay for such a service, but it is available to them.
Neutrality means everyone faces the same odds and the same terms and conditions. It means that amongst the other customers sharing the same ISP infrastructure we are all treated fairly and consistently.
All customers can access the premium and non-premium content the same. ISP based licensing by content providers seems like a bigger scam.
Apparently not an ISP that I would subscribe to.
Nope. You'd probably stick with a saturated bandwidth ISP and gripe about net-neutrality because your service is slightly more piss poor than your neighbor's when your neighbor happens to go to a premium site and you don't. I'll stick with not having saturation on shared links. Jack
On Sep 17, 2010, at 1:21 PM, Jack Bates wrote:
On 9/17/2010 2:08 PM, Owen DeLong wrote:
Again, you are talking about symmetry and mistaking that for neutrality.
Neutrality is about whether or not everyone faces a consistent set of terms and conditions, not identical service or traffic levels.
Charging content providers for higher class service is perfectly neutral by your definition. So long as you offered the same class of service to all content providers who wished to pay.
Charging them for higher class service on the circuits which connect directly to them is neutral. Charging them to effect the profile of the circuits directly connected to your other customers is non-neutral.
Neutrality is about letting the customer decide which content they want, not the ISP and expecting the ISP to be a fair broker in connecting customers to content.
Offering better options to content providers would be perfectly acceptable here, as well, so long as you offer it to all.
Again, nobody is opposing offering better connectivity to content providers. What they are opposing is selling content providers the right to screw your customers that choose to use said content providers' competitors.
The former is adding capacity to meet demand. The latter is not effectively adding bandwidth, it is reducing bandwidth for one to reward the other.
Which is fine, so long as you offer that class of service to all.
You can't offer that class of service to all, and, even if you do, no, it's not neutral when you do it that way.
The way this would work in the real world (and what people are objecting to) is that the ISP would transition from
1) 90mb public with no prioritization
to
2) 90mb public with N mb prioritized via destination where N is the number of mbps that the destination wanted to pay for.
Except my fictional account follows real world saturation experience historically. What you are giving is considered ideal compared to breaking the 90mb up to allow separate throughput for the service, which I guarantee a provider would do for enough money; given restriction of total available bandwidth.
Total available bandwidth isn't what AT&T is pushing the FCC to allow them to carve up this way. AT&T is pushing for the right to sell (or select) content providers prioritized bandwidth closer to the consumer tail-circuit.
More importantly, it's not the 90mb public circuits where this is the real concern. The real concern is on the shared customer infrastructure side closer to the end-user where it's, say, 45mbps to the DSLAM going from 45mbps public to 45mbps public with 20mbps prioritized for content-provider-A while users trying to use content-provider-B get a degraded experience compared to A if their neighbors are using A. (Hence my belief that this is already a Sherman Anti-Trust issue).
I think that only qualifies if content-provider-B doesn't care to pay for such a service, but it is available to them.
What if the service simply isn't available to content-provider-B because content-provider-A is a related party to said ISP or said ISP simply chooses not to offer it on a neutral basis? (Which is exactly what AT&T has stated they want to do.)
Neutrality means everyone faces the same odds and the same terms and conditions. It means that amongst the other customers sharing the same ISP infrastructure we are all treated fairly and consistently.
All customers can access the premium and non-premium content the same. ISP based licensing by content providers seems like a bigger scam.
I'm not sure what you mean by "ISP based licensing by content providers".
Apparently not an ISP that I would subscribe to.
Nope. You'd probably stick with a saturated bandwidth ISP and gripe about net-neutrality because your service is slightly more piss poor than your neighbor's when your neighbor happens to go to a premium site and you don't. I'll stick with not having saturation on shared links.
Actually, no. I've got good unsaturated service from both of the ISPs providing circuits into my house and from the upstreams that I use their circuits to reach for my real routing. (I'm unusual... I use Comcast and DSL to provide layer 2 transport to colo facilities where I have routers. I then use the routers in the colo facilities to advertise my addresses into BGP and trade my real packets. As far as Comcast and my DSL provider are concerned, I'm just running a whole lot of protocol 43 traffic to a very small set of destinations.) I use the two providers in question because they are, generally, neutral in their approach and do not play funky QoS games with my traffic. Owen
On Sep 17, 2010, at 2:52 AM, Nathan Eisenberg wrote:
True net-neutrality means no provider can have a better service than another.
This statement is not true - or at least, I am not convinced of its truth. True net neutrality means no provider will artificially de-neutralize their service by introducing destination based priority on congested links.
This totally screws with private peering and the variety of requirements, as well as special services (such as akamai nodes). Many of these cases aren't about saturation, but better connectivity between content provider and ISP. Adding money or QOS to the equation is just icing on the cake.
From a false assumption follows false conclusions.
Why do you feel it's true that net-neutrality treads on private (or even public) peering, or content delivery platforms? In my understanding, they are two separate topics: Net (non)-neutrality is literally about prioritizing different packets on the *same* wire based on whether the destination or source is from an ACL of IPs. I.e., this link is congested, Netflix sends me a check every month, send their packets before the ones from Hulu and Youtube. The act of sending traffic down a different link directly to a peer's network does not affect the neutrality of either party one iota - in fact, it works to solve the congested link problem (Look! Adding capacity fixed it!).
The ethics of path distances, peering relationships and vector routing, while interesting, are out of scope in a discussion of neutrality. An argument which makes this a larger issue encompassing peering and vector routing is, in my opinion, either a straw man or a red herring (depending on how well it's presented) attempt to generate a second technoethical issue in order to defeat the first one.
Nathan
A big part of the problem here is that net neutrality has been given a variety of definitions, some of which I agree with, some of which I don't... Here is my understanding of some of the definitions (along with my basic opinion of each):

1. All potential peers are treated equally. (As much as I'd like to see this happen, it isn't realistic to expect it will happen, and any imaginable attempt at legislating it will do more harm than good.)

2. Traffic is not artificially prioritized on congested links due to subsidies from the source or destination. Note: this does not include prioritization requested by the link customer. (I think that it is important to have this for the internet to continue as a tool for the democratization of communication. I think that this will require legislative protection.)

3. Net neutrality requires an open peering policy. (Personally, I am a fan of open peering policies. However, a network should have the freedom of choice to implement whatever peering policy they wish.)

I'm sure there are more definitions floating around. People are welcome to comment on them. These are the ones I have seen take hold amongst various community stakeholders. Owen
On 9/16/2010 2:28 PM, sthaug@nethelp.no wrote:
If you want control: Don't buy the cheapest commodity product.
+1
-1
Next we'll be arguing that akamai nodes are evil because they can have better service levels than other sites. The p2p guys are also getting special treatment, as they can grab files faster than the direct download guy. Oh, and provider met google's bandwidth requirements for peering, so their peering with google gives better service to google than yahoo/hotmail; which was unfair to the provider who didn't meet the requirements and has to go the long way around. :P
Provider may also have met ll's requirements, so peering accepted there, and here come the better netflix streams. Of course, anywhere a provider has a direct peer, they'll want to prioritize that traffic over any other.
True net-neutrality means no provider can have a better service than another. This totally screws with private peering and the variety of requirements, as well as special services (such as akamai nodes). Many of these cases aren't about saturation, but better connectivity between content provider and ISP. Adding money or QOS to the equation is just icing on the cake.
There are some excellent points in this, but I disagree as to the conclusions you seem to be drawing. One could look at peering as an opportunity to do some backdoor prioritization, and there's some legitimacy to that fear. My basic expectation as a customer is that you can provide me with some level of service. Since most service providers are unwilling to actually commit to a level of service, I might take the numbers attached to the service tier you sold me. So if I'm now downloading my latest FreeBSD via BitTorrent, my basic expectation is ultimately that I'll get fair treatment. What's "fair treatment" though? I think at the end of the day, it means that I've got to have reasonable access to the Internet. That means that if I can get packets in and out of your transit without fuss, that's probably good enough. If you've short-circuited things with peering that gives me faster access, that's great too. However, if your transit is 100% saturated for 20% of the day, every day, that's NOT good enough. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On 9/17/2010 9:49 AM, Joe Greco wrote:
So if I'm now downloading my latest FreeBSD via BitTorrent, my basic expectation is ultimately that I'll get fair treatment.
And this is always a debate. You might say letting someone with voice or video have queue priority during saturation is unfair, yet your p2p will still work when it's running slower, whereas their voice or video might fail and be completely unusable. If a provider has to deal with saturation, they have to make such decisions. Their goal, ideally, would be to have a majority of the customers able to do what they need to do during saturation: allowing traffic that can afford to slow down to do so, and giving priority to traffic that must maintain a certain QoS to be usable.
What's "fair treatment" though?
I think at the end of the day, it means that I've got to have reasonable access to the Internet. That means that if I can get packets in and out of your transit without fuss, that's probably good enough. If you've short-circuited things with peering that gives me faster access, that's great too. However, if your transit is 100% saturated for 20% of the day, every day, that's NOT good enough.
I'm all for rules to limit saturation levels. This has nothing to do with neutrality, but to me it is the more important point. Consider the telco world and voice communications. I could be wrong, but I seem to recall there being rules as to how long or often or what percentage of customers could experience issues with getting a line out. I'm also a strong believer in enforcing honest business practices. If you sell prioritization to one company, you should offer it to all others. The practice itself doesn't scale, so given an all-or-nothing, it is a business model that will burn out. The short-circuits and QoS applications are just methods of improving service for a majority of customers (those who use those destinations/services). This means, of course, p2p will usually be the loser. As an ISP, p2p means little to me. The QoS to the sites that hold a majority of my customers captive is the issue. Even without saturation, I have an obligation to see how I can improve video and voice quality in an erratic environment. This includes dealing with things such as microbursts and last mile saturation (which for me isn't shared, but customers run multiple applications and the goal is to allow that with a smooth policy to assist in keeping one application from butchering the performance of another, i.e., p2p killing their video streams from netflix/hulu/cbs/etc). Jack
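The "smooth policy" described above can be sketched as weighted sharing rather than strict priority: on a saturated tail circuit, interactive traffic keeps a guaranteed share by weight, while p2p takes whatever is left instead of being starved outright. This is a toy model, not any vendor's implementation; the flow names, weights, and rates are hypothetical.

```python
# Toy weighted water-filling on a single saturated link (all numbers hypothetical).
# Each flow gets capacity in proportion to its weight, capped at its demand;
# leftover capacity is redistributed among still-unsatisfied flows.

def weighted_share(capacity, flows):
    """flows: dict name -> (weight, demand). Returns dict name -> allocated rate."""
    alloc = {name: 0.0 for name in flows}
    active = set(flows)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(flows[n][0] for n in active)
        done = set()
        for n in active:
            weight, demand = flows[n]
            share = remaining * weight / total_w  # this round's proportional share
            alloc[n] += min(share, demand - alloc[n])
            if alloc[n] >= demand - 1e-9:
                done.add(n)
        remaining = capacity - sum(alloc.values())
        active -= done
        if not done:  # nobody satisfied this round; everything is allocated
            break
    return alloc

# 10 Mb/s tail circuit: video wants 4, VoIP wants 0.1, p2p wants 50.
# Video and VoIP are fully satisfied; p2p soaks up the remaining ~5.9 Mb/s
# rather than being cut off, which is the point of the policy.
print(weighted_share(10, {"video": (4, 4), "voip": (4, 0.1), "p2p": (1, 50)}))
```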
Will the provider unbundle the components so that it's feasible for a niche vendor to sell me custom connection services?
No?
Then the provider doesn't get to decide.
It's about control. As the customer, the guy with the green, I should have it. A combination of decisions on the provider's part which strips me of control is unacceptable.
You want prioritization? Give me unbundling. You don't want to unbundle? Don't mess with my packets.
If you want control: Don't buy the cheapest commodity product.
You think it won't happen with all the other tiers of commodity products? That seems to imply that you think people who don't want to suffer this sort of thing need to buy something like a T1? Let me just sum it up for you: Get real. Rather than allowing service providers to pick and choose who subscribers can communicate with, we're much more likely to see regulation intervene to enforce reasonable rules. ... JG
On Fri, 17 Sep 2010 09:13:48 CDT, Joe Greco said:
Rather than allowing service providers to pick and choose who subscribers can communicate with, we're much more likely to see regulation intervene to enforce reasonable rules.
We are indeed likely to see regulation intervene to enforce rules. Whether they're reasonable will likely depend on who wins the "My lobbyist can beat up your lobbyist" battle - and of course, the winning lobbyist will declare the rules reasonable, and the losing lobbyist will issue a press release stating how totally unreasonable the rules are.
In a message written on Thu, Sep 16, 2010 at 09:28:21PM +0200, sthaug@nethelp.no wrote:
If you want control: Don't buy the cheapest commodity product.
Steinar Haug, Nethelp consulting, sthaug@nethelp.no
It may be hard for those in Europe to understand the situation in the US, so let me explain in real numbers. I live in an upper-middle class suburb of a "tier 2" city, large enough it has everything but not a primary market for anyone. Due to a combination of geography, legacy, and government regulations (how licenses are granted, specifically) I have two wireline providers: the local "Cable Company", which is Comcast, and the local "Telephone Company", which is AT&T (ex SBC territory, if it matters). There are no land-based wireless (WiFi, LTE, etc) providers in my area. I am not considering satellite viable for a number of reasons, but if you care there are two providers that cover the whole US, as far as I know. I'd link directly to the pages with prices, but because the price and service varies with your ZIP code here, I can't do that; you have to fill out a set of forms to even see what you can buy. Here are my choices:

Comcast:
- "Performance": 12 down, 2 up with "Powerboost", Norton Security Suite, 7 e-mails each with 10GB. $42.95 per month.
- "Performance PLUS": 16 down, 2 up with "Powerboost", Norton Security Suite, 7 e-mails each with 10GB. $52.95 per month.

Both include a single IP assigned via DHCP; you bring your own CPE or you can rent from them for a few dollars a month.

AT&T U-Verse:
- "Pro": 3 down, $41
- "Elite": 6 down, $46
- "Max": 12 down, $48
- "Max Plus": 18 down, $58
- "Max Turbo": 24 down, $68

Note that the only change with each product is speed. These all require the use of AT&T CPE (and thus I added in the $3 they charge you for it), and come with the same features: the AT&T box presents you a private IP space network and does the NAT for you with a single outside IP. Same number of e-mail accounts (but I can't find the number listed anywhere). Also note they don't list upload speeds on the web site at all.
NOTE: Both providers offer discounts for bundling with TV or Phone service, and both offer discounts for the first few months for new customers. I have left off all of these, comparing regular price to regular price for Internet-only service. That's it, a total list of my "consumer package" choices. Comcast will offer business service, which is the exact same service over the exact same modems and network, except you can have static IPs and get priority support for about $25-30 extra. AT&T won't sell me business service as I live in a residential neighborhood. Beyond that my choice is to order a T1, 1.5 symmetric from a "real" provider. I can get all the static IPs I want, a real SLA, priority support, and so on. I'll have to supply my own CPE, and it will run somewhere between $700 and $900 a month. I hope that helps folks outside the US understand the situation here. There really isn't a lot of choice: 2 providers, and some minor choice in how much speed you want to pay for with each one. I hear rumors of how good it is in Japan, or Korea, or Sweden; I would love for folks from those places to post their options. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
On Sep 16, 2010, at 11:44 AM, George Bonser wrote:
Your statement misses the point, which is, *who* gets to decide what traffic is prioritized? And will that prioritization be determined by who is paying my carrier for that prioritization, potentially against my own preferences?
I would say that with standard "run of the mill" consumer service, the provider decides. If you want something custom, that would be reasonable to offer, but you should be expected to pay a bit more for that in order to maintain the non-standard configuration. Maintaining a different configuration for each user would be more expensive for the provider than a "cookie-cutter" solution that makes the internet a better experience for say 85% or more of the people out there.
G
The point is that if the provider is deciding based on some third party paying them and thus my neighbors are getting more bandwidth than I am, not because they're paying more, but, because they're choosing to use the services that bribed my provider, then that's not a good thing. Owen
-----Original Message----- From: Owen DeLong [mailto:owen@delong.com] Sent: Thursday, September 16, 2010 2:17 PM To: George Bonser Cc: NANOG list Subject: Re: Did Internet Founders Actually Anticipate Paid,Prioritized Traffic?
<SNIP>
The point is that if the provider is deciding based on some third party paying them and thus my neighbors are getting more bandwidth than I am, not because they're paying more, but, because they're choosing to use the services that bribed my provider, then that's not a good thing.
This actually is a case for net neutrality. I worry that things like ACLs to prevent SPAM or abuse/security will get tarred with the same feathers. - Brian
On Sep 16, 2010, at 11:35 AM, Chris Woodfield wrote:
On Sep 16, 2010, at 10:57 07AM, George Bonser wrote:
Hi Chris,
Since prioritization would work ONLY when the link is saturated (congested), without it, nothing is going to work well, not your torrents, not your email, not your browsing. By prioritizing the traffic, the torrents might back off but they would still continue to flow; they wouldn't be completely blocked, they would just slow down. QoS can be a good thing for allowing your VoIP to work while someone else in the home is watching a streaming movie or something. Without it, everything breaks once the circuit is congested.
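The point above can be illustrated with a toy model of strict-priority scheduling on a single link (the flow names and rates are hypothetical): when total demand fits within capacity, prioritization changes nothing; under congestion, the low-priority class is squeezed but keeps flowing rather than being blocked.

```python
# Toy model of strict-priority allocation on one link.
# Prioritization only matters once total demand exceeds link capacity.

def allocate(capacity, demands):
    """demands: list of (name, priority, rate); lower priority number is served first.
    Returns dict of name -> allocated rate (Mb/s)."""
    alloc = {}
    remaining = capacity
    for name, _prio, rate in sorted(demands, key=lambda d: d[1]):
        alloc[name] = min(rate, remaining)  # high-priority classes drain capacity first
        remaining -= alloc[name]
    return alloc

# Uncongested: 20 Mb/s link, 15 Mb/s of total demand.
# Every flow gets its full demand; priority is irrelevant.
print(allocate(20, [("voip", 0, 1), ("video", 1, 6), ("torrent", 2, 8)]))

# Congested: same link, 27 Mb/s of demand.
# VoIP and video keep their full rates; the torrent is squeezed
# down to 13 Mb/s but still flows.
print(allocate(20, [("voip", 0, 1), ("video", 1, 6), ("torrent", 2, 20)]))
```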
Your statement misses the point, which is, *who* gets to decide what traffic is prioritized? And will that prioritization be determined by who is paying my carrier for that prioritization, potentially against my own preferences? For some damn reason, I might *prefer* that my torrent traffic get prioritized over, say, email. Or, I might not appreciate the eventuality that a stream I'm watching on hulu.com stutters because my neighbor's watching a movie on Netflix and it just happens that Netflix has paid my carrier for prioritized traffic.
The other point, as mentioned previously, is that paid prioritization doesn't mean a thing unless there's congestion to be managed. It's not a far stretch to see exec-level types seeing the potential financial benefits to, well, ensuring that such congestion does show up in their network in order to create the practical incentives for paid prioritization.
-C Yep... If you don't believe that will happen, I refer you to Enron vs. California ISO and the lovely changes to the electricity market in California around that time.
Owen
On Sep 16, 2010, at 10:57 AM, George Bonser wrote:
I DO have a problem with a content provider paying to get priority access on the last mile. I have no particular interest in any of the content that Yahoo provides, but I do have an interest in downloading my Linux updates via torrents. Should I have to go back and bid against Yahoo just so I can get my packets in a timely fashion? </end user>
I understand that the last mile is going to be a congestion point, but the idea of allowing a bidding war for priority access for that capacity seems to be a path to madness.
--Chris
Hi Chris,
Since prioritization would work ONLY when the link is saturated (congested), without it, nothing is going to work well, not your torrents, not your email, not your browsing. By prioritizing the traffic, the torrents might back off but they would still continue to flow; they wouldn't be completely blocked, they would just slow down. QoS can be a good thing for allowing your VoIP to work while someone else in the home is watching a streaming movie or something. Without it, everything breaks once the circuit is congested.
It depends. If you're talking about prioritization of the end link, then, that's one thing... If the ISP wants to implement prioritization there based on the END USER's preferences, that's a nice value-add service. If you're talking about the aggregation point of several customer's links, then, prioritizing customer A's Yahoo traffic because Yahoo paid over customer B's torrent traffic when customer A and B have paid the same for their connection is not so good, IMHO. Owen
I DO have a problem with a content provider paying to get priority access on the last mile. I have no particular interest in any of the content that Yahoo provides, but I do have an interest in downloading my Linux updates via torrents. Should I have to go back and bid against Yahoo just so I can get my packets in a timely fashion? </end user> I understand that the last mile is going to be a congestion point, but the idea of allowing a bidding war for priority access for that capacity seems to be a path to madness. --Chris
Hi Chris,
Since prioritization would work ONLY when the link is saturated (congested), without it, nothing is going to work well, not your torrents, not your email, not your browsing. By prioritizing the traffic, the torrents might back off but they would still continue to flow; they wouldn't be completely blocked, they would just slow down. QoS can be a good thing for allowing your VoIP to work while someone else in the home is watching a streaming movie or something. Without it, everything breaks once the circuit is congested.
The problem with this theory is that it /sounds/ nice - but the reality is that eventually ISP's will use it to justify deprioritizing one customer's traffic over another, i.e. because your neighbor is doing realtime video and you're doing bittorrent, because their networks are not sufficiently beefy to handle all the traffic their customers may generate at once. If you're spending $60/month for an Internet connection, though, and your neighbor's spending the same, why would your ISP be permitted to determine that your traffic was less valuable? You want prioritization on a customer's link? Fine, allow for that, let the customer decide what should have priority, and I've certainly got zero problem with that. However, the moment we talk "paid" prioritization, we get into all sorts of troubling issues. What happens if YouTube doesn't want to pay for "paid" prioritization of their traffic? Does my ISP decide to route them through Timbuktu in order to punish them, effectively holding me as a hostage until YouTube pays up? ... JG
George Bonser wrote:
I believe a network should be able to sell prioritization at the edge, but not in the core. I have no problem with Y!, for example, paying a network to be prioritized ahead of bit torrent on the segment to the end
Considering yahoo (as any other big freemailer) is (unwillingly for the most part) facilitating a lot of spam traffic this would mean a lot of spam traffic gets prioritised. I can see that as an undesirable and unfortunate side effect. Regards, Jeroen -- http://goldmark.org/jeff/stupid-disclaimers/ http://linuxmafia.com/~rick/faq/plural-of-virus.html
participants (47)
- Barry Shein
- Bill Stewart
- Brett Frankenberger
- Brian Johnson
- Bruce Williams
- Chris Boyd
- Chris Woodfield
- Cian Brennan
- Daniel Seagraves
- Dave Sparro
- Dobbins, Roland
- Drew Weaver
- Florian Weimer
- Fred Baker
- George Bonser
- gordon b slater
- Hank Nussbacher
- Jack Bates
- Jamie Bowden
- JC Dill
- Jeffrey Lyon
- Jeroen van Aart
- Joe Greco
- Joe Provo
- Joel Jaeggli
- Julien Gormotte
- Justin Horstman
- Leo Bicknell
- Mark Smith
- Marshall Eubanks
- Matthew Palmer
- Michael Dillon
- Michael Painter
- Michael Thomas
- Nathan Eisenberg
- Owen DeLong
- Patrick Giagnocavo
- Randy Bush
- Rodrick Brown
- Sean Donelan
- Steven Bellovin
- sthaug@nethelp.no
- Tony Varriale
- Valdis.Kletnieks@vt.edu
- William Allen Simpson
- William Herrin
- William McCall