BitTorrent swarms have a deadly bite on broadband nets
http://www.multichannel.com/article/CA6332098.html

The short answer: Badly. Based on the research, conducted by Terry Shaw, of CableLabs, and Jim Martin, a computer science professor at Clemson University, it only takes about 10 BitTorrent users bartering files on a node (of around 500) to double the delays experienced by everybody else. Especially if everybody else is using "normal priority" services, like e-mail or Web surfing, which is what tech people tend to call "best-effort" traffic.

Adding more network bandwidth doesn't improve the network experience of other network users, it just increases the consumption by P2P users. That's why you are seeing many universities and enterprises spending money on traffic shaping equipment instead of more network bandwidth.
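For a feel for why a handful of always-on uploaders can roughly double everyone's delay on a shared node, here is a toy M/M/1 queueing illustration in Python. The link rate, user counts and per-user loads are invented for illustration and are not the parameters or methodology of the CableLabs/Clemson study:

# Toy M/M/1 illustration of why a little extra sustained load on a shared
# upstream can roughly double everyone's queueing delay once utilization is
# already moderate.  All numbers are invented; this is NOT the methodology
# of the CableLabs/Clemson work referenced above.

def mm1_delay_ms(link_mbps, offered_mbps, mean_pkt_bits=12000):
    """Mean time in system (queue + service) for an M/M/1 queue, in ms."""
    mu = link_mbps * 1e6 / mean_pkt_bits        # service rate, packets/sec
    lam = offered_mbps * 1e6 / mean_pkt_bits    # arrival rate, packets/sec
    if lam >= mu:
        return float("inf")                     # saturated: delay unbounded
    return 1000.0 / (mu - lam)

LINK = 10.0                                     # shared upstream, Mbit/s
web_users, web_each = 490, 0.01                 # light "best effort" users
base = web_users * web_each                     # ~4.9 Mbit/s background load

for p2p_users in (0, 5, 10):
    load = base + p2p_users * 0.25              # each seeder adds 250 kbit/s
    print(f"{p2p_users:2d} p2p users: load {load:4.2f} Mbit/s, "
          f"mean delay {mm1_delay_ms(LINK, load):5.2f} ms")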
Note that this is from 2006. Do you have a link to the actual paper, by Terry Shaw of CableLabs and Jim Martin of Clemson?

Regards
Marshall

On Oct 21, 2007, at 1:03 PM, Sean Donelan wrote:
http://www.multichannel.com/article/CA6332098.html
The short answer: Badly. Based on the research, conducted by Terry Shaw, of CableLabs, and Jim Martin, a computer science professor at Clemson University, it only takes about 10 BitTorrent users bartering files on a node (of around 500) to double the delays experienced by everybody else. Especially if everybody else is using "normal priority" services, like e-mail or Web surfing, which is what tech people tend to call "best-effort" traffic.
Adding more network bandwidth doesn't improve the network experience of other network users, it just increases the consumption by P2P users. That's why you are seeing many universities and enterprises spending money on traffic shaping equipment instead of more network bandwidth.
On Sun, 21 Oct 2007 13:03:11 -0400 (EDT) Sean Donelan <sean@donelan.com> wrote:
http://www.multichannel.com/article/CA6332098.html
The short answer: Badly. Based on the research, conducted by Terry Shaw, of CableLabs, and Jim Martin, a computer science professor at Clemson University, it only takes about 10 BitTorrent users bartering files on a node (of around 500) to double the delays experienced by everybody else. Especially if everybody else is using "normal priority" services, like e-mail or Web surfing, which is what tech people tend to call "best-effort" traffic.
Adding more network bandwidth doesn't improve the network experience of other network users, it just increases the consumption by P2P users. That's why you are seeing many universities and enterprises spending money on traffic shaping equipment instead of more network bandwidth.
This result is unsurprising and not controversial. TCP achieves fairness *among flows* because virtually all clients back off in response to packet drops. BitTorrent, though, uses many flows per request; furthermore, since its flows are much longer-lived than web or email, the latter never achieve their full speed even on a per-flow basis, given TCP's slow-start. The result is fair sharing among BitTorrent flows, which can only achieve fairness even among BitTorrent users if they all use the same number of flows per request and have an even distribution of content that is being uploaded.

It's always good to measure, but the result here is quite intuitive. It also supports the notion that some form of traffic engineering is necessary. The particular point at issue in the current Comcast situation is not that they do traffic engineering but how they do it.

--Steve Bellovin, http://www.cs.columbia.edu/~smb
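To make the flows-versus-users point concrete, a back-of-the-envelope sketch in Python. It assumes ideal long-run TCP fairness (an equal share per flow) and invented flow counts; real behaviour is messier, and slow-start makes short web or mail flows even worse off than this shows:

# Ideal TCP fairness divides a bottleneck per *flow*, not per *user*.
# One peer-to-peer user holding 40 long-lived flows against nine users with
# one flow each gets the lion's share.  Link rate and flow counts are
# arbitrary illustration values.

LINK_MBPS = 50.0

flows_per_user = {"p2p_user": 40}
for i in range(1, 10):
    flows_per_user[f"web_user_{i}"] = 1

total_flows = sum(flows_per_user.values())
per_flow = LINK_MBPS / total_flows              # equal share per flow

for user, flows in flows_per_user.items():
    print(f"{user:11s}: {flows:2d} flows -> {flows * per_flow:5.2f} Mbit/s "
          f"({100.0 * flows / total_flows:4.1f}% of the link)")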
Steven M. Bellovin wrote:
This result is unsurprising and not controversial. TCP achieves fairness *among flows* because virtually all clients back off in response to packet drops. BitTorrent, though, uses many flows per request; furthermore, since its flows are much longer-lived than web or email, the latter never achieve their full speed even on a per-flow basis, given TCP's slow-start. The result is fair sharing among BitTorrent flows, which can only achieve fairness even among BitTorrent users if they all use the same number of flows per request and have an even distribution of content that is being uploaded.
It's always good to measure, but the result here is quite intuitive. It also supports the notion that some form of traffic engineering is necessary. The particular point at issue in the current Comcast situation is not that they do traffic engineering but how they do it.
Dare I say it, it might be somewhat informative to engage in a priority queuing exercise like the Internet2 scavenger service.

In one priority queue goes all the normal traffic, and it's allowed to use up to 100% of link capacity; in the other queue goes the traffic you'd like to deliver at lower priority, which, given an oversubscribed shared resource on the edge, is capped at some percentage of link capacity beyond which performance begins to noticeably suffer... when the link is under-utilized, low-priority traffic can use a significant chunk of it. When high-priority traffic is present it will crowd out the low-priority stuff before the link saturates. Now obviously if high-priority traffic fills up the link then you have a provisioning issue.

I2 characterized this as worst-effort service. Apps and users could probably be convinced to set DSCP bits themselves in exchange for better performance of interactive apps and control traffic vs. worst-effort data transfer.

Obviously there's room for a discussion of net neutrality in here someplace. However, the closer you do this to the CMTS, the more likely it is to apply some locally relevant model of fairness.
--Steve Bellovin, http://www.cs.columbia.edu/~smb
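A minimal sketch, in Python, of the two-queue behaviour described above: the normal class may use the whole link, and the low-priority class only gets what is left over, subject to its own cap. Capacities, cap and the demand pattern are invented; a real deployment (such as the DSCP-based Internet2 scavenger service) is considerably more involved:

# Toy per-interval model of a "scavenger" class under strict priority:
# normal traffic may use up to 100% of the link, low-priority traffic only
# gets leftover capacity and is additionally capped.  All numbers invented.

LINK = 100.0                # link capacity per interval, Mbit
SCAV_CAP = 80.0             # hard cap on the scavenger class, Mbit

# (normal_demand, scavenger_demand) offered in each interval, Mbit
intervals = [(20, 90), (60, 90), (95, 90), (120, 90)]

for t, (normal, scav) in enumerate(intervals):
    normal_sent = min(normal, LINK)                      # normal can take it all
    scav_sent = min(scav, LINK - normal_sent, SCAV_CAP)  # scavenger gets leftovers
    note = ("  <- normal demand exceeds capacity: a provisioning problem"
            if normal > LINK else "")
    print(f"t={t}: normal {normal_sent:5.1f}, scavenger {scav_sent:5.1f}{note}")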
On Sun, 21 Oct 2007 19:31:09 -0700 Joel Jaeggli <joelja@bogus.com> wrote:
Steven M. Bellovin wrote:
This result is unsurprising and not controversial. TCP achieves fairness *among flows* because virtually all clients back off in response to packet drops. BitTorrent, though, uses many flows per request; furthermore, since its flows are much longer-lived than web or email, the latter never achieve their full speed even on a per-flow basis, given TCP's slow-start. The result is fair sharing among BitTorrent flows, which can only achieve fairness even among BitTorrent users if they all use the same number of flows per request and have an even distribution of content that is being uploaded.
It's always good to measure, but the result here is quite intuitive. It also supports the notion that some form of traffic engineering is necessary. The particular point at issue in the current Comcast situation is not that they do traffic engineering but how they do it.
Dare I say it, it might be somewhat informative to engage in a priority queuing exercise like the Internet-2 scavenger service.
In one priority queue goes all the normal traffic and it's allowed to use up to 100% of link capacity, in the other queue goes the traffic you'd like to deliver at lower priority, which given an oversubscribed shared resource on the edge is capped at some percentage of link capacity beyond which performance begins to noticably suffer... when the link is under-utilized low priority traffic can use a significant chunk of it. When high-priority traffic is present it will crowd out the low priority stuff before the link saturates. Now obviously if high priority traffic fills up the link then you have a provisioning issue.
I2 characterized this as worst effort service. apps and users could probably be convinced to set dscp bits themselves in exchange for better performance of interactive apps and control traffic vs worst effort services data transfer.
And if you think about these p2p rate limiting devices a bit more broadly, all they really are are traffic classification and QoS policy enforcement devices. If you can set dscp bits with them for certain applications and switch off the policy enforcement feature ...
Obviously there's room for a discussion of net-neutrality in here someplace. However the closer you do this to the cmts the more likely it is to apply some locally relevant model of fairness.
--Steve Bellovin, http://www.cs.columbia.edu/~smb
-- "Sheep are slow and tasty, and therefore must remain constantly alert." - Bruce Schneier, "Beyond Fear"
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and then a separate queue that allowed P2P to flow uninhibited for an extra $5/month, but then ISPs could purchase cheaper bandwidth for that.

But perhaps at the end of the day Andrew O. is right and it's best off to have a single queue and throw more bandwidth at the problem.

Frank

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Joel Jaeggli
Sent: Sunday, October 21, 2007 9:31 PM
To: Steven M. Bellovin
Cc: Sean Donelan; nanog@merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets

Steven M. Bellovin wrote:
This result is unsurprising and not controversial. TCP achieves fairness *among flows* because virtually all clients back off in response to packet drops. BitTorrent, though, uses many flows per request; furthermore, since its flows are much longer-lived than web or email, the latter never achieve their full speed even on a per-flow basis, given TCP's slow-start. The result is fair sharing among BitTorrent flows, which can only achieve fairness even among BitTorrent users if they all use the same number of flows per request and have an even distribution of content that is being uploaded.
It's always good to measure, but the result here is quite intuitive. It also supports the notion that some form of traffic engineering is necessary. The particular point at issue in the current Comcast situation is not that they do traffic engineering but how they do it.
Dare I say it, it might be somewhat informative to engage in a priority queuing exercise like the Internet-2 scavenger service. In one priority queue goes all the normal traffic and it's allowed to use up to 100% of link capacity, in the other queue goes the traffic you'd like to deliver at lower priority, which given an oversubscribed shared resource on the edge is capped at some percentage of link capacity beyond which performance begins to noticably suffer... when the link is under-utilized low priority traffic can use a significant chunk of it. When high-priority traffic is present it will crowd out the low priority stuff before the link saturates. Now obviously if high priority traffic fills up the link then you have a provisioning issue. I2 characterized this as worst effort service. apps and users could probably be convinced to set dscp bits themselves in exchange for better performance of interactive apps and control traffic vs worst effort services data transfer. Obviously there's room for a discussion of net-neutrality in here someplace. However the closer you do this to the cmts the more likely it is to apply some locally relevant model of fairness.
--Steve Bellovin, http://www.cs.columbia.edu/~smb
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and then a separate queue that allowed P2P to flow uninhibited for an extra $5/month, but then ISPs could purchase cheaper bandwidth for that.
But perhaps at the end of the day Andrew O. is right and it's best off to have a single queue and throw more bandwidth at the problem.
A system that wasn't P2P-centric could be interesting, though making it P2P-centric would be easier, I'm sure. ;-)

The idea that Internet data flows would ever stop probably doesn't work out well for the average user.

What about a system that would /guarantee/ a low amount of data on a low priority queue, but would also provide access to whatever excess capacity was currently available (if any)?

We've already seen service providers such as Virgin UK implementing things which essentially try to do this, where during primetime they'll limit the largest consumers of bandwidth for 4 hours. The method is completely different, but the end result looks somewhat similar. The recent discussion of AU service providers also talks about providing a baseline service once you've exceeded your quota, which is a simplified version of this.

Would it be better for networks to focus on separating data classes and providing a product that's actually capable of quality-of-service style attributes?

Would it be beneficial to be able to do this on an end-to-end basis (which implies being able to QoS across ASN's)?

The real problem with the "throw more bandwidth" solution is that at some point, you simply cannot do it, since the available capacity on your last mile simply isn't sufficient for the numbers you're selling, even if you are able to buy cheaper upstream bandwidth for it.

Perhaps that's just an argument to fix the last mile.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
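One way to read the "guarantee a low amount, plus access to whatever excess is available" idea is a committed rate with borrowing, in the spirit of HTB-style shapers. A toy per-interval sketch in Python with invented rates, not any particular vendor's implementation:

# Toy sketch of "guarantee a small committed rate, let the class borrow any
# spare capacity".  All rates and demands are invented illustration values.

LINK = 100.0                        # Mbit per interval
COMMITTED = 10.0                    # guaranteed to the low-priority class

# (other_demand, low_priority_demand) per interval, in Mbit
intervals = [(30, 80), (85, 80), (100, 80)]

for t, (other, low) in enumerate(intervals):
    low_guaranteed = min(low, COMMITTED)             # always honoured
    other_sent = min(other, LINK - low_guaranteed)   # the rest of the link
    spare = LINK - other_sent - low_guaranteed
    low_sent = low_guaranteed + min(low - low_guaranteed, spare)  # borrow
    print(f"t={t}: other {other_sent:5.1f}, low-priority {low_sent:5.1f} "
          f"(guaranteed {low_guaranteed:.1f}, borrowed {low_sent - low_guaranteed:.1f})")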
How about a system where I tell my customers that for a given plan X at price Y they get U bytes of "high priority" upload per month (or day or whatever) and after that all their traffic is low priority until the next cycle starts.

Now here's the fun part. They can mark the priority on the packets they send (diffserv/TOS) and decide what they want treated as high priority and what they want treated as not-so-high priority.

If I'm a low usage customer with no p2p applications, maybe I can mark ALL my traffic high priority all month long and not run over my limit. If I run p2p, I can choose to set my p2p software to send all its traffic marked low priority if I want to, and save my high priority traffic quota for more important stuff. Maybe the default should be high priority so that customers who do nothing but are light users get the best service.

Low priority upstream traffic gets dropped in favor of high priority, but users decide what's important to them. If I want all my stuff to be high priority, maybe there's a metered plan I can sign up for so I don't have any hard cap on high priority traffic each month but I pay extra over a certain amount.

This seems like it would be reasonable and fair and p2p wouldn't have to be singled out. Any thoughts? (A bookkeeping sketch of this scheme follows the quoted message below.)

On 10/22/07, Joe Greco <jgreco@ns.sol.net> wrote:
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and then a separate queue that allowed P2P to flow uninhibited for an extra $5/month, but then ISPs could purchase cheaper bandwidth for that.
But perhaps at the end of the day Andrew O. is right and it's best off to have a single queue and throw more bandwidth at the problem.
A system that wasn't P2P-centric could be interesting, though making it P2P-centric would be easier, I'm sure. ;-)
The idea that Internet data flows would ever stop probably doesn't work out well for the average user.
What about a system that would /guarantee/ a low amount of data on a low priority queue, but would also provide access to whatever excess capacity was currently available (if any)?
We've already seen service providers such as Virgin UK implementing things which essentially try to do this, where during primetime they'll limit the largest consumers of bandwidth for 4 hours. The method is completely different, but the end result looks somewhat similar. The recent discussion of AU service providers also talks about providing a baseline service once you've exceeded your quota, which is a simplified version of this.
Would it be better for networks to focus on separating data classes and providing a product that's actually capable of quality-of-service style attributes?
Would it be beneficial to be able to do this on an end-to-end basis (which implies being able to QoS across ASN's)?
The real problem with the "throw more bandwidth" solution is that at some point, you simply cannot do it, since the available capacity on your last mile simply isn't sufficient for the numbers you're selling, even if you are able to buy cheaper upstream bandwidth for it.
Perhaps that's just an argument to fix the last mile.
... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
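As referenced above, a minimal bookkeeping sketch in Python of the per-cycle premium-upload quota idea: the user's own marking is honoured, unmarked traffic draws down the quota, and everything is demoted once the quota is gone. The class, its names and the quota size are hypothetical; a real implementation would live in the provider's edge and policy systems:

# Accounting sketch for the plan described above: U bytes of "high priority"
# upload per billing cycle; traffic the user marks low priority leaves the
# quota untouched, and once the quota is exhausted everything is low
# priority until the cycle resets.  Names and numbers are hypothetical.

HIGH, LOW = "high", "low"

class Subscriber:
    def __init__(self, premium_quota_bytes):
        self.quota = premium_quota_bytes
        self.used = 0

    def classify_upload(self, nbytes, user_marked=HIGH):
        """Return the queue an upload burst of nbytes should go to."""
        if user_marked == LOW:
            return LOW                   # user opted out; quota preserved
        if self.used + nbytes > self.quota:
            return LOW                   # premium exhausted this cycle
        self.used += nbytes
        return HIGH

    def reset_cycle(self):
        self.used = 0

# Example: 5 GB of premium upload per month (a made-up figure).
sub = Subscriber(5 * 10**9)
print(sub.classify_upload(200 * 10**6))             # -> "high"
print(sub.classify_upload(10**9, user_marked=LOW))  # -> "low", quota untouched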
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and
That's a fair plan. Simple me came up with this one: don't say you offer 3 Mbps if you only offer 20 kbps. Simple enough. I think a big problem is that sales is saying they offer all this bandwidth, but the reality is no one gets it. You can blame P2P all you want, but realistically if users are offered, say, 3 Mbps then they have the right to expect it. It's not their fault or the network's fault if it's not realistic. You could say that you have no way of knowing how many users are on the network, but that's not true; I bet you could figure out how many users you can handle at what bandwidth guarantee.

Sorry if this seems simplistic, but hey, it's fun to make things simple :-) even if it can be a bit unrealistic.

________________________________
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Dorn Hetzel
Sent: Wednesday, October 24, 2007 8:12 AM
To: Joe Greco
Cc: frnkblk@iname.com; nanog@merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets

How about a system where I tell my customers that for a given plan X at price Y they get U bytes of "high priority" upload per month (or day or whatever) and after that all their traffic is low priority until the next cycle starts. Now here's the fun part. They can mark the priority on the packets they send (diffserv/TOS) and decide what they want treated as high priority and what they want treated as not-so-high priority. If I'm a low usage customer with no p2p applications, maybe I can mark ALL my traffic high priority all month long and not run over my limit. If I run p2p, I can choose to set my p2p software to send all it's traffic marked low priority if I want to, and save my high priority traffic quote for more important stuff. Maybe the default should be high priority so that customers who do nothing but are light users get the best service. low priority upstream traffic gets dropped in favor of high priority, but users decide what's important to them. If I want all my stuff to be high priority, maybe there's a metered plan I can sign up for so I don't have any hard cap on high priority traffic each month but I pay extra over a certain amount. This seems like it would be reasonable and fair and p2p wouldn't have to be singled out. Any thoughts?

On 10/22/07, Joe Greco <jgreco@ns.sol.net> wrote:
then a
separate queue that allowed P2P to flow uninhibited for an extra $5/month, but then ISPs could purchase cheaper bandwidth for that.
But perhaps at the end of the day Andrew O. is right and it's best off to have a single queue and throw more bandwidth at the problem.
A system that wasn't P2P-centric could be interesting, though making it P2P-centric would be easier, I'm sure. ;-) The idea that Internet data flows would ever stop probably doesn't work out well for the average user. What about a system that would /guarantee/ a low amount of data on a low priority queue, but would also provide access to whatever excess capacity was currently available (if any)? We've already seen service providers such as Virgin UK implementing things which essentially try to do this, where during primetime they'll limit the largest consumers of bandwidth for 4 hours. The method is completely different, but the end result looks somewhat similar. The recent discussion of AU service providers also talks about providing a baseline service once you've exceeded your quota, which is a simplified version of this. Would it be better for networks to focus on separating data classes and providing a product that's actually capable of quality-of-service style attributes? Would it be beneficial to be able to do this on an end-to-end basis (which implies being able to QoS across ASN's)? The real problem with the "throw more bandwidth" solution is that at some point, you simply cannot do it, since the available capacity on your last mile simply isn't sufficient for the numbers you're selling, even if you are able to buy cheaper upstream bandwidth for it. Perhaps that's just an argument to fix the last mile. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
The key thing is that it can't be too complicated for the subscriber. What you've described is already too difficult for the masses to consume. The scavenger class, as has been described in other postings, is probably the simplest way to implement things. Let the application developers take care of the traffic marking and expose priorities in the GUI, and the marketing from the MSO needs to be "$xx.xx per month for general use internet, with unlimited bulk traffic for $y.yy". Of course, the MSOs wouldn't say that the first category excludes bulk traffic, or mention caps or upstream limitations or P2P control because that would be bad for marketing. Frank From: Dorn Hetzel [mailto:dhetzel@gmail.com] Sent: Wednesday, October 24, 2007 8:12 AM To: Joe Greco Cc: frnkblk@iname.com; nanog@merit.edu Subject: Re: BitTorrent swarms have a deadly bite on broadband nets How about a system where I tell my customers that for a given plan X at price Y they get U bytes of "high priority" upload per month (or day or whatever) and after that all their traffic is low priority until the next cycle starts. Now here's the fun part. They can mark the priority on the packets they send (diffserv/TOS) and decide what they want treated as high priority and what they want treated as not-so-high priority. If I'm a low usage customer with no p2p applications, maybe I can mark ALL my traffic high priority all month long and not run over my limit. If I run p2p, I can choose to set my p2p software to send all it's traffic marked low priority if I want to, and save my high priority traffic quote for more important stuff. Maybe the default should be high priority so that customers who do nothing but are light users get the best service. low priority upstream traffic gets dropped in favor of high priority, but users decide what's important to them. If I want all my stuff to be high priority, maybe there's a metered plan I can sign up for so I don't have any hard cap on high priority traffic each month but I pay extra over a certain amount. This seems like it would be reasonable and fair and p2p wouldn't have to be singled out. Any thoughts? On 10/22/07, Joe Greco <jgreco@ns.sol.net> wrote:
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and then a
separate queue that allowed P2P to flow uninhibited for an extra $5/month, but then ISPs could purchase cheaper bandwidth for that.
But perhaps at the end of the day Andrew O. is right and it's best off to have a single queue and throw more bandwidth at the problem.
A system that wasn't P2P-centric could be interesting, though making it P2P-centric would be easier, I'm sure. ;-) The idea that Internet data flows would ever stop probably doesn't work out well for the average user. What about a system that would /guarantee/ a low amount of data on a low priority queue, but would also provide access to whatever excess capacity was currently available (if any)? We've already seen service providers such as Virgin UK implementing things which essentially try to do this, where during primetime they'll limit the largest consumers of bandwidth for 4 hours. The method is completely different, but the end result looks somewhat similar. The recent discussion of AU service providers also talks about providing a baseline service once you've exceeded your quota, which is a simplified version of this. Would it be better for networks to focus on separating data classes and providing a product that's actually capable of quality-of-service style attributes? Would it be beneficial to be able to do this on an end-to-end basis (which implies being able to QoS across ASN's)? The real problem with the "throw more bandwidth" solution is that at some point, you simply cannot do it, since the available capacity on your last mile simply isn't sufficient for the numbers you're selling, even if you are able to buy cheaper upstream bandwidth for it. Perhaps that's just an argument to fix the last mile. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers. That might dramatically shrink your 'addressable market', not to mention your job market ... :) Roderick S. Beck Director of EMEA Sales Hibernia Atlantic 1, Passage du Chantier, 75012 Paris http://www.hiberniaatlantic.com Wireless: 1-212-444-8829. Landline: 33-1-4346-3209. French Wireless: 33-6-14-33-48-97. AOL Messenger: GlobalBandwidth rod.beck@hiberniaatlantic.com rodbeck@erols.com ``Unthinking respect for authority is the greatest enemy of truth.'' Albert Einstein.
On Wed, Oct 24, 2007, Rod Beck wrote:
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
You'd be surprised; users in the Australian market have had to get used to knowing how much bandwidth they use. People are adaptable. Get used to it. :) Adrian
That misses the point. They are probably being forced to adapt by a monopoly or a quasi-monopoly or by the fact that transport into Australia is extremely expensive. The situation outside of Australia is quite different. A DS3 from Sydney to LA is worth about 10 DS3s NYC/London. It is not impossible to move people to these price schemes, but in a market with many providers, it is highly risky. A simpler and hence less costly approach for those providers serving mass markets is to stick to flat rate pricing and outlaw high-bandwidth applications that are used by only a small number of end users. Roderick S. Beck Director of EMEA Sales Hibernia Atlantic 1, Passage du Chantier, 75012 Paris http://www.hiberniaatlantic.com Wireless: 1-212-444-8829. Landline: 33-1-4346-3209. French Wireless: 33-6-14-33-48-97. AOL Messenger: GlobalBandwidth rod.beck@hiberniaatlantic.com rodbeck@erols.com ``Unthinking respect for authority is the greatest enemy of truth.'' Albert Einstein.
On Wed, Oct 24, 2007, Rod Beck wrote:
That misses the point. They are probably being forced to adapt by a monopoly or a quasi-monopoly or by the fact that transport into Australia is extremely expensive. The situation outside of Australia is quite different. A DS3 from Sydney to LA is worth about 10 DS3s NYC/London.
How's that missing the point? The market might not accept it outright but people can and have adapted in areas where traffic charging and knowing how much you've downloaded is the norm.
A simpler and hence less costly approach for those providers serving mass markets is to stick to flat rate pricing and outlaw high-bandwidth applications that are used by only a small number of end users.
.. until someone builds a better network, and then they're stuck? Oh wait, that's right, America also has monopolised last-mile delivery networks which are coincidentally the ones having the trouble? Hm! Adrian
On 24-okt-2007, at 17:39, Rod Beck wrote:
A simpler and hence less costly approach for those providers serving mass markets is to stick to flat rate pricing and outlaw high-bandwidth applications that are used by only a small number of end users.
That's not going to work in the long run. Just my podcasts are about 10 GB a month. You only have to wait until there's more HD video available online, and until it gets easier for most people to get at, to see bandwidth use per customer skyrocket. There are much worse things than having customers that like using your service as much as they can.
On Oct 25, 2007, at 6:49 AM, Iljitsch van Beijnum wrote:
On 24-okt-2007, at 17:39, Rod Beck wrote:
A simpler and hence less costly approach for those providers serving mass markets is to stick to flat rate pricing and outlaw high-bandwidth applications that are used by only a small number of end users.
That's not going to work in the long run. Just my podcasts are about 10 GB a month. You only have to wait until there's more HD video available online and it gets easier to get at for most people to see bandwidth use per customer skyrocket.
To me, it is ironic that some of the same service providers who refused to consider enabling native multicast for video are now complaining of the consequences of video going by unicast. They can't say they weren't warned.
There are much worse things than having customers that like using your service as much as they can.
Indeed. Regards Marshall
On 24-okt-2007, at 17:39, Rod Beck wrote:
A simpler and hence less costly approach for those providers serving mass markets is to stick to flat rate pricing and outlaw high-bandwidth applications that are used by only a small number of end users.
That's not going to work in the long run. Just my podcasts are about 10 GB a month. You only have to wait until there's more HD video available online and it gets easier to get at for most people to see bandwidth use per customer skyrocket. There are much worse things than having customers that like using your service as much as they can.

Oh, let me be clear. I don't know if it will work long term. But businessmen like simple rules of thumb, and flat rate for the masses plus banishing the rest will be the default strategy. The real question is whether a pricing/service structure can be devised that allows the mass market providers to make money off the problematic heavy users. If so, then you will get a tiered structure: flat rate for the masses and a more expensive service for the Bandwidth Hogs.

Actually, there are not many worse things than customers that use your service so much that they ruin your business model. Yes, I believe the industry needs to reach accommodation with the Bandwidth Hogs because they will drive the growth, and if it is profitable growth, then all parties benefit. But the only way you are going to get the Bandwidth Addicts to pay more is by banishing them from flat-rate services. They won't go gently into the night. In fact, I am not sure how profitable the Addicts are, given the stereotype of the 20-something ...

- R.
Back in the dawn of the public internet this same sort of thing was argued fiercely on lists like com-priv (commercialization and privatization of the internet.) It was usually around flat-rate vs bandwidth charging.

My take was that bandwidth pricing lets you buy as much pipe as you might ever need, like 100 Mbps or more for a SOHO, but only pay for what you use, which seemed rational if the technology supported that. Flat-rate pricing encourages you to guess the most bandwidth you'll ever need in advance and only pay for that. In theory hybrid models could exist (variable, on-demand bandwidth shaping and all that; it's pretty easy in the p-p wireless world.)

What's happened is the worst of both worlds, where vendors are selling end-users flat-rate pipes (think, for example, 20 Mbps FTTH for under $100/mo) but wishing customers would use them as if they were priced per bit. This is a business model dislocation.

It reminds me of the time, back in my heartier young man days, when I'd frequent an all-you-can-eat buffet nearby (I'd sit there doing school work and make trips to the buffet every so often) and finally the owner tossed me out after I overstayed my welcome one day, saying "yes, that's ALL you can eat, now get OUTTA here!!!"

--
-Barry Shein

The World | bzs@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD | Login: Nationwide
Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
On Wed, 24 Oct 2007, Adrian Chadd wrote:
You'd be surprised; users in the Australian market have had to get used to knowing how much bandwidth they use.
People are adaptable. Get used to it. :)
Likewise, people seem to complain about anything. Even Australians seem to like to complain. Get used to it :-)

http://www.computerworld.com.au/index.php/id;1929779828

Again, is there no alternative between such extremely low data caps on everyone and extreme usage by a few?
The problem isn't a particular type of traffic in isolation, it's usually the impact of one network user's traffic on all the other network users' traffic sharing the same network.

Network Quotas for Individuals - A better answer to the P2P bandwidth problem?
http://www.greatplains.net/research/workshops/2004%20Annual%20Meeting/Networ...

Can ISPs and P2P users co-operate for improved performance:
http://www.net.t-labs.tu-berlin.de/papers/AFS-CISPP2PSCIP-07.pdf

P4P: Proactive Provider Assistance for P2P
http://cs-www.cs.yale.edu/homes/yong/publications/P4PVision_P4PWG.ppt
On Oct 24, 2007, at 1:28 PM, Sean Donelan wrote:
The problem isn't a particular type of traffic in isolation, its usually the impact of one network user's traffic on all the other network user's traffic sharing the same network.
Network Quotas for Individuals - A better answer to the P2P bandwidth problem?
http://www.greatplains.net/research/workshops/2004%20Annual%20Meeting/Network.Quotas2.ppt
This link has a newer version of the presentation slides. http://www.greatplains.net/conference/Network-Quotas.ppt We have since increased the Internet 1 bandwidth purchased and have increased the Residence Hall Quotas to 1 GigaByte per day. --- Bruce Curtis bruce.curtis@ndsu.edu Certified NetAnalyst II 701-231-8527 North Dakota State University
People manage to count stuff they use when they pay for it: minutes (cell), kWh (electricity), gallons (gas), etc. People have managed to figure out cell phone plans where they get N minutes included and then pay extra over that.

The only users this would affect are those that upload a lot, because no one else should run over their "premium upload limit" and have their upload traffic reclassified as not-high priority. If bytes are too tiny, maybe count it in tunes, or CDs, or web pages (the mythical average web page :) ) or bananas, whatever the marketing folks can live with. Call it all a free extra premium service so no one feels bad :)

The main idea is that everyone on plan X gets premium service on their first Y bytes/month of upload by default, but if they know more then they can mark some traffic so it doesn't use up their premium quota but gets worse service. If they do nothing, then all their upload is premium until they run out of premium, which the median user never should.

On 10/24/07, Rod Beck <Rod.Beck@hiberniaatlantic.com> wrote:
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
That might dramatically shrink you 'addressable market', not to mention your job market ...
:)
Roderick S. Beck Director of EMEA Sales Hibernia Atlantic 1, Passage du Chantier, 75012 Paris http://www.hiberniaatlantic.com Wireless: 1-212-444-8829. Landline: 33-1-4346-3209. French Wireless: 33-6-14-33-48-97. AOL Messenger: GlobalBandwidth rod.beck@hiberniaatlantic.com rodbeck@erols.com ``Unthinking respect for authority is the greatest enemy of truth.'' Albert Einstein.
On Wed, 24 Oct 2007 15:44:53 BST, Rod Beck said:
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications.
Note that in many/most cases, the person signing the agreement and paying the bill (the parental units) are not the ones actually consuming the bandwidth (the offspring). The *consumer* of the bandwidth may very well have a *very* good idea of exactly how many movies/albums they've pulled down this month, and would much prefer if the bill-payer was totally in the dark about it....
On 24-okt-2007, at 16:44, Rod Beck wrote:
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
Users more or less know what a gigabyte is, because when they download too many of them, it fills up their drive. If the limits are high enough that only actively using high-bandwidth apps has any danger of going over them, the people using those apps will find the time to educate themselves. It's not that hard: an hour of video conferencing (500 kbps) is 450 MB, downloading a gigabyte is.. 1 GB.
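For reference, the arithmetic behind those figures as a quick Python check: 500 kbit/s sustained for an hour is 225 MB one way, which lines up with the 450 MB quoted above if both directions of a video conference are counted:

# Quick check of the figures above: a sustained 500 kbit/s stream for one
# hour, expressed in decimal megabytes.

def megabytes(kbit_per_s, hours=1.0):
    return kbit_per_s * 1000 * 3600 * hours / 8 / 1e6

one_way = megabytes(500)        # 225.0 MB per direction per hour
print(one_way, 2 * one_way)     # ~450 MB if both directions are counted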
Iljitsch van Beijnum wrote:
On 24-okt-2007, at 16:44, Rod Beck wrote:
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
Users more or less know what a gigabyte is, because when they download too many of them, it fills up their drive. If the limits are high enough that only actively using high-bandwidth apps has any danger of going over them, the people using those apps will find the time to educate themselves. It's not that hard: an hour of video conferencing (500 kbps) is 450 MB, downloading a gigabyte is.. 1 GB.
But then that same 1GB can be sent back up to P2P clients any number of times. When this happens the customer no longer has any idea how much data they transferred, because "well I just left it on and.....".

Really, it shouldn't matter how much traffic a user generates/downloads so long as QoS makes sure that people who want real stuff get it and are not killed by the guy down the street seeding the latest Harry Potter movie. If people are worried about transit and infrastructure costs then again, implement QoS and fix the transit/infrastructure to use it. That way you can limit your spending on transit, for example, to a fixed amount and QoS will manage it for you.

--
Leigh

You owe the oracle an encrypted Peer to Peer detector.
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
Actually, it sounds a lot like the Electric7 tariffs found in the UK for electricity. These are typically used by low income people who have less education than the average population. And yet they can understand the concept of saving money by using more electricity at night. I really think that a two-tiered QOS system such as the scavenger suggestion is workable if the applications can do the marking. Has anyone done any testing to see if DSCP bits are able to travel unscathed through the public Internet? --Michael Dillon P.S. it would be nice to see QoS be recognized as a mechanism for providing a degraded quality of service instead of all the "first class" marketing puffery.
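On the "do DSCP bits travel unscathed" question, a rough sketch of one way to test it between two hosts you control: mark UDP probes on the sender and read the received TOS byte on the other end. This assumes Linux (IP_RECVTOS), placeholder addresses and ports, and a single path; a real survey would need many paths and vantage points:

# Does a DSCP marking survive the path between two hosts you control?
# Sender sets the TOS byte; receiver asks the kernel to deliver the received
# TOS byte as ancillary data.  Linux-specific (IP_RECVTOS); the address,
# port and DSCP value are placeholders.
import socket

IP_RECVTOS = getattr(socket, "IP_RECVTOS", 13)   # 13 on Linux if not exported

def send_probe(dst_host, port=9999, dscp=8):     # DSCP 8 = CS1 ("scavenger")
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    s.sendto(b"dscp-probe", (dst_host, port))
    s.close()

def receive_probe(port=9999):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_RECVTOS, 1)
    s.bind(("0.0.0.0", port))
    data, ancdata, flags, addr = s.recvmsg(2048, socket.CMSG_SPACE(1))
    for level, ctype, cdata in ancdata:
        if level == socket.IPPROTO_IP and ctype == socket.IP_TOS:
            print(f"probe from {addr}: received DSCP {cdata[0] >> 2}")
    s.close()

# Run receive_probe() on one host and send_probe("<receiver address>") on the
# other; if the printed DSCP differs from what was sent, something rewrote it.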
On Thu, 25 Oct 2007 02:33:35 BST, michael.dillon@bt.com said:
I really think that a two-tiered QOS system such as the scavenger suggestion is workable if the applications can do the marking. Has anyone done any testing to see if DSCP bits are able to travel unscathed through the public Internet?
Given the bad track record for PMTU 'frag needed' ICMP, ECN, and anybody in 69/8, 70/8, 71/8, I'll make the prediction that DSCP bits are mangled in too many ways to make effective use of them, and we can expect a 3-4 year effort to get stuff cleaned up before it works as intended.
On 25-okt-2007, at 3:33, <michael.dillon@bt.com> <michael.dillon@bt.com> wrote:
I really think that a two-tiered QOS system such as the scavenger suggestion is workable if the applications can do the marking. Has anyone done any testing to see if DSCP bits are able to travel unscathed through the public Internet?
Sure, Apple has. I don't think they intended to, though. http://www.mvldesign.com/video_conference_tutorial.html Search for "DSCP" or "Comcast" on that page.
Are you thinking of scavenger on the upload or download? If it's just upload, it's only the subscriber's provider that needs to concern themselves with maintaining the tags -- they will do the necessary traffic engineering to ensure it's not 'damaging' the upstream of their other subscribers. If it's download, that's a whole other ball of wax, and not what drove Comcast to do what they're doing, and not the apparent concern of at least North American ISPs today.

Frank

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of michael.dillon@bt.com
Sent: Wednesday, October 24, 2007 8:34 PM
To: nanog@merit.edu
Subject: RE: BitTorrent swarms have a deadly bite on broadband nets
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
Actually, it sounds a lot like the Electric7 tariffs found in the UK for electricity. These are typically used by low income people who have less education than the average population. And yet they can understand the concept of saving money by using more electricity at night. I really think that a two-tiered QOS system such as the scavenger suggestion is workable if the applications can do the marking. Has anyone done any testing to see if DSCP bits are able to travel unscathed through the public Internet? --Michael Dillon P.S. it would be nice to see QoS be recognized as a mechanism for providing a degraded quality of service instead of all the "first class" marketing puffery.
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
"Actually, it sounds a lot like the Electric7 tariffs found in the UK for electricity. These are typically used by low income people who have less education than the average population. And yet they can understand the concept of saving money by using more electricity at night. I really think that a two-tiered QOS system such as the scavenger suggestion is workable if the applications can do the marking. Has anyone done any testing to see if DSCP bits are able to travel unscathed through the public Internet? --Michael Dillon P.S. it would be nice to see QoS be recognized as a mechanism for providing a degraded quality of service instead of all the "first class" marketing puffery." It is not question of whether you approve of the marketing puffery or not. By the way, telecom is an industry that has used tiered pricing schemes extensively, both in the 'voice era' and in the early dialup industry. In the early 90s there were dial up pricing plans that rewarded customers for limiting their activity to the evening and weekends. MCI, one of the early long distance voice entrants, had all sorts of discounts, including weekend and evening promotions. Interestingly enough, although those schemes are clearly attractive from an efficiency standpoint, the entire industry have shifted towards flat rate pricing for both voice and data. To dismiss that move as purely driven by marketing strikes me as misguided. That have to be real costs involved for such a system to fall apart.
Actually, it sounds a lot like the Electric7 tariffs found in the UK for electricity. These are typically used by low income people who have less education than the average population. And yet they can understand the concept of saving money by using more electricity at night.
I can't comment on MPLS or DSCP bits, but I found the concept of night-time on the Internet interesting. This would be a localized event as night moved around the earth. If the scheduling feature in many of the fileshare applications were preset to run full bore during late night hours and back off to 1/4 speed during the day, I wonder how that might affect both the networks and the ISPs. Since the far side of the planet would be on the opposite schedule, that might also help to localize the traffic from fileshare networks.

Seems to me a programmer setting a default schedule in an application is far simpler than many of the other suggestions I've seen for solving this problem.

Geo.

George Roettger
Netlink Services
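As a sketch of what such a default schedule might look like in client code (Python), with placeholder hours and rates a developer would tune; the point is just that the cap follows the local clock, so the backoff moves around the globe with local peak time:

# Sketch of a default time-of-day upload cap for a file-sharing client:
# full speed overnight, a quarter of that during the local daytime/evening
# peak.  Hour boundaries and rates are placeholder values.
from datetime import datetime

FULL_RATE_KBPS = 800            # off-peak (night) upload cap
PEAK_FRACTION = 0.25            # "back off to 1/4 speed during the day"
PEAK_HOURS = range(8, 24)       # 08:00-23:59 local time treated as peak

def default_upload_cap_kbps(now=None):
    now = now or datetime.now() # local time, so the backoff follows the sun
    if now.hour in PEAK_HOURS:
        return FULL_RATE_KBPS * PEAK_FRACTION
    return FULL_RATE_KBPS

print(default_upload_cap_kbps())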
On Thu, 25 Oct 2007, Geo. wrote:
Seems to me a programmer setting a default schedule in an application is far simpler than many of the other suggestions I've seen for solving this problem.
End users do not have any interest in saving ISP upstream bandwidth, their interest is to get as much as they can, when they want/need it. So solving a bandwidth crunch by trying to make end user applications behave in an ISP friendly manner is a concept that doesn't play well with reality. Congestion should be at the individual customer access, not in the distribution, not at the core. -- Mikael Abrahamsson email: swmike@swm.pp.se
Seems to me a programmer setting a default schedule in an application is far simpler than many of the other suggestions I've seen for solving this problem.
End users do not have any interest in saving ISP upstream bandwidth,
They also have no interest in learning, so setting defaults in popular software (for example, the RFC1918 space zones in MS DNS server) can make all the difference in the world. This way, the bulk of filesharing would have the defaults set to minimize use during peak periods, while still allowing the freedom, on a per-user basis, to change that. Most would not, simply because they don't know about it. The effects of such a default could be considerable.

Also, if this default stepping back during peak times only affected upload speeds, the user would never notice; in fact, if they did notice, they would probably like that it allows them more bandwidth for browsing and sending email during the hours they are likely to use it.

I fail to see a downside?

Geo.
In fairness, most P2P applications such as BitTorrent already have the settings there; they are just not set up by default. Also, they do limit the amount of download and upload based on the bandwidth the user sets up. The application is set up to handle it; the users usually just set the bandwidth all the way up and ignore it.

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Geo.
Sent: Thursday, October 25, 2007 3:11 PM
To: nanog@merit.edu
Subject: RE: BitTorrent swarms have a deadly bite on broadband nets
Seems to me a programmer setting a default schedule in an application is far simpler than many of the other suggestions I've seen for solving this problem.
End users do not have any interest in saving ISP upstream bandwidth,
they also have no interest in learning so setting defaults in popular software, for example RFC1918 space zones in MS DNS server, can make all the difference in the world. This way, the bulk of filesharing would have the defaults set to minimize use during peak periods and still allow the freedom on a per user basis to change that. Most would not simply because they don't know about it. The effects of such a default could be considerable. Also if this default stepping back during peak times only affected upload speeds, the user would never notice, in fact if they did notice they would probably like that it allows them more bandwidth for browsing and sending email during the hours they are likely to use it. I fail to see a downside? Geo.
Rod Beck wrote:
The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.
"Actually, it sounds a lot like the Electric7 tariffs found in the UK for electricity. These are typically used by low income people who have less education than the average population. And yet they can understand the concept of saving money by using more electricity at night.
And actually a lot of networks do this with DPI boxes limiting P2P throughput during the day and increasing or removing the limit at night. -- Leigh
On 10/22/2007 at 3:02 PM, "Frank Bulk" <frnkblk@iname.com> wrote:
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and then a separate queue that allowed P2P to flow uninhibited for an extra $5/month, but then ISPs could purchase cheaper bandwidth for that.
But perhaps at the end of the day Andrew O. is right and it's best off to have a single queue and throw more bandwidth at the problem.
How does one "squash P2P?" How fast will BitTorrent start hiding it's trivial to spot ".BitTorrent protocol" banner in the handshakes? How many P2P protocols are already blocking/shaping evasive? It seems to me is what hurts the ISPs is the accompanying upload streams, not the download (or at least the ISP feels the same download pain no matter what technology their end user uses to get the data[0]). Throwing more bandwidth does not scale to the number of users we are talking about. Why not suck up and go with the economic solution? Seems like the easy thing is for the ISPs to come clean and admit their "unlimited" service is not and put in upload caps and charge for overages. [0] Or is this maybe P2P's fault only in the sense that it makes so much more content available that there is more for end-users to download now than ever before. B¼information contained in this e-mail message is confidential, intended only for the use of the individual or entity named above. If the reader of this e-mail is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any review, dissemination, distribution or copying of this communication is strictly prohibited. If you have received this e-mail in error, please contact postmaster@globalstar.com
... Why not suck up and go with the economic solution? Seems like the easy thing is for the ISPs to come clean and admit their "unlimited" service is not and put in upload caps and charge for overages.
Who will be the first? If there *is* competition in the marketplace, the cable company does not want to be the first to say "We limit you" (even if it is true, and has always been true, for some values of truth). This is not a technical problem (telling of the truth), it is a marketing issue. In case it has escaped anyone on this list, I will assert that marketing's strengths have never been telling the truth, the whole truth, and nothing but the truth.

I read the fine print in my broadband contract. It states that one's mileage (speed) will vary, and the download/upload speeds are maximum only (and lots of other caveats and protections for the provider; none for me, that I recall). But most people do not read the fine print, but only see the TV advertisements for cable with the turtle, or the flyers in the mail with a cheap price for DSL (so you do not forget, order before midnight tonight!).
On Mon, Oct 22, 2007 at 05:16:08PM -0700, Crist Clark wrote:
It seems to me is what hurts the ISPs is the accompanying upload streams, not the download (or at least the ISP feels the same download pain no matter what technology their end user uses to get the data[0]). Throwing more bandwidth does not scale to the number of users we are talking about. Why not suck up and go with the economic solution? Seems like the easy thing is for the ISPs to come clean and admit their "unlimited" service is not and put in upload caps and charge for overages.
[I've been trying to stay out of this thread, as I consider it unproductive, but here goes...]

What hurts ISPs is not upstream traffic. Most access providers are quite happy with upstream traffic, especially if they manage their upstream caps carefully. Careful management of outbound traffic and an active peer-to-peer customer base is good for ratios -- something that access providers without large streaming or hosting farms can benefit from.

What hurt these access providers, particularly those in the cable market, was a set of failed assumptions. The Internet became a commodity, driven by this web thing. As a result, standards like DOCSIS developed, and bandwidth was allocated, frequently in an asymmetric fashion, to access customers. We have lots of asymmetric access technologies that are not well suited to some new applications.

I cannot honestly say I share Sean's sympathy for Comcast, in this case. I used to work for a fairly notorious provider of co-location services, and I don't recall any great outpouring of sympathy on this list when co-location providers ran out of power and cooling several years ago. I /do/ recall a large number of complaints and the wailing and gnashing of teeth, as well as a lot of discussions at NANOG (both the general session and the hallway track) about the power and cooling situation in general. These have continued through this last year.

If the MSOs, their vendors, and our standards bodies in general, have made a failed set of assumptions about traffic ratios and volume in access networks, I don't understand why consumers should be subject to arbitrary changes in policy to cover engineering mistakes. It would be one thing if they simply reduced the upstream caps they offered; it is quite another to actively interfere with some protocols and not others -- if this is truly about upstream capacity, I would expect the former, not the latter.

If you read Comcast's services agreement carefully, you'll note that the activity in question isn't mentioned. It only comes up in their Use Policy, something they can and have amended on the fly. It does not appear in the agreement itself. If one were so inclined, one might consider this at least slightly dishonest. Why make a consumer enter into an agreement, which refers to a side agreement, and then update it at will? Can you reasonably expect Joe Sixpack to read and understand what is both a technical and legal document?

I would not personally feel comfortable forging RSTs, amending a policy I didn't actually bother to include in my service agreement with my customers, and doing it all to shift the burden for my, or my vendor's, engineering assumptions onto my customers -- but perhaps that is why I am an engineer, and not an executive.

As an aside, before all these applications become impossible to identify, perhaps it's time for cryptographically authenticated RST cookies? Solving the forging problems might head off everything becoming an encrypted pile of goo on tcp/443.
Information contained in this e-mail message is confidential, intended only for the use of the individual or entity named above. If the reader of this e-mail is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any review, dissemination, distribution or copying of this communication is strictly prohibited. If you have received this e-mail in error, please contact postmaster@globalstar.com
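Majdi's closing suggestion -- cryptographically authenticated RST cookies -- was never standardized, but the shape of the idea is simple: a receiver only honors a RST that carries a keyed tag a blind forger cannot compute. A minimal Python sketch, assuming the tag is an HMAC over the connection 4-tuple and sequence number and that both ends have somewhere to carry it (the names and the 4-byte truncation are illustrative, not any real TCP extension):

import hmac
import hashlib
import os

SECRET = os.urandom(32)  # per-host or per-connection secret; illustrative only

def rst_tag(src_ip, src_port, dst_ip, dst_port, seq, nbytes=4):
    """Truncated HMAC over the connection 4-tuple and sequence number."""
    msg = "%s:%d->%s:%d:%d" % (src_ip, src_port, dst_ip, dst_port, seq)
    return hmac.new(SECRET, msg.encode(), hashlib.sha256).digest()[:nbytes]

def rst_acceptable(pkt):
    """Accept a RST only if its tag verifies; a forged RST without the secret fails."""
    expected = rst_tag(pkt["src_ip"], pkt["src_port"],
                       pkt["dst_ip"], pkt["dst_port"], pkt["seq"])
    return hmac.compare_digest(expected, pkt["tag"])

The hard part, of course, is where the tag rides (sequence-number space, a TCP option, or something negotiated at SYN time) and how the secret gets agreed on -- exactly the details a real proposal would have to settle.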
Someone toss this individual a gmail invite...please! --msa
On Mon, 22 Oct 2007, Majdi S. Abbas wrote:
What hurt these access providers, particularly those in the cable market, was a set of failed assumptions. The Internet became a commodity, driven by this web thing. As a result, standards like DOCSIS developed, and bandwidth was allocated, frequently in an asymmetric fashion, to access customers. We have lots of asymmetric access technologies, that are not well suited to some new applications.
This doesn't explain why many universities, most with active, symmetric ethernet switches in residential dorms, have been deploying packet shaping technology for even longer than the cable companies. If the answer were as simple as upgrading everyone to 100Mbps symmetric ethernet, or even 1Gbps symmetric ethernet, then the university resnets would be in great shape. Ok, maybe the greedy commercial folks screwed up and deserve what they got; but why are the noble non-profit universities having the same problems?
On Tue, Oct 23, 2007, Sean Donelan wrote:
On Mon, 22 Oct 2007, Majdi S. Abbas wrote:
What hurt these access providers, particularly those in the cable market, was a set of failed assumptions. The Internet became a commodity, driven by this web thing. As a result, standards like DOCSIS developed, and bandwidth was allocated, frequently in an asymmetric fashion, to access customers. We have lots of asymmetric access technologies, that are not well suited to some new applications.
This doesn't explain why many universities, most with active, symmetric ethernet switches in residential dorms, have been deploying packet shaping technology for even longer than the cable companies. If the answer were as simple as upgrading everyone to 100Mbps symmetric ethernet, or even 1Gbps symmetric ethernet, then the university resnets would be in great shape.
Ok, maybe the greedy commercial folks screwed up and deserve what they got; but why are the noble non-profit universities having the same problems?
because off-the-shelf p2p stuff doesn't seem to pick up on internal peers behind the great NAT that I've seen dorms behind? :P Adrian
Hi All, I am looking for hosting facilities for about 10-20 racks and Internet transit with good local connectivity in Jordan, can anybody help? Thanks, Leigh Porter UK Broadband/PCCW
On Tue, 23 Oct 2007, Sean Donelan wrote:
Ok, maybe the greedy commercial folks screwed up and deserve what they got; but why are the noble non-profit universities having the same problems?
Because if you look at a residential population with ADSL2+ versus one with 10/10 or 100/100, the upload/download ratios are reversed: 1:2 with ADSL2+ (double the download volume compared to upload) versus 2:1 on symmetric ethernet (double the upload compared to download). In my experience the amount of download is approximately the same in both cases, which means the upload volume changes by roughly a factor of four with the symmetry of the access media. OTOH, long-term savings (over several years) on operational costs still make residential ethernet a better deal, since experience is that "it just works," as opposed to ADSL2+, where you have a very disturbing signal environment, customers impacting each other, and as a result a lot of customer calls about poor quality and varying speeds/bit errors over time. -- Mikael Abrahamsson email: swmike@swm.pp.se
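The bookkeeping behind that observation is worth a tiny Python sketch; the byte counts are placeholders chosen to match the ratios Mikael describes, not measurements:

def up_down_ratio(up_bytes, down_bytes):
    return up_bytes / down_bytes

# Download volume held constant; upload scales with the symmetry of the plant.
adsl2plus = {"down": 10e12, "up": 5e12}    # ~1:2 up:down on ADSL2+
ethernet  = {"down": 10e12, "up": 20e12}   # ~2:1 up:down on 10/10 or 100/100

print(up_down_ratio(adsl2plus["up"], adsl2plus["down"]))  # 0.5
print(up_down_ratio(ethernet["up"], ethernet["down"]))    # 2.0
# Upload changes by ~4x while download stays flat, as described above.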
On Tue, 23 Oct 2007 00:35:21 EDT, Sean Donelan said:
This doesn't explain why many universities, most with active, symmetric ethernet switches in residential dorms, have been deploying packet shaping technology for even longer than the cable companies. If the answer were as simple as upgrading everyone to 100Mbps symmetric ethernet, or even 1Gbps symmetric ethernet, then the university resnets would be in great shape.
If I didn't know better, I'd say Sean was trolling me, but I'll bite anyhow. ;) Actually, upgrading everybody to 100BaseT makes the problem worse, because then if everybody cranks it up at once, the problem moves from "need upstream links that are $PRICY" into "need upstream links that are $NOEXIST". We have some 9,000+ students resident on campus. Essentially every single one has a 100BaseT jack, and we're working on getting to Gig-E across the board over the next few years. That leaves us two choices on the upstream side: statistical mux effects (and emulating said effects via traffic shaping), or finding a way to trunk 225 40GigE links together. And that's just 9,000 customers - if we were a provider the size of most cable companies, we'd *really* be in trouble. Fortunately, with statistical mux effects and a little bit of port-agnostic traffic shaping (if you go over a well-publicized upload byte limit in a 24 hour span, you get magically turned into a 56k dialup), we fit quite nicely into a single gig-E link and a 622mbit link. Now if any of you guys have a lead on an affordable way to get 225 40GigE's from here to someplace that can *take* 225 40Gig-E's... ;)
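Valdis doesn't give the actual numbers behind that policy, so everything in this Python sketch of a port-agnostic daily upload cap is invented for illustration -- the point is only that the mechanism needs no knowledge of which protocol is doing the uploading:

from collections import defaultdict

DAILY_UPLOAD_CAP_BYTES = 5 * 10**9     # assumed well-publicized 24-hour limit
PENALTY_RATE_BPS = 56_000              # "magically turned into a 56k dialup"
DEFAULT_RATE_BPS = 100_000_000         # the resnet port speed

upload_bytes_today = defaultdict(int)  # per-host byte counters from flow records

def record_upload(host, nbytes):
    upload_bytes_today[host] += nbytes

def shaped_rate(host):
    """Rate to program into the shaper for this host for the rest of the window."""
    if upload_bytes_today[host] > DAILY_UPLOAD_CAP_BYTES:
        return PENALTY_RATE_BPS
    return DEFAULT_RATE_BPS

def reset_window():
    """Run once per 24-hour span."""
    upload_bytes_today.clear()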
On Tue, 23 Oct 2007, Valdis.Kletnieks@vt.edu wrote:
Now if any of you guys have a lead on an affordable way to get 225 40GigE's from here to someplace that can *take* 225 40Gig-E's... ;)
http://www.educause.edu/ir/library/pdf/EPO0611.pdf It does not cost all that much, relatively, to upgrade a network once the basic wiring is in place -- that's the big original cost. For example, a university campus in the Midwest that serves 14,000 students and faculty recently estimated it would cost about $150 per port (per end user) to replicate their current 100 Mbps network for a five year period, or about $30 a year per user. To upgrade to 1000 Mbps (1 gigabit) it would cost $250, or about $50 per year. University campuses are like small towns or suburban neighborhoods. Once cable companies and companies like Verizon make their initial fiber investment, the relative cost of upgrading bandwidth to customers is small.
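The per-port arithmetic quoted there is worth writing out, since it frames the rest of the thread; the figures below are just the EDUCAUSE numbers above:

def annual_cost_per_user(per_port_cost, years=5):
    return per_port_cost / years

print(annual_cost_per_user(150))  # ~$30/year per user for 100 Mbps
print(annual_cost_per_user(250))  # ~$50/year per user for 1 Gbps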
I'm not claiming that squashing P2P is easy, but apparently Comcast has been successful enough to generate national attention, and the bandwidth shaping providers are not a totally lost cause. The reality is that copper-based internet access technologies: dial-up, DSL, and cable modems have made the design-based trade off that there is substantially more downstream than upstream. With North American DOCSIS-based cable modem deployments there is generally a 6 MHz wide band at 256 QAM while the upstream is only 3.2 MHz wide at 16 QAM (or even QPSK). Even BPON and GPON follow that same asymmetrical track. And the reality is that most residential internet access patterns reflect that (whether it's a cause or contributor, I'll let others debate that). Generally ISPs have been reluctant to pursue usage-based models because it adds an undesirable cost and isn't as attractive a marketing tool to attract customers. Only in business models where bandwidth (local, transport, or otherwise) is expensive has usage-based billing become a reality. Frank -----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Crist Clark Sent: Monday, October 22, 2007 7:16 PM To: nanog@merit.edu Subject: RE: BitTorrent swarms have a deadly bite on broadband nets
On 10/22/2007 at 3:02 PM, "Frank Bulk" <frnkblk@iname.com> wrote:
I wonder how quickly applications and network gear would implement QoS support if the major ISPs offered their subscribers two queues: a default queue, which handled regular internet traffic but squashed P2P, and then a separate queue that allowed P2P to flow uninhibited for an extra $5/month, but then ISPs could purchase cheaper bandwidth for that.
But perhaps at the end of the day Andrew O. is right and it's best off to have a single queue and throw more bandwidth at the problem.
How does one "squash P2P?" How fast will BitTorrent start hiding it's trivial to spot ".BitTorrent protocol" banner in the handshakes? How many P2P protocols are already blocking/shaping evasive? It seems to me is what hurts the ISPs is the accompanying upload streams, not the download (or at least the ISP feels the same download pain no matter what technology their end user uses to get the data[0]). Throwing more bandwidth does not scale to the number of users we are talking about. Why not suck up and go with the economic solution? Seems like the easy thing is for the ISPs to come clean and admit their "unlimited" service is not and put in upload caps and charge for overages. [0] Or is this maybe P2P's fault only in the sense that it makes so much more content available that there is more for end-users to download now than ever before. B¼information contained in this e-mail message is confidential, intended only for the use of the individual or entity named above. If the reader of this e-mail is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any review, dissemination, distribution or copying of this communication is strictly prohibited. If you have received this e-mail in error, please contact postmaster@globalstar.com
In a message written on Mon, Oct 22, 2007 at 08:24:17PM -0500, Frank Bulk wrote:
The reality is that copper-based internet access technologies: dial-up, DSL, and cable modems have made the design-based trade off that there is substantially more downstream than upstream. With North American DOCSIS-based cable modem deployments there is generally a 6 MHz wide band at 256 QAM while the upstream is only 3.2 MHz wide at 16 QAM (or even QPSK). Even BPON and GPON follow that same asymmetrical track. And the reality is that most residential internet access patterns reflect that (whether it's a cause or contributor, I'll let others debate that).
Having now seen the cable issue described in technical detail over and over, I have a question. At the most recent NANOG several people talked about 100Mbps symmetric access in Japan for $40 US. This leads me to a few questions: 1) Is that accurate? 2) What technology do they use to offer the service at that price point? 3) Is there any chance US providers could offer similar technologies at similar prices, or are there significant differences (regulation, distance etc) that prevent it from being viable? -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
On Oct 22, 2007, at 9:55 PM, Leo Bicknell wrote:
Having now seen the cable issue described in technical detail over and over, I have a question.
At the most recent Nanog several people talked about 100Mbps symmetric access in Japan for $40 US.
This leads me to two questions:
1) Is that accurate?
2) What technology to the use to offer the service at that price point?
3) Is there any chance US providers could offer similar technologies at similar prices, or are there significant differences (regulation, distance etc) that prevent it from being viable?
http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html The Washington Post article claims that: "Japan has surged ahead of the United States on the wings of better wire and more aggressive government regulation, industry analysts say. The copper wire used to hook up Japanese homes is newer and runs in shorter loops to telephone exchanges than in the United States. ..." a) Dense, urban area (less distance to cover) b) Fresh new wire installed after WWII c) Regulatory environment that forced telecos to provide capacity to Internet providers Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms. Nice article. Makes you wish... -Dave
David Andersen wrote:
http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801... <snip> Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms.
Nice article. Makes you wish...
For the days when AT&T ran all the phones? I don't think so...
On Oct 22, 2007, at 11:02 PM, Jeff Shultz wrote:
David Andersen wrote:
http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html <snip> Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms. Nice article. Makes you wish...
For the days when AT&T ran all the phones? I don't think so...
For an environment that encouraged long-term investments with high payoff instead of short term profits. For symmetric 100Mbps residential broadband. But no - I was as happy as everyone else when the CLECs emerged and provided PRI service at 1/3rd the rate of the ILECs, and I really don't care to return to the days of having to rent a telephone from Ma Bell. :) But it's not clear that you can't have both, though doing it in the US with our vastly larger land area is obviously much more difficult. The same thing happened with the CLECs, really -- they provided great, advanced service to customers in major metropolitan areas where the profits were sweet, and left the outlying, low-profit areas to the ILECs. Universal access is a tougher nut to crack. -Dave
Once upon a time, David Andersen <dga@cs.cmu.edu> said:
But no - I was as happy as everyone else when the CLECs emerged and provided PRI service at 1/3rd the rate of the ILECs
Not only was that CLEC service concentrated in higher-density areas, the PRI prices were often not based in reality. There were a bunch of CLECs with dot.com-style business plans (and they're no longer around). Lucent was practically giving away switches and switch management (and lost big $$$ because of it). CLECs also sold PRIs to ISPs based on reciprocal compensation contracts with the ILECs that were based on incorrect assumptions (that most calls would be from the CLEC to the ILEC); rates based on that were bound to increase as those contracts expired. Back when dialup was king, CLECs selling cheap PRIs to ISPs seemed like a sure-fire way to print money. -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
On Monday 22 October 2007 19:20, David Andersen wrote:
Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms.
Recent? NTT started building the FTC buildout in the mid-90s. At least that's when the plans were first discussed. They took a bold leap back when most people were waffling about WANs and Bellcore was saying SMDS was going to be the way of the future. Now they reap the benefits, while some of us are left behind in the bandwidth ghettos of North America. :-( cheers, --dr -- World Security Pros. Cutting Edge Training, Tools, and Techniques Tokyo, Japan November 29/30 - 2007 http://pacsec.jp pgpkey http://dragos.com/ kyxpgp
On Oct 23, 2007, at 9:33 AM, Dragos Ruiu wrote:
On Monday 22 October 2007 19:20, David Andersen wrote:
Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms.
Recent?
NTT started building the FTC buildout in the mid-90s. At least that's when the plans were first discussed. They took a bold leap back when most people were waffling about WANs and Bellcore was saying SMDS was going to be the way of the future. Now they reap the benefits, while some of us are left behind in the bandwidth ghettos of North America. :-(
Actually rollout didn't begin until 2002. TV
cheers, --dr
-- World Security Pros. Cutting Edge Training, Tools, and Techniques Tokyo, Japan November 29/30 - 2007 http://pacsec.jp pgpkey http://dragos.com/ kyxpgp
I did consulting work for NTT in 2001 and 2002 and visited their Tokyo headquarters twice. NTT has two ILEC divisions, NTT East and NTT West. The ILEC management told me in conversations that there was no money in fiber-to-the-home; the entire rollout was due to government pressure and was well below a competitive rate of return. Similarly, NTT kept staff they did not need because the government wanted to maintain high employment in Japan and avoid the social stress that results from massive layoffs. You should not assume that 'Japanese capitalism' works like American capitalism. It doesn't. NTT only reveals financial statistics at the aggregate level; the cross subsidies between divisions are completely hidden, and this enables them to pursue the government's social objectives. Moreover, it is not clear that you should desire broadband rollout at any cost. Presumably broadband access should be justified as satisfying some net benefit criterion (benefits minus costs). A better model is the French model, which generates very high broadband penetration rates and is economically rational. France has successfully forced the ILEC to open up the central offices and you now have two highly successful and publicly traded DSL providers, Neuf Cegetel and Free. The US effort failed because of silly arguments based on the equally silly notion that private property is an absolute right and that forcing the ILECs to share facilities, even when they are receiving a fair rate of return, is a form of 'confiscation'. As always, these comments are mine and not the position of Hibernia Atlantic. Roderick S. Beck Director of EMEA Sales Hibernia Atlantic 1, Passage du Chantier, 75012 Paris http://www.hiberniaatlantic.com Wireless: 1-212-444-8829. Landline: 33-1-4346-3209. French Wireless: 33-6-14-33-48-97. AOL Messenger: GlobalBandwidth rod.beck@hiberniaatlantic.com rodbeck@erols.com ``Unthinking respect for authority is the greatest enemy of truth.'' Albert Einstein.
Yup, matches my experience (designing/deploying AOL's swan song JP network infrastructure) during the same period. The "ILECs" were artifacts of the Japanese regulators' 1997 effort to relieve the last-mile facilities death grip on services, a la the (1984) US MFJ / AT&T breakup. The new c. 2001 "pressure" was possible because the Japanese government was still NTT's largest shareholder. The same "pressure" that prompted the FTTH rollout also delivered the metro and access facilities unbundling that begat YahooBB and ubiquitous 20-100Mbps to the home over conventional facilities -- the latter mandate being similar in form to the US Telecom Act of 1996, with the minor exception that it actually worked there... TV On Oct 23, 2007, at 9:42 AM, Rod Beck wrote:
I did consulting work for NTT in 2001 and 2002 and visited their Tokyo headquarters twice. NTT has two ILEC divisions, NTT East and NTT West. The ILEC management told me in conversations that there was no money in fiber-to-the-home; the entire rollout was due to government pressure and was well below a competitive rate of return. Similarly, NTT kept staff they did not need becuase the government wanted to maintain high employment in Japan and avoid the social stress that results from massive layoffs. You should not assume that 'Japanese capitalism' works like American capitalism. It doesn't. NTT only reveals financial statistics at the aggregate level; the cross subsidies between divisions is completely hidden and this enables them to pursue the government's social objectives.
Moreover, it is not clear that you should desire broadband rollout at any cost. Presumably broadband access should be justified as satisfying some net benefit criterion (benefits minus costs).
A better model is the French model which generates very high broadband penetration rates and is economically rational. France has successfully forced the ILEC to open up the central offices and you now have two highly successful and publicly traded DSL providers, Neuf Cegetel and Free.
The US effort failed because of silly arguments based on the equally silly notion that private property is an absolute right and that forcing the ILECs to share facilities even when they are receiving a fair return of return in a form of 'confiscation'.
As always, these comments are mine and not the position of Hibernia Atlantic.
Roderick S. Beck Director of EMEA Sales Hibernia Atlantic 1, Passage du Chantier, 75012 Paris http://www.hiberniaatlantic.com Wireless: 1-212-444-8829. Landline: 33-1-4346-3209. French Wireless: 33-6-14-33-48-97. AOL Messenger: GlobalBandwidth rod.beck@hiberniaatlantic.com rodbeck@erols.com ``Unthinking respect for authority is the greatest enemy of truth.'' Albert Einstein.
I did consulting work for NTT in 2001 and 2002 and visited their Tokyo headquarters twice. NTT has two ILEC divisions, NTT East and NTT West. The ILEC management told me in conversations that there was no money in fiber-to-the-home; the entire rollout was due to government pressure and was well below a competitive rate of return. Similarly, NTT kept staff they did not need because the government wanted to maintain high employment in Japan and avoid the social stress that results from massive layoffs.
Mmm hmm. That sounds somewhat like the system we were promised here in America. We were told by the ILEC's that it was going to be very expensive and that they had little incentive to do it, so we offered them a package of incentives - some figure as much as $200 billion worth. See http://www.newnetworks.com/broadbandscandals.htm
You should not assume that 'Japanese capitalism' works like American capitalism.
That could well be; it appears that American capitalism is much better at lobbying the political system. They eventually found ways to take their money and run without actually delivering on the promises they made. I'll bet the American system paid out a lot better for a lot less work.

Anyways, it's clear to me that any high bandwidth deployment is an immense investment for a society, and one of the really interesting meta-questions is whether or not such an investment will still be paying off in ten years, or twenty, or... The POTS network, which merely had to transmit voice, and never had to deal with substantial growth of the underlying bandwidth (mainly moving from analog to digital trunks, which "increased" but then fixed the bandwidth), was a long-term investment that has paid off for the telcos over the years, even if there was a lot of wailing along the way.

However, one of the notable things about data is that our needs have continued to grow. Twenty years ago, a 9600 bps Internet connection might have served a large community, where it was mostly used for messaging and an occasional interactive session. Fifteen years ago, 14.4 kbps was a nice connection for a single user. Ten years ago, a 1Mbps connection was pretty sweet (maybe a bit less for DSL, a bit more for cable). Things pretty much go awry at that point, and we no longer see such impressive progression in average end-user Internet connection speeds. This didn't stop speed increases elsewhere, but it did put the brakes on rapid increases here.

If we had received the promised FTTH network, we'd have speeds of up to 45Mbps, which would definitely be in line with previous growth (and the growth of computing and storage technologies). At a LAN networking level, we've gone from 10Mbps to 100Mbps to 1Gbps as the standard ethernet interface that you might find on computers and networking devices. So the question is, had things gone differently, would 45Mbps still be adequate? And would it be adequate in 10 or 20 years? And what effect would that have had overall? Certainly it would be a driving force for continued rapid growth in both networking and Internet technologies. As has been noted here in the past, current Ethernet (40G/100G) standards efforts haven't really been keeping pace with historical speed growth trends. Has the failure to deploy true high-speed broadband in a large and key market such as the US resulted in less pressure on vendors by networks for the next generations of high-speed networking?

Or, getting back to the actual situation here in the US, what implications does the continued evolution of US broadband have for other network operators? As the ILECs and cablecos continue to grow and dominate the end-user Internet market, what's the outlook on other independent networks, content providers, etc.? The implications of the so-called net neutrality issues are just one example of future issues. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
Here's a timely article: "KDDI says 900k target for fibre users 'difficult'" http://www.telegeography.com/cu/article.php?article_id=20215&email=html Frank -----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of David Andersen Sent: Monday, October 22, 2007 9:21 PM To: Leo Bicknell Cc: nanog@merit.edu Subject: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets) On Oct 22, 2007, at 9:55 PM, Leo Bicknell wrote:
Having now seen the cable issue described in technical detail over and over, I have a question.
At the most recent Nanog several people talked about 100Mbps symmetric access in Japan for $40 US.
This leads me to two questions:
1) Is that accurate?
2) What technology to the use to offer the service at that price point?
3) Is there any chance US providers could offer similar technologies at similar prices, or are there significant differences (regulation, distance etc) that prevent it from being viable?
http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html The Washington Post article claims that: "Japan has surged ahead of the United States on the wings of better wire and more aggressive government regulation, industry analysts say. The copper wire used to hook up Japanese homes is newer and runs in shorter loops to telephone exchanges than in the United States. ..." a) Dense, urban area (less distance to cover) b) Fresh new wire installed after WWII c) Regulatory environment that forced telecos to provide capacity to Internet providers Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms. Nice article. Makes you wish... -Dave
Frank Bulk wrote:
Here's timely article: "KDDI says 900k target for fibre users 'difficult'" http://www.telegeography.com/cu/article.php?article_id=20215&email=html
KDDI isn't the only FTTH provider... NTT east/west (flets), usen, softbank/yahooBB and others all play in that space. 100/100 from softbank appears to be ~7200 yen while 50/12 dsl is about 4500 yen if you have a phone line as well... ;) Obviously if you live out in the boonies like Jared, even in Japan your options are pretty slow. The Onsen I visited in fuji-hakone 2 years ago had only 3Mb/s for example.
Frank
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of David Andersen Sent: Monday, October 22, 2007 9:21 PM To: Leo Bicknell Cc: nanog@merit.edu Subject: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)
On Oct 22, 2007, at 9:55 PM, Leo Bicknell wrote:
Having now seen the cable issue described in technical detail over and over, I have a question.
At the most recent Nanog several people talked about 100Mbps symmetric access in Japan for $40 US.
This leads me to two questions:
1) Is that accurate?
2) What technology to the use to offer the service at that price point?
3) Is there any chance US providers could offer similar technologies at similar prices, or are there significant differences (regulation, distance etc) that prevent it from being viable?
http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html
The Washington Post article claims that:
"Japan has surged ahead of the United States on the wings of better wire and more aggressive government regulation, industry analysts say. The copper wire used to hook up Japanese homes is newer and runs in shorter loops to telephone exchanges than in the United States.
..."
a) Dense, urban area (less distance to cover)
b) Fresh new wire installed after WWII
c) Regulatory environment that forced telecos to provide capacity to Internet providers
Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms.
Nice article. Makes you wish...
-Dave
A lot of the MDUs and apartment buildings in Japan are doing fiber to the basement and then VDSL or VDSL2 in the building, or even Ethernet. That's how symmetrical bandwidth is possible. Considering that much of the population does not live in high-rises, this doesn't easily apply to the U.S. population. Frank -----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Leo Bicknell Sent: Monday, October 22, 2007 8:55 PM To: nanog@merit.edu Subject: Re: BitTorrent swarms have a deadly bite on broadband nets In a message written on Mon, Oct 22, 2007 at 08:24:17PM -0500, Frank Bulk wrote:
The reality is that copper-based internet access technologies: dial-up, DSL, and cable modems have made the design-based trade off that there is substantially more downstream than upstream. With North American DOCSIS-based cable modem deployments there is generally a 6 MHz wide band at 256 QAM while the upstream is only 3.2 MHz wide at 16 QAM (or even QPSK). Even BPON and GPON follow that same asymmetrical track. And the reality is that most residential internet access patterns reflect that (whether it's a cause or contributor, I'll let others debate that).
Having now seen the cable issue described in technical detail over and over, I have a question. At the most recent Nanog several people talked about 100Mbps symmetric access in Japan for $40 US. This leads me to two questions: 1) Is that accurate? 2) What technology to the use to offer the service at that price point? 3) Is there any chance US providers could offer similar technologies at similar prices, or are there significant differences (regulation, distance etc) that prevent it from being viable? -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
According to http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossi... Comcast's blocking affects connections to non-Comcast users. This means that they're trying to manage their upstream connections, not the local loop. For Comcast's own position, see http://bits.blogs.nytimes.com/2007/10/22/comcast-were-delaying-not-blocking-...
On Tue, Oct 23, 2007 at 03:13:42AM +0000, Steven M. Bellovin wrote:
According to http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossi... Comcast's blocking affects connections to non-Comcast users. This means that they're trying to manage their upstream connections, not the local loop.
Disagree - despite Comcast's size, there's more "Internet" outside of them than on-net. Even with decent knobs, these devices are more blunt instruments than anyone would like. See my previous comments regarding allowing on-net to on-net traffic (or within region, or whatever BGP community you use...) such that transfers with better RTT complete more quickly. Everyone who is commenting on "This tracker/client does $foo to behave" is missing the point - would one rather have the traffic snooped further to see if such and such tracker/client is in use? And pay for the admin overhead required to keep those non-automatable lists updated? Adrian hit it on the head regarding the generations of kittens romping free... While I expect end-users to miss the boat that providers use stat-mux calculations to build and price their networks, I'm floored to see the sentiment on NANOG. No edge provider of geographic scope/scale will survive if 1:1 ratios were built and priced accordingly. Perhaps the M&A colonialism era is coming to a close and smaller, regional nation-states... erm, last-mile providers will be the entities to grow with satisfied customers? Cheers, Joe -- RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
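The stat-mux arithmetic Joe is pointing at is easy to sketch in Python; every input below (subscriber count, sold rate, concurrency, per-flow utilization) is made up for illustration rather than taken from any real provider:

def required_uplink_gbps(subs, access_mbps, active_fraction, avg_util_of_active):
    # Size the uplink for expected concurrent demand, not the sum of sold rates.
    return subs * access_mbps * active_fraction * avg_util_of_active / 1000.0

def contention_ratio(subs, access_mbps, uplink_gbps):
    return subs * access_mbps / (uplink_gbps * 1000.0)

subs, rate = 10_000, 8.0                                # 10k subs sold 8 Mbps down
uplink = required_uplink_gbps(subs, rate, 0.30, 0.20)   # -> 4.8 Gbps provisioned
print(uplink, contention_ratio(subs, rate, uplink))     # ~16.7:1 oversubscription

# A 1:1 build for the same population would need 80 Gbps of uplink -- the
# economics Joe says no geographically broad edge provider could survive.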
In a message written on Tue, Oct 23, 2007 at 10:34:00AM -0400, Joe Provo wrote:
While I expect end-users to miss the boat that providers use stat-mux calculations to build and price their networks, I'm floored to see the sentiment on NANOG. No edge provider of geographic scope/scale will survive if 1:1 ratios were built and priced accordingly. Perhaps the M&A colonialism era is coming to a close and smaller, regional nation- states... erm last-mile providers will be the entities to grow with satisfied customers?
I'm not sure NANOGers are missing the boat, just bemoaning the economics of the situation and some companies' choices. As an example, if I believe http://en.wikipedia.org/wiki/DOCSIS (as I'm no cable expert): DOCSIS 1.x, 10.24Mbps upstream. With this providers regularly offered 384-768k upload speeds to customers. DOCSIS 3.0, 122Mbps upstream. That's about 12x. Applying the 12x to the original upload speed, that's 4.6-9.2Mbps upload speed per user. And yet, today most of the major national providers don't offer more than 1Mbps of upload speed in their fastest packages. Perhaps the real issue here is that broadband providers don't have enough diversity in their products. Picking on an unnamed cable provider and looking at their web site I can get:

4M down, 384k up. $39.
6M down, 768k up. $49.
8M down, 768k up. $59.

That's their entire portfolio of residential services. How about a $99 package with 10M down, 3M up? How about $5 per meg download, $20 per meg upload, pick any combination of speeds you want where both are under 20Mbps? And why-o-why are they still giving me modems? Is not the stack of 5 that I already have enough waste? How much of my service charge goes to replacing equipment over and over because it's "how they work"? (For instance I moved, and got a new modem with the new install, same make and model as the old modem, which they didn't want back.) So, while NANOGers may float the idea of 1:1, what I think really honks them off is that the current standard (4M down, 384k up) is 1:10, and I think they feel it's time it became more like 1:4 (4M down, 1M up), and that seems to be completely within reach of the technology. Which leaves the only thing holding it up being big company management and marketing. I will point out, one of the smaller providers on the Wikipedia page under US, CableVision, is said to have 30Mbps down 5Mbps up. That's 1:6, at a heck of a lot higher speeds. I think most people here would be quite happy with that offering. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
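Leo's a-la-carte suggestion is concrete enough to price out; a small Python sketch using his illustrative $5 per meg down and $20 per meg up, both capped at 20 Mbps:

def monthly_price(down_mbps, up_mbps, down_rate=5.0, up_rate=20.0):
    if not (0 < down_mbps <= 20 and 0 < up_mbps <= 20):
        raise ValueError("both speeds must be 20 Mbps or less")
    return down_mbps * down_rate + up_mbps * up_rate

print(monthly_price(4, 0.384))   # ~$27.68 -- near today's entry tier
print(monthly_price(10, 3))      # $110.00 -- close to the $99 package he floats
print(monthly_price(20, 5))      # $200.00 -- a heavy-upload option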
We're living in a DOCSIS 1.1 and 2.0 world, which gives us 40 down, 9 up in a best case. Considering that there are ~4 upstream ports for every downstream port, the MSOs are already operating their network in a 40:36 or almost 1:1 ratio. It's just that upstream is a much more precious item in that they can't afford to fill up on a particular node, and most people find download speeds much more important most of the time. We'll talk about DOCSIS 3.0 in a year from now and see how it's being deployed. Frank -----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Leo Bicknell Sent: Tuesday, October 23, 2007 11:05 AM To: Joe Provo Cc: nanog@merit.edu Subject: Re: BitTorrent swarms have a deadly bite on broadband nets In a message written on Tue, Oct 23, 2007 at 10:34:00AM -0400, Joe Provo wrote:
While I expect end-users to miss the boat that providers use stat-mux calculations to build and price their networks, I'm floored to see the sentiment on NANOG. No edge provider of geographic scope/scale will survive if 1:1 ratios were built and priced accordingly. Perhaps the M&A colonialism era is coming to a close and smaller, regional nation- states... erm last-mile providers will be the entities to grow with satisfied customers?
I'm not sure NANOGers are missing the boat, just bemoaning the economics of the situation and some companies choices. As an example, if I believe http://en.wikipedia.org/wiki/DOCSIS (as I'm no cable export): DOCSIS 1.x, 10.24Mbps upstream. With this providers regularly offered 384-768k upload speeds to customers. DOCSIS 3.0, 122Mbps upstream. That's about 12x. Applying the 12x to the original upload speed that's 4.6-9.2Mbps upload speed per user. And yet, today most of the major national providers don't over more than 1Mbps of upload speed in their fastest packages. Perhaps the real issue here is that broadband providers don't have enough diversity in their products. Picking on an unnamed cable provider and looking at their web site I can get: 4M down, 384k up. $39. 6M down, 768k up. $49. 8M down, 768k up. $59. That's their entire portfolio of residential services. How about a $99 package with 10M down, 3M up? How about $5 per meg download, $20 per meg upload, pick any combination of speeds you want where both are under 20Mbps? And why-o-why are they still giving me modems? Is not the stack of 5 that I already have enough waste? How much of my service charge goes to replacing equipment over and over because it's "how they work". (For instance I moved, and got a new modem with the new install, same make and model as the old modem, which they didn't want back.) So, while NANOGers may float the idea of 1:1, what I think really honks them off is that the current standard (4M down, 384k up) is 1:10, and I think they feel it's time it became more like 1:4 (4M down, 1M up), and that seems to be completely within reach of the technology. Which leaves the only thing holding it up being big company management and marketing. I will point out, one of the smaller providers on the Wikipedia page under US, CableVision, is said to have 30Mbps down 5Mbps up. That's 1:6, at a heck of a lot higher speeds. I think most people here would be quite happy with that offering. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
Dear Frank,

As someone representing a network service provider who works with affiliates with cable, DSL, fiber, and wireless networks, I think I can help you answer some of your questions. Our company supports many cable companies, and since I started working here in March I have learned much about cable.

First, the speeds listed are for the optimal configuration (which requires a clean cable plant and free bands). In a Docsis 1.x environment you only get 10.240 Mbps if you are using a 3.2 MHz wide channel and 16qam modulation. My experience is that many of the affiliate networks I work with do not have that configuration. I have been working with one provider upgrading them from a 1.6 MHz qpsk config to 3.2 MHz 16qam (in Docsis 1.x you only have qpsk and 16qam modulation types).

For a 30 Mbps Docsis 2.0 configuration, you need to have a 6.4MHz wide channel with 64qam modulation. 64qam is more sensitive to noise than 16qam and much more sensitive than qpsk. Also the 6.4MHz channel is wider and eats up spectrum real estate, particularly if you use more than one upstream frequency. The other problem with using 6.4MHz upstreams is that any Docsis 1.x modems that you still have out there will not work at all with it, and all modems on that node must be Docsis 2.0. If a provider uses a 3.2MHz channel with 64qam they can let a Docsis 1.x 16qam subchannel coexist on the same frequency. Using a 3.2MHz channel with 64qam only gives a theoretical (not counting overhead) limit of 15.360 Mbps. Docsis 3.0 achieves most of its speed by bonding channels together. Note that each downstream channel is a possible TV channel, and the cable company will have to determine the value of using a given channel for TV or data. Docsis 2.0/3.0 can also use 128qam, but many modems do not support it, nor are many cable plants clean enough.

You mentioned upstreams and ports. Cable, unlike DSL, is a shared resource, meaning that the 10 Mbps or 30 Mbps upstream is shared by everyone on the node. Also note that the 27Meg/45Meg downstream is shared by all users connected to that downstream. Many CMTSs are in a 1:4 or 1:6 configuration, which means that there are 4 or 6 upstreams to every downstream. Remember that everyone in that node shares the bandwidth. What I normally see as a general limit is about 500-1000 customers per blade. That means that there are between 83-250 people per upstream port. If the speed was distributed evenly (and not oversubscribed) each person would get at most 180Kbps down and 120Kbps up.

Docsis 3.0 will help with speeds, but providers may not want to give up TV channels for extra data speeds. While many cable systems have more room for uploads, many others do not. When an older configuration works most people are reluctant to change, and many cable technicians do not like locating and troubleshooting noise issues. Sorry for making things muddy, but cable is not the best long-term solution for speed. Fiber to the home/node is much more attractive and quite a few people are offering it.

-- Brian Raaen Network Engineer braaen@zcorum.com

On Tuesday 23 October 2007 23:46, Frank Bulk wrote:
We're living in a DOCSIS 1.1 and 2.0 world, which gives us 40 down, 9 up in a best case. Considering that there are ~4 upstream ports for every downstream port, the MSOs are already operating their network in a 40:36 or almost 1:1 ratio. It's just that upstream is a much more precious item that that they can't afford to fill up on a particular node, and most people find download speeds much more important most of the time.
We'll talk about DOCSIS 3.0 in a year from now and see how it's being deployed.
Frank
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Leo Bicknell Sent: Tuesday, October 23, 2007 11:05 AM To: Joe Provo Cc: nanog@merit.edu Subject: Re: BitTorrent swarms have a deadly bite on broadband nets
In a message written on Tue, Oct 23, 2007 at 10:34:00AM -0400, Joe Provo wrote:
While I expect end-users to miss the boat that providers use stat-mux calculations to build and price their networks, I'm floored to see the sentiment on NANOG. No edge provider of geographic scope/scale will survive if 1:1 ratios were built and priced accordingly. Perhaps the M&A colonialism era is coming to a close and smaller, regional nation- states... erm last-mile providers will be the entities to grow with satisfied customers?
I'm not sure NANOGers are missing the boat, just bemoaning the economics of the situation and some companies choices.
As an example, if I believe http://en.wikipedia.org/wiki/DOCSIS (as I'm no cable export):
DOCSIS 1.x, 10.24Mbps upstream. With this providers regularly offered 384-768k upload speeds to customers.
DOCSIS 3.0, 122Mbps upstream. That's about 12x. Applying the 12x to the original upload speed that's 4.6-9.2Mbps upload speed per user.
And yet, today most of the major national providers don't over more than 1Mbps of upload speed in their fastest packages.
Perhaps the real issue here is that broadband providers don't have enough diversity in their products. Picking on an unnamed cable provider and looking at their web site I can get:
4M down, 384k up. $39. 6M down, 768k up. $49. 8M down, 768k up. $59.
That's their entire portfolio of residential services. How about a $99 package with 10M down, 3M up? How about $5 per meg download, $20 per meg upload, pick any combination of speeds you want where both are under 20Mbps?
And why-o-why are they still giving me modems? Is not the stack of 5 that I already have enough waste? How much of my service charge goes to replacing equipment over and over because it's "how they work". (For instance I moved, and got a new modem with the new install, same make and model as the old modem, which they didn't want back.)
So, while NANOGers may float the idea of 1:1, what I think really honks them off is that the current standard (4M down, 384k up) is 1:10, and I think they feel it's time it became more like 1:4 (4M down, 1M up), and that seems to be completely within reach of the technology. Which leaves the only thing holding it up being big company management and marketing.
I will point out, one of the smaller providers on the Wikipedia page under US, CableVision, is said to have 30Mbps down 5Mbps up. That's 1:6, at a heck of a lot higher speeds. I think most people here would be quite happy with that offering.
-- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
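Brian's figures are easy to recompute. The Python sketch below assumes the usual DOCSIS upstream symbol rate of roughly 0.8 x channel width (which reproduces the 2.56/10.24/15.36/30.72 Mbps numbers in his message) and then simply divides the shared channels evenly across a heavily loaded blade:

from math import log2

def upstream_raw_mbps(width_mhz, constellation):
    # Raw rate before FEC/MAC overhead: symbol rate (~0.8 x width) times bits/symbol.
    return width_mhz * 0.8 * log2(constellation)

print(upstream_raw_mbps(1.6, 4))    #  2.56 Mbps -- 1.6 MHz QPSK
print(upstream_raw_mbps(3.2, 16))   # 10.24 Mbps -- DOCSIS 1.x best case
print(upstream_raw_mbps(3.2, 64))   # 15.36 Mbps -- 3.2 MHz 64-QAM
print(upstream_raw_mbps(6.4, 64))   # 30.72 Mbps -- DOCSIS 2.0 best case

subs_per_port = 1000 // 4                     # 1000 subs per blade, 4 upstream ports
print(45_000 // subs_per_port, "kbps down")   # 180 kbps per user, even split
print(30_000 // subs_per_port, "kbps up")     # 120 kbps per user, even split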
Brian: I do operate a DOCSIS network as part of my job, so I do have an inkling in regards to its network design. =) "For a 30 Mbps Docsis 2.0 configuration, you need to have a 6.4MHz wide channel with 64qam modulation." You don't *need* to operate at that wide a channel or that high a modulation for DOCSIS 2.0, though it's advantageous. In speaking with a Motorola SE on this, he's seen very few MSOs operate their upstream at 6.4 MHz/64-QAM, though that's their goal. The modulations between 16-QAM and 64-QAM are generally being avoided. It's my understanding that the same MSOs who are moving toward the channel bonding of DOCSIS 3.0 are also doing digital simulcast and/or digital switching, because of those spectrum concerns. This makes moving toward channel bonding a concerted effort on their part. If they are successful, I think they can achieve sufficient speeds in comparison to fiber -- at least they are in a better position than those doing FTTN or FTTC, such as SBC/BellSouth. BTW, in N.A. it's a 6 MHz wide carrier, not 6.4 MHz. Regards, Frank -----Original Message----- From: Brian Raaen [mailto:braaen@zcorum.com] Sent: Wednesday, October 24, 2007 7:53 AM To: frnkblk@iname.com Cc: nanog@nanog.org Subject: Re: BitTorrent swarms have a deadly bite on broadband nets Dear Frank, As representing a network service provider who works with affiliates with cable, DSL, fiber, and wireless networks I think I can help you answer some of your questions. Our company supports many cable company, and since I started working here is March I have learned much about cable. First the speeds listed are for the optimal configuration (which requires a clean cable plant and free bands). In a Docsis 1.x environment you only get 10.240 Mbps if you are using a 3.2 MHz wide channel and using 16qam modulation. My experience is that many of the affiliate networks I work with do not have that configuration. I have been working with one provider upgrading them from a 1.6 MHz qpsk config to 3.2 MHz 16qam (in Docsis 1.x you only have qpsk and 16qam modulation types.) For a 30 Mbps Docsis 2.0 configuration, you need to have a 6.4MHz wide channel with 64qam modulation. 64qam is more sensitive to noise than 16qam and much more sensitive than qpsk. Also the 6.4MHz channel is wider and eats up spectrum realestate, particularly if you use more than one upstream frequency. The other problem with using 6.4MHz upstreams is that any Docsis 1.x modems that you still have out there will not work at all with it, and all modems on that node must be Docsis 2.0. If a provider uses a 3.2MHz channel with 64qam they can let a docsis 1.x 16qam subchannel coexist on the same frequency. Using a 3.2MHz channel with 64qam only gives a theoretical (not counting overhead) limit of 15.360 Mbps. Docsis 3.0 acheives most of its speed by bonding channels together. Note that each downstream channel is a possible TV channel, and the cable company will have to determine the value of using a given channel for TV or data. Docsis 2.0/3.0 also can use 128qam, but many modems do not support it nor are many cable plants clean enough either. You mentioned upstreams and ports. Cable unlike DSL is a shared resorce meaning that the 10 Mbps or 30 Mbps upstream is shared by everyone on the node. Also note that the 27Meg/45Meg downstream is shared by all users connected to that downstream. Many CMTS's are in a 1:4 or 1:6 configuration which means that there are 4 or 6 upstreams to every downstream. Remember that everyone in that node shares the bandwidth.
What I normally see as a general limit is about 500-1000 customers per blade. That means that there are between 83-250 people per upstream port. If the speed was distributed evenly (an not over subscribed) each person would get at most 180Kbps down and 120Kbps up. Docsis 3.0 will help with speeds, but providers may not want to give up TV channels for extra data speeds. While many cable systems have more room for uploads, many others do not. When an older configuration works most people are reluctant to change, and many cable technicians do not like locating and troubleshooting noise issues. Sorry for making things muddy, but cable is not the best long-term solution for speed. Fiber to the home/node is much more attractive and quite a few people are offering it. -- Brian Raaen Network Engineer braaen@zcorum.com On Tuesday 23 October 2007 23:46, Frank Bulk wrote:
We're living in a DOCSIS 1.1 and 2.0 world, which gives us 40 down, 9 up in a best case. Considering that there are ~4 upstream ports for every downstream port, the MSOs are already operating their network in a 40:36 or almost 1:1 ratio. It's just that upstream is a much more precious item in that they can't afford to fill up on a particular node, and most people find download speeds much more important most of the time.
We'll talk about DOCSIS 3.0 in a year from now and see how it's being deployed.
Frank
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Leo Bicknell Sent: Tuesday, October 23, 2007 11:05 AM To: Joe Provo Cc: nanog@merit.edu Subject: Re: BitTorrent swarms have a deadly bite on broadband nets
In a message written on Tue, Oct 23, 2007 at 10:34:00AM -0400, Joe Provo wrote:
While I expect end-users to miss the boat that providers use stat-mux calculations to build and price their networks, I'm floored to see the sentiment on NANOG. No edge provider of geographic scope/scale will survive if 1:1 ratios were built and priced accordingly. Perhaps the M&A colonialism era is coming to a close and smaller, regional nation- states... erm last-mile providers will be the entities to grow with satisfied customers?
I'm not sure NANOGers are missing the boat, just bemoaning the economics of the situation and some companies choices.
As an example, if I believe http://en.wikipedia.org/wiki/DOCSIS (as I'm no cable export):
DOCSIS 1.x, 10.24Mbps upstream. With this providers regularly offered 384-768k upload speeds to customers.
DOCSIS 3.0, 122Mbps upstream. That's about 12x. Applying the 12x to the original upload speed that's 4.6-9.2Mbps upload speed per user.
And yet, today most of the major national providers don't over more than 1Mbps of upload speed in their fastest packages.
Perhaps the real issue here is that broadband providers don't have enough diversity in their products. Picking on an unnamed cable provider and looking at their web site I can get:
4M down, 384k up. $39. 6M down, 768k up. $49. 8M down, 768k up. $59.
That's their entire portfolio of residential services. How about a $99 package with 10M down, 3M up? How about $5 per meg download, $20 per meg upload, pick any combination of speeds you want where both are under 20Mbps?
And why-o-why are they still giving me modems? Is not the stack of 5 that I already have enough waste? How much of my service charge goes to replacing equipment over and over because it's "how they work". (For instance I moved, and got a new modem with the new install, same make and model as the old modem, which they didn't want back.)
So, while NANOGers may float the idea of 1:1, what I think really honks them off is that the current standard (4M down, 384k up) is 1:10, and I think they feel it's time it became more like 1:4 (4M down, 1M up), and that seems to be completely within reach of the technology. Which leaves the only thing holding it up being big company management and marketing.
I will point out, one of the smaller providers on the Wikipedia page under US, CableVision, is said to have 30Mbps down 5Mbps up. That's 1:6, at a heck of a lot higher speeds. I think most people here would be quite happy with that offering.
-- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
In a message written on Mon, Oct 22, 2007 at 10:20:49PM -0400, David Andersen wrote:
The Washington Post article claims that: [snip]
b) Fresh new wire installed after WWII
I have to wonder what percentage of the population is using phone lines installed before WWII? I live in a suburb that didn't exist 20 years ago other than maybe 50 buildings around the train depot. My neighborhood did not exist 10 years ago; it was a cow pasture. Where's all this old cable? While I'm sure you can find some row houses in $big_city that have old copper, I find it hard to believe that "pre WWII wire" is holding us back. Wasn't it Sprint back in like 1982 or 1984 that made a big deal about their entire long haul network being converted to fiber? In a message written on Mon, Oct 22, 2007 at 09:44:34PM -0500, Frank Bulk wrote:
A lot of the MDUs and apartment buildings in Japan are doing fiber to the basement and then VDSL or VDSL2 in the building, or even Ethernet. That's how symmetrical bandwidth is possible. Considering that much of the population does not live in high-rises, this doesn't easily apply to the U.S. population.
While the US does not have as high a percentage in high rises, let's look at the part that is "in the right place". What percentage of US high rises have fiber to the basement and high speed Internet offered to residents? Shouldn't NYC be on par with Tokyo by this point? Chicago? Miami? Doesn't the same model work for low rise apartments, the kind found in suburbia all across the US? Why don't any of them have building-provided services, rather than relying on cable modems or ADSL all the way back to the CO? Why are no major US builders installing FTTH today? Greenfield should be the easiest, and major builders like Pulte, Centex and the like should be eager to offer it; but they don't. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Oct 23, 2007, at 3:20 PM, Leo Bicknell wrote:
In a message written on Mon, Oct 22, 2007 at 10:20:49PM -0400, David Andersen wrote:
The Washington Post article claims that: [snip]
b) Fresh new wire installed after WWII
I have to wonder what percentage of the population is using phone lines installed before WWII?
I live in a suburb that didn't exist 20 years ago other than maybe 50 buildings around the train depot. My neighborhood did not exist 10 years ago, it was a cow pasture. Where's all this old cable?
While I'm sure you can find some row houses in $big_city that have old copper I find it hard to believe that "pre WWII wire" is holding us back. Wasn't it Sprint back in like 1982 or 1984 made a big deal about their entire long haul network being converted to fiber?
In a message written on Mon, Oct 22, 2007 at 09:44:34PM -0500, Frank Bulk wrote:
A lot of the MDUs and apartment buildings in Japan are doing fiber to the basement and then VDSL or VDSL2 in the building, or even Ethernet. That's how symmetrical bandwidth is possible. Considering that much of the population does not live in high-rises, this doesn't easily apply to the U.S. population.
Ever been in an earthquake in Japan? The population density is indeed much higher, but it's not primarily because of concentration in very large highrises, but rather because of much smaller floorspace per capita, and no yards to speak of. You're mixing JP up with places like HK and KR... TV
--- On Tue, 10/23/07, Leo Bicknell <bicknell@ufp.org> wrote:
While I'm sure you can find some row houses in $big_city that have old copper I find it hard to believe that "pre WWII wire" is holding us back. Wasn't it Sprint back in like 1982 or 1984 made a big deal about their entire long haul network being converted to fiber?
You can also find them in $Medium_City - Washington DC has all kinds of old copper (aside: I just removed 4 old, unused 66 blocks from my home - I have no idea what the previous owners did with all that...). As a reference data point, consider the number of houses with aluminum electrical wiring - there is a brisk business for electricians in replacing that, and those houses were unlikely to have high-quality phone wires laid to them. Also, I've dealt with a whole lot of tall buildings in some large cities where the conduits are quite full, such that technicians routinely reuse currently-in-use pairs.
What percentage of US high rises have fiber to the basement and high speed Internet offered to residents? Shouldn't NYC be on par with Tokyo by this point? Chicago? Miami?
See above conduit issues. There are certainly opportunities for a canny provider, but the difficulty is figuring out how to get customers to shop on quantity rather than on price, because reusing the existing build will almost always be cheaper than doing an overbuild. The incumbent doesn't have much incentive - they're already capturing the money there, and a challenger would need to be both better and cheaper. That's possible, but not easy.
Doesn't the same model work for low rise apartments, the kind found in suburbia all across the US? Why don't any of them have building provided services, rather relying on cable modems for ADSL all the way back to the CO?
If the number of prospective customers per fiber termination is lower than the density required to make a profit on the service anytime soon, there is little incentive to do an overbuild.

David Barak Need Geek Rock? Try The Franchise: http://www.listentothefranchise.com
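[To put entirely hypothetical numbers on that point: the build cost per home passed, drop cost, take rate, and margin below are illustrative assumptions, not data, but they show why low-density or low-take-rate areas don't pencil out. A minimal sketch:]

# Back-of-the-envelope payback for a last-mile overbuild.
# Every number here is a made-up assumption, purely for illustration.

cost_per_home_passed = 1200.0    # assumed construction cost per home passed ($)
cost_per_home_connected = 600.0  # assumed drop/CPE cost per subscriber ($)
take_rate = 0.25                 # assumed fraction of passed homes that subscribe
monthly_margin = 30.0            # assumed gross margin per subscriber per month ($)

capex_per_subscriber = cost_per_home_passed / take_rate + cost_per_home_connected
payback_months = capex_per_subscriber / monthly_margin

print(f"Capex carried by each subscriber: ${capex_per_subscriber:,.0f}")
print(f"Simple payback: {payback_months:.0f} months (~{payback_months/12:.1f} years)")

# Halve the take rate (say, because an incumbent already holds the customers)
# and the capex carried by each subscriber nearly doubles -- which is the point above.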
On Tue, Oct 23, 2007 at 09:20:49AM -0400, Leo Bicknell wrote:
Why are no major us builders installing FTTH today? Greenfield should be the easiest, and major builders like Pulte, Centex and the like should be eager to offer it; but don't.
Well, Verizon seems to be making heavy bets on replacing significant chunks of old copper plant with FTTH. Here's a recent FiOS announcement: Linkname: Verizon discovers symmetry, offers 20/20 symmetrical FiOS service URL: http://arstechnica.com/news.ars/post/20071023-verizon-discovers-symmetry-off... -- Henry Yen Aegis Information Systems, Inc. Senior Systems Programmer Hicksville, New York
On Wednesday 24 October 2007 05:36, Henry Yen wrote:
On Tue, Oct 23, 2007 at 09:20:49AM -0400, Leo Bicknell wrote:
Why are no major us builders installing FTTH today? Greenfield should be the easiest, and major builders like Pulte, Centex and the like should be eager to offer it; but don't.
Well, Verizon seems to be making heavy bets on replacing significant chunks of old copper plant with FTTH. Here's a recent FiOS announcement:
Linkname: Verizon discovers symmetry, offers 20/20 symmetrical FiOS service URL: http://arstechnica.com/news.ars/post/20071023-verizon-discovers-symmetry-offers-2020-symmetrical-fios-service.html
While probably more "good" than "bad", it is my understanding that when Verizon (and others) provide FTTH (fiber to the home) they "cut" or physically disconnect all other connections to that residence..... so much for any "choice"... -- Larry Smith SysAd ECSIS.NET sysad@ecsis.net
In the future, people are not going to believe that we permitted this to happen. Coming soon: your plumbing will be disconnected. But never fear: an Evian vending machine will be delivered to every deserving household... TV

On Oct 24, 2007, at 2:39 PM, Larry Smith wrote:
While probably more "good" than "bad", it is my understanding that when Verizon (and others) provide FTTH (fiber to the home) they "cut" or physically disconnect all other connections to that residence..... so much for any "choice"...
While probably more "good" than "bad", it is my understanding that when Verizon (and others) provide FTTH (fiber to the home) they "cut" or physically disconnect all other connections to that residence..... so much for any "choice"...
At least around here, if you tell the installer you have an alarm system they'll leave the copper alone, since your alarm system is generally using phone lines with no dial tone to connect to the monitoring station. -- Dave Pooser, ACSA Manager of Information Services Alford Media http://www.alfordmedia.com
On Wednesday 24 October 2007 05:36, Henry Yen wrote:
On Tue, Oct 23, 2007 at 09:20:49AM -0400, Leo Bicknell wrote:
Why are no major us builders installing FTTH today? Greenfield should be the easiest, and major builders like Pulte, Centex and the like should be eager to offer it; but don't.
Well, Verizon seems to be making heavy bets on replacing significant chunks of old copper plant with FTTH. Here's a recent FiOS announcement:
Linkname: Verizon discovers symmetry, offers 20/20 symmetrical FiOS service URL: http://arstechnica.com/news.ars/post/20071023-verizon-discovers-symmetry-offers-2020-symmetrical-fios-service.html
While probably more "good" than "bad", it is my understanding that when Verizon (and others) provide FTTH (fiber to the home) they "cut" or physically disconnect all other connections to that residence..... so much for any "choice"...

Exactly. And because they installed fiber, the FCC has ruled that they do not have to provide unbundled network elements to competitors.

I expect that when you look at the population of broadband users, it is only a tiny percentage that really needs fiber to their residence. Let's remember that one of the main reasons broadband displaced dial-up was that it is always available and does not interfere with phone service.

- R.
On Wed, 24 Oct 2007, Rod Beck wrote:
On Wednesday 24 October 2007 05:36, Henry Yen wrote:
On Tue, Oct 23, 2007 at 09:20:49AM -0400, Leo Bicknell wrote:
Why are no major us builders installing FTTH today? Greenfield should be the easiest, and major builders like Pulte, Centex and the like should be eager to offer it; but don't.
Well, Verizon seems to be making heavy bets on replacing significant chunks of old copper plant with FTTH. Here's a recent FiOS announcement:
Linkname: Verizon discovers symmetry, offers 20/20 symmetrical FiOS service URL: http://arstechnica.com/news.ars/post/20071023-verizon-discovers-symmetry-offers-2020-symmetrical-fios-service.html
While probably more "good" than "bad", it is my understanding that when Verizon (and others) provide FTTH (fiber to the home) they "cut" or physically disconnect all other connections to that residence..... so much for any "choice"...
Exactly. And because they installed fiber, the FCC has ruled that they do not have to provide unbundled network elements to competitors.
It's this last bit that seems to be leading to lots of complaints, and it's the earlier pricing of "unbundled network elements" at or above the cost of complete service packages that many CLECs and competitive ISPs blamed for their demise. Some like to see big conspiracies here, but I'm not convinced that it wasn't just a matter of bad planning on the parts of the ISPs and CLECs, perhaps brought on by bad incentives in the law.

The US government decided there should be a competitive market for phone services. They were concerned about the big advantage in already-built-out infrastructure the incumbent phone companies had -- infrastructure that had been built with money from their monopolies -- so they required them to "share." This meant it was pretty easy to start a DSL company that used the ILEC's copper, but seemed to provide little incentive for new telecom companies to build their own last-mile infrastructure. Once the ILECs caught on to the importance of this new Internet thing, that meant the ISPs and the new phone companies were entirely dependent on their biggest competitor for services they needed to keep functioning. The new providers were vulnerable on all sorts of fronts controlled by their established competitors -- pricing, installation procedures, service quality, repair times, service availability, etc. The failure of the new entrants seems almost inevitable, and given that they hadn't actually built any infrastructure, they didn't leave behind much of anything for those with better plans to buy out of bankruptcy.

I don't think this was what was intended. My impression is that the wholesale copper was supposed to be a temporary bridge to allow the new entrants time to build infrastructure of their own. That's why the rules about sharing didn't apply to infrastructure built by the ILECs later. But new entrants building their own infrastructure generally didn't happen. Instead, the end-user ISP operators I was dealing with at the time generally seemed outraged that the evil phone companies, which should have been there to sell wholesale services to them, were instead competing in their markets. Unfortunately for them, the phone companies not only undercut them on cost, but generally built better networks. Given the impending obsolescence of the phone companies' traditional businesses, what else would the phone companies have been expected to do?

The exception to this was the cable companies. They already had some physical plant of their own, but they invested a lot of money in a lot of new construction. Many of them didn't do financially well on the deals, but even those who ran out of money left behind infrastructure that is now effectively competing.

This isn't to say the original encouragement of CLECs using ILEC copper in the 1996 Telecommunications Act was without benefits. I rather doubt the ILECs would have gotten as interested in DSL as they did if there hadn't been the threat of losing the business to competition. But given that improvements in speed since the initial crushing of the upstarts have been mostly limited to trying to match the capabilities of the cable companies, perhaps it wasn't the best strategy for the long term. If those who want to compete need to build some infrastructure of their own, and if anybody is successful in doing so, that should have a much bigger impact in terms of putting long-term pressure on the ILECs to provide better service.

-Steve
Exactly. And because they installed fiber, the FCC has ruled that they do not have to provide unbundled network elements to competitors.
It's this last bit that seems to be leading to lots of complaints, and it's the earlier pricing of "unbundled network elements" at or above the cost of complete service packages that many CLECs and competitive ISPs blamed for their demise. Some like to see big conspiracies here, but I'm not convinced that it wasn't just a matter of bad planning on the parts of the ISPs and CLECs, perhaps brought on by bad incentives in the law.

I don't think this was what was intended. My impression is that the wholesale copper was supposed to be a temporary bridge to allow the new entrants time to build infrastructure of their own. That's why the rules about sharing didn't apply to infrastructure built by the ILECs later. But new entrants building their own infrastructure generally didn't happen. Instead, the end-user ISP operators I was dealing with at the time generally seemed outraged that the evil phone companies, which should have been there to sell wholesale services to them, were instead competing in their markets. Unfortunately for them, the phone companies not only undercut them on cost, but generally built better networks. Given the impending obsolescence of the phone companies' traditional businesses, what else would the phone companies have been expected to do?

The exception to this was the cable companies. They already had some physical plant of their own, but they invested a lot of money in a lot of new construction. Many of them didn't do financially well on the deals, but even those who ran out of money left behind infrastructure that is now effectively competing.

This isn't to say the original encouragement of CLECs using ILEC copper in the 1996 Telecommunications Act was without benefits. I rather doubt the ILECs would have gotten as interested in DSL as they did if there hadn't been the threat of losing the business to competition. But given that improvements in speed since the initial crushing of the upstarts have been mostly limited to trying to match the capabilities of the cable companies, perhaps it wasn't the best strategy for the long term. If those who want to compete need to build some infrastructure of their own, and if anybody is successful in doing so, that should have a much bigger impact in terms of putting long-term pressure on the ILECs to provide better service.

That's where I disagree. The economic argument is that it is more efficient to share the last mile, subject to rate-of-return constraints, than for a dozen carriers to build their own last-mile facilities. In fact, it is extremely naive to think that, long term, all these carriers would actually build their own last-mile facilities. It is not economically sustainable or efficient to have massive overbuilding. Simply put, if the ILEC loses a customer to the competition, why not use the ILEC copper pair to reach that customer? Given that copper pairs do have the ability to provide the services most residential customers want (except for bloggers who insist everyone needs a 10 gig wave to their home), why waste scarce economic resources on overbuilding?

In Europe unbundling has worked well and led to a highly competitive market where no such market would exist in its absence. All of this suggests that the problem was not the 1996 Telecom Act, but the ability of the incumbents to use the courts to undermine it (which they did quite successfully) and a lack of political will. You can't get away with bizarre legal interpretations on this side of the Atlantic like you can in the States.
If European regulatory agencies want unbundling, they get it, and the PTTs make sure it works or they are subject to more than Mickey Mouse fines a la the FCC. And there is no expectation that this is a stop-gap measure. Unbundling will exist as long as the competitors want to exist. Regards, Roderick.
On Oct 24, 2007, at 8:11 PM, Steve Gibbard wrote:
On Wed, 24 Oct 2007, Rod Beck wrote:
On Wednesday 24 October 2007 05:36, Henry Yen wrote:
On Tue, Oct 23, 2007 at 09:20:49AM -0400, Leo Bicknell wrote:
Why are no major us builders installing FTTH today? Greenfield should be the easiest, and major builders like Pulte, Centex and the like should be eager to offer it; but don't.
Well, Verizon seems to be making heavy bets on replacing significant chunks of old copper plant with FTTH. Here's a recent FiOS announcement:
Linkname: Verizon discovers symmetry, offers 20/20 symmetrical FiOS service URL: http://arstechnica.com/news.ars/post/20071023-verizon-discovers-symmetry-offers-2020-symmetrical-fios-service.html
While probably more "good" than "bad", it is my understanding that when Verizon (and others) provide FTTH (fiber to the home) they "cut" or physically disconnect all other connections to that residence..... so much for any "choice"...
Exactly. And because they installed fiber, the FCC has ruled that they do not have to provide unbundled network elements to competitors.
It's this last bit that seems to be leading to lots of complaints, and it's the earlier pricing of "unbundled network elements" at or above the cost of complete service packages that many CLECs and competitive ISPs blamed for their demise. Some like to see big conspiracies here, but I'm not convinced that it wasn't just a matter of bad planning on the parts of the ISPs and CLECs, perhaps brought on by bad incentives in the law.
The US government decided there should be a competitive market for phone services. They were concerned about the big advantage in already built out infrastructure the incumbent phone companies had -- infrastructure that had been built with money from their monopolies -- so they required them to "share." This meant it was pretty easy to start a DSL company that used the ILEC's copper, but seemed to provide little incentive for new telecom companies to build their own last mile infrastructure. Once the ILECs caught on to the importance of this new Internet thing, that meant the ISPs and the new phone companies were entirely dependent on their biggest competitor for services they needed to keep functioning. The new providers were vulnerable on all sorts of fronts controlled by their established competitors -- pricing, installation procedures, service quality, repair times, service availability, etc. The failure of the new entrants seems almost inevitable, and given that they hadn't actually built any infrastructure, they didn't leave behind much of anything for those with better plans to buy out of bankruptcy.
Consider the implications of this line of reasoning.

A rational would-be competitor should expect to build out a new, completely independent parallel (national) facilities platform as the price of admission to the market. Since we've abandoned all faith in the use of laws or regulation to discipline the incumbent, we should expect each successive national overbuild to be accomplished in a "very hostile" environment (Robert De Niro's role in the movie "Brazil" comes to mind here). A rational new entrant should plan to deliver service that is "substitutable" -- i.e., can compete on cost, capacity, and performance terms -- for services delivered over one or more incumbent optical fiber networks -- artifacts of previous attempts to enter the market. The minimum activation requirements for the new/latest access facilities platform will create an additional increment of transport capacity that is "vast" ("infinite" would be only a slight exaggeration) relative to all conceivable end-user demand for the foreseeable future. The existence of (n) other near-infinite increments of parallel/"substitutable" access transport capacity should not be considered when assessing the expected demand for this new capacity.

A rational investor should understand that capex committed to this new venture could well be a total loss, but should be reassured that the new nth increment of near-infinite capacity that they help to create will be useful in some way to whomever subsequently buys it up for pennies on the dollar. The existence of (n) other near-infinite increments of parallel access transport capacity should not be considered when estimating the relative merits of this or future access facility investments. Every household will become equivalent to a core urban data center, with multiple independent entrance facilities -- unless of course the new platform owner determines that it would be more rational to rip the new facilities -- or the old facilities -- out. (Any apparent similarity between this arrangement and Mao's Great Leap Forward-era backyard blast furnaces is purely coincidental.)

A rational government should welcome the vast increase in investment created by successive attempts by would-be competitors to enter the market, and the liberation from all responsibility for the causes and consequences. The FCC can be dismantled, but cashiered former telco regulators can find new employment in the booming network facilities construction sector -- or perhaps in the new Resolution Trust Corporation that will handle the administration/liquidation of successive would-be facilities-based competitors -- and the financial institutions that bankrolled them.

The current "incentive problem" is basically a one-time problem. However, the first full optical network platform that reaches households will be the last one, and the problem of access segment market power will be with us forever. History and the existence of 200+ other simultaneous experiments in national network economics provide abundant information on how to solve (or fail to solve) the problem. But maybe "Brazil" really is the right reference...

TV
participants (36)
- Adrian Chadd
- Barry Shein
- Brian Raaen
- Bruce Curtis
- Buhrmaster, Gary
- Carpenter, Jason
- Chris Adams
- Crist Clark
- Dave Pooser
- David Andersen
- David Barak
- Dorn Hetzel
- Dragos Ruiu
- Frank Bulk
- Frank Bulk - iNAME
- Geo.
- Henry Yen
- Iljitsch van Beijnum
- Jeff Shultz
- Joe Greco
- Joe Provo
- Joel Jaeggli
- Larry Smith
- Leigh Porter
- Leo Bicknell
- Majdi S. Abbas
- Mark Smith
- Marshall Eubanks
- michael.dillon@bt.com
- Mikael Abrahamsson
- Rod Beck
- Sean Donelan
- Steve Gibbard
- Steven M. Bellovin
- Tom Vest
- Valdis.Kletnieks@vt.edu