On Mon, Oct 22, 2007 at 05:16:08PM -0700, Crist Clark wrote:
It seems to me that what hurts the ISPs is the accompanying upload streams, not the downloads (or at least the ISP feels the same download pain no matter what technology their end users use to get the data[0]). Throwing more bandwidth at the problem does not scale to the number of users we are talking about. Why not suck it up and go with the economic solution? The easy thing would be for the ISPs to come clean, admit their "unlimited" service is not, put in upload caps, and charge for overages.
[I've been trying to stay out of this thread, as I consider it unproductive, but here goes...]

What hurts ISPs is not upstream traffic. Most access providers are quite happy with upstream traffic, especially if they manage their upstream caps carefully. Careful management of outbound traffic, combined with an active peer-to-peer customer base, is good for ratios -- something that access providers without large streaming or hosting farms can benefit from.

What hurt these access providers, particularly those in the cable market, was a set of failed assumptions. The Internet became a commodity, driven by this web thing. As a result, standards like DOCSIS developed, and bandwidth was allocated to access customers, frequently in an asymmetric fashion. We now have lots of asymmetric access technologies that are not well suited to some newer applications.

I cannot honestly say I share Sean's sympathy for Comcast in this case. I used to work for a fairly notorious provider of co-location services, and I don't recall any great outpouring of sympathy on this list when co-location providers ran out of power and cooling several years ago. I /do/ recall a large number of complaints, much wailing and gnashing of teeth, and a lot of discussion at NANOG (both the general session and the hallway track) about the power and cooling situation in general; those discussions have continued through this last year.

If the MSOs, their vendors, and our standards bodies have made a failed set of assumptions about traffic ratios and volume in access networks, I don't understand why consumers should be subject to arbitrary changes in policy to cover engineering mistakes. It would be one thing if they simply reduced the upstream caps they offered; it is quite another to actively interfere with some protocols and not others. If this were truly about upstream capacity, I would expect the former, not the latter.
If you read Comcast's services agreement carefully, you'll note that the activity in question isn't mentioned. It only comes up in their use policy, something they can amend, and have amended, on the fly; it does not appear in the agreement itself. If one were so inclined, one might consider this at least slightly dishonest. Why make a consumer enter into an agreement that refers to a side agreement, and then update the latter at will? Can you reasonably expect Joe Sixpack to read and understand what is both a technical and a legal document?

I would not personally feel comfortable forging RSTs, amending a policy I didn't actually bother to include in my service agreement with my customers, and doing it all to shift the burden of my, or my vendor's, engineering assumptions onto my customers -- but perhaps that is why I am an engineer and not an executive.

As an aside, before all these applications become impossible to identify, perhaps it's time for cryptographically authenticated RST cookies? Solving the forging problem might head off everything becoming an encrypted pile of goo on tcp/443.
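To make the aside concrete: "RST cookies" here is a speculative idea, not an existing standard, but the gist would be that an endpoint only honors a RST carrying a keyed MAC bound to the connection, so a middlebox that doesn't hold the secret cannot forge one. The sketch below assumes a per-connection secret negotiated at handshake time and a hypothetical field to carry the cookie; all names are illustrative.

```python
import hmac
import hashlib

# Hypothetical per-connection secret, assumed to be negotiated during the
# handshake. How the cookie would actually be carried (a TCP option, bits
# of the sequence number, etc.) is left unspecified here.
SECRET = b"per-connection-secret"

def rst_cookie(src: str, sport: int, dst: str, dport: int, seq: int) -> bytes:
    """Derive an 8-byte cookie binding a RST to this connection's 4-tuple
    and sequence number, so a replayed or forged RST won't validate."""
    msg = f"{src}:{sport}>{dst}:{dport}|{seq}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:8]

def rst_is_authentic(cookie: bytes, src: str, sport: int,
                     dst: str, dport: int, seq: int) -> bool:
    """Accept the RST only if its cookie matches. A device forging RSTs
    without the secret cannot mint a valid cookie."""
    return hmac.compare_digest(cookie, rst_cookie(src, sport, dst, dport, seq))
```

A forged RST from a traffic-shaping box would arrive with no (or a garbage) cookie and be silently dropped, while the legitimate peer's RSTs still work.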
Information contained in this e-mail message is confidential, intended only for the use of the individual or entity named above. If the reader of this e-mail is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any review, dissemination, distribution or copying of this communication is strictly prohibited. If you have received this e-mail in error, please contact postmaster@globalstar.com
Someone toss this individual a gmail invite...please! --msa