On 26 Oct 2007, at 18:29, Sean Donelan wrote:
And generating packets with false address information is more acceptable? I don't buy it.
> When a network is congested, someone is going to be upset about any possible response.
That doesn't mean all possible responses are equally acceptable. There are three reasons why what Comcast does is worse than some other things they could do:

1. They're not clearly saying what they're doing.
2. They inject packets that pretend to come from someone else.
3. There is nothing the user can do to work within the system.
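What makes the injected RSTs so effective is that a TCP receiver accepts a RST whenever its sequence number falls inside the current receive window; a middlebox that sees the flow knows the exact sequence state, so its forgeries always land in-window. A minimal sketch of that acceptance check (simplified from the RFC 793 rules; parameter names are illustrative):

```python
def rst_accepted(rst_seq: int, rcv_nxt: int, rcv_wnd: int) -> bool:
    """Simplified RFC 793 check: a RST is accepted if its sequence
    number falls inside the receive window [rcv_nxt, rcv_nxt + rcv_wnd)."""
    # Modular arithmetic handles 32-bit sequence-number wraparound.
    offset = (rst_seq - rcv_nxt) % 2**32
    return offset < rcv_wnd

# An on-path box knows rcv_nxt exactly, so its forged RST is in-window:
print(rst_accepted(rst_seq=1000, rcv_nxt=1000, rcv_wnd=65535))       # True
# A blind off-path guess usually misses the window:
print(rst_accepted(rst_seq=5_000_000, rcv_nxt=1000, rcv_wnd=65535))  # False
```

Nothing in the segment itself distinguishes such a forgery from a genuine RST, which is why the spoofed source address (point 2) matters: the endpoints cannot tell who actually sent it.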
> Using a TCP RST is probably more "transparent" than using some other clever active queue management technique to drop particular packets from the network.
With shaping/policing I still get to transmit a certain amount of data. With RSTs being sent in some cases and not others, there is nothing I can do to use the service, even at a moderate level, if I'm unlucky. Oh, and let me add:

4. It won't work in the long run; it just means people will have to use IPsec with their peer-to-peer apps so that the fake RSTs fail authentication and get ignored.
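The contrast with shaping/policing can be made concrete: a rate limiter bounds throughput but never kills the connection, so a moderate sender always gets some service. A minimal token-bucket policer, as a sketch (the rate and burst figures are made-up example values, not anything Comcast configured):

```python
class TokenBucket:
    """Minimal token-bucket policer: packets that conform to the
    configured rate pass; excess packets are dropped, not reset."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.capacity = burst_bytes  # maximum bucket depth (burst size)
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, now: float, pkt_bytes: int) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False

# 1 Mbit/s with a 15 kB burst: a 1.2 Mbit/s flood loses its excess
# packets, but a conforming share still gets through every second.
tb = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)
passed = sum(tb.allow(now=t * 0.01, pkt_bytes=1500) for t in range(100))
print(passed, "of 100 packets passed")
```

Under a policer the heavy user degrades gracefully to the configured rate; under RST injection an unlucky connection simply dies.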
> If Comcast had used Sandvine's other capabilities to inspect and drop particular packets, would that have been more acceptable?
Depends. But it all has to start with them making public what service level users can expect.
> Add more capacity (i.e. what do you do in the mean time, people want something now)
Since you can't know in advance on which path the capacity is needed, it's impossible to build enough of it to cover all possible eventualities. So even though Comcast probably needs to increase capacity, that doesn't solve the fundamental problem.
> Raise prices (i.e. discourage additional use)
Higher flat fee pricing doesn't discourage additional use. I'd say it encourages it: if I have to pay this much, I'll make sure I get my money's worth!
> People are going to gripe no matter what. One week they are griping about ISPs not doing anything, the next week they are griping about ISPs doing something.
Guess what: sometimes the gripes are legitimate.

On 26 Oct 2007, at 17:24, Sean Donelan wrote:
> The problem is not bandwidth, it's shared congestion points.
While that is A problem, it's not THE problem. THE problem is that Comcast can't deliver the service that customers think they're buying.
> However, I think a better idea instead of trying to eliminate all shared congestion points everywhere in a packet network would be for the TCP protocol magicians to develop a TCP-multi-flow congestion avoidance which would share the available capacity better between all of the demand at the various shared congestion points in the network.
The problem is not with TCP: TCP will try to get the most out of the available bandwidth that it sees, which is the only reasonable behavior for such a protocol. You can easily keep a bunch of TCP streams within a desired bandwidth envelope by dropping the requisite number of packets, and techniques such as RED will create a reasonable level of fairness between high- and low-bandwidth flows.

What you can't easily do by dropping packets without looking inside them is favor certain applications, or make sure that low-volume users get a better service level than high-volume users. Those are issues that I don't think can reasonably be shoehorned into TCP congestion management. However, we do have a technique that was created for exactly this purpose: diffserv.

Yes, it's unfortunate that diffserv is the same technology that would power a non-neutral internet, but that doesn't mean that ANY use of diffserv is automatically at odds with net neutrality principles. Diffserv is just a tool; like all tools, it can be used in different ways. For good and evil, if you will.
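To make the RED point concrete: RED drops packets with a probability that rises with the average queue depth, so heavy flows, which occupy more of the queue, see proportionally more drops without any per-application inspection. A sketch of the classic drop-probability curve (the thresholds and max_p are made-up example values):

```python
def red_drop_probability(avg_queue: float,
                         min_th: float = 5.0,
                         max_th: float = 15.0,
                         max_p: float = 0.1) -> float:
    """Classic RED curve: no drops below min_th, probability rising
    linearly to max_p at max_th, forced drop at or above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(3.0))   # 0.0: short queue, no drops
print(red_drop_probability(10.0))  # 0.05: halfway between thresholds
print(red_drop_probability(20.0))  # 1.0: queue overflowing, drop all
```

Since every packet at the bottleneck faces the same probability, a flow sending ten times as many packets absorbs roughly ten times as many drops, which is exactly the flow-level fairness described above; what this curve cannot express is "this user" or "this application", which is where diffserv comes in.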
> Isn't the Internet supposed be a "dumb" network with "smart" hosts? If the hosts act dumb, is the network forced to act smart?
It's not the intelligence that's the problem, but the incentive structure.

Iljitsch