Re: Can P2P applications learn to play fair on networks?
On Sat, 27 Oct 2007, Mohacsi Janos wrote:
Agreed. Measures like NAT, spoofing-based accelerators, and quarantining computers were developed for fairly small networks, not for 1Gbps and above with 20+ sites/customers.
"small" is a relative term. Hong Kong is already selling 1Gbps access links to residential customers, and once upon a time 56Kbps was a big backbone network. Last month folks were complaining about ISPs letting everything through the networks, this month people are complaining that ISPs aren't letting everything through the networks. Does this mean next month we will be back the other direction again. Why artificially keep access link speeds low just to prevent upstream network congestion? Why can't you have big access links?
On Sat, 27 Oct 2007, Sean Donelan wrote:
Why artificially keep access link speeds low just to prevent upstream network congestion? Why can't you have big access links?
You're the one that says that statistical overbooking doesn't work, not anyone else. Since I know people that offer 100/100 to residential users and uplink this with GE/10GE in their networks, and they are happy with it, I don't agree with you about the problem description.

For statistical overbooking to work, a good rule of thumb is that the upstream can never be more than half full normally, and each customer cannot have more access speed than 1/10 of the upstream capacity. So for example, you can have a large number of people with 100/100 uplinked with gig, as long as that gig ring doesn't carry more than approximately 500 meg peak 5-minute average, and it'll work just fine.

-- Mikael Abrahamsson
email: swmike@swm.pp.se
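The rule of thumb above amounts to simple arithmetic. A minimal Python sketch, with illustrative numbers only (the function name and example values are not taken from any particular network in the thread):

    def overbooking_ok(upstream_mbps, access_mbps, peak_5min_avg_mbps):
        """Check the two rules of thumb: upstream normally no more than half full,
        and no access link faster than 1/10 of the upstream capacity."""
        headroom_ok = peak_5min_avg_mbps <= upstream_mbps / 2
        ratio_ok = access_mbps <= upstream_mbps / 10
        return headroom_ok and ratio_ok

    # 100/100 customers on a GigE ring peaking at a 500 Mbps 5-minute average: OK
    print(overbooking_ok(upstream_mbps=1000, access_mbps=100, peak_5min_avg_mbps=500))  # True
    # The same customers behind a 100 Mbps uplink break both rules
    print(overbooking_ok(upstream_mbps=100, access_mbps=100, peak_5min_avg_mbps=90))    # False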
On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:
Why artificially keep access link speeds low just to prevent upstream network congestion? Why can't you have big access links?
You're the one that says that statistical overbooking doesn't work, not anyone else.
If you performed a simple Google search, you would have discovered many universities around the world having similar problems. The university network engineers are saying adding capacity alone isn't solving their problems.
Since I know people that offer 100/100 to residential users that upstream this with GE/10GE in their networks and they are happy with it, I don't agree with you about the problem description.
I know people that offer 100/100 to university dorms who are having problems with GE and even 10GE, depending on the size of the dorms. If you did a Google search you would find the problem.
For statistical overbooking to work, a good rule of thumb is that the upstream can never be more than half full normally, and each customer cannot have more access speed than 1/10 of the speed of the upstream capacity.
So for example, you can have a large number of people with 100/100 uplinked with gig as long as that gig ring doesn't carry more than approx 500 meg peak 5 minute average and it'll work just fine.
1. You are assuming traffic mixes don't change.
2. You are assuming traffic mixes on every network are the same.

If you restrict demand, statistical multiplexing works. The problem is how do you restrict demand?

What happens when 10 x 100/100 users drive demand on your GigE ring to 99%? What happens when P2P becomes popular and 30% of your subscribers use P2P? What happens when 80% of your subscribers use P2P? What happens when 100% of your subscribers use P2P?

TCP "friendly" flows voluntarily restrict demand by backing off when they detect congestion. The problem is TCP assumes single flows, not grouped flows used by some applications.
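The grouped-flows point can be illustrated with back-of-the-envelope arithmetic. Assuming idealized per-flow fair sharing (real TCP dynamics are messier), a user who opens many flows takes a proportionally larger share of a congested link:

    def per_flow_share(my_flows, other_flows, link_mbps):
        """Share of a bottleneck if every flow gets an equal slice (idealized)."""
        return link_mbps * my_flows / (my_flows + other_flows)

    link = 1000  # Mbps, hypothetical bottleneck
    # One P2P user running 100 flows alongside nine single-flow users:
    print(per_flow_share(my_flows=100, other_flows=9, link_mbps=link))  # ~917 Mbps for the P2P user
    print(per_flow_share(my_flows=1, other_flows=108, link_mbps=link))  # ~9 Mbps for each single-flow user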
On Sun, 28 Oct 2007, Sean Donelan wrote:
If you performed a simple Google search, you would have discovered many universities around the world having similar problems.
The university network engineers are saying adding capacity alone isn't solving their problems.
You're welcome to provide proper technical links. I'm looking for ones that say that 10GE didn't solve their problem, not the ones saying "we upgraded from T3 to OC3 from our campus of 30k student dorms connected with 100/100 and it's still overloaded", because that's just silly. I had someone send me one that contradicts your opinion: http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt
I know people that offer 100/100 to university dorms who are having problems with GE and even 10GE, depending on the size of the dorms. If you did a Google search you would find the problem.
Please provide links. I tried googling for instance for <university capacity problem p2p 10ge> and didn't find anything useful.
1. You are assuming traffic mixes don't change. 2. You are assuming traffic mixes on every network are the same.
I'm using real-world data from Swedish ISPs, each with tens of thousands of residential users, including the university ones. I tend to think we have one of the highest per-capita Internet usages in the world, unless someone can give me data that says something else.
If you restrict demand, statistical multiplexing works. The problem is how do you restrict demand?
By giving people 10/10 instead of 100/100 if your network can't handle 100/100. Or you create a management system that checks port usage and limits the heavy users to 10/10, or you use microflow policing to limit uploads to 10, especially at times of congestion. There are numerous ways of doing it that don't involve sending RSTs to customer TCP sessions or other ways of spoofing traffic.
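One way to picture the "limit the heavy users" approach is a simple policy check. The sketch below is hypothetical: the thresholds are invented, and a real deployment would read usage from SNMP or NetFlow and apply the cap through the ISP's provisioning or policing systems:

    NORMAL_CAP_MBPS = 100   # the advertised 100/100 service
    REDUCED_CAP_MBPS = 10   # cap applied to heavy users while the ring is congested

    def pick_upload_cap(user_5min_avg_mbps, ring_utilization):
        """Return the upload cap (Mbps) to apply to one subscriber."""
        ring_congested = ring_utilization > 0.5      # upstream more than half full
        heavy_user = user_5min_avg_mbps > 50         # arbitrary example threshold
        return REDUCED_CAP_MBPS if ring_congested and heavy_user else NORMAL_CAP_MBPS

    # A subscriber averaging 80 Mbps up while the ring runs at 70% utilization:
    print(pick_upload_cap(user_5min_avg_mbps=80, ring_utilization=0.7))  # 10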
What happens when 10 x 100/100 users drive demand on your GigE ring to 99%? What happens when P2P becomes popular and 30% of your subscribers use P2P? What happens when 80% of your subscribers use P2P? What happens when 100% of your subscribers use P2P?
If 100% of the user base use P2P, then traffic patterns will change and more content will be local.
TCP "friendly" flows voluntarily restrict demand by backing off when they detect congestion. The problem is TCP assumes single flows, not grouped flows used by some applications.
TCP assumes all flows are created equal and doesn't take into account that a single user can use hundreds of flows; that's correct.

-- Mikael Abrahamsson
email: swmike@swm.pp.se
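A contrasting sketch of per-subscriber fairness, where the bottleneck divides capacity among subscribers first and only then among each subscriber's flows, so opening hundreds of flows no longer buys a larger share (the numbers are illustrative, not from the thread):

    def per_subscriber_shares(flows_per_subscriber, link_mbps):
        """flows_per_subscriber maps subscriber -> number of open flows.
        Returns the per-flow rate each subscriber's flows would get (Mbps)."""
        per_sub = link_mbps / len(flows_per_subscriber)   # equal share per subscriber
        return {sub: per_sub / n for sub, n in flows_per_subscriber.items()}

    link = 1000  # Mbps, hypothetical
    users = {"p2p_user": 100, "web_user_1": 1, "web_user_2": 1}
    print(per_subscriber_shares(users, link))
    # Each subscriber gets ~333 Mbps; the P2P user's 100 flows split that 333 Mbps.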
On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:
If you performed a simple Google search, you would have discovered many universities around the world having similar problems.
The university network engineers are saying adding capacity alone isn't solving their problems.
You're welcome to provide proper technical links. I'm looking for ones that say that 10GE didn't solve their problem, not the ones saying "we upgraded from T3 to OC3 from our campus of 30k student dorms connected with 100/100 and it's still overloaded", because that's just silly.
In the meantime:

http://www.d.umn.edu/itss/resnet/bandwidth.html
    Second, we know based on experience that it won't work just to double our bandwidth. It won't work to triple our bandwidth (at triple the cost). Based on studies, we'd likely need to increase the bandwidth by a factor of ten or more. And based on our analysis of the traffic that is filling the ResNet pipe, we'd be buying that bandwidth to provide more access to file-sharing programs, not to meet academic needs.

http://www.educause.edu/ir/library/powerpoint/MAC0402.pps
    Astronomic growth of P2P pegs Resnet bandwidth at whatever cap happens to be in place. Good Users impacted as well as P2P users.

http://www.denison.edu/offices/computing/policies/packet_shaping.html
    To make it even more difficult of a challenge, a number of popular applications like Kazaa, BitTorrent, and other "peer-to-peer" file sharing applications intentionally try to capitalize on all available bandwidth the system the software is running on has at its fingertips. If our internet traffic was not shaped to ensure equitable use a very small number of systems could easily clog our internet connection making it unusable.

http://uwadmnweb.uwyo.edu/BENICE/Bandwidth.asp
    TSS decided to up the campus bandwidth from 10 to 30 Mbps. That fall all the students returned and for some really strange reason, they ate up every bit of the old and new bandwidth. The rest of campus was crippled. The ResNetters were filling the 30 Mbps outbound pipe 24 hours a day, every day. [...] In December of 2001, TSS implemented a new scheme called packet-shaping, which looks at the types of traffic going through and only slows the traffic going to and from file-sharing programs.

And of course, if you still believe just adding bandwidth will solve the problems:
ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z
And of course, if you still believe just adding bandwidth will solve the problems
Joe St. Sauver probably said it best when he pointed out in slide 5 here <http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt> that
the "N-body" problem can be a complex problem to try to solve except via an iterative and incremental process. I expect that is why sometimes adding capacity works and sometimes it doesn't. This is the sort of situation that benefits from having an architectural vision which all the independent actors (n-bodies) can work towards. A lot of P2P development work in the past has treated the Internet as a kind of black box which the P2P software attempts to reverse engineer or treat simplistically as a set of independent paths with varying latencies. If P2P software relied on an ISP middlebox to mediate the transfers, then each middlebox could optimize the local situation by using a whole smorgasbord of tools. They could kill rogue sessions that don't use the middle box by using RSTs or simply triggering the ISP's OSS to set up ACLs etc. They could tell the P2P endpoints how many flows are allowed, maximum flowrate during specific timewindows, etc. This doesn't mean that all the bytes need to flow through the middleboxes, merely that P2P clients cooperate with the middleboxes when opening sockets/sessions. --Michael Dillon
michael.dillon@bt.com schrieb:
If P2P software relied on an ISP middlebox to mediate the transfers, then each middlebox could optimize the local situation by using a whole smorgasbord of tools.
Are there any examples of middleware being adopted by the market? To me, it looks like the clear trend is away from using ISP-provided applications and services, towards pure packet pushing (cf. HTTP proxies, proprietary information services). I'm highly sceptical that users would want to adopt any software that ties them more to their ISP, not less.

Stefan
If P2P software relied on an ISP middlebox to mediate the transfers, then each middlebox could optimize the local situation by using a whole smorgasbord of tools.
That and the fact that an ISP would be aiding and abetting illegal activities, in the eyes of the RIAA and MPAA. That's not to say that technically it would not be better, but that it will never happen due to political and legal issues, IMO.

Fred Reimer, CISSP
Senior Network Engineer
Coleman Technologies, Inc.
On Mon, 29 Oct 2007, Fred Reimer wrote:
That and the fact that an ISP would be aiding and abetting illegal activities, in the eyes of the RIAA and MPAA. That's not to say that technically it would not be better, but that it will never happen due to political and legal issues, IMO.
As always, consult your own legal advisor; however, in the USA DMCA 512(b) probably makes caching by ISPs legal. ISPs have not been shy about using the CDA and DMCA to protect themselves from liability. Although caching has been very popular outside the USA, in particular in countries with very expensive trans-oceanic circuits, in the USA caching is mostly a niche service for ISPs. The issue in the USA is more likely that the cost of operating and maintaining the caching systems is higher than the operational cost of the bandwidth. Despite some claims from people that ISPs should just shovel packets, some US ISPs have used various caching systems for a decade. It would be a shame to make Squid illegal for ISPs to use.
The RIAA is specifically going after P2P networks. As far as I know, they are not going after Squid users/hosts. Although they may have at one point, it has never made the popular media the way their effort against the P2P networks has. I'm not talking about caching at all anyway. I'm talking about what was suggested: that ISPs play an active role in helping their users locate "local" hosts to grab files from, rather than just anywhere out on the Internet. I think that is quite different than configuring a transparent proxy. Don't ask me why; it's not a technical or even necessarily a legal question (and IANAL anyway). It's more of a perception issue with the RIAA. If you work at an ISP, ask your legal counsel if this would be a good idea. I doubt they would say yes.

Fred Reimer, CISSP
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697
michael.dillon@bt.com wrote:
And of course, if you still believe just adding bandwidth will solve the problems
Joe St. Sauver probably said it best when he pointed out in slide 5 here <http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt> that
the "N-body" problem can be a complex problem to try to solve except via an iterative and incremental process.
If P2P software relied on an ISP middlebox to mediate the transfers, then each middlebox could optimize the local situation by using a whole smorgasbord of tools. They could kill rogue sessions that don't use the middlebox, by using RSTs or simply by triggering the ISP's OSS to set up ACLs, etc. They could tell the P2P endpoints how many flows are allowed, the maximum flowrate during specific time windows, etc.
When we put the application intelligence in the network, we have to upgrade the network to support new applications. I believe that's a mistake from the application-innovation angle. Describing more accurately to the endpoints the properties of the network(s) to which they are attached is something that is perhaps desirable. Most work in this area is historically done in the transport area, but congestion control is not really the only angle from which to approach the problem. Hosts treat networks as black boxes because they don't really have any other choice in the matter.
When we put the application intelligence in the network, we have to upgrade the network to support new applications. I believe that's a mistake from the application-innovation angle.
Putting middleboxes into an ISP is not the same thing as putting intelligence into the network. Think Akamai for instance.
Describing more accurately to the endpoints the properties of the network(s) to which they are attached is something that is perhaps desirable. Most work in this area is historically done in the transport area, but congestion control is not really the only angle from which to approach the problem.
If the work focuses on making a P2P protocol that knows about ASNums and leverages middleboxes sitting in an ISP's network, then you would have a framework that can be used for more than just congestion control.
Hosts treat networks as black boxes because they don't really have any other choice in the matter.
A router is a host that learns about the network topology by means of routing protocols, and then adjusts its behavior accordingly. Why can't other hosts similarly learn about the topology and adjust their behavior?

--Michael Dillon
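As a sketch of that idea, a P2P client could rank candidate peers by origin AS before opening connections. The lookup function below is a placeholder; a real client might use a BGP-derived prefix-to-ASN table or a service published by the ISP:

    def lookup_asn(ip):
        """Placeholder: map an IP address to its origin ASN, e.g. from a local
        BGP-derived prefix table or an ISP-provided lookup service."""
        raise NotImplementedError

    def rank_peers(peers, my_asn, preferred_asns=frozenset()):
        """Sort candidate peer IPs so on-net and ISP-preferred peers come first."""
        def cost(ip):
            asn = lookup_asn(ip)
            if asn == my_asn:
                return 0   # same AS: traffic stays on the ISP's own network
            if asn in preferred_asns:
                return 1   # e.g. settlement-free peering partners
            return 2       # everything else: likely paid transit
        return sorted(peers, key=cost)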
There's a large "installed" based of asymmetric speed internet access links. Considering that even BPON and GPON solutions are designed for asymmetric use, too, it's going to take a fiber-based Active Ethernet solution to transform access links to change the residential experience to something symmetrical. (I'm making the underlying presumption that copper-based symmetric technologies will not become part of residential broadband market any time in the near future, if ever.) Until the time that we are all FTTH, ISPs will continue to manage their customer's upstream links. Regards, Frank -----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Sean Donelan Sent: Saturday, October 27, 2007 6:31 PM To: Mohacsi Janos Cc: nanog@merit.edu Subject: Re: Can P2P applications learn to play fair on networks? On Sat, 27 Oct 2007, Mohacsi Janos wrote:
participants (7)
- Frank Bulk
- Fred Reimer
- Joel Jaeggli
- michael.dillon@bt.com
- Mikael Abrahamsson
- Sean Donelan
- Stefan Bethke