Mikael Abrahamsson wrote:
Ethernet: peak upload is almost twice the download; average upload is 2.5-3 times the download.
ADSL 8M/800k: peak download is twice the upload; average download is 1.3-1.5 times the upload. Upload bandwidth usage is almost flat over time, while the download peak is roughly double the average level.
I have similar figures.
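For reference, this is roughly how such peak/average figures fall out of periodic interface byte counters; a minimal Python sketch, with invented sample numbers chosen to mimic the Ethernet case:

    # Hypothetical 5-minute byte counters for one aggregation link.
    # The numbers are made up and only illustrate how the ratios are computed.
    SAMPLE_SECONDS = 300
    upload_gb   = [90, 100, 110, 105, 95]   # GB sent per 5-minute interval
    download_gb = [25, 40, 58, 30, 27]      # GB received per 5-minute interval

    def to_mbps(gigabytes):
        """Convert a per-interval gigabyte count to megabits per second."""
        return gigabytes * 8 * 1000 / SAMPLE_SECONDS

    up_rates   = [to_mbps(g) for g in upload_gb]
    down_rates = [to_mbps(g) for g in download_gb]

    print("peak up/down: %.2f" % (max(up_rates) / max(down_rates)))  # ~1.9, "almost twice"
    print("avg  up/down: %.2f" % (sum(up_rates) / sum(down_rates)))  # ~2.8, "2.5-3 times"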
My interpretation of this is that p2p networks are quite intelligent in using the available bandwidth,
All the available egress bandwidth, indeed. It is very unusual to see a p2p broadband client maxed out on the downstream. But what is also true is that the bigger the upstream, the better the p2p downstream, as most p2p systems have a mechanism that rewards heavy uploaders (a rough sketch of such a scheme follows below).
and that the Copyright holders' only solution is a "content crunch" caused by providers limiting their users' upload potential because of heavy usage, such as capping the amount of bandwidth allowed per month or the like.
I agree, but I see it as a bottom-line matter, not a Copyright matter. It costs a lot of money to build or upgrade the network to support big bandwidth and the corresponding transit.
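On the earlier point about rewarding heavy uploaders: BitTorrent-style clients prefer to serve ("unchoke") the peers that have recently uploaded the most to them, which is why a fat upstream tends to buy a better p2p downstream. A much simplified sketch of that selection step (peer names and rates are invented; real clients measure rates over rolling windows and add optimistic unchoking):

    UNCHOKE_SLOTS = 4  # how many peers we are willing to upload to at once

    def pick_peers_to_serve(rate_from_peer, slots=UNCHOKE_SLOTS):
        """Serve the peers that have recently sent us the most data.

        rate_from_peer: dict of peer id -> bytes/s received from that peer.
        Returns the peer ids we will upload to this round.
        """
        ranked = sorted(rate_from_peer, key=rate_from_peer.get, reverse=True)
        return ranked[:slots]

    # Invented peers: those behind big upstreams win the upload slots.
    rates = {"peer-modem": 4_000, "peer-dsl-1": 40_000, "peer-dsl-2": 35_000,
             "peer-eth-1": 900_000, "peer-eth-2": 750_000, "peer-eth-3": 1_100_000}
    print(pick_peers_to_serve(rates))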
Walter De Smedt wrote: It makes more sense to introduce more differentiation in product offerings (pricing) where BW 'hoggers' pay more than the 'moderate' users instead of a general price increase. The net effect should be that profits increase, either by reduced network load or by higher revenues from product differentiation.
Agree in principle.
Caveat: this might be a utopian vision ;-)
Let's say we don't yet have the tools for this. This is why today we use speed limits: speed limits are the poor man's bandwidth management.
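A speed limit of that kind is usually just a per-subscriber token bucket; a minimal sketch (the rate and burst figures are invented):

    import time

    class TokenBucket:
        """Crude per-subscriber speed limit: a classic token bucket."""
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True    # forward the packet
            return False       # drop or queue it: the subscriber is over the limit

    # e.g. cap a subscriber's upstream at ~100 kB/s with a 50 kB burst allowance
    limiter = TokenBucket(rate_bytes_per_s=100_000, burst_bytes=50_000)
    print(limiter.allow(1500))   # a typical Ethernet-sized packet fits -> True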
What if P2P applications were to employ encryption schemes (e.g. IPSec)? This would render signature-based recognition useless.
p2p apps _will_ use encryption sooner or later, if not to fool signature-based recognition systems, then to hide the piracy behind encryption.
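To see why encryption kills signature matching, a toy example: the pattern below is the well-known BitTorrent handshake prefix, and a random XOR keystream stands in for a real cipher (IPSec ESP, TLS, or an application-level scheme); the point is only that the recognizable bytes disappear from the wire:

    import os

    # Toy DPI rule: flag a payload that contains a known protocol signature.
    SIGNATURE = b"\x13BitTorrent protocol"

    def looks_like_p2p(payload):
        return SIGNATURE in payload

    plaintext = SIGNATURE + b"...rest of the handshake..."
    print(looks_like_p2p(plaintext))      # True: the matcher fires

    # Stand-in for real encryption: XOR with a random keystream of equal length.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    print(looks_like_p2p(ciphertext))     # False: the signature is gone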
Are there any p2p systems which optimize traffic by localizing it, when possible?
It's not possible today. Optimizing traffic by localizing it would imply that out of two possible sources, one local and one remote, you use only the local one. This goes against speed: most p2p clients will download simultaneously from both sources (from all available sources, for that matter). Optimizing traffic by localizing it requires that supply exceed demand, which we don't have today. Besides, there is another form of localizing that happens on campuses: it's called a blank CD-R :-)

Michel.
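PS: for completeness, a sketch of what locality-preferring peer selection could look like, and why it fights with speed. The prefix and addresses are invented (documentation ranges); real locality hints would have to come from the ISP or from prefix/AS mapping:

    import ipaddress

    # Invented example: treat anything inside the ISP's own aggregate as "local".
    LOCAL_PREFIX = ipaddress.ip_network("198.51.100.0/24")

    def split_by_locality(peer_addrs):
        local, remote = [], []
        for addr in peer_addrs:
            (local if ipaddress.ip_address(addr) in LOCAL_PREFIX else remote).append(addr)
        return local, remote

    peers = ["198.51.100.7", "203.0.113.9", "198.51.100.42", "192.0.2.33"]
    local, remote = split_by_locality(peers)

    # What clients actually do (speed first): download from every source.
    print("speed-oriented:", local + remote)
    # What "localizing" would mean: touch remote sources only if no local copy exists.
    print("locality-first:", local if local else remote)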