In a message written on Sat, Oct 20, 2007 at 07:12:35PM -0500, Joe Greco wrote:
> In a message written on Fri, Oct 19, 2007 at 03:21:09PM -0400, Joe Provo wrote:
> > Content is irrelevant. BT is a protocol-person's dream and an ISP
> > nightmare. The bulk of the slim profit margin exists in taking
> > advantage of stat-mux oversubscription. BT blows that out of the
> > water.
> I'm a bit confused by your statement. Are you saying it's more cost
> effective for ISPs to carry downloads thousands of miles across the
> US before giving them to the end user than it is to allow a local end
> user to "upload" them to other local end users?
> It's quite possible that I've completely missed it, but I hadn't seen
> many examples of P2P protocols where any effort was made to locate
> "local" users and prefer them. In some cases, this may happen due to
> the type of content, but I'd guess it to be rare. Am I missing some
> new development?
Most P2P clients favor the "faster" sources, where faster is some
combination of lower latency and higher bandwidth. This tends to favor
local clients, but the preference can be quickly skewed by other
factors.
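As a rough illustration of that preference, here's a toy scoring
function in Python. The blend of latency and throughput is purely my
own sketch, not any particular client's algorithm:

    # Toy peer scoring: reward observed throughput, penalize latency.
    # Real clients use their own heuristics; this just shows why a
    # nearby peer tends to win when throughput is comparable.
    def peer_score(latency_ms, throughput_kbps):
        return throughput_kbps / (1.0 + latency_ms)

    peers = [
        # (name, latency in ms, observed throughput in kbps)
        ("same-cable-segment",  8.0, 900.0),
        ("same-metro",         25.0, 700.0),
        ("cross-country",      80.0, 700.0),
    ]

    for name, lat, bw in sorted(peers, key=lambda p: peer_score(p[1], p[2]),
                                reverse=True):
        print("%-20s score=%6.1f" % (name, peer_score(lat, bw)))

The local peer wins here on latency alone, but a distant peer with
enough spare upload capacity would outrank it, which is the skew I
mean.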
If it isn't being transferred locally, then the ISP is stuck with the
pain of carrying a download thousands of miles, probably over a peering
(or, worse, transit) link with another ISP that has also had to carry
it some distance.
But back to the original premise. If, say, Linux is being distributed
both from a central web site and via P2P:

1) Central web site. All but the one ISP hosting the web site will see
   the traffic arrive over peering or, worse, transit, and will often
   be carrying it thousands of miles from the central point.

2) P2P. There is a good chance at least some seeders will be on the
   same network, avoiding peering and transit for some fraction of the
   traffic. There is also a good chance the seeders are closer to the
   user than the web site, perhaps even on the same cable segment.

I think the more interesting thing here is the overall rate limit.
Compare a central web site with a 1 Gbps connection serving 10,000
downloaders against a P2P model with 10,000 downloaders, 5,000 of which
are willing to serve content (obviously starting with 1-5 seeders and
slowly growing as people finish downloading). Even if providers only
offer 1 Mbps of upload each, those 5,000 content providers can put an
aggregate 5 Gbps into the network, whereas the central server can only
put an aggregate 1 Gbps into the network.

So, while the bit*mile cost may be lower in the P2P case, the peak bit
rate is higher (which users like: faster downloads); and since ISPs are
forced to size their networks for peak rate to ensure user satisfaction,
the "cost" of P2P is higher, even though the bit*mile cost is lower.

I think. At least, that's my guess from Joe's statement; I'd like him
to elaborate.
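For completeness, the arithmetic behind that comparison as a small
back-of-the-envelope Python script (the figures are the assumptions
stated above, nothing measured):

    # Figures from the example: a 1 Gbps central server vs. a swarm of
    # 5,000 seeders each offering 1 Mbps of upload, 10,000 downloaders.
    downloaders = 10000
    central_gbps = 1.0

    seeders = 5000
    seeder_up_mbps = 1.0
    swarm_gbps = seeders * seeder_up_mbps / 1000.0

    for label, gbps in (("central", central_gbps), ("swarm", swarm_gbps)):
        per_user_kbps = gbps * 1e6 / downloaders
        print("%-8s %.1f Gbps aggregate, %3.0f kbps per downloader"
              % (label, gbps, per_user_kbps))

    # central  1.0 Gbps aggregate, 100 kbps per downloader
    # swarm    5.0 Gbps aggregate, 500 kbps per downloader

-- 
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org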