I can't work out the mathematics under which topology information would give better results than the usual greedy algorithms, where data is requested from whichever peers it seems to be flowing from at the best rates. If local peers with sufficient upstream bandwidth exist, the majority of the data blocks are already retrieved from them.
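For concreteness, the greedy selection I mean is roughly this (a toy sketch in Python, with made-up peer names and rates):

    # Greedy, rate-based peer selection: ask for the next blocks from whichever
    # peers are currently delivering fastest, with no knowledge of topology.

    def pick_peers(observed_rates, n=4):
        """Return the n peers with the highest observed download rates (KB/s)."""
        return sorted(observed_rates, key=observed_rates.get, reverse=True)[:n]

    observed_rates = {
        "peer-local-1": 240.0,   # same ISP, plenty of upstream
        "peer-local-2": 90.0,
        "peer-remote-1": 310.0,  # other continent, but fast right now
        "peer-remote-2": 55.0,
    }

    print(pick_peers(observed_rates, n=2))  # ['peer-remote-1', 'peer-local-1']

A local peer only loses out here when a remote one happens to be faster at that moment, which is the case topology information is supposed to address.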
First, it's not a mathematical issue. It is a network operational issue: ISPs have bandwidth caps and enforce them with traffic shaping when thresholds are exceeded. Second, there are cases where it is not in the ISP's best interest for P2P clients to retrieve files from the peer with the lowest RTT.
In many locales, ISPs tend to limit the available upstream bandwidth on their consumer connections, which usually causes more distant bits to be delivered instead.
Yep, it's a game of whack-a-mole.
I think the most important metric to study is the number of times the same piece of data is transmitted within a defined time period, and then to figure out how to optimize for that.
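One way to make that metric concrete (a sketch in Python; the transfer-log format is hypothetical) is to count how often the same block crosses the ISP's upstream links within a time window:

    from collections import Counter

    # Hypothetical transfer log: (timestamp, block_id, crossed_upstream) tuples.
    # Any block fetched over the upstream links more than once in the window is
    # redundant traffic that a local cache or local peer could have served.

    def redundant_upstream_fetches(transfers, window_start, window_end):
        counts = Counter(
            block_id
            for ts, block_id, crossed_upstream in transfers
            if crossed_upstream and window_start <= ts < window_end
        )
        return {block: n for block, n in counts.items() if n > 1}

    transfers = [
        (100, "block-a", True),
        (160, "block-a", True),   # same block pulled across upstream again
        (200, "block-b", True),
        (250, "block-a", False),  # served locally, not counted
    ]

    print(redundant_upstream_fetches(transfers, 0, 300))  # {'block-a': 2}

Driving that count toward one copy per block per window is exactly what caching or locality-aware peer selection would buy.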
Or P2P developers could stop fighting ISPs and treating the Internet as an amorphous cloud, and build something that will be optimal for the ISPs, the end users, and the network infrastructure.
The P2P world needs more high-upstream "proxies" to make it more effective.
That is essentially a cache, just like NNTP news servers or Squid web proxies. But rather than building a special P2P client that caches and proxies and fiddles with stuff, why not take all of the network-intelligence code out of the client and put it into a topology guru that runs in your local ISP's high-upstream infrastructure? Chances are that many ISPs would put a few P2P caching clients in the same rack as this guru if it paid them to take traffic off one direction of the last mile, or if it paid them to keep files hanging around locally longer than they do naturally, thus saving on their upstream/peering traffic.
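Concretely, the split might look like this on the client side (a sketch in Python; the guru's hostname, endpoint, and response format are hypothetical, since no such service exists yet):

    import json
    import urllib.request

    # Hypothetical "topology guru" API: the client hands over its candidate peer
    # list and the ISP-run guru, which knows the local topology and costs,
    # returns the peers ranked by how cheaply the network can reach them.
    GURU_URL = "http://guru.example-isp.net/rank-peers"  # hypothetical endpoint

    def rank_peers_via_guru(candidate_peers):
        """Ask the ISP-run guru to order candidate peers by network preference."""
        request = urllib.request.Request(
            GURU_URL,
            data=json.dumps({"peers": candidate_peers}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.load(response)["ranked_peers"]

    # The client keeps all of its existing transfer logic; only the peer-ordering
    # intelligence moves out of the client and into the ISP's infrastructure.

The design point is that topology knowledge lives with the party that actually has it (the ISP), while the client keeps doing what it already does well.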
Is there a problem that needs to be solved that isn't already solved by the Akamais of the world?
Akamai is a commercial service that content senders can contract with to achieve the same type of multicasting (via a Content Delivery Network) that a P2P network provides to end users. ISPs don't provide Akamai service to their hosting customers, but they do provide those customers with web service, mail service, FTP service, etc. I am suggesting that there is a way for ISPs to provide a generic BitTorrent P2P service to any customer who wants to send content (or receive content).

It would allow heavy P2P users to evade the crude traffic shaping which tends to be off on the 1st day of the month, then gets turned on when a threshold is hit and stays on until the end of the month. Most ISPs can afford to let users take all they can eat during non-peak hours without congesting the network. Even an Australian ISP could use this type of system, because they would only open up local peering connections during off-peak hours, not the expensive trans-oceanic links.

This all hinges on a cooperative P2P client that only downloads from sites (or address ranges) which the local topology guru directs it to. Presumably the crude traffic-shaping systems that cap bandwidth would still remain in place for non-cooperating P2P clients.

--Michael Dillon
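P.S. A cooperating client along those lines might need nothing more elaborate than this (a sketch in Python; the address ranges and off-peak hours are made-up examples of what the local guru would actually supply):

    import ipaddress
    from datetime import datetime

    # Hypothetical policy handed down by the local topology guru: address ranges
    # the client may always download from, plus the off-peak hours during which
    # the restriction is lifted and any peer is fair game.
    ALLOWED_RANGES = [ipaddress.ip_network("192.0.2.0/24"),
                      ipaddress.ip_network("198.51.100.0/24")]
    OFF_PEAK_HOURS = range(1, 6)  # 01:00-05:59 local time, made-up example

    def may_download_from(peer_ip, now=None):
        """Return True if the guru's policy allows pulling blocks from this peer."""
        now = now or datetime.now()
        addr = ipaddress.ip_address(peer_ip)
        in_allowed_range = any(addr in net for net in ALLOWED_RANGES)
        return in_allowed_range or now.hour in OFF_PEAK_HOURS

    # Clients that ignore the policy simply stay subject to the existing traffic
    # shaping; cooperating clients get shaped less, or not at all.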