On 9/27/2010 2:54 PM, Brandon Butterworth wrote:
No, I mean if there have to be caches why use p2p in the first place, once there's a network of caches p2p becomes a more complicated http and that model has been well optimised by some.
It's a redundancy factor. By participating in a p2p network as a cache, and even feeding clients information that matters to them (i.e., "I'm actually a better source than your neighbor's house"), p2p can be optimized. A p2p cache generally wouldn't cache items that lack repeatability, so there would probably need to be multiple hits before the cache grabs the data. The cache itself would use p2p to obtain its copy, providing data to its clients even while those clients and the cache server are both still pulling from remotes. At no point should you consider such a caching solution equivalent to a standard http cache.

A proper standardized p2p cache shouldn't just be about caching content for local clients; it should also give clients additional information to optimize them. Clients who are seeding content should be able to inform the cache of that, and if enough traffic is involved, the cache itself should be able to pull the necessary data and start providing it to remotes instead of the client, so long as the client shows it's seeding (i.e., the client is seeding but isn't actually transferring data, since the cache is announcing that it will serve on the client's behalf). This wouldn't, of course, reduce the overall outbound p2p traffic from an ISP at its core, but it could reduce last-mile bandwidth while still participating as necessary.

It also fits the legal caching framework: if the client stops providing, the cache stops providing. Such a solution should, of course, still maintain a "hey, IP x is seeding, but the cache at IP y has the data" record (similar to proxy headers, though this works in a cloud, which complicates it a bit) to address any DMCA tracking issues, or ISPs will run from the legal nightmare.

Jack
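For what it's worth, a minimal sketch of the admission/announce behavior described above might look like the following. All names here (P2PCache, record_request, client_announce, HIT_THRESHOLD) are hypothetical and illustrative, not any real cache or tracker API:

    from collections import defaultdict

    HIT_THRESHOLD = 3  # assumed: only cache content that shows repeatability

    class P2PCache:
        def __init__(self, cache_ip):
            self.cache_ip = cache_ip
            self.hits = defaultdict(int)      # info_hash -> local request count
            self.cached = set()               # info_hashes held locally
            self.seeders = defaultdict(set)   # info_hash -> local client IPs still seeding
            self.audit_log = []               # "client X seeds, cache Y served" records

        def record_request(self, info_hash):
            """Count local demand; fetch via p2p only once content repeats."""
            self.hits[info_hash] += 1
            if self.hits[info_hash] >= HIT_THRESHOLD and info_hash not in self.cached:
                self.fetch_via_p2p(info_hash)  # cache joins the swarm like any peer
                self.cached.add(info_hash)

        def client_announce(self, client_ip, info_hash, seeding):
            """Clients tell the cache what they seed; cache serves remotes on their behalf."""
            if seeding:
                self.seeders[info_hash].add(client_ip)
                # audit trail: client is the seeder of record, cache is the data source
                self.audit_log.append((client_ip, self.cache_ip, info_hash))
            else:
                self.seeders[info_hash].discard(client_ip)

        def may_serve(self, info_hash):
            """Keep serving remotes only while at least one local client still seeds."""
            return info_hash in self.cached and bool(self.seeders[info_hash])

        def fetch_via_p2p(self, info_hash):
            pass  # placeholder: pull the content from remote peers

The point of may_serve() is the legal hinge above: the moment the last local seeder drops, the cache stops providing, and the audit_log is the "IP x seeding, cache at IP y has the data" record.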