Re: Online games stealing your bandwidth
On 27 Sep 2010, at 20:54, Brandon Butterworth wrote:
>> I fail to see the point. If an ISP needs to add caches they may as well just add a simple, cheaper, standard, http cache.
> It's a bang-per-buck issue, and depends highly on whether your particular network sees more HTTP or P2P traffic.
Orly.
No, I mean if there have to be caches why use p2p in the first place, once there's a network of caches p2p becomes a more complicated http and that model has been well optimised by some.
I know the people stealing things don't want to pay akamai but games charging for access are a different matter.
brandon
I agree, but it isn't the SP who drives P2P use, it's the users. So while they use it, networks more or less have to make it work.

We used the P2P cache for a very specific reason: we had a wireless-uplink-constrained network, and the P2P cache cached users' uplink traffic and served it from the cache, saving us about 50% of our P2P uplink load.

--
Leigh Porter
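[Editor's sketch] The saving Leigh describes can be illustrated with a toy model in Python: without a cache, every remote request for a piece crosses the constrained wireless uplink; with a cache, each distinct piece crosses it once and repeat requests are served from the cache. All names and numbers here are illustrative assumptions, not Leigh's actual deployment.

```python
def uplink_bytes(requests, piece_size, cached):
    """Bytes that must cross the constrained wireless uplink.

    Without a cache, every request is served over a client's uplink.
    With a cache, only the first copy of each distinct piece crosses
    the link; repeats are served by the cache. (Illustrative only.)
    """
    if not cached:
        return len(requests) * piece_size
    return len(set(requests)) * piece_size

# Hypothetical workload: remote peers request pieces, half are repeats.
reqs = ["a", "b", "a", "c", "b", "a"]
without_cache = uplink_bytes(reqs, 1_000_000, cached=False)
with_cache = uplink_bytes(reqs, 1_000_000, cached=True)
savings = 1 - with_cache / without_cache
```

With half the requests being repeats, the model reproduces a saving of roughly 50%, in line with the figure Leigh reports.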
On 9/27/2010 2:54 PM, Brandon Butterworth wrote:
> No, I mean if there have to be caches why use p2p in the first place, once there's a network of caches p2p becomes a more complicated http and that model has been well optimised by some.
It's a redundancy factor. By participating in a p2p network as a cache, and even feeding clients information that matters to them (i.e. "I'm actually better than your neighbor's house"), p2p can be optimized.

A p2p cache generally wouldn't cache items that don't have repeatability, so there would probably need to be multiple hits before the cache grabs the data. The cache itself would use p2p to obtain its copy, providing information to its clients even while the current clients and the cache server are both pulling from remotes. At no point should you consider such a caching solution to equate to a standard http cache.

A proper standardized p2p cache shouldn't just cache information for local clients; it should also give clients additional information to optimize them. Clients who are seeding information should be able to inform the cache of that, and should enough traffic be involved, the cache itself should be able to pull the necessary information and start providing to remotes instead of the client, so long as the client shows it's seeding (i.e. the client is seeding, but isn't actually transferring data, since the cache is announcing it will on the client's behalf). This would, of course, not drop the overall outbound p2p traffic from an ISP at its core, but it could reduce last-mile bandwidth while still participating as necessary.

It meets the legal caching framework: if the client stops providing, the cache stops providing. Such a solution should, of course, still maintain a "hey, IP x is seeding, but the cache at IP y has the data" record (similar to proxy headers, though this works in a cloud, which complicates things a bit) to meet any DMCA tracking issues, or ISPs will run from the legal nightmare.

Jack
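[Editor's sketch] The caching policy Jack outlines can be sketched as a small Python class: cache a piece only after repeated requests, serve on behalf of clients that are actively seeding, stop serving once no local client still seeds it, and keep a seeder-to-cache mapping for takedown tracking. The class and method names are hypothetical, not from any real p2p cache implementation.

```python
from collections import defaultdict

class P2PCache:
    """Toy model of the p2p cache behaviour described above:
    - cache a piece only after a hit threshold (repeatability check),
    - serve a piece only while some local client still seeds it,
    - drop the cached copy once the last seeder stops,
    - record which client IPs seed which pieces for DMCA tracking."""

    def __init__(self, hit_threshold=2):
        self.hit_threshold = hit_threshold
        self.hits = defaultdict(int)    # piece -> request count
        self.store = {}                 # piece -> cached data
        self.seeders = defaultdict(set) # piece -> seeding client IPs

    def record_request(self, piece, fetch):
        """A remote requested a piece; cache it once it proves popular.
        `fetch` stands in for the cache pulling its own copy via p2p."""
        self.hits[piece] += 1
        if self.hits[piece] >= self.hit_threshold and piece not in self.store:
            self.store[piece] = fetch(piece)

    def client_seeding(self, piece, client_ip):
        """A local client announces it is seeding this piece."""
        self.seeders[piece].add(client_ip)

    def client_stopped(self, piece, client_ip):
        """If the last local seeder stops, the cache stops providing."""
        self.seeders[piece].discard(client_ip)
        if not self.seeders[piece]:
            self.store.pop(piece, None)

    def serve(self, piece):
        """Serve from cache only while a local client still seeds."""
        if self.seeders[piece] and piece in self.store:
            return self.store[piece]
        return None

    def provenance(self, piece):
        """'IP x is seeding, but the cache has the data' record."""
        return sorted(self.seeders[piece])
```

For example, a piece requested once is not yet cached; after a second request it is served from the cache, and it stops being served the moment its last local seeder disappears, which is the property that keeps the cache inside the legal caching framework Jack mentions.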
participants (3)
-
Brandon Butterworth
-
Jack Bates
-
Leigh Porter