
I work for a Finnish ISP, and from my perspective all you American providers are "one huge Exodus." We combat the issue by providing our customers with an efficient web cache system. So basically what I'm saying is that GTEI could go a long way toward solving the problem of asymmetric peering traffic caused by web farm providers, by offering its own customers an efficient web cache system.
On my continent this is almost an unwritten responsibility of an ISP.
I don't know where it will end, but at the moment 55% of the web traffic I see in my transparent cache customer log files is dynamic. That means 55% of the bytes flowing through port 80 at various chokepoints are marked uncacheable by the origins. My product finesses this up to, and a little bit beyond, what the HTTP spec allows, trying to distinguish between content that's marked uncacheable for advertising hit-rate reasons but is in fact quite static, and content that really is dynamically (usually CGI) generated and really isn't cacheable. Inktomi breaks even more rules than we do, but their hit rate is still pretty marginal.

The web pretty much just does not cache well, primarily because the people who generate content aren't the ones who get hurt by its uncacheability. Between October 1996 and now (September 1998), I've watched the percentage-uncacheable mark rise from 20% to 55%. It could stabilize at 55%. I don't think it will go down, though. More likely it will rise a little further, since there are so many web site construction kits on the market that don't care one whit for cacheability (or the HTTP standard, but that's another story). If you're still thinking of web objects as files, go spend some time with some marketing people and learn about "web-enabled applications," which are not about files at all. (You'd also learn the reason why I'm not expecting to sell very many more of my transparent web caches.)

So, I'm all for having ISPs run caches and either offer transparent interception or some other automatic browser/cache relationship builder. But until we get a reasonable mechanism for _generating_ the content in multiple places, we're going to have to provision the paths and servers and peering points with the brute force needed to move data from single sources to an outrageous and growing number of destinations. I'd say (returning to the topic of the message I'm replying to) that GTE/I and other carriers will have to argue about peering and asymmetry for another six months at least before the right content distributions start to emerge.
--
Paul Vixie
La Honda, CA            "Many NANOG members have been around
<paul@vix.com>           longer than most."  --Jim Fleming
pacbell!vixie!paul      (An H.323 GateKeeper for the IPv8 Network)
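[Editor's note: a minimal sketch of the kind of cacheability heuristic described above, deciding whether an object marked uncacheable is probably static anyway versus truly dynamic CGI output. This is an illustration only, not Vixie's actual product logic; the function name, header handling (a dict of lowercased header names), and the suffix list are hypothetical.]

    # Hypothetical sketch of a transparent-cache cacheability heuristic.
    # Not the actual product logic described in the post above.
    from urllib.parse import urlparse

    # Assumed list of "looks static" suffixes; purely illustrative.
    STATIC_SUFFIXES = (".gif", ".jpg", ".jpeg", ".png", ".css", ".js", ".html", ".htm")

    def probably_static(method, url, resp_headers):
        """Guess whether an object marked uncacheable is safe to cache anyway.

        resp_headers is assumed to be a dict with lowercased header names.
        """
        if method != "GET":
            return False                      # only GET responses are candidates

        parsed = urlparse(url)
        path = parsed.path.lower()

        # Truly dynamic output: query strings and CGI paths are left alone.
        if parsed.query or "/cgi-bin/" in path or path.endswith((".cgi", ".pl", ".asp")):
            return False

        # Cookie-bearing responses are usually personalized; don't touch them.
        if "set-cookie" in resp_headers:
            return False

        cache_control = resp_headers.get("cache-control", "").lower()
        pragma = resp_headers.get("pragma", "").lower()
        marked_uncacheable = (
            "no-cache" in cache_control
            or "no-store" in cache_control
            or "private" in cache_control
            or "no-cache" in pragma
            or resp_headers.get("expires", "") in ("0", "-1")
        )

        # Not marked uncacheable: an ordinary cache already handles it.
        if not marked_uncacheable:
            return True

        # Marked uncacheable, but it looks like a plain static object (e.g. an
        # image whose publisher just wants hit counts): treat it as cacheable.
        return path.endswith(STATIC_SUFFIXES)

    if __name__ == "__main__":
        hdrs = {"cache-control": "no-cache", "content-type": "image/gif"}
        print(probably_static("GET", "http://example.com/banner.gif", hdrs))       # True
        print(probably_static("GET", "http://example.com/cgi-bin/ad?id=7", hdrs))  # False

In practice a cache taking this kind of liberty would also cap the time-to-live on such objects rather than trusting the guess indefinitely, which is roughly what "up to, and a little bit beyond, what the HTTP spec allows" implies.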