Has anyone else experienced problems with AOL's web proxy servers? We switched the IP address of a web site about 5 months ago, and we are still getting about 10,000 hits a day from AOL on the old address. We dropped a note to them a ways back but didn't get much of a response.

Brian Horvitz
Shore.Net
On Fri, 2 Jan 1998, Brian Horvitz wrote:
Has anyone else experienced problems with AOL's web proxy servers? We switched the IP address of a web site about 5 months ago, and we are still getting about 10,000 hits a day from AOL on the old address. We dropped a note to them a ways back but didn't get much of a response.
On a related note. I would be curious about the liability issues surrounding the use of web proxy servers and Cisco Cache Engines. ISPs/NSPs traditionally only forward packets, or are at least responsible to a certain degree for resolution of IP addresses via DNS and routers. The use of these web cache technologies allows ISPs/NSPs to now *intelligently determine* what data the requesting client receives. With a web proxy server, at least the client is aware (we hope) that data is being cached, but the Cisco Cache Engine makes that process transparent. As we are all well aware, these intelligent implementations don't always work. It seems that we are on slippery ground making content decisions for end user requests. And doesn't it become even messier with the suggested/proposed web caching at the MAEs or at the NSP level?

Regards,
Turnando Fuad
NSNet
Brian Horvitz wrote:
Has anyone else experienced problems with AOL's web proxy servers? We switched the IP address of a web site about 5 months ago, and we are still getting about 10,000 hits a day from AOL on the old address. We dropped a note to them a ways back but didn't get much of a response.
they've been a problem for a long time; some of their proxy servers hold cache data for far too long. i have some web discussion pages that are effectively useless for aol subscribers because aol even caches cgi-generated stuff with obvious cgi-related extensions like ".cgi" and ".pl". they appear to have little organizational control over the software running on their server farm, either for software versions or configuration files; there are clearly several different variants of sendmail running, in addition to several different versions of their proxy software. it's kind of a mess, and they've never returned any of my email notes.

sigh,
  richard
--
Richard Welty
Chief Internet Engineer, INet Solutions
welty@inet-solutions.net
http://www.inet-solutions.net/~welty/
888-311-INET
they've been a problem for a long time; some of their proxy servers hold cache data for far too long. i have some web discussion pages that are effectively useless for aol subscribers because aol even caches cgi-generated stuff with obvious cgi-related extensions like ".cgi" and ".pl".
Nowhere in the standards is ".pl" recommended to be treated as an implicit Cache-Control:. If you tag your responses with Cache-Control: headers that make them do what you want (in this case, make the response uncacheable), then you won't have the above-described problem.
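To make the suggestion concrete, here is a minimal sketch of a CGI response tagged so that both HTTP/1.1 caches (via Cache-Control) and older HTTP/1.0 caches (via Pragma and an already-past Expires) treat it as uncacheable. Python is used purely for illustration; scripts of this era were typically Perl or shell CGI, and the function names are my own.

```python
#!/usr/bin/env python3
# Hypothetical CGI script: emit headers that mark the response
# uncacheable, so proxies won't serve stale copies of dynamic pages.
import sys
from datetime import datetime, timezone

def uncacheable_headers():
    """Return response headers that tell HTTP/1.1 caches (Cache-Control)
    and HTTP/1.0 caches (Pragma, Expires) not to reuse the response."""
    now = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    return [
        "Content-Type: text/html",
        "Cache-Control: no-cache, no-store, must-revalidate",
        "Pragma: no-cache",     # HTTP/1.0 fallback
        "Expires: " + now,      # already expired as far as old caches know
    ]

if __name__ == "__main__":
    for h in uncacheable_headers():
        sys.stdout.write(h + "\r\n")
    sys.stdout.write("\r\n<html>fresh content</html>\n")
```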
Nope, this post can't be configured into your router, Randy.

On Fri, 2 Jan 1998, Richard Welty wrote:
Brian Horvitz wrote:
Has anyone else experienced problems with AOL's web proxy servers? We switched the IP address of a web site about 5 months ago, and we are still getting about 10,000 hits a day from AOL on the old address. We dropped a note to them a ways back but didn't get much of a response.
they've been a problem for a long time; some of their proxy servers hold cache data for far too long. i have some web discussion pages that are effectively useless for aol subscribers because aol even caches cgi-generated stuff with obvious cgi-related extensions like ".cgi" and ".pl".
I'm taking a wild guess that your CGI responses include neither Last-Modified nor Expires headers. HTTP/1.0 (RFC 1945) doesn't define what a proxy cache is to do in this situation, so AOL is compliant. HTTP/1.1 (RFC 2068) section 13.2.4 gives more details on this, but that's not what AOL implements. Not to say that AOL's caches aren't littered with other standards violations.

The DNS thing isn't something that only AOL is guilty of. Netscape Navigator up through the 3.x versions (I haven't tested the 4.x versions; they may have fixed it) caches DNS responses for the lifetime of the browser. Given that some folks on stable unix machines are able to keep their browser open for months, this sucks. One might argue it has something to do with the lack of timeout information in the gethostbyname(3) API.

I've always taken the opinion that DNS changes for webservers, regardless of your TTL values, will take at least two weeks, and you'd better plan to either lose some of the hits or run extra IP aliases with a tunnel from the old addresses or whatnot.

Dean
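The "extra IP aliases with a tunnel from the old addresses" fallback can be as simple as a process listening on the old (aliased) address that relays each connection to the server's new address. A minimal sketch follows, in Python for illustration only; the function names, port handling, and one-thread-per-direction model are my own assumptions, not anyone's actual setup.

```python
import socket
import threading

def forward(src, dst):
    # Copy bytes one way until EOF on the source socket.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def tunnel(listen_addr, target_addr):
    # Accept connections on the old address and relay each one
    # to the new address, pumping bytes in both directions.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen_addr)
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(target_addr)
        for a, b in ((client, upstream), (upstream, client)):
            threading.Thread(target=forward, args=(a, b), daemon=True).start()
```

Hits on the old address keep working during the changeover, at the cost of an extra hop; the server's logs will show the tunnel host as the client unless the relay adds identifying information.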
Dean Gaudet wrote:
The DNS thing isn't something that only AOL is guilty of. Netscape Navigator up through the 3.x versions (I haven't tested the 4.x versions; they may have fixed it) caches DNS responses for the lifetime of the browser. Given that some folks on stable unix machines are able to keep their browser open for months, this sucks.
In current versions of Netscape, IP addresses are cached for fifteen minutes. To tweak this, add or change the following line in your prefs.js file:

user_pref("network.dnsCacheExpiration", 900); // The integer is the timeout in seconds
One might argue it has something to do with the lack of timeout information in the gethostbyname(3) API.
This is, in fact, why it was implemented that way -- gethostbyname is available on pretty much every platform. I've pointed the folks responsible for this code to res_query()-related documentation and code, so this may make it into a future version.

Dan
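Until the resolver API exposes TTLs, an application can at least bound staleness the way Netscape's fifteen-minute cache does: attach its own fixed expiration to each answer. A small sketch, with the class name and pluggable-resolver design being my own; the 900-second default mirrors the network.dnsCacheExpiration preference mentioned above.

```python
import time

class ExpiringDnsCache:
    """Cache name -> address with a fixed client-side expiration,
    since gethostbyname(3) gives the caller no TTL to honor.
    The resolver function is pluggable so it can be tested offline."""

    def __init__(self, resolver, ttl=900, clock=time.monotonic):
        self.resolver = resolver  # e.g. socket.gethostbyname
        self.ttl = ttl            # seconds an answer stays usable
        self.clock = clock
        self._cache = {}

    def lookup(self, name):
        entry = self._cache.get(name)
        now = self.clock()
        if entry is None or now >= entry[1]:
            # Missing or expired: resolve again and restamp.
            addr = self.resolver(name)
            self._cache[name] = (addr, now + self.ttl)
            return addr
        return entry[0]
```

This still ignores the zone's real TTL, so it can over- or under-cache, but it caps the damage at `ttl` seconds instead of the lifetime of the process.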
On Fri, Jan 02, 1998 at 08:23:00AM -0800, Turnando Fuad wrote:
I would be curious about the liability issues surrounding the use of web proxy servers and Cisco Cache Engines. ISPs/NSPs traditionally only forward packets, or are at least responsible to a certain degree for resolution of IP addresses via DNS and routers. The use of these web cache technologies allows ISPs/NSPs to now *intelligently determine* what data the requesting client receives. With a web proxy server, at least the client is aware (we hope) that data is being cached, but the Cisco Cache Engine makes that process transparent. As we are all well aware, these intelligent implementations don't always work. It seems that we are on slippery ground making content decisions for end user requests. And doesn't it become even messier with the suggested/proposed web caching at the MAEs or at the NSP level?
It may well be worse than that, depending on judicial interpretations of the Cubby and Stratton-Oakmont cases. I suspect that S-O wouldn't apply to this circumstance, but who can tell what a judge will think.

Cheers,
-- jr 'oh, yeah: Happy New Year!'
--
Jay R. Ashworth                                          jra@baylink.com
Member of the Technical Staff        Unsolicited Commercial Emailers Sued
The Suncoast Freenet
Tampa Bay, Florida                                        +1 813 790 7592
"Two words: Darth Doogie." -- Jason Colby, on alt.fan.heinlein
At the WIPO meeting in December 1996, the consensus was that mirroring ran afoul of copyright and license issues, and that caching did not. Caching was deemed an automated (no human intervention required) response to demanded traffic, and mirroring was considered a proactive human act. I believe that there was a formal recommendation to this effect. Should the constituent bodies of WIPO agree with this notion, we could see civil law supporting it as early as 2007 (in the United States anyway). On the other hand, if someone invokes the WIPO recommendation in defense of a civil suit (brought by a content provider who was losing advertising revenue) and wins, then the effect of the WIPO recommendation would make it into the law books even sooner.

Note that RFC 2227 does more to resolve this issue than the WIPO recommendation does, since once it has been widely implemented, the content people are largely just going to put a restricted rights legend on their text to the effect that all copies must be unmodified, especially including the ad anchors, and that RFC 2227 must be implemented on all servers that hold such identical copies.

The content people know that they will benefit hugely from caching and mirroring and anything else that offloads their web servers and primary links without requiring expensive mirrors and Cache Directors and whatnot. But at the moment the ad revenue they lose is worth more to them than the cost of doing their own mirroring.
On Friday, Paul A Vixie wrote:
At the WIPO meeting in December 1996, the consensus was that mirroring ran afoul of copyright and license issues, and that caching did not. Caching was deemed an automated (no human intervention required) response to demanded traffic, and mirroring was considered a proactive human act.
Another interesting question, beyond the intellectual property issue, is what an NSP/ISP might do with the statistics on proxy traffic -- the numbers on hits, bytes, etc. -- and the usage profiles of individuals using the proxy. I think it is only a matter of time and opportunity before some ISPs begin to exploit revenue opportunities associated with this new gatekeeping role, should it develop.

--Kent
We could get this information by sniffing our network. We don't, but if we did, we would only do it in a manner that would completely protect the privacy of our customers. I think it would be naive to believe that this isn't happening somewhere today.
Another interesting question, beyond the intellectual property issue, is what an NSP/ISP might do with the statistics on proxy traffic -- the numbers on hits, bytes, etc. -- and the usage profiles of individuals using the proxy.
I think it is only a matter of time and opportunity before some ISPs begin to exploit revenue opportunities associated with this new gatekeeping role, should it develop.
On Fri, 2 Jan 1998, Turnando Fuad wrote:
On a related note.
I would be curious about the liability issues surrounding the use of web proxy servers and Cisco Cache Engines. ISPs/NSPs traditionally only forward packets, or are at least responsible to a certain degree for resolution of IP addresses via DNS and routers. The use of these web cache technologies allows ISPs/NSPs to now *intelligently determine* what data the requesting client receives. With a web proxy server, at least the client is aware (we hope) that data is being cached, but the Cisco Cache Engine makes that process transparent. As we are all well aware, these intelligent implementations don't always work. It seems that we are on slippery ground making content decisions for end user requests. And doesn't it become even messier with the suggested/proposed web caching at the MAEs or at the NSP level?
I think I would have to categorize this 3 ways.

No. 1, caching in principle, working properly, is probably not breaching any lines of legality, even as they stand now, and common sense tells us that it's a good thing (not that common sense would help you in, at least an American, court of law).

No. 2, what AOL has is what I refer to as 'caching negligence'. They run a cache because it's a good idea, but they run it poorly, and fail to notice or respond when it screws up. Probably the only people who have any legal recourse here are the AOL members, and not web site operators, since it's due to the negligence of AOL that the service for the members suffers.

No. 3 would be 'caching with prejudice'. This would get you in a whole lot of trouble if a provider was manipulating the cache, for any of a number of reasons. Regulating the content would probably be OK, and that capability is available in many other products. Manipulating the content would get you into a lot of trouble.

Just my $0.02.

Nick Bastin
System Administrator - World Trade Internet Communications
nbastin@mail-2.wtic.net wrote:
On Fri, 2 Jan 1998, Turnando Fuad wrote:
On a related note.
I would be curious about the liability issues surrounding the use of web proxy servers and Cisco Cache Engines. ISPs/NSPs traditionally only forward packets, or are at least responsible to a certain degree for resolution of IP addresses via DNS and routers. The use of these web cache technologies allows ISPs/NSPs to now *intelligently determine* what data the requesting client receives. With a web proxy server, at least the client is aware (we hope) that data is being cached, but the Cisco Cache Engine makes that process transparent. As we are all well aware, these intelligent implementations don't always work. It seems that we are on slippery ground making content decisions for end user requests. And doesn't it become even messier with the suggested/proposed web caching at the MAEs or at the NSP level?
I think I would have to categorize this 3 ways. No. 1, caching in principle, working properly, is probably not breaching any lines of legality, even as they stand now, and common sense tells us that it's a good thing (not that common sense would help you in, at least an American, court of law). No. 2, what AOL has is what I refer to as 'caching negligence'. They run a cache because it's a good idea, but they run it poorly, and fail to notice or respond when it screws up. Probably the only people who have any legal recourse here are the AOL members, and not web site operators, since it's due to the negligence of AOL that the service for the members suffers. No. 3 would be 'caching with
So, go and get an aol account and start bitching. That's done easily enough and is a small price to pay to ensure that the kajilions of aol'ers see one's site properly. Granted, it's still hard not to envision aol users as ebola victims on a bus on the info super duper highway.

--
Paul R.D. Lantinga
Sr. Network Engineer
Verifone India Pvt. Ltd.
Bangalore
participants (11)
-
Brian Horvitz
-
Dan Mosedale
-
Dean Gaudet
-
Jay R. Ashworth
-
jon@branch.net
-
Kent W. England
-
nbastin@mail-2.wtic.net
-
Paul A Vixie
-
Paul R.D. LANtinga
-
Richard Welty
-
Turnando Fuad