
On Mon, 16 Nov 1998, Eric Dean wrote:
We performed a number of tests with most of the vendors' products. Considering that most of the vendors are represented on this list, I do not want to get into a p*ssing contest on a public list as to who has the best cache.
We found a number of items important to consider with Web caching.
1) Accuracy. If the served page is not the requested page, your customers will let you know.
2) Transparency. As long as your customers don't notice it's there, everything is okay. When a Web cache hangs up and black-holes your Web traffic, that's a bad day. There are some layer 4 switches on the market (Foundry and Alteon) that redirect Web requests and run port 80 keepalives.
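The keepalive mechanism above amounts to the switch periodically checking that the cache still accepts TCP connections on port 80 before sending traffic its way. A minimal sketch of such a health check (hostnames and timeouts here are illustrative, not any particular vendor's implementation):

```python
# Sketch of the port-80 keepalive a layer-4 switch performs before
# redirecting Web traffic to a transparent cache.
import socket

def port80_alive(host, port=80, timeout=2.0):
    """Return True if the cache accepts a TCP connection on the given port."""
    try:
        # A plain TCP connect is the whole check; some checks go further
        # and issue an HTTP request, but the SYN/ACK alone catches a hung box.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A switch polls this every few seconds; on failure it bypasses the cache
# and forwards requests straight to the origin instead of black-holing them.
```

The point is that the failure mode changes from "black hole" to "cache miss," which your customers will tolerate.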
A related issue is correctness. From what I have seen of a product from a certain large router vendor, it does really stupid things: it does not allow any persistent connections, and it breaks HTTP/1.1 chunked responses (well, actually worse: it caches the chunked response, so HTTP/1.0 clients that get the cached copy see garbage) because they are too (lazy|dumb|rushed|etc.) to read an RFC. You need to be _VERY_ careful when evaluating transparent proxies to see if the implementor actually knows anything at all about HTTP. Unfortunately, this can require that you have a better than average knowledge of HTTP to begin with.

I just really have trouble with "transparent proxies". The concept is bad (magically messing with traffic that may not even be HTTP, but may simply be sent over port 80 to get through filters, without the user being able to do a thing about it), and the implementations I have seen are bad. The real long-term solution is to provide better mechanisms by which clients can automatically use a proxy when they should.
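To make the chunked-response failure concrete, here is a sketch (my own illustration, not the vendor's code) of why caching the chunked bytes verbatim breaks HTTP/1.0 clients: HTTP/1.1 frames the body with hex chunk-size lines that a 1.0 client has no way to interpret, so it renders the framing as part of the page.

```python
# Toy encoder/decoder for HTTP/1.1 chunked transfer-encoding, to show
# what a 1.1 client sees versus what a 1.0 client sees on the same bytes.

def encode_chunked(data: bytes) -> bytes:
    """Wrap a body in chunked framing: hex length, CRLF, data, CRLF, then
    a zero-length terminating chunk."""
    return b"%x\r\n%s\r\n0\r\n\r\n" % (len(data), data)

def decode_chunked(wire: bytes) -> bytes:
    """What an HTTP/1.1 client does: strip the framing to recover the body."""
    body, rest = b"", wire
    while True:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line, 16)
        if size == 0:          # zero-length chunk terminates the body
            return body
        body = body + rest[:size]
        rest = rest[size + 2:]  # skip the CRLF after the chunk data

page = b"<html>hello</html>"
wire = encode_chunked(page)

# An HTTP/1.1 client strips the framing and recovers the page intact.
assert decode_chunked(wire) == page

# An HTTP/1.0 client handed the cached chunked bytes treats `wire` as the
# body itself -- hex sizes, CRLFs and all -- which is the "garbage" above.
```

A correct proxy must either de-chunk the response before caching it, or at minimum never serve the chunked representation to a client that did not negotiate HTTP/1.1.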