backbone transparent proxy / connection hijacking
Has anyone else noticed Digex playing with transparent proxying on their backbone? We have one of our T1's through them, and found that all web traffic going out our Digex connection goes through a proxy. We've got customers with web sites that are broken now because they can't communicate with things like Cybercash, because their outgoing http requests are hijacked and sent through a Digex web cache.

Digex wants us to register each web server out on the rest of the internet that hosts from our network need to talk directly to. This looks like the beginning of a big PITA.

I wouldn't have a problem with Digex setting up some web caches and encouraging customers to set up their own caches and have them talk to the Digex ones via ICP... but caching everything without our knowledge/consent stinks.

------------------------------------------------------------------
Jon Lewis <jlewis@fdt.net> | Spammers will be winnuked or
Network Administrator | drawn and quartered... whichever
Florida Digital Turnpike | is more convenient.
______http://inorganic5.fdt.net/~jlewis/pgp for PGP public key____
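For readers unfamiliar with ICP, the inter-cache protocol Jon says he would accept in place of silent interception: a cache asks its configured peers over UDP whether they already hold a URL before fetching it itself, so participation is explicit and opt-in. A rough sketch of building an ICP v2 query along the lines of RFC 2186 follows; this is a best-effort illustration, not production code, and the request number and addresses are placeholders:

```python
import struct

ICP_OP_QUERY = 1   # opcode for "do you have this URL?" (RFC 2186)
ICP_VERSION = 2

def build_icp_query(url, reqnum, requester_ip=0):
    """Build an ICP_OP_QUERY datagram (illustrative sketch).

    A sibling cache sends this over UDP (port 3130 by convention); the
    peer answers ICP_OP_HIT or ICP_OP_MISS.  Layout: a 20-byte header
    (opcode, version, total length, request number, options, option
    data, sender address), then the query payload (requester address
    plus the null-terminated URL).
    """
    payload = struct.pack("!I", requester_ip) + url.encode("ascii") + b"\x00"
    length = 20 + len(payload)           # total message length, header included
    header = struct.pack("!BBHIIII",
                         ICP_OP_QUERY, ICP_VERSION, length,
                         reqnum,         # lets the sender match reply to query
                         0, 0, 0)        # options, option data, sender address
    return header + payload
```

The point of the sketch is the consent model: nothing is diverted; a cache only answers the peers that chose to ask it.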
As I don't have a direct Digex connection I can't comment on your question, but I do want to pose this: if Digex causes HTTP traffic to go through their proxy, are they 1. filtering content? and 2. becoming responsible for everything that goes through it? I seem to recall a lawsuit about CompuServe screening email or chat rooms. Anyone else think that Digex had better get some really good lawyers?

my two cents,
Mark Skinner
Verio Southern California
On Thu, Jun 25, 1998 at 04:11:18PM -0400, Jon Lewis wrote:
Has anyone else noticed Digex playing with transparent proxying on their backbone? We have one of our T1's through them, and found that all web traffic going out our Digex connection goes through a proxy. We've got customers with web sites that are broken now because they can't communicate with things like Cybercash, because their outgoing http requests are hijacked and sent through a Digex web cache.
Digex wants us to register each web server out on the rest of the internet that hosts from our network need to talk directly to. This looks like the beginning of a big PITA.
I wouldn't have a problem with Digex setting up some web caches and encouraging customers to setup their own caches and have them talk to the Digex ones via ICP...but caching everything without our knowledge/consent stinks.
Sigh...... why did I know this kind of crap (hijacking connections) was going to start. Grrr..... I understand why people do it, but I do NOT approve of it. -- -- Karl Denninger (karl@MCS.Net)| MCSNet - Serving Chicagoland and Wisconsin http://www.mcs.net/ | T1's from $600 monthly / All Lines K56Flex/DOV | NEW! Corporate ISDN Prices dropped by up to 50%! Voice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS Fax: [+1 312 803-4929] | *SPAMBLOCK* Technology now included at no cost
On Thu, 25 Jun 1998, Karl Denninger wrote:
I wouldn't have a problem with Digex setting up some web caches and encouraging customers to setup their own caches and have them talk to the Digex ones via ICP...but caching everything without our knowledge/consent stinks.
Sigh...... why did I know this kind of crap (hijacking connections) was going to start. Grrr.....
It's a good thing we got our own CIDR block when renumbering out of uunet space. If Digex doesn't reconsider this new policy, I'll be looking for quotes on non-proxied transit (T1 or Frame T1) in Gainesville, FL. ------------------------------------------------------------------ Jon Lewis <jlewis@fdt.net> | Spammers will be winnuked or Network Administrator | drawn and quartered...whichever Florida Digital Turnpike | is more convenient. ______http://inorganic5.fdt.net/~jlewis/pgp for PGP public key____
On Thu, Jun 25, 1998 at 07:49:30PM -0400, Jon Lewis wrote:
On Thu, 25 Jun 1998, Karl Denninger wrote:
I wouldn't have a problem with Digex setting up some web caches and encouraging customers to setup their own caches and have them talk to the Digex ones via ICP...but caching everything without our knowledge/consent stinks.
Sigh...... why did I know this kind of crap (hijacking connections) was going to start. Grrr.....
It's a good thing we got our own CIDR block when renumbering out of uunet space. If Digex doesn't reconsider this new policy, I'll be looking for quotes on non-proxied transit (T1 or Frame T1) in Gainesville, FL.
Well, I'd love to know where they think they get the authority to do this from in the first place.... that is, absent active consent. I'd be looking over contracts and talking to counsel if someone tried this with transit connections that I was involved in. Hijacking a connection without knowledge and consent might even run afoul of some kind of tampering or wiretapping statute (read: big trouble)..... -- -- Karl Denninger (karl@MCS.Net)| MCSNet - Serving Chicagoland and Wisconsin http://www.mcs.net/ | T1's from $600 monthly / All Lines K56Flex/DOV | NEW! Corporate ISDN Prices dropped by up to 50%! Voice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS Fax: [+1 312 803-4929] | *SPAMBLOCK* Technology now included at no cost
The moving finger of Karl Denninger, having written:
Karl> Well, I'd love to know where they think they get the authority to do this
Karl> from in the first place.... that is, absent active consent.
Karl> I'd be looking over contracts and talking to counsel if someone tried this
Karl> with transit connections that I was involved in. Hijacking a connection
Karl> without knowledge and consent might even run afoul of some kind of tampering
Karl> or wiretapping statute (read: big trouble).....

Hmmm, Title 18, Chapter 119 - Wire and Electronic Communications Interception... Sec 2511. Title 18, Section 1030 does not appear to apply. Standard disclaimers *do*, however, apply :-) See .sig for details.

-- Tom E. Perrine (tep@SDSC.EDU) | San Diego Supercomputer Center
http://www.sdsc.edu/~tep/ | Voice: +1.619.534.5000
Been there, done that, erased the evidence, blackmailed the witnesses...
We had the same experience with them -- we have a T1 and commercial web hosting service at safeport.com, and our proxy server on our firewall became extremely broken due to the web proxy they inserted between us and the world. Apparently our proxy was generating bad keep-alives in some form -- of course, none of the web servers we used to contact had a problem with them.

We received a few days warning before they started, however, so at least we had some idea what might have caused the sudden breakage of all out-going web access.

My personal feeling is that this might be fine for web-browsing dialup customers, but as a commercial service buying IP connectivity, we were rather upset.

On Thu, 25 Jun 1998, Jon Lewis wrote:
Has anyone else noticed Digex playing with transparent proxying on their backbone? We have one of our T1's through them, and found that all web traffic going out our Digex connection goes through a proxy. We've got customers with web sites that are broken now because they can't communicate with things like Cybercash, because their outgoing http requests are hijacked and sent through a Digex web cache.
Digex wants us to register each web server out on the rest of the internet that hosts from our network need to talk directly to. This looks like the beginning of a big PITA.
I wouldn't have a problem with Digex setting up some web caches and encouraging customers to setup their own caches and have them talk to the Digex ones via ICP...but caching everything without our knowledge/consent stinks.
------------------------------------------------------------------ Jon Lewis <jlewis@fdt.net> | Spammers will be winnuked or Network Administrator | drawn and quartered...whichever Florida Digital Turnpike | is more convenient. ______http://inorganic5.fdt.net/~jlewis/pgp for PGP public key____
Robert N Watson
Carnegie Mellon University http://www.cmu.edu/
TIS Labs at Network Associates, Inc. http://www.tis.com/
SafePort Network Services http://www.safeport.com/
robert@fledge.watson.org http://www.watson.org/~robert/
At 04:11 PM 6/25/98 -0400, Jon Lewis wrote:
Has anyone else noticed Digex playing with transparent proxying on their backbone? We have one of our T1's through them, and found that all web traffic going out our Digex connection goes through a proxy. We've got customers with web sites that are broken now because they can't communicate with things like Cybercash, because their outgoing http requests are hijacked and sent through a Digex web cache.
Digex wants us to register each web server out on the rest of the internet that hosts from our network need to talk directly to. This looks like the beginning of a big PITA.
Sounds more like time to change Internet vendors. You've now told me whom I DON'T want to do business with. Yeah, I know renumbering's a bitch. I just went through it myself.
I wouldn't have a problem with Digex setting up some web caches and encouraging customers to setup their own caches and have them talk to the Digex ones via ICP...but caching everything without our knowledge/consent stinks.
Yeah, it tells you when to change vendors.

___________________________________________________
Roeland M.J. Meyer, ISOC (InterNIC RM993)
e-mail: rmeyer@mhsc.com
Internet phone: hawk.mhsc.com
Personal web pages: www.mhsc.com/~rmeyer
Company web-site: www.mhsc.com/
___________________________________________
SecureMail from MHSC.NET is coming soon!
[...] playing with transparent proxying [...] [...] We've got customers with web sites that are broken now because they can't communicate with things like Cybercash [...]
The transparent proxy from one of our firewall vendors wasn't able to handle connections to Cybercash. Analysis showed that Cybercash was doing a half-close on their end of the socket, which the proxy took as a full-close, ending the session. Perhaps that's a common mistake?

-- Rob Quinn | rquinn@sprint.net | (703)689-6582 | Sprint Corporate Security
Opinions are _mine_, facts are facts.
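Rob Quinn's diagnosis - a half-close mistaken for a full close - is easy to reproduce with plain sockets. A minimal sketch (Python, Unix socketpair; not from the thread): after one side shuts down its write half, the other side sees EOF on reads, but the reverse direction must remain usable. A proxy that tears down both directions at the first EOF breaks the session exactly as described.

```python
import socket

def half_close_demo():
    """Show that shutdown(SHUT_WR) closes only one direction."""
    client, server = socket.socketpair()
    # The "server" (playing Cybercash's role) sends data, then half-closes:
    server.sendall(b"RESPONSE")
    server.shutdown(socket.SHUT_WR)     # FIN: "no more data from me"
    # The client sees the data, then EOF -- but only on the read side.
    assert client.recv(64) == b"RESPONSE"
    assert client.recv(64) == b""       # EOF on reads
    # The connection is still open for writing in the other direction;
    # a correct proxy must keep relaying this until *both* sides close.
    client.sendall(b"STILL TALKING")
    assert server.recv(64) == b"STILL TALKING"
    client.close()
    server.close()
    return True
```

A proxy that maps "recv returned empty" to "tear down the whole session" will drop the client's remaining data on the floor, which matches the observed failure.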
I wanted to take a moment to respond to this thread which has gotten somewhat inflamed. The problems being highlighted are not new or unknown and there are standard remedies in use by Inktomi's Traffic Server customers and other users of transparent caching. In fact, the two posters reporting concrete problems have both already had them swiftly remedied.

Transparent caching brings with it significant benefits to the ISPs and backbones who deploy it, but also to the dialup users, corporate customers and downstream ISPs who utilize those links. Cached content is delivered accurately and quickly, improving the experience of web surfing. Further, caching helps unload congested pipes permitting increased performance for non-HTTP protocols. Many people believe that large-scale caching is necessary and inevitable in order to scale the Internet into the future.

I will spend a few paragraphs talking about each of the concerns which have been expressed in this thread. Roughly, I think they are the following: disruption of existing services, correctness of cached content, and confidentiality/legal issues with transparent caching. We take all of these issues very seriously and have had dedicated resources in our development and technical support groups addressing them for some time.

The center of this debate concerns the rare disruption of existing services which can occur when transparent caching is deployed. Two concrete examples of this have been cited on this list: access to a Cybercash web server and access from an old Netscape proxy server. Both of these incidents were swiftly and easily corrected by the existing facilities available in Traffic Server.

The Cybercash server performed client authentication based on the IP address of the TCP connection. Placing a proxy (transparent or otherwise) in between clients and that server will break that authentication model. The fix was to simply configure Traffic Server to pass Cybercash traffic onwards without any attempt to proxy or cache the content.

The second example was of a broken keepalive implementation in an extremely early Netscape proxy cache. The Netscape proxy falsely propagated some proxy-keepalive protocol pieces, even though it was not able to support it. The fix was to configure Traffic Server to not support keepalive connections from that client. Afterwards, there were no further problems.

These two problems are examples of legacy issues. IP-based authentication is widely known to be a weak security measure. The Netscape server in question was years old. As time goes on, there will be a diminishing list of such anomalies to deal with. Inktomi works closely with all of our customers to diagnose any reported anomaly and configure the solution.

Beyond that, to scale this solution, Inktomi serves as a clearinghouse of these anomaly lists for all of our customers. A report from any one customer is validated and made available to other Traffic Server installations to preempt any further occurrences. Inktomi also conducts proactive audits both inside live Traffic Servers and via the extensive "web crawling" we perform as part of our search engine business. The anomalies discovered by these mechanisms are similarly made available to our customers.

The second issue being discussed is the correctness of cached content. Posters have suggested mass boycotting of caching by content providers concerned with the freshness of their content. Most content providers have no such concerns, frankly. The problem of dealing with cached content is well understood by publishers since caching has been in heavy use for years. Every web browser has a cache in it. AOL has been caching the lion's share of US home web surfers for years. For more information on the ways in which publishers benefit from caching see our white paper on the subject of caching dynamic and advertising content: http://www.inktomi.com/products/traffic/tech/ads.html

And finally, there has been confusion concerning the confidentiality and legal issues of transparent caching. Transparent caching does not present any new threat to the confidentiality of data or usage patterns. All of these issues are already present in abundance in the absence of caching. Individuals responsible for managing networks will have to weigh the advantages of caching against these more nebulous considerations. We, and many others looking towards the future of a scalable Internet, are confident that caching is becoming an integral part of the infrastructure, and provides many benefits to hosters, ISPs, backbones and surfers alike.

Paul Gauthier
-- Paul Gauthier, (650)653-2800
CTO, Inktomi Corporation
gauthier@inktomi.com
" ... and now, a word from our sponsor ..."
Inktomi's Traffic Server customers and other users of transparent
Transparent caching brings with it significant benefits to the ISPs and backbones who deploy it, but also to the
In this case, it brings a significant benefit to Digex, but hardly to the customers of Digex. Bluntly, it potentially saves Digex money when they use less of their infrastructure to deliver the same service to a client. I didn't say it was wrong, or bad, but let's be clear on who it is advantageous to.
dialup users, corporate customers and downstream ISPs who utilize those links. Cached content is delivered accurately
Accurately is a bone of contention. We've all seen what caching can do to time-sensitive web sites.
performance for non-HTTP protocols. Many people believe that large-scale caching is necessary and inevitable in order to scale the Internet into the future.
While it may be many people, they may all be wrong.
of cached content, and confidentiality/legal issues with
I didn't see any mention of copyright infringement, which will be an issue at some point. Also, there's the fact that if I am caching my customers' web servers, they are potentially getting free service.
These two problems are examples of legacy issues. IP-based authentication is widely known to be a weak security measure.
But this is irrelevant if people use it. That's a simple point.
Most content providers have no such concerns, frankly. The problem
Hah! This is a silly statement. Not to mention, when sites are selling advertising based upon hits on their site. Is there a way yet for the proxy to report back to the cached site that it served a cached copy of its website?
of a scalable Internet, are confident that caching is becoming an integral part of the infrastructure, and provides many benefits to hosters, ISPs, backbones and surfers alike.
This part, I agree with you on. -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- Atheism is a non-prophet organization. I route, therefore I am. Alex Rubenstein, alex@nac.net, KC2BUO, ISP/C Charter Member Father of the Network and Head Bottle-Washer Net Access Corporation, 9 Mt. Pleasant Tpk., Denville, NJ 07834 Don't choose a spineless ISP! We have more backbone! http://www.nac.net -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
On Fri, 26 Jun 1998 alex@nac.net wrote:
Accurately is a bone of contention. We've all seen what caching can do to time sensitive web-sites.
Not really - enlighten us... Web caching, when done responsibly by the caching provider, customer AND content server, has no "real" dangers of interfering with dynamic or time-sensitive data.
I didn't see any mention of copyright infringement, which will be an issue at some point. Also, there's the fact that if I am caching my customers' web servers, they are potentially getting free service.
Prefetching is really the only issue that borders on copyright infringement - all other content was requested by a third party and not the cache itself. In the arena of prefetching, where the cache is actively making a decision to gather the content, you may actually have a point for copyright infringement - but really a minor one.
Hah! This is a silly statement. Not to mention, when sites are selling advertising based upon hits on their site. Is there a way yet for the proxy to report back to the cached site that it served a cached copy of its website?
You don't need to report back hit stats, really - the content provider merely needs to make "something" non-cacheable with an appropriate HTTP Expires header - it can be something like a 1-pixel gif or maybe the actual text of the page - and take their hits off that. If "your" site comes up quickly, it will reflect better on the designer and owner of the site - caching actually benefits content providers.

-- I am nothing if not net-Q! - ras@poppa.clubrich.tiac.net
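The "make one small object non-cacheable" technique described above can be sketched with HTTP/1.0-era headers. The helper below is illustrative only (the function name and parameters are invented for the example): an Expires value at or before the current date marks the object as already stale, and Pragma: no-cache covers older caches, so every hit on that one tiny object still reaches the origin server and gets counted.

```python
from email.utils import formatdate

def uncacheable_headers(body_len, content_type="image/gif"):
    """Response headers for a tiny 'hit counter' object (e.g. a 1-pixel
    gif) that caches should always refetch from the origin server.
    HTTP/1.0-era technique: Expires set to 'now' means already stale."""
    now = formatdate(usegmt=True)        # RFC 1123 date, e.g. Thu, 25 Jun ...
    return (
        "HTTP/1.0 200 OK\r\n"
        f"Date: {now}\r\n"
        f"Expires: {now}\r\n"            # expired on arrival => never served stale
        "Pragma: no-cache\r\n"
        f"Content-Type: {content_type}\r\n"
        f"Content-Length: {body_len}\r\n"
        "\r\n"
    )
```

The rest of the page stays cacheable, so the reader still gets the speed benefit while the publisher still gets an accurate hit count off the one uncacheable object.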
On Fri, 26 Jun 1998, Paul Gauthier wrote:
Transparent caching brings with it significant benefits to the ISPs and backbones who deploy it, but also to the dialup users, corporate customers and downstream ISPs who utilize those links. Cached content is delivered accurately and quickly, improving the experience of web surfing. Further, caching helps unload congested pipes permitting increased performance for non-HTTP protocols.
Marketing blather. None of the benefits you cite above are due to transparent caching, not one! They are all the result of caching and can be had just as well by ISPs who give their customers a choice whether or not to use a proxy cache.
Many people believe that large-scale caching is necessary and inevitable in order to scale the Internet into the future.
These "many people" you talk about are fools who have no understanding of Internet scaling. There are many uses of the Internet that do not involve redundant transfers of identical files over the net and those uses are likely to increase significantly in the future. Internet phone and video conferencing are a couple of examples but there are many more. Caching is good to have for a certain limited number of reasons but it will not help scale the Internet into the future.
are the following: disruption of existing services, correctness of cached content, and confidentiality/legal issues with transparent caching. We take all of these issues very seriously and have had dedicated resources in our development and technical support groups addressing them for some time.
I take this to mean that Inktomi advises its customers to disrupt their services by implementing transparent caching on non-dialup customers?
The center of this debate concerns the rare disruption of existing services which can occur when transparent caching is deployed.
It's not a rare disruption, it is a *CONSTANT* disruption of service. When a customer pays a provider for transit, the provider should not be intercepting the data streams and substituting another stream that they *THINK* will be identical. It is an entirely different thing when a provider either gives a choice to their customer of using a proxy cache, or notifies the customer that their service is routed through a cache. This kind of thing is a benefit to the individual dialup customer and they can always choose to not use the cache or to pay for a different access service that does not force caching.
We, and many others looking towards the future of a scalable Internet, are confident that caching is becoming an integral part of the infrastructure, and provides many benefits to hosters, ISPs, backbones and surfers alike.
The only place that caching has in a backbone network is as an optional service that backbone customers can hook their own cache into via ICP if they so choose. Caching is a temporary hack that provides some limited benefit at the moment but the need for it is fading except in areas that are a couple of years behind in bandwidth deployment. -- Michael Dillon - Internet & ISP Consulting Memra Communications Inc. - E-mail: michael@memra.com Check the website for my Internet World articles - http://www.memra.com
In a previous episode Michael Dillon said... :: [...] Caching is a temporary hack that provides some limited :: benefit at the moment but the need for it is fading except in areas that :: are a couple of years behind in bandwidth deployment. I can't believe you said that. Hierarchical caching reduces latency and increases availability _in addition to_ conserving bandwidth. Those (particularly the second) will remain critically important features no matter if you're traversing a 512k line or an OC48... -P
On Fri, 26 Jun 1998, Paul Gauthier wrote:

For the past 4 years, AS378 has blocked port 80 (the entire academic network of 8 universities and many colleges) and forced all users through multiple proxy servers (most recently Squid). Over the years we had to build up a list of sites to bypass. But in addition, one has to provide the ability to bypass based on source IP address (some users just don't want it - even if it speeds up their page downloads and toasts their bread at the same time). From what I have seen, the Alteon/Inktomi/Netcache/Cisco solutions do *not* allow for an unlimited bypass list based on either destination or source IP address. When that happens, the ISP - Digex in this case - can have a simple authenticated web page where a customer can add their CIDR block to a bypass list in the transparent proxy. Till then, all the bashing will continue.

Add to the things that will break: simplex or asymmetric routing. More and more customers are ordering simplex satellite lines. Imagine a European company that buys a 512kb line from an ISP but also buys a T1 simplex satellite line to augment bandwidth. The http request goes out with the sat-link CIDR block as source. The request hits the transparent proxy for a USA-based page. The proxy retrieves the page from the USA, using its expensive transatlantic link. The page hits the proxy. Now the transparent proxy needs to deliver the page. But the requestor's IP address is located at some satellite provider in the USA (previously discussed here), so the transparent proxy routes the page back across the Atlantic for delivery via the satellite simplex line.

The same problems happen with asymmetric routing. I believe Vern has a study that shows that 60% of all routes on the Internet are asymmetric.

Bottom line: without bypass based on source or destination, the bashing will continue.

Hank Nussbacher
Israel
I wanted to take a moment to respond to this thread which has gotten somewhat inflamed. The problems being highlighted are not new or unknown and there are standard remedies in use by Inktomi's Traffic Server customers and other users of transparent caching. In fact, the two posters reporting concrete problems have both already had them swiftly remedied.
Transparent caching brings with it significant benefits to the ISPs and backbones who deploy it, but also to the dialup users, corporate customers and downstream ISPs who utilize those links. Cached content is delivered accurately and quickly, improving the experience of web surfing. Further, caching helps unload congested pipes permitting increased performance for non-HTTP protocols. Many people believe that large-scale caching is necessary and inevitable in order to scale the Internet into the future.
I will spend a few paragraphs talking about each of the concerns which have been expressed in this thread. Roughly, I think they are the following: disruption of existing services, correctness of cached content, and confidentiality/legal issues with transparent caching. We take all of these issues very seriously and have had dedicated resources in our development and technical support groups addressing them for some time.
The center of this debate concerns the rare disruption of existing services which can occur when transparent caching is deployed. Two concrete examples of this have been cited on this list: access to a Cybercash web server and access from an old Netscape proxy server. Both of these incidents were swiftly and easily corrected by the existing facilities available in Traffic Server.
The Cybercash server performed client authentication based on the IP address of the TCP connection. Placing a proxy (transparent or otherwise) in between clients and that server will break that authentication model. The fix was to simply configure Traffic Server to pass Cybercash traffic onwards without any attempt to proxy or cache the content.
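The authentication model described above breaking is easy to see in miniature. A hypothetical sketch (the addresses, set name, and function below are invented for the example, not taken from the thread): the server keys its check on the TCP peer address, and a transparent proxy, which terminates the client's connection and re-originates its own toward the server, substitutes its own address.

```python
# Hypothetical server-side check, keyed on the TCP peer address --
# the model the Cybercash server reportedly used.
TRUSTED_PEERS = {"192.0.2.10"}      # example: the merchant address on file

def authorize(peer_addr):
    """Pass only connections arriving from a registered IP address."""
    return peer_addr in TRUSTED_PEERS

# Direct connection: the server sees the merchant's own address.
assert authorize("192.0.2.10")

# Through a transparent proxy: the server sees the proxy's address,
# so a perfectly legitimate client is refused.
assert not authorize("203.0.113.7")
```

This is why the only cache-side fix is a bypass rule: no amount of proxy cleverness can restore the original source address once the connection has been re-originated (short of spoofing it, which creates its own routing problems).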
The second example was of a broken keepalive implementation in an extremely early Netscape proxy cache. The Netscape proxy falsely propagated some proxy-keepalive protocol pieces, even though it was not able to support it. The fix was to configure Traffic Server to not support keepalive connections from that client. Afterwards, there were no further problems.
These two problems are examples of legacy issues. IP-based authentication is widely known to be a weak security measure. The Netscape server in question was years old. As time goes on, there will be a diminishing list of such anomalies to deal with. Inktomi works closely with all of our customers to diagnose any reported anomaly and configure the solution.
Beyond that, to scale this solution, Inktomi serves as a clearinghouse of these anomaly lists for all of our customers. A report from any one customer is validated and made available to other Traffic Server installations to preempt any further occurrences.
Inktomi also conducts proactive audits both inside live Traffic Servers and via the extensive "web crawling" we perform as part of our search engine business. The anomalies discovered by these mechanisms are similarly made available to our customers.
The second issue being discussed is the correctness of cached content. Posters have suggested mass boycotting of caching by content providers concerned with the freshness of their content. Most content providers have no such concerns, frankly. The problem of dealing with cached content is well understood by publishers since caching has been in heavy use for years. Every web browser has a cache in it. AOL has been caching the lion's share of US home web surfers for years. For more information on the ways in which publishers benefit from caching see our white paper on the subject of caching dynamic and advertising content: http://www.inktomi.com/products/traffic/tech/ads.html
And finally, there has been confusion concerning the confidentiality and legal issues of transparent caching. Transparent caching does not present any new threat to the confidentiality of data or usage patterns. All of these issues are already present in abundance in the absence of caching. Individuals responsible for managing networks will have to weigh the advantages of caching against these more nebulous considerations. We, and many others looking towards the future of a scalable Internet, are confident that caching is becoming an integral part of the infrastructure, and provides many benefits to hosters, ISPs, backbones and surfers alike.
Paul Gauthier
-- Paul Gauthier, (650)653-2800 CTO, Inktomi Corporation gauthier@inktomi.com
Cisco policy routing can use the source IP address for deciding to pass traffic to the cache engine. The cache engine can normally be configured to exempt destinations. I believe that this fixes both issues. Expecting the customer to have a clue to go to a www page is a bit much, though. Some customers have set up IP-based authentication on their NT server, but can't figure out how to configure SSL, which wouldn't be cached and would be more secure. The burden of making this work is on the cache operator. Also, it turns out that the sites with the most problems with the cache are the ones paying the least money for service. It's hard to feel very sorry for a $20/month dialup customer who is connecting to his corporate site with a broken NT server.

If customers are using proxies that break, it's easy enough for them to speak ICP and still get the same operational conditions, as far as the ISP side is concerned.

As far as the asymmetric routing issue, the traffic INSIDE the ISP isn't asymmetric, and shouldn't need to be cached. I don't really see the problem here. (But it could be me.)

In message <Pine.A41.3.96-heb-2.07.980627214536.55182A-100000@max.ibm.net.il>, Hank Nussbacher writes:
On Fri, 26 Jun 1998, Paul Gauthier wrote: From what I have seen, the Alteon/Inktomi/Netcache/Cisco solutions do *not* allow for an unlimited bypass list - both based on destination or source IP address. When that happens, the ISP, Digex in this case, can have a simple authenticated web page where a customer can add their CIDR block to a bypass list in the transparent proxy. Till then, all the bashing will continue.
Add to the things that will break - simplex or asymetrric routing. More and more customers are ordering simplex satellite lines. Imagine a European company that buys a 512kb line from an ISP but also buys a T1 simplex satellite line to augment b/w. The http request goes out with the sat-link CIDR block as source. The request hits the transparent proxy for a USA based page. The proxy retrieves the page from the USA, using its expensive transAtlantic link. Page hits the proxy. Now the transparent proxy needs to deliver the page. But the requestors IP address is located at some satellite provider in the USA (previously discussed here), so the transparent proxy routes the page back across the Atlantic for delivery via the satellite simplex line.
The same problems happen with asymmetric routing. I believe Vern has a study showing that 60% of all routes on the Internet are asymmetric.
Bottom line: w/o bypass based on source or destination, the bashing will continue.
--- Jeremy Porter, Freeside Communications, Inc. jerry@fc.net PO BOX 80315 Austin, Tx 78708 | 512-458-9810 http://www.fc.net
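The authenticated bypass-registration page Hank suggests could be sketched roughly as follows. This is a hypothetical illustration only: the function names and addresses are made up, and a real deployment would sit behind an authenticated web form and push the resulting list to the cache engine as an ACL.

```python
import ipaddress

# Hypothetical in-memory bypass list for a transparent proxy.
bypass_list = set()

def register_bypass(cidr):
    """Validate a customer-supplied CIDR block and add it to the bypass list.

    Raises ValueError if the input is not a well-formed network block,
    so junk submitted through the web form never reaches the cache config.
    """
    net = ipaddress.ip_network(cidr, strict=True)
    bypass_list.add(net)
    return net

def should_bypass(ip):
    """True if traffic to/from this address should skip the cache."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in bypass_list)
```

For example, after `register_bypass("192.0.2.0/24")`, a request from `192.0.2.17` would be exempted from redirection while other sources still hit the cache.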
At 09:37 PM 6/27/98 -0500, Jeremy Porter wrote:
Cisco policy routing can use the source IP address to decide whether to pass traffic to the cache engine, and the cache engine can normally be configured to exempt destinations. I believe that fixes both issues. Expecting the customer to have enough clue to go to a www page is a bit much, though. Some customers have set up
I find it ridiculous to suggest that an ACL be built and modified for each and every "broken" thing you find. I wouldn't be surprised if the resources necessary to keep this up - especially considering the potential customer dissatisfaction it *will* cause - outweigh the benefit of the cache.
IP-based authentication on their NT server, but can't figure out how to configure SSL, which wouldn't be cached and would be more secure. The burden of making this work is on the cache operator. Also, it turns out that the sites with the most problems with the cache are the ones paying the least money for service. It's hard to feel very sorry for a $20/month dialup customer who is connecting to his corporate site with a broken NT server.
If you are just now figuring out that there are users who are clueless on the Internet, you're way behind the curve. If you figured this out a long time ago and have simply dismissed those users - even the $20/mo dialup customers - as "hard to feel very sorry for", then I'm surprised you are still in business. I give all of my users transit to their desired destination when they pay me for it. Not just those clueful enough to configure exceptions to the proxy services I have decided to ram down their throat - without their foreknowledge or consent. You are, of course, welcome to do as you please on your network.
Jeremy Porter, Freeside Communications, Inc. jerry@fc.net
TTFN, patrick ************************************************************** Patrick W. Gilmore voice: +1-650-482-2840 Director of Operations, CCIE #2983 fax: +1-650-482-2844 PRIORI NETWORKS, INC. http://www.priori.net "Tomorrow's Performance.... Today" **************************************************************
In message <3.0.5.32.19980628014919.01258e00@priori.net>, "Patrick W. Gilmore" writes:
At 09:37 PM 6/27/98 -0500, Jeremy Porter wrote:
Cisco policy routing can use the source IP address to decide whether to pass traffic to the cache engine, and the cache engine can normally be configured to exempt destinations. I believe that fixes both issues. Expecting the customer to have enough clue to go to a www page is a bit much, though. Some customers have set up
I find it ridiculous to suggest that an ACL be built and modified for each and every "broken" thing you find. I wouldn't be surprised if the resources necessary to keep this up - especially considering the potential customer dissatisfaction it *will* cause - outweigh the benefit of the cache.
Well, it would be ideal for the cache vendor to fix the broken things, or supply technical fixes to the broken sites in question. I don't think it is unreasonable for people to follow RFCs and Best Current Practices documents. Perhaps if all this software out there weren't so crappy, we wouldn't have to patch the applications at the network level. There is absolutely no technical reason why browsers cannot autoconfigure for caching EVERY time. Netscape and Microsoft are just not interested in implementing this. (All they have to do is set up a source address registry for caches.)
IP-based authentication on their NT server, but can't figure out how to configure SSL, which wouldn't be cached and would be more secure. The burden of making this work is on the cache operator. Also, it turns out that the sites with the most problems with the cache are the ones paying the least money for service. It's hard to feel very sorry for a $20/month dialup customer who is connecting to his corporate site with a broken NT server.
If you are just now figuring out that there are users who are clueless on the Internet, you're way behind the curve. If you figured this out a long time ago and have simply dismissed those users - even the $20/mo dialup customers - as "hard to feel very sorry for", then I'm surprised you are still in business.
Please, this sort of attack is really uncalled for. If you don't understand the business case for not supporting all users, then I'm surprised you are in business. Some customers' demands exceed the value of the customer. 90% of the support costs come from 10% of the user base. Why spend that money when you don't have to? I could give you a list of companies with similar strategies, just to rub your face in your comments, as those companies are doing a lot better than yours.
I give all of my users transit to their desired destination when they pay me for it. Not just those clueful enough to configure exceptions to the proxy services I have decided to ram down their throat - without their foreknowledge or consent.
You are, of course, welcome to do as you please on your network.
If you want to spend 30% more than I do to service a customer base that is 10% of the revenues, please feel free.
Jeremy Porter, Freeside Communications, Inc. jerry@fc.net
TTFN, patrick
--- Jeremy Porter, Freeside Communications, Inc. jerry@fc.net PO BOX 80315 Austin, Tx 78708 | 512-458-9810 http://www.fc.net
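For what it's worth, a browser-side autoconfiguration mechanism did exist at the time: proxy auto-config (PAC) files, which both Netscape and Microsoft browsers could fetch. A minimal hypothetical PAC file (hostnames and the port are invented for illustration) that sends ordinary HTTP through the ISP's cache while letting sensitive traffic go direct might look like this:

```javascript
// Hypothetical PAC file; cache.example.net and payments.example.com are
// placeholder names, not real hosts.
function FindProxyForURL(url, host) {
  // Never proxy HTTPS, or hosts that must see the real client IP
  // (e.g. the payment-gateway breakage described earlier in the thread).
  if (url.substring(0, 6) === "https:" || host === "payments.example.com") {
    return "DIRECT";
  }
  // Everything else goes through the ISP's cache, falling back to direct.
  return "PROXY cache.example.net:3128; DIRECT";
}
```

This puts the exemption logic in the browser rather than in router ACLs, but it only helps customers who actually configure their browsers, which is the crux of the disagreement above.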
participants (15)
-
alex@nac.net
-
Hank Nussbacher
-
Jeremy Porter
-
Jon Lewis
-
Karl Denninger
-
Mark Skinner
-
Michael Dillon
-
Patrick McManus
-
Patrick W. Gilmore
-
Paul Gauthier
-
Rich Sena
-
Rob Quinn
-
Robert Watson
-
Roeland M.J. Meyer
-
Tom Perrine