Re: Why aren't ISPs providing stratum 1 NTP service?
In article <hot.mailing-lists.nanog-Pine.BSI.3.93.960719091042.14736C-100000@sidhe.memra.com>, Michael Dillon <michael@memra.com> wrote:
And heavily used WWW servers are another thing that could benefit from aligning themselves with the topology.
The protocols don't support this cleanly. So far nothing I've seen would allow a single URL to be used to access the "nearest" server. Until something like that exists (i.e. the end users don't need to know a thing about network topology) it seems pointless to align WWW servers with the topology. Your suggested use of redirects just complicates things -- consider how the URLs would end up looking in a search engine.

Dean
On 21 Jul 1996, Dean Gaudet wrote:
And heavily used WWW servers are another thing that could benefit from aligning themselves with the topology.
The protocols don't support this cleanly. So far nothing I've seen would allow a single URL to be used to access the "nearest" server.
WWW servers can issue a "redirect" to a different URL. Anybody can hack this up with something like Apache by adding index.cgi to the index page possibilities and then enabling .cgi as an extension to automatically run a CGI script which could issue the redirect in the HTTP headers instead of emitting an HTML document. The CGI script would only need to be a simple table lookup similar to what a Cisco SSP does. Of course, there needs to be something more intelligent (like BGP, to continue the analogy) that builds and maintains the lookup table based on some sort of heuristics.
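A minimal sketch of such a lookup script, in Python for illustration (a 1996 deployment would more likely be Perl or shell); the prefixes, mirror URLs, and the REDIRECT_TABLE name are all invented, and a real table would be built by the BGP-like process described above:

    #!/usr/bin/env python
    # index.cgi -- redirect the client toward a "nearby" mirror via a
    # simple table lookup on the client's address.  The table here is
    # hypothetical; something smarter would build and refresh it.
    import os

    REDIRECT_TABLE = {
        "192.0.2.":    "http://www-us.example.com/",  # invented US mirror
        "198.51.100.": "http://www-eu.example.com/",  # invented EU mirror
    }
    DEFAULT = "http://www.example.com/"

    def pick(addr):
        for prefix, url in REDIRECT_TABLE.items():
            if addr.startswith(prefix):
                return url
        return DEFAULT

    # CGI scripts emit HTTP headers on stdout; a Status plus a Location
    # header makes the server answer with a redirect instead of a page.
    print("Status: 302 Found")
    print("Location: " + pick(os.environ.get("REMOTE_ADDR", "")))
    print()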
Until something like that exists
It exists right now. Several people are doing this kind of thing. It's just not an off-the-shelf product. Yet.
Your suggested use of redirects just complicates things -- consider how the URLs would end up looking in a search engine.
Life is never perfect. ;-)

Michael Dillon - ISP & Internet Consulting
Memra Software Inc. - Fax: +1-604-546-3049
http://www.memra.com - E-mail: michael@memra.com
In message <Pine.BSI.3.93.960721163003.15875B-100000@sidhe.memra.com>, Michael Dillon writes:
WWW servers can issue a "redirect" to a different URL.
I know that. As I said, I don't consider a different URL to be a good solution. It confuses users. It doesn't interoperate well with WWW-authentication (you have to auth to each differently named server... suppose your net connection fluctuates while you're surfing and you get bounced elsewhere). It pollutes search engines with many duplicate URLs -- which is confusing for users. (Yeah you could just special-case a few big search engines and never redirect them... hack.) Perhaps the largest problem, which actually acts against what you're trying to solve by doing this, is that it doesn't operate well with client or proxy caches. Every time there's a net burp, the users' caches are invalidated (because of a redirect) and they have to reload everything.
Life is never perfect. ;-)
Yep. Redirects are far from it. There are better solutions... and we'd be better off if one of them is standardized and we use it instead.

Dean
On Sun, 21 Jul 1996, Dean Gaudet wrote:
WWW-authentication (you have to auth to each differently named server... suppose your net connection fluctuates while you're surfing and you get bounced elsewhere).
Route dampening...
Life is never perfect. ;-)
There are better solutions... and we'd be better off if one of them is standardized and we use it instead.
Is anything like this planned for HTTP 1.1? or 1.2? It might be easier to get something like this into a standard if somebody hacks up part of the solution and runs it for a while as I described with redirects. Then the standards makers can say, "Hey, here is a great idea and we can solve all these problems with it by incorporating it into HTTP 1.2".

Michael Dillon - ISP & Internet Consulting
Memra Software Inc. - Fax: +1-604-546-3049
http://www.memra.com - E-mail: michael@memra.com
On Sun, 21 Jul 1996, Michael Dillon wrote:
On Sun, 21 Jul 1996, Dean Gaudet wrote:
There are better solutions... and we'd be better off if one of them is standardized and we use it instead.
Is anything like this planned for HTTP 1.1? or 1.2? It might be easier to get something like this into a standard if somebody hacks up part of the solution and runs it for a while as I described with redirects. Then the standards makers can say, "Hey, here is a great idea and we can solve all these problems with it by incorporating it into HTTP 1.2".
I don't follow HTTP evolution, so I could be missing something, but for distributed web service that is transparent to users, you'd need something that keeps track of RTT at the client level that has some persistence.

-dorian
On Mon, 22 Jul 1996, Dorian R. Kim wrote:
I don't follow HTTP evolution, so I could be missing something, but for distributed web service that is transparent to users, you'd need something that keeps track of RTT at the client level that has some persistence.
Nope. You only need a form of redirect that is transparent to the end user so their bookmark files etc. will always refer to the original master controller site. And some similar mods to webcrawlers and caches. The actual analysis of topology would be separate from serving up pages and would likely use more than one technique depending on the situation. Of course all this is hypothetical right now and would have minimal impact on network operations except for XP operators who have to build out more colo rack space....

Michael Dillon - ISP & Internet Consulting
Memra Software Inc. - Fax: +1-604-546-3049
http://www.memra.com - E-mail: michael@memra.com
On Sun, 21 Jul 1996, Michael Dillon wrote:
WWW servers can issue a "redirect" to a different URL. Anybody can hack this up with something like Apache by adding index.cgi to the index page possibilities and then enabling .cgi as an extension to automatically run a CGI script which could issue the redirect in the HTTP headers instead of emitting an HTML document. The CGI script would only need to be a simple table lookup similar to what a Cisco SSP does. Of course, there needs to be something more intelligent (like BGP, to continue the analogy) that builds and maintains the lookup table based on some sort of heuristics.
Isn't IBM doing some sort of fancy load redistribution for the WWW servers it's running for the 1996 Summer Olympics? I seem to recall they were determining the "closest" server via a technique called "ping triangulation", whatever that is.

Christopher E. Stefan            flatline@ironhorse.com
http://www.ironhorse.com/~flatline    finger for PGP key
System Administrator             Phone: (206) 783-6636
Ironhorse Software, Inc.         FAX: (206) 783-4591
Isn't IBM doing some sort of fancy load redistribution for the WWW servers it's running for the 1996 Summer Olympics? I seem to recall they were determining the "closest" server via a technique called "ping triangulation", whatever that is.
If so, that's pretty silly and is probably not leading to performance that's very different from either purely random or strict round robin. Ping times only correlate to available bandwidth when congestion is bad. Otherwise these two factors are militantly unrelated to each other.
Vix,

] > Isn't IBM doing some sort of fancy load redistribution for the WWW
] > servers it's running for the 1996 Summer Olympics? I seem to recall they
] > were determining the "closest" server via a technique called "ping
] > triangulation", whatever that is.
]
] If so, that's pretty silly and is probably not leading to performance
] that's very different from either purely random or strict round robin.

Agreed.

] Ping times only correlate to available bandwidth when congestion is bad.
] Otherwise these two factors are militantly unrelated to each other.

Agreed very much. However, ping times can be used as latency determination metrics.

It may be that some really sexy dns server pings the dns query originator, and picks up NAP transit points en route, returning the appropriately close-ish web server as an A record.

Unlikely, but slightly intriguing to think about.

-alan
On Fri, 26 Jul 1996, Alan Hannan wrote:
Isn't IBM doing some sort of fancy load redistribution for the WWW servers its running for the 1996 Summer Olympics? I seem to recall they were determining the "closest" server via a technique called "ping triangulation", whatever that is.
Ping times only correlate to available bandwidth when congestion is bad. Otherwise these two factors are militantly unrelated to each other.
Triangulation implies that they are sampling ping times from two or more different routes. They may be relying on this to warn them when paths are becoming congested as opposed to using ping times as a means of measuring which is the better of two uncongested paths. If the ping triangulation is a process separate from web serving that is continuously sampling paths and adjusting web server routing (redirects) dynamically then it may well have some value.

Michael Dillon - ISP & Internet Consulting
Memra Software Inc. - Fax: +1-604-546-3049
http://www.memra.com - E-mail: michael@memra.com
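If the sampling really is a separate, continuously running process, it might look something like this Python sketch. The addresses are invented, a TCP connect stands in for an ICMP ping (which needs raw sockets), and sampling from a single vantage point makes this a congestion check rather than true triangulation:

    import socket
    import time

    # Hypothetical mirror addresses; each maps to a smoothed RTT in
    # seconds, or None when the mirror is unreachable.
    MIRROR_RTT = {"192.0.2.10": None, "198.51.100.10": None}

    def sample(addr, port=80, timeout=2.0):
        start = time.time()
        try:
            socket.create_connection((addr, port), timeout).close()
            return time.time() - start
        except OSError:
            return None  # unreachable: drop from the redirect rotation

    def sample_forever(interval=60, alpha=0.3):
        while True:
            for addr in MIRROR_RTT:
                rtt = sample(addr)
                old = MIRROR_RTT[addr]
                if rtt is None or old is None:
                    MIRROR_RTT[addr] = rtt
                else:
                    # EWMA smooths single-sample noise so the redirect
                    # table only shifts when a path stays congested.
                    MIRROR_RTT[addr] = alpha * rtt + (1 - alpha) * old
            time.sleep(interval)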
They use the ping times to figure out which server would be closest. The servers are not all located in the same place. The idea is that European users may receive better service from a European server.

Roy
On Sat, 27 Jul 1996, Roy wrote:
They use the ping times to figure out which server would be closest. The servers are not all located in the same place. The idea is that European users may receive better service from a European server.
This brings to mind a question: are ping times a more appropriate vector than hopcount or topological locality? Ping times reflect a lot of important (but ephemeral) aspects of performance which more direct measurements do not. E.g., the latency of trans-pond links nicely reflects their cost in a manner not easily captured in simple topology maps. Ditto for congested links which might be closer to the viewer. Of course caching solves all of these problems (J <- hook next to bait), but in this most imperfect of worlds, what reasons, if any, make ping time less attractive than other metrics? I used to think them simple-minded and sloppy, but now I am not so sure.

_____________________________________________________________________
Todd Graham Lewis                              Linux!
Core Engineering                               Mindspring Enterprises
tlewis@mindspring.com                          (800) 719 4664, x2804
At 1996-07-27 01:54 +0000, Alan Hannan wrote:
] Ping times only correlate to available bandwidth when congestion is bad.
] Otherwise these two factors are militantly unrelated to each other.
Agreed very much. However, ping times can be used as latency determination metrics.
And latency is disproportionately important when you're concerned only with HTTP performance.

However, pings *can* be used to measure bandwidth, by sending multiple sizes of ping and measuring the difference in response times between small and large pings. On a graph of ping size vs. response time, you'd see the y-intercept representing base latency and the slope representing bandwidth. This is how bing does its work.

--
Shields, CrossLink.
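A worked version of that graph, as a least-squares fit in Python over invented (ping size, RTT) samples; real numbers would come from pings of several sizes, and this sketch ignores that an echo crosses the path twice:

    # Fit rtt = base_latency + slope * size to (size, rtt) samples.
    # The y-intercept is the base latency; bandwidth is roughly the
    # reciprocal of the slope (ignoring that an echo makes the trip
    # twice).  The sample points below are invented for illustration.
    samples = [(64, 0.0205), (512, 0.0240), (1024, 0.0281), (1400, 0.0310)]

    n = len(samples)
    mean_x = sum(size for size, _ in samples) / n
    mean_y = sum(rtt for _, rtt in samples) / n
    slope = (sum((s - mean_x) * (r - mean_y) for s, r in samples) /
             sum((s - mean_x) ** 2 for s, _ in samples))
    base_latency = mean_y - slope * mean_x   # seconds
    bandwidth = 1.0 / slope                  # bytes per second

    print("base latency: %.1f ms" % (base_latency * 1000))
    print("bandwidth:    %.0f bytes/sec" % bandwidth)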
On 21 Jul 1996, Dean Gaudet wrote:
In article <hot.mailing-lists.nanog-Pine.BSI.3.93.960719091042.14736C-100000@sidhe.memra.com>, Michael Dillon <michael@memra.com> wrote:
And heavily used WWW servers are another thing that could benefit from aligning themselves with the topology.
The protocols don't support this cleanly. So far nothing I've seen would allow a single URL to be used to access the "nearest" server. Until something like that exists (i.e. the end users don't need to know a thing about network topology) it seems pointless to align WWW servers with the topology. Your suggested use of redirects just complicates things -- consider how the URLs would end up looking in a search engine.
You have to get a decent metric for "nearest" down first, and then be able to measure and use it. Something basic like AS path lengths doesn't work, so you'll probably end up having to use something like history of RTTs. Not an easy problem to solve, to put it mildly.

-dorian
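For what it's worth, the "history of RTTs" part is simple to sketch in Python: keep a smoothed RTT per server and persist it across sessions. The file path, the alpha, and the helper names are all invented:

    import json
    import os

    HISTORY_FILE = os.path.expanduser("~/.rtt_history.json")  # invented path

    def load_history():
        try:
            with open(HISTORY_FILE) as f:
                return json.load(f)
        except (OSError, ValueError):
            return {}

    def record(history, server, rtt_seconds, alpha=0.3):
        # Exponentially weighted average: recent samples count more,
        # but one bad sample doesn't erase the history.
        old = history.get(server)
        history[server] = (rtt_seconds if old is None
                           else alpha * rtt_seconds + (1 - alpha) * old)
        with open(HISTORY_FILE, "w") as f:
            json.dump(history, f)

    def nearest(history, servers):
        # Unknown servers sort last, so they still get tried eventually.
        return min(servers, key=lambda s: history.get(s, float("inf")))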
From: Dean Gaudet <dgaudet@hotwired.com>
Date: 21 Jul 1996 22:30:38 GMT

Michael Dillon <michael@memra.com> wrote:
And heavily used WWW servers are another thing that could benefit from aligning themselves with the topology.
    The protocols don't support this cleanly.

The _protocols_ DO support this (although "cleanly" could be questioned). Very few clients have ever attempted to implement it, and few of them even came close to getting it right... although there were some pretty good examples a decade ago...

    So far nothing I've seen would allow a single URL to be used to access the "nearest" server.

Here's how the current standards allow this to work(*): the URL has a "fictitious" hostname, which resolves via the DNS to a set of A records, one for each of the redundant servers that carry the data (these servers can be extremely widely separated). The client then applies an algorithm to select the "nearest" one (by its definition of nearest). In fact the spec says you should rank the addresses and try several of them in order, in case one is non-working. This is _exactly_ the functionality that distributed web servers need. The only possibly "unclean" thing about this is using the fictitious host name.

Of course, the weak link here is allowing the client to make this decision. Most clients use the stupidest algorithm available, which is to pick one (and only one, another violation) of the addresses, essentially at random, and use that. This has as likely a chance to pick the worst one as it does to pick the best one. There are several heuristics available to any client program (some were implemented over a decade ago :-) and there is some work (SONAR and SRV) addressing better answers.

Unfortunately, there is a chicken-and-egg problem here. The client programmers have no incentive to implement the better algorithms because no services are provisioned this way, and that's because few clients would make use of the distributed nature.

-MAP

(*) And a clearly "unclean" way, which will work with existing clients, is to take a routing prefix and distribute _that_ around. The servers would all have the same address (making them hard to manage), and the routing prefix would be advertised from each of these locales, and routing would find you the "closest" one. After all, you really are looking at a routing problem here. (Although something about hammers and looking like nails comes to mind. :-)
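A sketch of the client behavior described above, in Python for illustration: resolve one hostname to all of its A records, rank them by a crude nearness metric, and walk the list in order. TCP connect time stands in for whatever heuristic a real client might use, and www.example.com is a placeholder:

    import socket
    import time

    def addresses(host, port=80):
        # All the A records the resolver returns for one hostname.
        infos = socket.getaddrinfo(host, port, socket.AF_INET,
                                   socket.SOCK_STREAM)
        return sorted({info[4][0] for info in infos})

    def connect_time(addr, port=80, timeout=2.0):
        start = time.time()
        try:
            socket.create_connection((addr, port), timeout).close()
            return time.time() - start
        except OSError:
            return float("inf")  # non-working: rank it last

    def ranked(host):
        # Rank by measured nearness, then try in order, falling back
        # to the next address when a connection fails.
        return sorted(addresses(host), key=connect_time)

    # e.g. ranked("www.example.com")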
In message <mpatton=1996Jul22151319@bart.bbn.com>, "Michael A. Patton" writes:
The _protocols_ DO support this (although "cleanly" could be questioned). Very few clients have ever attempted to implement it, and few of them even came close to getting it right...although there were some pretty good examples a decade ago...
You're right... I forgot that "sortlist on steroids" could really solve this. Maybe it's time for bind and gated to be married? :)

Another kludge I was thinking of that would almost work is to set up DNS like this:

    www    IN NS    www1
    www    IN NS    www2
    www    IN NS    www3

Then each of www1, www2, www3 has a different zone for www that lists only itself. Then let bind figure out the one with the best rtt. Biggest problem is convergence -- which could be assisted if the clients actually timed out their DNS. They're arguably broken now -- they do a single lookup and use it forever (after renumbering web servers I've seen packets for the old address up to a month later -- I'm guessing people who don't restart their browser frequently, and it still has the old address cached).
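Concretely, each delegated server would then publish a one-A-record zone for www naming only itself, something like the following (the SOA values and the address are invented for illustration):

    ; The zone for "www" as served by www1; www2 and www3 publish the
    ; same zone but with their own address in the A record.
    @   IN  SOA  www1.example.com. hostmaster.example.com. (
                 1 3600 900 604800 300 )
    @   IN  NS   www1.example.com.
    @   IN  A    192.0.2.1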
Of course, the weak link here is allowing the client to make this decision. Most clients use the stupidest algorithm available, which is to pick one (and only one, another violation) of the addresses, essentially at random, and use that. This has as likely a chance to pick the worst one as it does to pick the best one.
Rumour has it that netscape will have this fixed in an upcoming version. At least they'll time out one address and move on to others. Which is good news for those of us already doing multi-A record "load balancing".
(*) And a clearly "unclean" way, which will work with existing clients, is to take a routing prefix and distribute _that_ around.
Yeah I thought of this, but worried about hosts "on the edge" where more than one route is equally preferable and any sort of instability causing one of the routes to flap would be enough to "break" connections randomly.

Dean
On Jul 22, 15:13, "Michael A. Patton" <MAP@bbn.com> wrote:
(*) And a clearly "unclean" way, which will work with existing clients, is to take a routing prefix and distribute _that_ around. The servers would all have the same address (making them hard to manage), and the routing prefix would be advertised from each of these locales, and routing would find you the "closest" one. After all, you
Yep, works. Management isn't a problem if each box is fitted with two interfaces and addresses, one for public consumption of contents, one for internal use. Not that we have done any of this for real (yet).
really are looking at a routing problem here. (Although something about hammers and looking like nails comes to mind. :-)
Well, yes, but then something like WWW and other stuff could be fitted with a number of unflattering descriptions. It's interesting to see ... it's only recently people on the US side have begun getting concerned about bandwidth issues, attempting to localize traffic if possible. So far, there wasn't anything that couldn't be solved with a couple of those DS3s, which cost the same on your side as one or two E1s on our side. (Part of the reason for the high cost of leased lines over here is that Europe is a large collection of twisty little places, all different. Hence, your leased lines go international and/or intercontinental at the drop of a hat.) So for a long time, localization and good geographic spread of servers of various kinds has been given very serious attention on this side.

Now here's me waiting for some moron to invent Son of CU-SeeMe ...

Per G. Bilse, Mgr Network Operations Ctr
EUnet Communications Services B.V.
Singel 540, 1017 AZ Amsterdam, NL
tel: +31 20 5305333, fax: +31 20 6224657
24hr emergency number: +31 20 421 0865
Connecting Europe since 1982 -- http://www.EU.net -- e-mail: bilse@EU.net
participants (12)

- Alan Hannan
- Christopher E. Stefan
- Dean Gaudet
- dgaudet@hotwired.com
- Dorian R. Kim
- Michael A. Patton
- Michael Dillon
- Michael Shields
- Paul A Vixie
- Per Gregers Bilse
- Roy
- Todd Graham Lewis