Re: topological closeness....
one other solution (being implemented, I believe) is a DNS server that listens to BGP traffic, so it knows how far away things are, and when you ask it, it can choose from among multiple responses to pick one that is "close".
-mo
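
A minimal sketch of that idea, for concreteness. Everything here is hypothetical: the AS-path-length table, the origin-AS mapping, and the addresses are invented to illustrate one way such a BGP-listening DNS server might rank its answers; it is not a description of the actual implementation mo mentions.

    # Hypothetical sketch: a DNS front end that has learned AS-path lengths from a
    # passive BGP feed and uses them to rank candidate addresses for a name.
    # All values below are made up for illustration.

    AS_PATH_LEN = {690: 4, 1792: 2}                # AS-path length to each origin AS
    ORIGIN_AS = {"192.0.2.1": 690,                 # origin AS for each candidate address
                 "198.51.100.1": 1792}

    def pick_closest(candidates):
        """Return the candidate whose origin AS has the shortest AS path from here."""
        return min(candidates,
                   key=lambda addr: AS_PATH_LEN.get(ORIGIN_AS[addr], float("inf")))

    print(pick_closest(["192.0.2.1", "198.51.100.1"]))   # -> "198.51.100.1"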
The only problem is that it does not work at all. Which WWW server is closer to me - one with BGP path 1239 3491 690 1333 (cnn.com) or one at 1239 1792 (www.cnc.ac.cn)? The second is in China, for God's sake! There ain't no such thing as global metrics. The only useful kind of metric is administrative, and therefore it cannot reflect any real characteristics of paths. Even if there were uniform metrics, how would you tell which link is overloaded and which isn't?

Caching appears to be the only sane way to distribute load. You always know where the closest caching server is. Now, there's a problem with coherency, but at least it can be done w/o magic.

--vadim
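
Vadim's counterexample, worked through under the same hypothetical path-length ranking sketched above: by AS-path length alone, the two-hop path to the server in China beats the four-hop path to cnn.com.

    # Shortest AS path picks the server in China - exactly the failure Vadim describes.
    paths = {
        "cnn.com":       [1239, 3491, 690, 1333],   # 4 AS hops
        "www.cnc.ac.cn": [1239, 1792],              # 2 AS hops
    }
    closest = min(paths, key=lambda name: len(paths[name]))
    print(closest)   # -> "www.cnc.ac.cn"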
check out sonar, draft-moore-sonar-01.txt. randy
Vadim,

Yes, caching is a good idea (technically). And who said "AS path length" was a wonderful global metric??? Just because the "usual suspects" BGP implementation generally does route selection based on that attribute doesn't mean that's the ONLY thing one can do. Depending on how much additional information one wished to supply about ASes, their general level of connectivity, and geographic location, one could conceivably produce hybrid metrics, probably heuristic, which reflect some sense of "performance".

For example, the server could do round-trip measurements to "sufficiently interesting" ASes so that it could base its behavior on observations. The goal wouldn't be fine-grained decision-making, but if the "Bruce Springsteen Ticket Server" caused some reasonably long-lived congestion, it might be worthwhile to redirect some responses. The assumption is that this special DNS server might concentrate on fielding responses for a special set of servers, like particular Web servers (which could be caches), and not on general connectivity.

So while few things are perfectly universal solutions, the prospect of implementing the heuristics we all use today to get a sense of how things are going, and of what to try when the first guess is hosed, seems like a worthwhile attempt. Some will work better than others. That is neither news, nor a reason not to try it.

-mo
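
A rough sketch of the measurement-driven heuristic mo describes, assuming the special-purpose DNS front end probes its small, known set of candidate servers and prefers whichever currently answers fastest. The addresses and names here are invented for illustration, and the probe is just a timed TCP connect, not any particular measurement protocol.

    # Illustrative only: a special-purpose DNS front end that periodically measures
    # round-trip times to its handful of candidate Web servers and steers answers
    # away from whichever one currently looks congested.
    import socket
    import time

    CANDIDATES = ["192.0.2.1", "198.51.100.1"]      # hypothetical server addresses

    def measure_rtt(addr, port=80, timeout=2.0):
        """Crude RTT estimate: time a TCP connect to the server."""
        start = time.monotonic()
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")                     # unreachable counts as "very far"

    def order_candidates():
        """Best-first ordering by observed RTT - coarse, not fine-grained."""
        return sorted(CANDIDATES, key=measure_rtt)

    # The front end would answer with (or reorder toward) order_candidates()[0].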
Even if there were workable global routing metrics, this problem _cannot_ be solved inside the confines of DNS, which specifies that there is no meaning to the order of RRs in a response. So even if a server could put them in the right order for a given client, that "client" might actually be a recursive server whose connectivity is different from the end TCP client's. The recursive ("caching") server(s) can reorder the RRs and frequently do (either LIFO or random). The client can reorder the RRs. It is impermissible to send back a single RR when multiple RRs exist in the RRset, and setting the TTL to 0 to prevent caching is not good enough. Doing this inside DNS is an idea utterly without merit.

To find the "right" way, start with Keith Moore's SONAR, make it better in minor ways, and then implement it inside all exit gateways from this day forward. Picking the closest server is an end-host issue.
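
A minimal end-host sketch in the spirit of "picking the closest server is an end-host issue": resolve the name, probe every address in the RRset, and use whichever answers fastest. This only illustrates the idea; it is not the SONAR protocol from draft-moore-sonar-01.txt, and the probe is just a timed TCP connect.

    # End-host sketch: the host, not the DNS, decides which address is "closest".
    import socket
    import time

    def addresses_for(name, port=80):
        """All addresses for the name - the whole RRset, in whatever order it arrives."""
        return {info[4][0]
                for info in socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)}

    def connect_time(addr, port=80, timeout=2.0):
        start = time.monotonic()
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    def closest_server(name, port=80):
        return min(addresses_for(name, port), key=lambda a: connect_time(a, port))

    # e.g. closest_server("www.example.com")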
participants (4)
- avg@postman.ncube.com
- Mike O'Dell
- Paul A Vixie
- randy@psg.com