BTW, while it looks like you've shown it to be traditional load balancing, I ought to explain that this is also not a very good idea. The load balancer is usually a single point of failure.

Load balancers are a good idea for stateful, high-work-request servers such as web servers running web apps. They let you apply many servers to what then appears to be a single service. This is well worth the expense of a single point of failure, since what you really wanted was a single, bigger, and exponentially more expensive server; the multiple small cheap servers behind the load balancer give you that view. When the load balancer detects a failure and drops the failed server, it isn't really offering "higher availability" than a single server, but is rather compensating for the fact that multiple small cheap servers will collectively crash more frequently.

However, DNS service is comparatively low-work-request and low-latency. Generally, people are seeking high availability and load distribution for DNS caches. But the work done by a traditional load balancer is probably comparable to the work done by the DNS server itself, so the benefits of multiple DNS servers behind a single load balancer are probably negligible.

		--Dean

--
Av8 Internet   Prepared to pay a premium for better service?
www.av8.net         faster, more reliable, better service
617 344 9000

---------- Forwarded message ----------
Date: Wed, 20 Apr 2005 14:13:33 -0400 (EDT)
From: Dean Anderson <dean@av8.com>
To: Crist Clark <crist.clark@globalstar.com>
Cc: nanog@merit.edu
Subject: Re: Slashdot: Providers Ignoring DNS TTL?

On Wed, 20 Apr 2005, Crist Clark wrote:
> Dean Anderson wrote:
> > I'd rather expect this sort of behavior with anycasted servers...
>
> I would not expect this kind of behavior from an anycasted address.
> You'd need a LOT of routing churn to see different caches every few
> seconds. It's much more likely some kind of load balancer in front of
> a DNS server farm.
No, you are thinking of the (wrong) claims originally made by ISC about how anycast would affect TCP to an anycast authoritative server. ISC wrongly asserted that, since BGP routes don't churn very fast compared with DNS TCP connection lifetimes, there should be no problem with anycast and TCP. This view has been shown to be wrong in the face of Per-Packet Load Balancing (PPLB), which haesu@towardex.com demonstrated works on BGP links. Further, I showed that if you have PPLB on interior (e.g. OSPF) links leading to different BGP peers, the same problem occurs: packets are sent to different places on a per-packet basis.

But caching servers are usually set up to load balance. Usually, the servers sharing the same IP address sit on an ethernet along with multiple routers, so packets are switched on essentially a per-packet basis, or possibly a per-ARP basis that alters the MAC-based forwarding behavior of a switch. This is fairly fine-grained load balancing.
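As a rough illustration of the PPLB problem (a toy simulation with invented names, not any real router or server implementation): with per-packet load balancing, consecutive segments of a single TCP flow land on different anycast instances, and only the instance that saw the SYN holds the connection state.

```python
# Toy simulation of Per-Packet Load Balancing (PPLB) in front of two
# anycast instances of the "same" server address. All names and the
# packet model are illustrative assumptions.
import itertools

class AnycastServer:
    def __init__(self, name):
        self.name = name
        self.connections = set()  # TCP state held only on this instance

    def receive(self, flow, is_syn=False):
        if is_syn:
            self.connections.add(flow)
            return "SYN-ACK"
        if flow in self.connections:
            return "ACK"
        return "RST"  # no state for this flow -> connection breaks

servers = [AnycastServer("site-A"), AnycastServer("site-B")]
pplb = itertools.cycle(servers)  # per-packet round robin across paths

flow = ("198.51.100.7", 53411, "192.0.2.1", 53)  # hypothetical 4-tuple
replies = [next(pplb).receive(flow, is_syn=(i == 0)) for i in range(4)]
print(replies)  # -> ['SYN-ACK', 'RST', 'ACK', 'RST']
```

Segments that reach the instance without connection state draw resets, which is exactly why PPLB breaks TCP to an anycast address even though BGP itself isn't churning.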
With a cache, the behavior is confusing, but it also harms DNS TCP support, just as described for authoritative servers.
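A toy model of that confusing behavior (all values invented): two caches that filled their entries at different moments report different remaining TTLs, so a client whose queries are spread across them sees the TTL bounce around instead of counting down monotonically.

```python
# Toy model: independent caches fetched the same record at different
# times, so the remaining TTL each reports differs. Numbers are made up.
RECORD_TTL = 300  # authoritative TTL, seconds

class Cache:
    def __init__(self, fetched_at):
        self.fetched_at = fetched_at  # when this cache filled its entry

    def remaining_ttl(self, now):
        return RECORD_TTL - (now - self.fetched_at)

caches = [Cache(fetched_at=0), Cache(fetched_at=120)]

# A client load-balanced across the caches, querying once per second:
now = 150
ttls = [caches[i % 2].remaining_ttl(now + i) for i in range(4)]
print(ttls)  # -> [150, 269, 148, 267] -- TTLs jump, not count down
```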
> I verified it wasn't anycast by trying to exploit this very issue. I
> did a query that fell back to TCP while also doing multiple small
> queries, and ran a network capture to pick out the short queries that
> occurred while the TCP query was in progress. Short queries during the
> TCP connection came back with varying TTLs, indicating that I was
> talking to different caches, i.e. different servers. Yet the TCP query
> continued without any hiccups. This indicates there is some type of
> per-session load balancing going on, not anycast routing.
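That result is consistent with a flow-hash ("per-session") load balancer: every packet of one flow maps to the same backend, while each new UDP query, arriving from a fresh ephemeral port, may hash to a different cache. A minimal sketch, with the backend names and hash choice assumed purely for illustration:

```python
# Sketch of per-session (flow-hash) load balancing: all packets of a
# flow (same 4-tuple) reach one backend cache; separate UDP queries
# from new source ports can spread across caches. Names are invented.
import zlib

CACHES = ["cache-1", "cache-2", "cache-3"]

def pick_backend(src_ip, src_port, dst_ip, dst_port):
    # Deterministic hash of the 4-tuple -> stable backend per flow
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return CACHES[zlib.crc32(key) % len(CACHES)]

# One long-lived TCP query: every segment shares the 4-tuple, so every
# segment reaches the same cache and the connection survives.
tcp_flow = ("198.51.100.7", 53411, "192.0.2.1", 53)
assert len({pick_backend(*tcp_flow) for _ in range(10)}) == 1

# Short UDP queries use fresh ephemeral ports, so they can land on
# different caches -- which then show independently aged TTLs.
udp_backends = {pick_backend("198.51.100.7", p, "192.0.2.1", 53)
                for p in range(50000, 50020)}
print(sorted(udp_backends))  # typically more than one cache
```

Under anycast with PPLB the TCP query would not have survived, which is why the unbroken TCP connection alongside varying UDP TTLs points at a stateful balancer.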
This additional information would seem to indicate they are behind a more traditional stateful load balancer rather than anycast. Without your TCP connection, I don't think you could distinguish a traditional load balancer from an anycast cache setup.

		--Dean

--
Av8 Internet   Prepared to pay a premium for better service?
www.av8.net         faster, more reliable, better service
617 344 9000