On Wed, 01 Sep 2004, Steve Francis wrote:
> I'm sure there is research out there, but I can't find it. Does anyone
> know of any research showing how good or bad using DNS anycast is as a
> kludgey traffic optimiser? That is: multiple datacenters all anycast
> the authoritative name server for a domain, but each datacenter's DNS
> server resolves the domain name to an IP local to that datacenter, on
> the assumption that if the end user hit that DNS server first, there
> is "some" relationship between that datacenter and good performance
> for that user.
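[To make the scheme being asked about concrete: every datacenter
announces the same name-server address via anycast, and whichever
instance receives the query answers with an address local to that site.
A minimal Python sketch of the per-site answer logic follows; the site
names and RFC 5737 documentation addresses are hypothetical, nothing
here is from the thread.

    SITE_LOCAL_A = {              # hypothetical per-datacenter front ends
        "iad": "192.0.2.10",      # RFC 5737 documentation addresses
        "sjc": "198.51.100.10",
        "ams": "203.0.113.10",
    }

    def answer(site, qname):
        # The instance at `site` resolves every query for the service to
        # its own local address, betting that BGP delivered the query to
        # an instance near the user (the assumption being questioned).
        # qname is unused in this toy: every name gets the local answer.
        return SITE_LOCAL_A[site]

    print(answer("ams", "www.example.com"))   # -> 203.0.113.10

The bet only pays off to the extent that BGP's choice of anycast
instance correlates with good performance for the user, which is
exactly what the question asks about.]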
I can give you one data point: VeriSign anycasts j.root-servers.net
from all the same locations (minus one) where the com/net authoritative
servers (i.e., *.gtld-servers.net) are located.

An informal examination of query rates among all the J root instances
(traffic distributed by BGP) vs. query rates among all the com/net
servers (traffic distributed by the iterative resolvers' server-selection
algorithms, which means round-trip time in the case of BIND and
Microsoft) shows a much more even distribution when the iterative
resolvers get to pick than when BGP does. Note that we're not using the
no-export community, so all J root routes are global.

Looking at queries per second, there is a factor of ten between the
busiest J root instance and the least busy, whereas for com/net it's
more like a factor of 2.5. Of course, I'm sure a lot of that has to do
with server placement, especially in the BGP case.

For what it's worth,

Matt

--
Matt Larson <mlarson@verisign.com>
VeriSign Naming and Directory Services
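[For intuition on why resolver-driven selection spreads load more
evenly: an iterative resolver in the BIND style keeps a smoothed
round-trip-time (SRTT) estimate per authoritative server, usually
queries the lowest, and decays the other estimates so slower servers
still get re-probed now and then. Below is a toy Python model of that
dynamic versus a pure nearest-instance (BGP-like) policy. It is an
editorial sketch, not VeriSign's measurement; the placement skew,
decay and smoothing constants are invented, and all numbers it prints
are illustrative only.

    import random

    random.seed(1)

    SERVERS = range(3)
    RESOLVERS = 200
    QUERIES = 500

    def make_rtts():
        # Skewed placement: most resolvers sit closest to server 0.
        nearest = random.choices(list(SERVERS), [0.7, 0.2, 0.1])[0]
        base = [80.0, 80.0, 80.0]
        base[nearest] = 10.0
        return [b + random.uniform(0.0, 5.0) for b in base]

    rtts = [make_rtts() for _ in range(RESOLVERS)]

    def spread(counts):
        # Busiest instance divided by least busy (guard against zero).
        return max(counts) / max(1, min(counts))

    # Policy 1: BGP-like anycast. Every query from a resolver goes to
    # its topologically nearest instance, always.
    bgp = [0, 0, 0]
    for r in range(RESOLVERS):
        bgp[min(SERVERS, key=lambda s: rtts[r][s])] += QUERIES

    # Policy 2: per-resolver smoothed RTT, roughly BIND's behaviour:
    # query the lowest estimate, decay the others so they get re-probed.
    srtt = [0, 0, 0]
    for r in range(RESOLVERS):
        est = [0.0, 0.0, 0.0]        # optimistic start: try everyone once
        for _ in range(QUERIES):
            s = min(SERVERS, key=lambda i: est[i])
            srtt[s] += 1
            sample = rtts[r][s] + random.uniform(0.0, 5.0)
            est[s] = 0.7 * est[s] + 0.3 * sample
            for o in SERVERS:
                if o != s:
                    est[o] *= 0.95   # decay forces occasional re-probes

    print("BGP-like busiest/least-busy factor: %.1f" % spread(bgp))
    print("SRTT-based factor:                  %.1f" % spread(srtt))

Run as-is, the BGP-like policy shows a larger busiest-to-least-busy
factor than the SRTT policy, matching the direction of the 10-vs-2.5
observation above, though the toy magnitudes themselves mean nothing.]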