> [Warning: I've never actually deployed an anycast DNS setup so you are free to ignore my message.]
i'm not ignoring you because you raised two important issues.
> 1. There should always be non-anycast alternatives
> I believe there is a strong consensus about that. And therefore a strong agreement that ".org" is seriously wrong.
i believe that icann/afilias/ultradns would be very receptive to input from the ietf-dnsop wg on this topic. but it's not cut and dried -- if you have two widely anycast'd servers plus one non-anycast server "just in case something bad happens to anycast" you're doing two questionable things: (1) treating anycast as new/unstable/experimental which it's not, and (2) limiting your domain's availability to the strength of that one non-anycast server. in the root server system we're about half anycast and half not, at the maximum practical NS RRset size, which as you certainly know, is 13. if .ORG's NS RRset were to be changed to include non-anycast nodes, i'd hope for 11 of them, or however many underlying servers there actually are. but at that point, the only thing anycast would buy you is ddos resistance and the ability to have more than 13 physical servers... which is all the root server system wants from anycast, but maybe not all that afilias and ultradns and icann want from anycast in .ORG.
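[Editorially inserted sketch: the 13-name ceiling mentioned above comes from keeping the root priming response inside the classic 512-byte UDP payload. The arithmetic below is my own back-of-the-envelope reconstruction, assuming single-letter x.root-servers.net names, standard RFC 1035 name compression, and IPv4 glue only; it is not taken from this thread.]

```python
# Rough size of a root priming response on the wire (RFC 1035 format).
# Assumptions: 13 single-letter names under root-servers.net, classic
# compression pointers, A-record glue only (no EDNS, no AAAA).

HEADER = 12                  # fixed DNS header
QUESTION = 1 + 2 + 2         # root name "." + QTYPE + QCLASS

def ns_record(first):
    fixed = 1 + 2 + 2 + 4 + 2        # owner "." + TYPE, CLASS, TTL, RDLENGTH
    # rdata: full "x.root-servers.net." once (20 bytes), then just a
    # single-letter label (2 bytes) plus a 2-byte compression pointer
    rdata = 20 if first else 2 + 2
    return fixed + rdata

def glue_a():
    # owner name as compression pointer + fixed fields + 4-byte IPv4 address
    return 2 + 2 + 2 + 4 + 2 + 4

def priming_size(n):
    ns = ns_record(True) + (n - 1) * ns_record(False)
    return HEADER + QUESTION + ns + n * glue_a()

for n in (11, 13):
    print(f"{n} servers -> {priming_size(n)} bytes (fits in 512: {priming_size(n) <= 512})")
```

Under these assumptions, 13 servers with full glue come to roughly 436 bytes, comfortably inside the 512-byte limit; adding IPv6 glue or longer names is what makes the RRset budget tight in practice.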
> This is after all a good engineering practice: when you deploy something new, do it carefully and not everywhere at the same time.
this is the second important point you raise. anycast isn't new. rodney pioneered it commercially in 1997 or so. it had been in campus-area use for at least six years by that time and perhaps 10 years depending on how you count. akamai has been using it since 1999 or so. if there were any stability problems like the pplb assertions made elsewhere in this thread, we'd've all been seeing them for a long time by now.
On 20-dec-04, at 17:32, Paul Vixie wrote:
> 1. There should always be non-anycast alternatives
> I believe there is a strong consensus about that. And therefore a strong agreement that ".org" is seriously wrong.
> i believe that icann/afilias/ultradns would be very receptive to input from the ietf-dnsop wg on this topic. but it's not cut and dried -- if you have two widely anycast'd servers plus one non-anycast server "just in case something bad happens to anycast" you're doing two questionable things: (1) treating anycast as new/unstable/experimental which it's not, and (2) limiting your domain's availability to the strength of that one non-anycast server.
??? How are things worse with two anycasted addresses and one non-anycasted address vs two anycasted addresses? I think there is one thing that isn't very controversial: two addresses isn't really enough. Let's start with that. I'd rather have 5 or 8 anycasted addresses than 2 anycasted addresses. (Although I think at least 8 addresses, half of which are anycast, would be best.)
> if .ORG's NS RRset were to be changed to include non-anycast nodes, i'd hope for 11 of them, or however many underlying servers there actually are. but at that point, the only thing anycast would buy you is ddos resistance and the ability to have more than 13 physical servers... which is all the root server system wants from anycast, but maybe not all that afilias and ultradns and icann want from anycast in .ORG.
If we as a community feel we need DDoS resistance for the root and TLDs, we should consider more options than just anycast. Anycast can increase the number of physical servers, but so can more traditional clustering techniques, or anycasting that stays outside BGP.

What I find surprising is that every IP address gets to query the roots, despite the fact that most addresses have no need to do this and don't know how to do it properly. It would make perfect sense to me for people to have to sign up for "root service" before they get to talk to the root servers. That way, all unknown addresses can be filtered out (or, more practically, rate limited). Obviously something like this would face deployment issues, but if we're serious about DDoS, these are the kinds of options we should consider.

Another way to approach this would be for larger ISPs to connect to one or more roots using private peering. This gives those operators both the means and an incentive to keep such links free of clutter. Remember that DDoS is only a force of nature for end sites; in large networks it's just part of the general traffic, which can be filtered or rate limited without too much trouble as long as it can be identified.
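[Editorially inserted sketch: the per-source rate limiting imagined above can be done with an ordinary token bucket keyed by source address. The class below is an illustrative assumption of mine, not something proposed in this thread; names and parameters are hypothetical.]

```python
# Minimal per-source token-bucket rate limiter, the kind of front-end
# filter a root/TLD server could apply to unknown source addresses.
import time
from collections import defaultdict

class SourceLimiter:
    def __init__(self, rate=20.0, burst=40.0):
        self.rate = rate      # tokens (queries) refilled per second
        self.burst = burst    # bucket depth: max queries in a burst
        # each source starts with a full bucket and a timestamp
        self.buckets = defaultdict(lambda: [self.burst, time.monotonic()])

    def allow(self, src_ip):
        tokens, last = self.buckets[src_ip]
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the bucket depth
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[src_ip] = [tokens - 1.0, now]
            return True       # serve this query
        self.buckets[src_ip] = [tokens, now]
        return False          # silently drop this query
```

A "sign-up" scheme as described would simply give registered sources a generous `rate` and leave unknown ones at a restrictive default, so legitimate resolvers are unaffected while floods from unregistered addresses are shed early.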
participants (2)
- Iljitsch van Beijnum
- Paul Vixie