From: "Greg Pendergrass" <greg@band-x.com> Subject: RE: WP: Attack On Internet Called Largest Ever
Future attacks will be stronger and more organized. So how do we protect the root servers from future attacks?
As has been discussed here previously (see archive), it is unclear that the root DNS servers are particularly vulnerable, so effort spent specifically on defending them may be misplaced compared to efforts to address DDoS in general, or to fortify other parts of the Internet infrastructure.
From: "Joe Patterson" <jpatterson@asgardgroup.com>
Would it cause problems, and more importantly would it solve potential problems, to put some/most/all of the root servers (and maybe the gtld-servers too) into an AS112-like config?
Last time it was discussed, I thought the provisions already in the DNS RFCs that allow zone transfer of "." to recursive servers were a neat solution for the root zone. It can be implemented with existing technology, with no new servers or routers needed. It bypasses the 13 root server limit, reduces load on the current root servers, and improves performance when unknown domains are queried. Even if all the addresses where the zone was available were public, a persistent DDoS would merely deny the addition of new TLDs, or the readdressing of all the DNS servers for a TLD, both of which occur rarely. The gtld-servers, and the servers for other key zones, may be more painful to do without, harder to replace, or less well configured and/or protected than the root servers.
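To make that concrete: from the resolver side, slaving "." is just an ordinary zone transfer (in BIND terms, declaring "." as a slave zone in named.conf). A minimal sketch of the transfer itself, written with the dnspython module purely for illustration; the module, and the choice of F-root's address as a server that permits AXFR of the root, are my assumptions, not part of the proposal:

    # Sketch only: fetch the root zone by AXFR and count its delegations.
    # Assumes dnspython is installed and that 192.5.5.241 (F-root) permits
    # zone transfers of "." -- an assumption made for illustration.
    import dns.query
    import dns.zone

    root = dns.zone.from_xfr(dns.query.xfr("192.5.5.241", "."))
    ns_total = sum(1 for _name, _ttl, _rdata in root.iterate_rdatas("NS"))
    print("NS records in the root zone:", ns_total)

Once a recursive server holds such a copy, it answers referrals for "." locally instead of asking the root servers at all.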
From: "Stephen J. Wilcox" <steve@telecomplete.co.uk> Subject: Re: Testing root server down code
Microsoft DNS has a poor response and can spin out of control even with all root servers available...
Unfair: Microsoft DNS has a good response and peak throughput when it isn't spinning out of control ;-)
From: "Martin J. Levy" <mahtin@mahtin.com> Subject: Re: Testing root server down code
2. Encourage greater software diversity for DNS server systems. Currently most DNS servers are based on the BIND (Berkeley Internet Name Domain) code base. There is also a Microsoft Windows version of DNS that very few groups currently run. 3. ...
Hence... At least in the US (and I can't say for the rest of the world), the government has been advised to consider Microsoft's version of DNS.
Others might interpret that as advice not to run BIND, or Microsoft DNS ;-) Surely that should be "code bases", plural, as BIND 9 is a new code base? So that is BIND 4, BIND 8, BIND 9, MS DNS, UltraDNS, and DJBDNS in fairly widespread use (and the one the root servers use if they don't use BIND), or supporting critical domains, and yet we still need more diversity?! I think promoting correct configuration, and in-bailiwick delegation (see the sketch below), would be more useful. Now how do I set follow-ups to comp.protocols.tcp-ip.domains?
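P.S. For anyone unfamiliar with the term: a delegation is in-bailiwick (in the simple sense I mean here) when the name servers for a zone have names inside that zone, so the parent can hand out glue with the referral. A rough sketch of checking that, again leaning on dnspython as an assumed convenience:

    # Sketch: report whether each NS name for a zone sits inside the zone
    # itself (in-bailiwick) or outside it. Assumes dnspython is installed;
    # this is one simple reading of "in-bailiwick", not a full definition.
    import dns.resolver

    def check_bailiwick(zone):
        apex = zone.strip(".").lower()
        for rr in dns.resolver.resolve(zone, "NS"):
            ns = str(rr.target).rstrip(".").lower()
            inside = ns == apex or ns.endswith("." + apex)
            print(ns, "in-bailiwick" if inside else "out-of-bailiwick")

    check_bailiwick("com.")  # the gtld-servers.net names are out of com's bailiwick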
On Thu, 24 Oct 2002, Simon Waters wrote:
Last time it was discussed, I thought the provisions already in the DNS RFCs that allow zone transfer of "." to recursive servers were a neat solution for the root zone.
There are pluses and minuses to that approach. The people at .biz and .info are _still_ getting complaints from people sitting behind broken resolvers with bogus copies of the root zone. Doing this in a widespread manner is likely to lead to more problems of this sort for new TLDs, and for updates to existing ones. Also, if you consider that <some high percentage> of root server queries are for the same, say, 10 TLDs, and that those records are cached for 2 days, it would most likely be a net increase in root server traffic to have millions of resolvers slaving the zone. Speaking only for myself, I think the combination of anycast and DNSSEC has the best chance of success, both for the root and the gTLD servers.

Doug
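P.S. To put numbers on the caching point: a resolver that honours the TTL on a cached NS RRset only returns to the roots for that TLD when the TTL runs out, and 2 days is 172800 seconds. A quick way to eyeball that TTL, using the same dnspython assumption as earlier in the thread:

    # Sketch: inspect how long a com referral may be cached. The value you
    # see depends on which server answers and how long it has already held
    # the record; 172800 seconds (two days) is the figure cited above.
    import dns.resolver

    answer = dns.resolver.resolve("com.", "NS")
    print("com NS TTL:", answer.rrset.ttl, "seconds")

At one refresh per TLD per two days, ten popular TLDs cost a busy resolver about five root queries a day, which is why mass slaving of the zone could be a net traffic increase.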