RE: WP: Attack On Internet Called Largest Ever
It's universally agreed that the articles have mostly been blown out of proportion and dramatized, but that doesn't mean that attacks against the root servers can't be successful. Future attacks will be stronger and more organized. So how do we protect the root servers from future attack?

There has been a lot about what did not happen yesterday, but how about some details about what did happen? Was it a ping flood, syn-flood, smurf, or some combination of types? Were the zombie machines windows, linux, or both? Some of the root servers were affected more than others, why? Was it that there was more ddos traffic directed at them, or that they had less hardware and network resources?

- Greg Pendergrass
One thing I'm curious about (mostly because I think it's a neat idea, and was wondering if anyone else thought so too)... would it cause problems, and more importantly would it solve potential problems, to put some/most/all of the root servers (and maybe the gtld-servers too) into an AS112-like config? It seems to me that this would give the benefit of spreading the load around without making the list of root servers any larger, would make any kind of ddos on the root servers that much more difficult to pull off, and might even improve speed/performance (for those 8 times a week when you actually use them). Is it a problem that's even worth looking at? Is it a solution that's worse (for some reason I haven't noticed yet) than the problem? Thoughts? -Joe Patterson
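The AS112-style idea above can be sketched in a toy model: many sites announce the same prefix, and each client network's routing independently picks the nearest announcer, so load (and attack traffic) is split and localized. This is a minimal illustration only; real BGP best-path selection weighs many attributes beyond AS-path length, and all instance and client names below are invented.

```python
# Toy model of anycast/shared-unicast instance selection.
# All names and path lengths below are hypothetical.

ROOT_PREFIX = "198.41.0.0/24"  # one shared prefix announced from every site

# AS-path length from each client network to each anycast instance
paths = {
    "client-eu": {"instance-london": 2, "instance-tokyo": 6, "instance-nyc": 4},
    "client-jp": {"instance-london": 6, "instance-tokyo": 1, "instance-nyc": 5},
}

def nearest_instance(client: str) -> str:
    """Each network's routing independently picks the closest announcer."""
    return min(paths[client], key=paths[client].get)

# Every client reaches the *same* prefix but a *different* physical server,
# so an attack sourced from one region lands mostly on one instance.
for client in paths:
    print(client, "->", nearest_instance(client))
```

The point of the sketch: a flood from "client-jp" networks would land almost entirely on the Tokyo instance, leaving the others answering normally.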
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu]On Behalf Of Greg Pendergrass Sent: Wednesday, October 23, 2002 10:31 AM To: 'Nanog@Merit.Edu' Subject: RE: WP: Attack On Internet Called Largest Ever
It's universally agreed that the articles have mostly been blown out of proportion and dramatized, but that doesn't mean that attacks against the root servers can't be successful. Future attacks will be stronger and more organized. So how do we protect the root servers from future attack?
There has been a lot about what did not happen yesterday, but how about some details about what did happen? Was it a ping flood, syn-flood, smurf, or some combination of types? Were the zombie machines windows, linux, or both? Some of the root servers were affected more than others, why? Was it that there was more ddos traffic directed at them, or that they had less hardware and network resources?
- Greg Pendergrass
[Longish diatribe. I just use my share of bandwidth here in larger packets. I hope you will consider S/N large enough] At 04:51 PM 10/23/2002, Joe Patterson wrote:
would it cause problems, and more importantly would it solve potential problems, to put some/most/all of the root servers (and maybe gtld-servers too) into an AS112-like config? .... Is it a problem that's even worth looking at?
It is definitely worth exploring. As David Conrad pointed out, the technology is there. Also it is very appealing in terms of DDoS resistance and general distributedness that works so well for the Internet.
Is it a solution that's worse (for some reason I haven't noticed yet) than the problem?
The problem is making absolutely sure that the root zone that is served is authentic. For AS112 this is not really important because the queries it siphons off are all bogus anyway, so I could not care less if they received bogus answers. For the root this is an entirely different matter! Of course if we had DNSSEC widely deployed it would be a no-brainer. But I am afraid that is going to take a long time; I hope it happens before DNS itself becomes obsolete.

So, lacking DNS security, the problem could be mitigated by routing security, i.e. one could have some trust in the place the information comes from instead of having the information itself authenticated. However, there is no such thing as routing security either.

The best we can do in the absence of pertinent security technology is to distribute things carefully, always making sure that ISPs, and end-users if they wish, have current and usable information to determine for themselves which DNS servers, and which routes to them, they trust. While doing this we must also clearly maintain the responsibility of the server operators to serve the authentic, unique root zone and to provide a consistent service with good performance.

At the same time there is an ever-increasing number of self-appointed people proposing to run root servers for a variety of motives, usually even good intentions, but with the potential to change the content of the root zone *without accountability*, or even without telling the users of those servers.

Those who know me will testify that I am a very grass-roots, bottom-up oriented person, suspicious of centralisation and hierarchies. But the prospect of having multiple differing instances of the root zone on the Internet makes me very uncomfortable. In fact it would mean that we no longer have one Internet but different networks; that one can no longer trust that a hyperlink will end up in a single place, that a server is really the one one intends to talk to, and so on.
Unfortunately we do not yet have the security technologies deployed that will alleviate this problem. So we have to keep things together for some time, or end up with no Internet left. Daniel
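The authenticity concern above comes down to one idea: a resolver should be able to verify that the zone data it received is exactly what the publisher signed, no matter which server handed it over. Real DNSSEC does this with public-key signatures (DNSKEY/RRSIG records); the sketch below uses an HMAC with a shared secret purely as a stdlib stand-in for a signature, and the key and zone contents are invented.

```python
import hmac
import hashlib

# Simplified illustration of verifying zone data integrity. Real DNSSEC uses
# asymmetric signatures and a chain of trust, not a shared secret; this HMAC
# stand-in only demonstrates the tamper-detection idea.

SECRET = b"example-shared-secret"  # placeholder; NOT how DNSSEC distributes trust

def sign_zone(zone_data: bytes) -> bytes:
    return hmac.new(SECRET, zone_data, hashlib.sha256).digest()

def verify_zone(zone_data: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_zone(zone_data), signature)

root_zone = b". 518400 IN NS a.root-servers.net.\n"
sig = sign_zone(root_zone)

print(verify_zone(root_zone, sig))                             # True: authentic copy
print(verify_zone(root_zone + b"bogus. IN NS evil.\n", sig))   # False: tampered copy
```

With verification like this in place, it would not matter which anycast instance answered; a modified root zone would simply fail to verify.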
The problem is making absolutely sure that the root zone that is served is authentic. For AS112 this is not really important because the queries it siphons off are all bogus anyway, so I could not care less if they received bogus answers. For the root this is an entirely different matter! Of course if we had DNSSEC widely deployed it would be a no-brainer. But I am afraid that is going to take a long time; I hope it happens before DNS itself becomes obsolete.
I had some similar thoughts/worries. But then I realized, they apply to the current infrastructure just as well as to an anycast infrastructure. The security implications I see come down to a few (I'm sure there are more):

First, what happens if someone starts announcing bogus paths to the anycast AS/network? Can't they hijack the root nameservers? Answer: yes. Until nameservers on anycast networks are in place, the same attackers would have to do this by announcing the IP address of {all}.root-servers.net/32 (or /24 if their upstream won't accept a /32). The reason this doesn't happen every day is that providers are generally fairly good at not accepting clearly bogus advertisements from their customers, and (legitimately) trust that the providers they peer with have similar policies. This will work just as well with anycast.

Second, right now there are a few dozen physical machines that are the root name servers. They are, generally, fairly tough nuts, security-wise. What happens when we have to secure a few hundred machines instead of a few dozen? Answer: if you can build one very secure single-purpose server, it's not all that much harder to build 100. The flip side is that if you've got a few hundred servers on anycast networks and one of them gets compromised, the "damage" is limited to those networks that see that server as "closest". And, as an extra added bonus, there's the neat feature that if an attacker is attacking a server across the network, and is attacking its anycast address, then which server he ends up attacking can tell you a lot about where he's coming from.

Third, physically securing more servers. Answer: this actually gets harder the more servers you have (well, it's harder if their redundancy is going to do you any good). But, once again, the damage is limited to a smaller scope.

Fourth, what about attacks against the synchronization of the root server zone files?
Answer: first off, this probably (I'm not sure) doesn't happen very often; the root zone files don't change much, at least that's my understanding (the gtld-server zone files, on the other hand, do). Also, this is already a problem; it's just a matter of scale. If you can build it right for a dozen servers, you can probably build it right for a hundred. There are probably other problems, but those are the ones I thought of when thinking about this... -Joe
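The synchronization concern above is essentially a consistency check: every instance should be serving the same copy of the zone. A minimal sketch of such a monitor, comparing the SOA serial plus a digest of each served copy and flagging any instance that disagrees with the majority (all server names and zone contents are invented for illustration):

```python
import hashlib

# Hypothetical sync monitor across many root-server instances: fingerprint
# each served copy of the zone and flag instances that differ from the
# majority. Data below is made up.

def zone_fingerprint(serial: int, zone_data: str) -> tuple:
    return (serial, hashlib.sha256(zone_data.encode()).hexdigest())

served = {
    "instance-a": zone_fingerprint(2002102300, ". IN NS a.root-servers.net."),
    "instance-b": zone_fingerprint(2002102300, ". IN NS a.root-servers.net."),
    "instance-c": zone_fingerprint(2002102200, ". IN NS a.root-servers.net."),  # stale serial
}

def out_of_sync(served: dict) -> list:
    # Take the majority fingerprint as the reference copy.
    counts = {}
    for fp in served.values():
        counts[fp] = counts.get(fp, 0) + 1
    reference = max(counts, key=counts.get)
    return sorted(name for name, fp in served.items() if fp != reference)

print(out_of_sync(served))  # the stale instance stands out
```

Because the root zone changes rarely, even a coarse check like this would catch a desynchronized (or tampered-with) instance quickly, whether there are a dozen servers or a hundred.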
On Wed, 23 Oct 2002, Greg Pendergrass wrote:
There has been a lot about what did not happen yesterday, but how about some details about what did happen? Was it a ping flood, syn-flood, smurf, or some combination of types? Were the zombie machines windows, linux, or both? Some of the root servers were affected more than others, why? Was it that there was more ddos traffic directed at them, or that they had less hardware and network resources?
And, calling a recent thread to attention: were the packets from valid sources or were they spoofed, and if so, what was the source address distribution like?
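One rough heuristic for the spoofing question above: look at how the observed source addresses spread across the IPv4 space. Randomly spoofed sources tend to scatter almost uniformly over the /8s, while a real zombie population clusters in the allocated ranges where the compromised hosts actually live. This is a coarse illustration only, and the sample addresses are invented:

```python
from collections import Counter

# Hypothetical sketch: judge the source-address distribution of flood
# traffic. Sample data below is invented for illustration.

def first_octet_spread(sources: list) -> int:
    """Number of distinct /8s seen among the source addresses."""
    return len(Counter(addr.split(".")[0] for addr in sources))

clustered = ["203.0.113.5", "203.0.113.9", "198.51.100.2", "198.51.100.7"]
spoofed_looking = ["8.1.2.3", "77.4.5.6", "142.7.8.9", "201.1.1.1"]

print(first_octet_spread(clustered))        # few distinct /8s: plausible zombies
print(first_octet_spread(spoofed_looking))  # one /8 per packet: smells spoofed
```

In practice one would look at far more than the first octet (bogon ranges, TTL consistency, and so on), but the distribution question is the right starting point.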
Hi, On 10/23/02 7:31 AM, "Greg Pendergrass" <greg@band-x.com> wrote:
It's universally agreed that the articles have mostly been blown out of proportion and dramatized, but that doesn't mean that attacks against the root servers can't be successful. Future attacks will be stronger and more organized. So how do we protect the root servers from future attack?
See RFC 3258, "Distributing Authoritative Name Servers via Shared Unicast Addresses".
There has been a lot about what did not happen yesterday, but how about some details about what did happen? Was it a ping flood, syn-flood, smurf, or some combination of types? Were the zombie machines windows, linux, or both? Some of the root servers were affected more than others, why? Was it that there was more ddos traffic directed at them, or that they had less hardware and network resources?
I'll let others with more direct information answer this. Rgds, -drc
participants (5)
- Daniel Karrenberg
- David Conrad
- Greg Pendergrass
- Iljitsch van Beijnum
- Joe Patterson