It's been a while since the big-name websites got DDoSed, but some large data centers I know of still see hosts on their networks getting DDoSed once or twice a day. It's not uncommon for those with only DS3 connections to get them completely clogged with attack traffic for several minutes. If it happens on the UUNet connection it can get blocked quickly, but other NOCs often take a half hour or more to answer. And as discussed earlier on this list, the major players will NOT maintain access lists on their routers for their customers. (At least, that's what they tell their DS3-and-smaller customers.)

One thing I've recommended to my clients is to temporarily add a more-specific (longer-prefix) route through one of their upstream providers who _will_ maintain access lists for them. For example: 1.2.3.4 is under attack, 1.0.0.0/8 is the netblock being advertised through the tier-1 providers, and the DS3s to those providers are being swamped with attack traffic. Quickly modify the BGP advertisement through ???.net to also advertise 1.2.3.0/24. Since that is a more specific prefix, it becomes the preferred route to 1.2.3.4 (see the P.S. below for a quick illustration), and since ???.net will maintain access lists, providing, let's say, CAR for UDP and ICMP, that's all that needs to be done to block that particular attack.

That plan quickly falls apart if several hosts are getting attacked at once, though if your total NORMAL inbound traffic is smaller than the committed rate on ???.net, you can split all your netblocks into two subnets and advertise those through ???.net for the duration of the attacks. Or if it's an ACK attack, or something else that can't easily be rate-limited or blocked, you're still screwed until you can reach the provider and have the target null-routed. I know this causes the global routing tables to grow temporarily, BUT it keeps the data center's routes from flapping due to losing the BGP sessions with the upstreams, which often happens if nothing is done about the attack.

Another interesting point: lately, most attacks have been for the age-old purpose of taking over IRC channels by knocking out the host on which the channel operator's bot is running. At least, none of my clients have seen their websites getting attacked lately. Maybe the calm before the storm?

Of course, there is also the option of stopping them at the source, which is time-consuming and is usually done only after the attack has been blocked by other means. And for every compromised server or workstation (usually on .edu domains) that gets fixed, two more take its place. Plus, we rarely get helpful responses from non-US domains (primarily because we're unwilling to make the international long-distance calls only to find out we don't have any languages in common).

It's looking like the only way to improve is to buy bigger connections; but in light of the Yahoo attack (800 Mbps was the last word, I guess?), how big is big enough? How many OC48 connections can _your_ data center afford?

jcomeau@world.std.com aka John Otis Lene Comeau
Home page: http://world.std.com/~jcomeau/
Disclaimer: Don't risk anything of value based on free advice.
"Anybody can do the difficult stuff. Call me when it's impossible."
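
P.S. For anyone who wants to see why the /24 pulls the traffic away from the /8, here's a minimal longest-prefix-match sketch in Python. The prefixes are just the made-up example numbers from above; real route selection obviously happens on the routers, not in a script like this.

    # Illustration of longest-prefix match: the most specific route
    # covering a destination wins, so a temporary /24 announced via
    # ???.net overrides the /8 announced via the tier-1s for that host.
    import ipaddress

    def best_route(dst, routes):
        """Return the most specific route that covers dst."""
        dst = ipaddress.ip_address(dst)
        matches = [r for r in routes if dst in r]
        return max(matches, key=lambda r: r.prefixlen) if matches else None

    routes = [
        ipaddress.ip_network("1.0.0.0/8"),    # normal aggregate via the tier-1s
        ipaddress.ip_network("1.2.3.0/24"),   # temporary advertisement via ???.net
    ]

    print(best_route("1.2.3.4", routes))  # 1.2.3.0/24 -> attack traffic now goes where it can be filtered
    print(best_route("1.9.9.9", routes))  # 1.0.0.0/8  -> the rest of the netblock is unaffected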