Since I have started blacklisting people, the list has grown to more than 40 networks. However, only ONE smurf, and that one was very short (< 5 minutes), has taken down a customer circuit or our IRC server since the last edits were made to the amplifier list over a week ago. I'm going to try to post a web page on this tonight.

What we're doing here WORKS. It inconveniences a few people (those who amplify smurfs), but it WORKS and it STOPS the smurf attacks from burying your connections. Our core routers don't even get mildly bothered doing the discards.

--
Karl Denninger (karl@MCS.Net)| MCSNet - Serving Chicagoland and Wisconsin
http://www.mcs.net/          | T1's from $600 monthly / All Lines K56Flex/DOV
                             | NEW! Corporate ISDN Prices dropped by up to 50%!
Voice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS
Fax: [+1 312 803-4929]       | *SPAMBLOCK* Technology now included at no cost

On Sun, Apr 26, 1998 at 02:08:04AM -0400, Martin, Christian wrote:
All,
Given that the contributions to this thread have been mostly theoretical in nature, I'd like to share an experience of mine that in some ways negates some of the proposed solutions to smurf attacks in the context of smaller ISPs.
Recently, one of our downstream customers was the subject of a smurf attack, and we placed an access-list on our egress interface to the customer's network. The customer hangs off an SMDS cloud; our link to the cloud is 34 Mbps, his is a T1. We were blocking echo-replies destined for his network. We are connected upstream at 45 Mbps. As the attack intensified, router CPU utilization jumped to 99%, and the input queue on our inbound HSSI was at 75/75. We started dropping packets at a rate of about 7000/sec. The attacks were coming in from all over the world. The NetFlow cache was growing at an alarming rate, and after a while the HSSI just DIED. As the HSSI bounced, our BGP session bounced with it, causing some mild route flapping (not vacillating enough to be damped, but enough nonetheless). Eventually the attack subsided and all went back to normal, but for a period of time, say 10 minutes, we had a 7507 (64 megs, RSP2) WITH the HSSI on a VIP2 on its knees.
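For readers unfamiliar with the filter being described: an egress access-list of this sort looks roughly like the sketch below in Cisco IOS. The ACL number, interface name, and the customer prefix (192.0.2.0/24) are placeholders for illustration, not the actual configuration used.

```
! Hypothetical ACL: drop ICMP echo-replies aimed at the customer's
! block, pass everything else.  192.0.2.0/24 is a placeholder prefix.
access-list 110 deny   icmp any 192.0.2.0 0.0.0.255 echo-reply
access-list 110 permit ip any any
!
! Applied outbound on the interface facing the customer.
interface Serial1/0
 ip access-group 110 out
```

Note that, as the account above shows, every matched packet still has to be received and examined by the filtering router, which is why the filter protects the customer's T1 but not the router itself.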
We decided that parsing NetFlow logs would give us a better idea of who was amplifying the attacks, and with a simple shell script we were able to build a database of ASNs, with admin contacts from RADB/RIPE/ARIN. We are planning to send email to these networks asking them to stop amplifying smurfs (the script does that too), because this is unacceptable.
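The flow-parsing step can be sketched as a short shell function. The input format assumed here (one "src-ip dst-ip proto" triple per line) is an invention for illustration, not the actual NetFlow export layout, and the ASN/contact lookup against RADB/RIPE/ARIN is omitted.

```shell
# amplifier_nets: read "src-ip dst-ip proto" lines on stdin and print
# the /24 of each ICMP source with a hit count, busiest networks first.
# The /24s are the likely amplifier (directed-broadcast) networks.
amplifier_nets() {
  awk '$3 == "icmp" { split($1, o, "."); print o[1] "." o[2] "." o[3] ".0/24" }' |
    sort | uniq -c | sort -rn
}
```

From there, each listed prefix can be fed to a whois lookup to recover the origin ASN and an admin contact for the complaint mail.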
My point, then, is this: filtering echo-replies is, or can be, a futile attempt at preventing this type of attack. I watched a 7507 die defending against one attack. More recently, we got hit so hard that the router was screaming without ANY access-lists blocking ICMP echo-replies, perhaps because no real fast switching was taking place (each source is different, so the first packet of every flow is process-switched; our NetFlow cache went from 3900 flows to 27000 flows in about 4 minutes). And, as we were not amplifying the smurfs, source address verification is a moot point. I am all for allowing these netblocks time to implement this type of filtering (preventing layer-3 to layer-2 broadcast translation), but not for very long. It appears the best way to light a fire under someone's rear end is to publicly shame them into acting. For those who don't act quickly enough, there can be no quarter.
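On Cisco routers, the layer-3-to-layer-2 broadcast translation in question is controlled per interface, so the fix an amplifier network needs to apply is roughly the following (the interface name is a placeholder):

```
! Disabling directed-broadcast translation on a LAN interface stops
! hosts behind it from answering echo requests sent to the subnet's
! broadcast address, i.e. stops the network acting as a smurf amplifier.
interface Ethernet0
 no ip directed-broadcast
```

This has to be applied on every LAN-facing interface of the amplifier network; it costs the operator essentially nothing.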
If I appear hostile, I am...
PS
If anyone has similar experiences, please share them; there has already been enough rhetoric filling this thread, and it is clear that everyone knows the solution lies at the edge and beyond, not in the core.
-Christian Martin