On Fri, Jan 21, 2000 at 11:27:14AM -0800, Pat Myrto wrote:
Alex P. Rudnev has declared that:
> e-mail me asking for the code.
Actually, you provided enough details, so any unix guy who knows his sockets can write the program in fifteen minutes.
This type of attack has been known for a long time (and there are even nastier variations using TCP header bits and fragments), and, unfortunately, there's no good general defense against it. There is one basic rule: the OS MUST limit the resources (CPU, memory, buffers, sockets, etc.) consumed by any SINGLE origin (IP address, program, service).
This approach defeats all but a few DoS attacks. For example, if you try to exhaust memory by attacking a single service, then (1) the service can't consume all memory, because it is a SINGLE origin; (2) one source address can't consume many resources, because it is a SINGLE origin; and (3) you can't generate too many different source addresses if reverse-path filtering is in place.
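The "limit resources per SINGLE origin" rule above can be sketched in a few lines. This is a hypothetical user-space illustration, not actual kernel code; the class name, the cap of 4 units, and the example addresses are all made up for the example:

```python
# Sketch of per-origin resource limiting: each origin (e.g. a source IP)
# may hold at most max_per_origin units of some resource (buffers,
# sockets, PCBs, ...), no matter how much total demand it generates.

class OriginLimiter:
    def __init__(self, max_per_origin=4):
        self.max_per_origin = max_per_origin
        self.held = {}  # origin -> resource units currently held

    def acquire(self, origin):
        """Grant one unit to `origin`, or refuse if it is at its cap."""
        if self.held.get(origin, 0) >= self.max_per_origin:
            return False  # a single origin may not monopolize the resource
        self.held[origin] = self.held.get(origin, 0) + 1
        return True

    def release(self, origin):
        if self.held.get(origin, 0) > 0:
            self.held[origin] -= 1

# A flood from one source address is capped; other origins are unaffected.
limiter = OriginLimiter()
flood = [limiter.acquire("10.0.0.1") for _ in range(100)]
assert sum(flood) == 4                 # attacker capped at its quota
assert limiter.acquire("10.0.0.2")     # a different origin still gets service
```

The point of the sketch is item (1) and (2) above: exhaustion by any one origin stops at the quota, so only attacks that can spread load across many apparently distinct origins get past it.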
Any ideas/suggestions for hacks to the kernel, etc. (i.e., FreeBSD, Linux, etc.) to impose such limits (configurable by the admin, preferably)? Especially in the CPU usage and memory areas (perhaps sockets/handles, too).
One can limit handles, memory, etc. for a given user process, but I haven't seen any such ability that would affect the TCP stack directly (the load from many of these attacks does not launch or run user-mode code; it just eats up all the CPU and/or memory).
This idea sounds like one of the potentially more viable approaches. While it would not solve the issue of saturating upstream links that can't handle the volume, it WOULD do a lot to help targeted machines/servers weather an attack.
Routers - that's something the vendors should think about looking into.
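One concrete, admin-configurable mechanism of the kind being asked about is a token bucket on system-generated "response" packets (RSTs, ICMP errors): refill at a sustained rate, allow a bounded burst, and silently drop responses once the bucket is empty. A minimal user-space sketch of the idea, with made-up names and parameters (not actual kernel code):

```python
# Token-bucket rate limit on generated response packets: an attacker who
# stimulates thousands of would-be RSTs per second gets at most
# burst + rate * duration responses; the rest are silently dropped.

class ResponseRateLimiter:
    def __init__(self, rate, burst):
        self.rate = rate            # responses/sec allowed, sustained
        self.burst = burst          # maximum burst size
        self.tokens = float(burst)  # bucket starts full
        self.last = 0.0             # timestamp of the previous check

    def allow(self, now):
        """True if a response may be sent at time `now` (seconds)."""
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop instead of generating a RST

# 1000 bogus packets in one second trigger at most burst + rate responses.
limiter = ResponseRateLimiter(rate=100, burst=10)
sent = sum(limiter.allow(i / 1000.0) for i in range(1000))
assert sent <= 110
```

Because the limit applies only to packets the box generates in reply, legitimate traffic is untouched; the cost of an attack collapses from one outbound RST per bogus packet to a fixed, admin-chosen budget.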
The big loss in systems is sending out all those RSTs; the pcb hash lookups are secondary. Put a rate limit on the number of dropwithreset's you will even queue over a certain amount of time, else just straight drop. This applies to any kind of system-generated "response" packet which can be stimulated in large amounts, such as ICMP errors.

One of the more annoying things is dealing with all the paranoid people with their first firewall program and the urge to complain that the world is ending if they receive a single packet that looks out of place (especially if they contact your service provider and make false accusations, or if you are the service provider and, out of ignorance, untrained in recognizing false accusations). I am continually surprised by how often this happens.

The same can be said for allocating PCBs on SYNs in the first place. Much of this becomes a game of separating the bad packets from the good.

Routers face sheer packets/sec; underpowered ones always have failed, and always will fail, when forced to fall back on the CPU. Nothing new about this. Some ASICs which establish flows with the CPU and then do high-speed switching can fail miserably if the ability to program flows quickly hasn't been designed properly. A certain vendor's switching routers come to mind.

Just to point out the obvious about this one: the ACK bit is set but the ack field is always 0, a condition which should really only occur when the ack field rolls over 32 bits. Food for thought.

Many people seem to have problems understanding these concepts, for some as-yet unknown reason. I'm not sure whether that is fortunate or not, as it seems to apply to vendors and developers as well as packet kiddies. Nothing at all is "new" here, there is nothing "magical" about any of this stuff, and people who waste their time believing otherwise are destined for problems. Personally, I would be happy never to hear the name "stream.c" again.

-- 
Richard A. Steenbergen <ras@above.net>  http://users.quadrunner.com/humble
PGP Key ID: 0x60AB0AD1 (E5 35 10 1D DE 7D 8C A7 09 1C 80 8B AF B9 77 BB)
AboveNet Communications - AboveSecure
Network Security Engineer, Vienna VA
"A mind is like a parachute, it works best when open." -- Unknown