Re: Pattern matching odd HTTP request
Brian Behlendorf (brian@collab.net) @ 2001.09.18 18:50:56 +0000:
On Wed, 19 Sep 2001, Karsten W. Rohrbach wrote:
Source-IP-based connection rate limiting would perhaps solve the problem. Are there any modules available out there to accomplish this task?
http://modules.apache.org/search?id=241
is the only one I know of. I've not used it myself, and it's not a part of the "core" distribution. If people use it and it works, I'd appreciate knowing, as this comes up frequently enough that I'd agitate for getting it included.
Update: mod_throttle/3.1.2 seems to do the job (as far as I can see from the source), but I cannot get the thing running on my FreeBSD 4.3 box here; it keeps dumping core all the time. It is designed to already check the source IP address in the fixup handler. _All_ the other modules I downloaded do not do this. I tried the following ones:

- mod_throttle/3.1.2        dumps core at runtime, dang thing
- mod_throttle_access/0.2   starts processing after the GET request
- mod_bandwidth/2.0.3       starts processing after the GET request
- mod_bwshare/0.1.2         shmget: invalid argument, but the SHM segment is there, I can see it with ipcs(1)!
- mod_conn/1999             starts processing after the GET request
- mod_limitipconn/0.0.3     starts processing after the GET request

[http://modules.apache.org/search?id=123] is the URL for mod_throttle, which looks very mature (but runs buggy on my box, for whatever reason).

I am pretty tired now from my long uptime hunting that stupid Nimda worm and getting rid of it on some of my customers' servers. I hate IIS, yuck. mod_throttle looks like it could do it, because it contains the relevant code in the fixup handler. However, the policies are pretty weird -- at least for me in my current state. Man, it's been a long time since I hacked my last Apache module. Must be 5 years or so... If somebody gets mod_throttle running on FreeBSD 4, drop me a line about how you did it. It's nearly midnight and I'm going to take a looooong nap now.

BTW, if your log files keep growing and growing, use multilog from daemontools to autorotate them and keep them at a specified maximum size:

ErrorLog "|exec setuidgid log multilog s200000 n5 /path/to/errlogdir"

daemontools is at [http://cr.yp.to/daemontools.html]. You could even do

ErrorLog "|exec grep -v '/scripts/' | setuidgid log multilog s200000 n5 /path/to/errlogdir"

to get rid of the darn worm-generated errors.

Lowering the timeout does not make much sense. The best solution would be tracking the maximum number of connections from one IP in the scoreboard structures and the connection handler, because it _must_ be processed at connection time, not when the request is already in -- there never will be a GET or any other request... I see that this could be done in the fixup handler in a module, but I am quite rusty concerning the Apache API ;-)

Brian, I'd appreciate it if you would forward this to the core people. Thanks. I've got an important appointment with my bed now... Take care, /k (tired like hell)
--
May the source be with you!    KR433/KR11-RIPE -- WebMonster Community Founder -- nGENn GmbH Senior Techie
http://www.webmonster.de/ -- ftp://ftp.webmonster.de/ -- http://www.ngenn.net/
karsten&rohrbach.de -- alpha&ngenn.net -- alpha&scene.org -- catch@spam.de
GnuPG 0x2964BF46 2001-03-15  42F9 9FFF 50D4 2F38 DBEE DF22 3340 4F4E 2964 BF46
Please do not remove my address from To: and Cc: fields in mailing lists. 10x
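To illustrate the fixup-handler approach Karsten sketches above (and, as he notes, its limitation -- a fixup only runs once a request is already in, so it cannot see connections that never send one), here is a rough, untested module skeleton assuming the Apache 1.3 API. The scoreboard only records client addresses with ExtendedStatus On, and the names perip_module, perip_fixup and MAX_CONN_PER_IP are made up for illustration, not real directives or existing module code.

/* Rough, untested sketch assuming the Apache 1.3 API. */
#include "httpd.h"
#include "http_config.h"
#include "http_log.h"
#include "scoreboard.h"

#define MAX_CONN_PER_IP 10      /* illustrative limit, not a real directive */

/* Fixup handler: count scoreboard slots already busy with this client's IP
 * and refuse the request once the limit is exceeded.  Needs a build where
 * the scoreboard records client addresses (ExtendedStatus On). */
static int perip_fixup(request_rec *r)
{
    int i, count = 0;
    const char *client_ip = r->connection->remote_ip;

    for (i = 0; i < HARD_SERVER_LIMIT; i++) {
        short_score ss = ap_scoreboard_image->servers[i];

        if (ss.status != SERVER_DEAD && ss.status != SERVER_READY
            && strncmp(ss.client, client_ip, sizeof(ss.client)) == 0) {
            count++;
        }
    }

    if (count > MAX_CONN_PER_IP) {
        ap_log_rerror(APLOG_MARK, APLOG_NOERRNO | APLOG_WARNING, r,
                      "too many connections from %s", client_ip);
        return HTTP_SERVICE_UNAVAILABLE;
    }
    return OK;
}

module MODULE_VAR_EXPORT perip_module = {
    STANDARD_MODULE_STUFF,
    NULL,               /* module initializer              */
    NULL,               /* create per-dir config           */
    NULL,               /* merge per-dir config            */
    NULL,               /* create per-server config        */
    NULL,               /* merge per-server config         */
    NULL,               /* command table                   */
    NULL,               /* content handlers                */
    NULL,               /* URI-to-filename translation     */
    NULL,               /* check/validate user_id          */
    NULL,               /* check user_id is valid here     */
    NULL,               /* check access by host address    */
    NULL,               /* MIME type checker/setter        */
    perip_fixup,        /* fixups                          */
    NULL,               /* logger                          */
    NULL,               /* header parser                   */
    NULL,               /* child_init                      */
    NULL,               /* child_exit                      */
    NULL                /* post read-request handling      */
};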
Thanks for all your work on this one, Karsten, and I hope you had a good nap. :)

mod_throttle looks like it will stop a DoS from one client effectively, though the configuration is a bit complex for just that use of it. I plan to implement it for that.

It doesn't appear to be useful, though, for the type of DDoS that seems to be brewing (which I hope fizzles and dies). The traffic pattern I was seeing (one request every 1.5 minutes) means it would take about 45 attackers to tie up a stock Apache indefinitely: with the default Timeout of 300 seconds and MaxClients of 150, each attacker connecting every 90 seconds holds roughly 300/90 = 3.33 slots, and 150/3.33 is about 45. If this were implemented as a Nimda-like worm, using random IP scanning, and it attacked servers as it found them, I think there would be a pretty good chance of defending against it (firewall the IP if there are n timeouts in some time period). If it did discovery first, though, kept a cache (I'm not going to throw a flag on someone looking for my /index.html), and then attacked at a predetermined time, I can't think of a way to defend against it with a per-IP configuration. I'd probably never set my per-IP limit below 5, and this attack would only use about 3.33 connections per IP.

If, however, Apache had a limit on 'barely-open connections' with some sort of timeout function, I think that would help. For instance, it might look like:

BarelyOpenConnectionTimeout 10
BarelyOpenConnectionLimit 50

such that if there were more than 50 connections open that hadn't sent a request within 10 seconds, it would start dropping them in a FIFO manner.

I mostly hack on higher-level modules in mod_perl, so I don't know enough about Apache internals to speak to the feasibility of such a function.

-Bill
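To make that proposal concrete: BarelyOpenConnectionTimeout and BarelyOpenConnectionLimit are hypothetical directives, and the following is a standalone C sketch (not real Apache internals) of the reaper logic they would imply -- once more than the limit of accepted connections have gone longer than the timeout without sending a request, the oldest ones are dropped first. The conn_slot structure and drop_connection() are illustrative stand-ins.

#include <stdio.h>
#include <time.h>

#define MAX_SLOTS 256           /* size of the illustrative connection table */

/* Illustrative stand-in for a server's per-connection record. */
typedef struct {
    int    fd;                  /* socket descriptor, -1 if the slot is unused */
    time_t accepted_at;         /* when the connection was accepted            */
    int    request_seen;        /* nonzero once a request line has arrived     */
} conn_slot;

/* Stand-in for the real close()/cleanup path. */
static void drop_connection(conn_slot *c)
{
    printf("dropping fd %d (accepted at %ld, no request yet)\n",
           c->fd, (long)c->accepted_at);
    c->fd = -1;
}

/* Drop "barely-open" connections, oldest first, until no more than `limit`
 * of them have been idle (no request) for longer than `timeout_secs`. */
static void reap_barely_open(conn_slot *slots, int nslots,
                             int timeout_secs, int limit)
{
    time_t now = time(NULL);

    for (;;) {
        int i, stale = 0, oldest = -1;

        for (i = 0; i < nslots; i++) {
            if (slots[i].fd < 0 || slots[i].request_seen)
                continue;                              /* unused or real traffic */
            if (now - slots[i].accepted_at < timeout_secs)
                continue;                              /* not idle long enough   */
            stale++;
            if (oldest < 0 || slots[i].accepted_at < slots[oldest].accepted_at)
                oldest = i;
        }
        if (stale <= limit)                            /* under the limit: done   */
            break;
        drop_connection(&slots[oldest]);               /* FIFO: oldest goes first */
    }
}

int main(void)
{
    conn_slot slots[MAX_SLOTS];
    time_t now = time(NULL);
    int i;

    for (i = 0; i < MAX_SLOTS; i++)
        slots[i].fd = -1;

    /* Three connections accepted 15 seconds ago that never sent a request. */
    for (i = 0; i < 3; i++) {
        slots[i].fd = 10 + i;
        slots[i].accepted_at = now - 15;
        slots[i].request_seen = 0;
    }

    /* BarelyOpenConnectionTimeout 10, BarelyOpenConnectionLimit 1
     * (limit scaled down so the demo actually drops something). */
    reap_barely_open(slots, MAX_SLOTS, 10, 1);
    return 0;
}

In a real server the same check would have to live in the core connection-accept path, for the reason Karsten gives above: these connections never produce a request for a module hook to see.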
participants (2)
- Bill McGonigle
- Karsten W. Rohrbach