OT? Device to limit simultaneous connections per host?
Hello everyone, I'm curious whether anyone knows of a device that can throttle or limit a remote host's simultaneous connections or requests per second for web traffic on a per-IP basis. I don't want to say that web server X can only have 100 simultaneous connections and 10 requests per second in total; I want to say that any one IP connecting to web server X can have no more than 5 open connections and gets throttled if it starts making more than ten requests per second. If it could even be URL-aware, applying the rules only to specific types of web requests, that would be better still.

The motivation is to find a piece of equipment that can protect compute-intensive, database-driven websites from overly aggressive proxies, firewalls, search engines, etc., which like to hammer a given site with 50+ simultaneous requests against pages that can each need a few seconds of processing time.

I've looked at a Packeteer PacketShaper running in reverse of its normal role, throttling and shaping requests toward the server rather than optimizing traffic for a low-speed link as it was designed to do, but that didn't really work out: it could not apply the policies on a per-remote-IP basis.

Thanks,
David
----- Original Message ----- From: "David Hubbard" <dhubbard@dino.hostasaurus.com> To: <nanog@merit.edu> Sent: Wednesday, August 17, 2005 5:50 PM Subject: OT? Device to limit simultaneous connections per host?
Hello everyone, I'm curious if anyone knows of a device that can throttle or limit a remote host's simultaneous connections or requests per second for web traffic on a per-IP basis. --- snip ---
Not exactly what you want, but mod_throttle will do (some of) this if you are using Apache. Keep in mind, though, that mod_throttle had an integer underflow bug affecting its concurrent connection counter last time I used it. It's fairly trivial to find and fix, and I think I still have the patch somewhere. It was also forwarded to the author, who regrettably expressed little interest in applying it, for reasons best known to him (and no longer remembered by me).

On a more general note, it is important to think carefully about what it is that you really want to throttle. Throttling connections is easy (or at least easier) compared to throttling requests, since the latter can be done only if a) you are doing the throttling within the webserver, where you already have a request sequence, or b) you parse individual requests out of a pipelined request stream yourself.

You should likewise consider how said throttling should take place: do you want to 'shape' (delay traffic for a period of time) or 'rate limit' (drop it on the floor)? If it is the former, doing it after it hits your webserver is significantly less useful than preventing it from hitting the webserver in the first place.

Not sure how on-topic this is (wrt NANOG *or* the OP's question), so I've kept it to a few assorted thoughts. HTH.

-p

--- paul galynin
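Paul's point that connection-level throttling is the easier half can be made concrete at the packet-filter layer. A minimal sketch, assuming a Linux box whose iptables build includes the connlimit match (an extension, so its availability is an assumption):

# Refuse new connections from any source IP that already has 5 open
# connections to port 80; a TCP reset makes clients fail fast rather
# than retry a silently dropped SYN.
/sbin/iptables -A INPUT -i eth0 -p tcp --syn --dport 80 \
  -m connlimit --connlimit-above 5 \
  -j REJECT --reject-with tcp-reset

Note this counts TCP connections only; per-request limits still have to happen in or behind the webserver, exactly as described above, since many requests can ride one persistent connection.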
A simple Linux/iptables combo can cover the rate limiting to 10 packets/second piece of this:

/sbin/iptables -N HTTP
/sbin/iptables -A HTTP -i eth0 -m limit --limit 10/second --limit-burst 1 -j ACCEPT
/sbin/iptables -A HTTP -i eth0 -j DROP
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 80 -j HTTP

You'd just need to run it on your web server. Someone more creative than me can probably come up with a -p tcp -m state --state ESTABLISHED -j DROP pre-rule in combination with the rules above to limit the number of connections per client.
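The limit match above counts all port-80 packets in one global bucket, so a single aggressive client can exhaust the allowance for everyone. For the per-IP rate the original post asks about, the hashlimit match keeps a separate bucket per source address. A sketch, assuming a kernel and iptables recent enough to ship hashlimit (older builds spell the first option --hashlimit rather than --hashlimit-upto):

# Per-source token bucket: each client IP gets 10 packets/second
# toward port 80 with a burst allowance of 20; packets beyond a
# source's own rate fall through to the DROP rule below.
/sbin/iptables -A INPUT -i eth0 -p tcp --dport 80 \
  -m hashlimit --hashlimit-upto 10/second --hashlimit-burst 20 \
  --hashlimit-mode srcip --hashlimit-name http-per-ip \
  -j ACCEPT
/sbin/iptables -A INPUT -i eth0 -p tcp --dport 80 -j DROP

Dropping over-rate packets on established connections throttles the sender via TCP retransmission and backoff, which lands closer to shaping than to a hard refusal.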
Hello everyone, I'm curious if anyone knows of a device that can throttle or limit a remote host's simultaneous connections or requests per second for web traffic on a per-IP basis. --- snip ---
participants (3)
- David Hubbard
- Howard Hart
- Paul G