A simple Linux/iptables combo can cover the rate-limiting (10 packets/second) piece of this:

/sbin/iptables -N HTTP
/sbin/iptables -A HTTP -i eth0 -m limit --limit 10/second --limit-burst 1 -j ACCEPT
/sbin/iptables -A HTTP -i eth0 -j DROP
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 80 -j HTTP

You'd just need to run it on your web server. Someone more creative than me can probably come up with a -p tcp -m state --state ESTABLISHED -j DROP pre-rule in combination with the rules above to limit the number of connections per client; a rough sketch along those lines follows below.
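For the per-client piece specifically, here's a rough, untested sketch using the connlimit and hashlimit match modules (assuming a reasonably recent kernel/iptables build that includes them; the thresholds just mirror David's numbers):

# Cap any single source IP at 5 concurrent connections to port 80
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 80 \
    -m connlimit --connlimit-above 5 -j DROP

# Drop new connections from any source IP exceeding 10 per second (tracked per srcip)
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 80 -m state --state NEW \
    -m hashlimit --hashlimit-above 10/second --hashlimit-mode srcip \
    --hashlimit-name http-per-ip -j DROP

Note that if you combine these with the rules above, they'd have to be inserted ahead of the -j HTTP jump (e.g. with -I INPUT rather than -A), since that chain accepts or drops everything sent to it. connlimit counts open connections per source address and hashlimit keeps a per-source token bucket, which roughly approximates "no more than 5 open connections and 10 requests/second per IP"; anything URL-aware would have to happen at the HTTP layer, though.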
David Hubbard wrote:

Hello everyone, I'm curious if anyone knows of a device that can throttle or limit a remote host's simultaneous connections or requests per second for web traffic on a per-IP basis. That is, I don't want to say web server X can only have 100 simultaneous connections and 10 requests per second; I want to say that any one IP connecting to web server X can have no more than 5 open connections and should be throttled if it starts making more than ten requests per second. If it could even be URL-aware, so that it applied the rules only to specific types of web requests, that would be even better.
The motivation here is to find a piece of equipment that can protect compute-intensive, database-driven websites from overly aggressive proxies, firewalls, search engines, etc., which like to hammer a given site with 50+ simultaneous requests against pages that can need a few seconds of processing time per request.
I've looked at a Packeteer PacketShaper running in reverse of its normal role, throttling and shaping requests against the server rather than optimizing traffic for a low-speed link as it was designed to do, but that didn't really work out, as it could not apply the policies on a per-remote-IP basis.
Thanks,
David