Hi,

Isn't the other danger that any anti-DoS measure is likely to fail unless it is unified across a significant percentage of the Internet's AS numbers, so that botnet client machines can be blocked at source, inside their home AS? Once the army has been amassed and has escaped its home ASes, it can be launched against any target or piece of infrastructure that is proving effective, or merely annoying, in disrupting the cyber criminals' commercial activities.

Or, to ask the question another way: would the currently low percentage of attacks on backbone infrastructure increase if the infrastructure started effectively blocking attacks rather than completing them by null-routing the target? If commercial dollars are being paid to the ISP to prevent DoS, surely the ISP then becomes an extortion target as well, rather than just the end customer site. In a way it's a bit like a protection racket: as long as the ISP completes attacks rather than blocking them, it is largely in the attackers' interests to leave the infrastructure alone.

Anything less than a solution that sorts out the botnet problem at the source AS will end up like King Canute against a monster Hawaiian-style wave of vengeful packets. Consequently, ISPs are faced with little alternative but to go along with the status quo rather than stick their heads above the parapet and end up on the receiving end. Black hole routing is easy and effective; source identification and traffic scrubbing are expensive.

Thoughts?

Ben

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Jim Popovitch
Sent: 29 January 2008 14:44
To: Patrick W. Gilmore
Cc: nanog list
Subject: Re: Worst Offenders/Active Attackers blacklists

On Jan 29, 2008 12:58 AM, Patrick W. Gilmore <patrick@ianai.net> wrote:
A general purpose host or firewall is NOTHING like a mail server. There is no race condition in a mail server, because the server simply waits until the DNS query is returned. No user is watching the mail queue; if mail is delayed by 1/10 of a second, or even many seconds, nothing happens.
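To make the contrast concrete, here is a minimal sketch of the kind of blocking DNSBL lookup an MTA performs; the reversed-octet query format is the usual DNSBL convention, while the zone name and the helper dnsbl_listed are just illustrative placeholders:

    import socket

    def dnsbl_listed(ip, zone="bl.example.net"):
        # Reverse the octets: 192.0.2.1 -> 1.2.0.192.bl.example.net
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # any A record means "listed"
            return True
        except socket.gaierror:
            return False  # NXDOMAIN (or failure): treat as not listed

    # The MTA can afford to block right here; nobody is watching the queue.
    if dnsbl_listed("192.0.2.1"):
        print("550 rejected")

The whole connection simply stalls on gethostbyname(), and for store-and-forward mail that stall is invisible. The same stall in a packet-forwarding path is the problem described next.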
Now imagine every web page you visit is suddenly paused by 100ms, or 1000ms, or multiple seconds? Imagine that times 100s or 1000s of users. Imagine what your call center would look like the day after you implemented it. (Hint: Something like a smoking crater.)
There might be ways around this (e.g. zone transfer / bulk load), but it is still not a good idea.
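For what it's worth, a rough sketch of that bulk-load idea, assuming the list publisher actually permits zone transfers and using the dnspython library; the server and zone names are placeholders:

    import dns.query
    import dns.zone

    # Pull the entire list once via AXFR, instead of querying per connection.
    zone = dns.zone.from_xfr(dns.query.xfr("xfr.example.net", "bl.example.net"))
    blocked = {name.to_text() for name in zone.nodes}

    # Per-connection checks become a local set lookup, with no DNS
    # round-trip sitting in the forwarding path.
    def is_blocked(reversed_ip_label):
        return reversed_ip_label in blocked

You trade the per-lookup latency for staleness between transfers and the memory to hold the whole list locally, which is presumably part of why it is "still not a good idea."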
Of course I could be wrong. You shouldn't trust me on this, you should try it in production. Let us know how it works out.
Andrew, IIUC, suggested that the default would be to allow while the check was performed.

-Jim P.
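A minimal sketch of that default-allow (fail-open) approach, assuming an asyncio-based packet path; the function names and zone are hypothetical, and the resolution just uses the standard library:

    import asyncio

    verdicts = {}  # ip -> True (listed) once a lookup has completed

    async def check_dnsbl(ip, zone="bl.example.net"):
        query = ".".join(reversed(ip.split("."))) + "." + zone
        loop = asyncio.get_running_loop()
        try:
            await loop.getaddrinfo(query, None)  # resolves in a thread pool
            verdicts[ip] = True   # listed: block future traffic
        except OSError:
            verdicts[ip] = False  # NXDOMAIN or timeout: keep allowing

    def admit(ip):
        # Fail open: allow until (and unless) a completed check says block.
        if ip not in verdicts:
            asyncio.ensure_future(check_dnsbl(ip))  # kick off check, don't wait
            return True
        return not verdicts[ip]

No request ever blocks on the lookup, which avoids the latency problem above; the trade-off is that the first traffic from a listed source gets through until the check completes.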