Worst Offenders/Active Attackers blacklists
There are a number of public network attacker threat feeds available, the most well known of which, AFAIK, is the Internet Storm Center's DShield system. I know a few network operators, including at least one on this list, also run private versions of the DShield system. Are there many others? Do any or most network operators have some sort of private current block list that gets pushed out to routers and/or firewalls/traffic shapers in real time?

I'm the CTO and founder of ThreatSTOP (www.threatstop.com), and we're currently propagating the DShield, and some other, block lists for use in firewalls. I'm interested in gathering additional threat information, and serving additional communities.

Is there any interest in a collaborative platform where anonymized candidates for blocking would be submitted by a trusted group, and then propagated out to the whole group?

I'd be happy to collect responses anonymously and submit a summary back to the list, if people don't want to open this up on the list.
On Sun, 27 Jan 2008 12:21:27 PST, "Tomas L. Byrnes" said:
Is there any interest in a collaborative platform where anonymized candidates for blocking would be submitted by a trusted group, and then propagated out to the whole group?
http://www.ranum.com/security/computer_security/editorials/dumb/

This illustrates dumb idea #2. Explain to me how you intend to enumerate enough of the "bad" hosts out there that such a blocklist would help, while still keeping it small enough that you don't blow out the RAM on whatever device you're installing it on. Have you *tested* whatever iptables/ipf/ACL implementation you use for proper operation with 10 million entries?
My suggestion would be to not even try iptables. It'll take hours just to load 10 million entries; there's no efficient mass-loading interface.

-J
Valdis.Kletnieks@vt.edu wrote:
http://www.ranum.com/security/computer_security/editorials/dumb/
This illustrates dumb idea #2. Explain to me how you intend to enumerate enough of the "bad" hosts out there that such a blocklist would help, while still keeping it small enough that you don't blow out the RAM on whatever device you're installing it on. Have you *tested* whatever iptables/ipf/ACL implementation you use for proper operation with 10 million entries?
Why would you need to do this? There's already proven technology out there. Simply write a DNSBL module for iptables.

1. A packet arrives and is forwarded (packets continue to arrive and are forwarded, in a default-allow security posture).
2. DNSBL checks are sent, and replies received.
3. An entry is added to iptables/acl/ipf, and removed based on the DNS zone's TTL.

Points against:
- Yes, this generates extra traffic.
- Yes, this could be used to make a DDoS attack worse, so I don't think anyone would argue that this is a good utility to run on a core/edge router on a large pipe.
- Yes, the usual admonitions about DNS security apply.
- Yes, this creates a race condition to infect machines or do other dirty deeds before the DNSBL reply comes back. Or were you wanting a perfect solution?

In reality the DNSBL security model has been shown to work in practice with mail servers around the world. It's used for billions of e-mail transactions every day. Most major e-mail abuse prevention software that I'm aware of relies to some extent on DNSBL technology, and other than writing a software package to do it, there's no reason why it couldn't be done with $FIREWALL_OF_CHOICE. Further, we use a default-allow security posture to ensure data flow, as this is an addition to other security measures taken on a given system as part of best practices. Also, with this method and packages such as cfengine and rsync, updated lists of questionable hosts connecting to a network can be rapidly propagated to hosts which have not yet been attacked, minimizing the effect of remote scans/infection attempts against netblocks.

Regardless of the inefficiency, and the fact that there is a race to get the data back from the DNS server, it's better than what we have now, which is nothing at all like this.

Andrew
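For concreteness, a minimal sketch in C of the per-source check in step 2 of Andrew's outline, using the usual DNSBL query convention: reverse the IPv4 octets, prepend them to the list's zone, and treat any A-record answer as "listed". The zone dnsbl.example.org is a placeholder, and a real module would distinguish NXDOMAIN from transient resolver failure rather than lumping them together as this sketch does:

/* Sketch of a DNSBL-style source check: for a.b.c.d, query
 * d.c.b.a.<zone> and treat any A-record answer as "listed".
 * dnsbl.example.org is a placeholder zone, not a real list.
 */
#include <arpa/inet.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

static int dnsbl_listed(const char *ip, const char *zone)
{
    struct in_addr a;
    unsigned char *o;
    char qname[256];
    struct addrinfo hints, *res;

    if (inet_pton(AF_INET, ip, &a) != 1)
        return -1;                      /* not a valid IPv4 address */
    o = (unsigned char *)&a;            /* octets in network order */
    snprintf(qname, sizeof qname, "%u.%u.%u.%u.%s",
             o[3], o[2], o[1], o[0], zone);

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    if (getaddrinfo(qname, NULL, &hints, &res) == 0) {
        freeaddrinfo(res);
        return 1;                       /* got an answer: listed */
    }
    return 0;                           /* NXDOMAIN etc.: not listed */
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <ipv4-address>\n", argv[0]);
        return 2;
    }
    printf("%s: %s\n", argv[1],
           dnsbl_listed(argv[1], "dnsbl.example.org") == 1
               ? "listed (would insert a drop rule)" : "not listed");
    return 0;
}

An in-kernel iptables module could not call getaddrinfo(), of course; a real implementation would hand the question to a userspace daemon and, as Andrew says, default-allow while the answer is pending.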
Regardless of the inefficiency, and the fact that there is a race to get the data back from the DNS server, it's better than what we have now, which is nothing at all like this.
No, it is not.

A general purpose host or firewall is NOTHING like a mail server. There is no race condition in a mail server, because the server simply waits until the DNS query is returned. No user is watching the mail queue; if mail is delayed by 1/10 of a second, or even many seconds, nothing happens.

Now imagine every web page you visit is suddenly paused by 100ms, or 1000ms, or multiple seconds. Imagine that times 100s or 1000s of users. Imagine what your call center would look like the day after you implemented it. (Hint: something like a smoking crater.)

There might be ways around this (e.g. zone transfer / bulk load), but it is still not a good idea.

Of course I could be wrong. You shouldn't trust me on this; you should try it in production. Let us know how it works out.

-- TTFN, patrick
On Jan 29, 2008 12:58 AM, Patrick W. Gilmore <patrick@ianai.net> wrote:
Now imagine every web page you visit is suddenly paused by 100ms, or 1000ms, or multiple seconds. Imagine that times 100s or 1000s of users. Imagine what your call center would look like the day after you implemented it. (Hint: something like a smoking crater.)
Andrew, IIUC, suggested that the default would be to allow while the check was performed. -Jim P.
On Jan 29, 2008, at 9:43 AM, Jim Popovitch wrote:
Andrew, IIUC, suggested that the default would be to allow while the check was performed.
I read that, but discounted it. There has been more than one single-packet compromise in the past. Not really a good idea to let packets through for a while, _then_ decide to stop them. Kinda closing the barn door after yada yada yada.

Perhaps combine the two? Have a stateful firewall which also checks DNSBLs? I can see why that would be attractive to someone, but still not a good idea. Not to mention no DNSBL operator would let any reasonably sized network query them for every new source address - the load would squash the name servers.

As I mentioned, zone transferring the DNSBL and checking against that might add a modicum of usefulness, but still has lots of bad side effects.

Then again, what do I know? Please implement this in production and show me I'm wrong. I smell a huge business opportunity if you can get it to work!

-- TTFN, patrick
Patrick W. Gilmore wrote:
Perhaps combine the two? Have a stateful firewall which also checks DNSBLs? I can see why that would be attractive to someone, but still not a good idea. Not to mention no DNSBL operator would let any reasonably sized network query them for every new source address - the load would squash the name servers.
If you want the sort of performance you expect from your firewall now, you're going to have to evaluate the source on the basis of locally available information. A BGP-based blocklist would be a more sensible approach than a DNSBL. Then it's a question of how many blackhole prefixes you're willing to carry in your firewall's table...
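To make the table-size question concrete, here is a sketch in C of the membership test such a firewall would run against a BGP-fed blackhole table. A production implementation would use a radix trie rather than this linear scan, and the two prefixes are documentation test blocks, not a real feed:

/* Does a source address fall inside any blackholed prefix?
 * Linear scan, purely to make the prefix arithmetic concrete;
 * the example prefixes are documentation blocks, not real data.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

struct pfx { uint32_t net; int len; };  /* network in host byte order */

static const struct pfx blackholes[] = {
    { 0xC0000200u, 24 },    /* 192.0.2.0/24   */
    { 0xCB007100u, 24 },    /* 203.0.113.0/24 */
};

static int blackholed(uint32_t addr)
{
    size_t i, n = sizeof blackholes / sizeof blackholes[0];

    for (i = 0; i < n; i++) {
        uint32_t mask = blackholes[i].len == 0
                      ? 0 : 0xFFFFFFFFu << (32 - blackholes[i].len);
        if ((addr & mask) == blackholes[i].net)
            return 1;                   /* inside a listed prefix: drop */
    }
    return 0;                           /* no match: pass */
}

int main(void)
{
    struct in_addr a;

    if (inet_pton(AF_INET, "192.0.2.55", &a) == 1)
        printf("192.0.2.55 -> %s\n",
               blackholed(ntohl(a.s_addr)) ? "drop" : "pass");
    return 0;
}

The trade-off Joel names shows up here directly: every carried prefix is another table entry, and the data structure chosen decides whether a million of them is cheap or fatal.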
PWG> Date: Tue, 29 Jan 2008 10:02:20 -0500
PWG> From: Patrick W. Gilmore
PWG> I read that, but discounted it. There has been more than one
PWG> single-packet compromise in the past. Not really a good idea to
PWG> let packets through for a while, _then_ decide to stop them. Kinda
PWG> closing the barn door after yada yada yada.

(Apologies for straying from ops into R&D-ish stuff.)

True. IMNSHO, lookups would need to be instantaneous. Kind of like... routing.

The RIB presumably is a very sparse array. What's needed is a FIB capable of approximately 2^24 routes, yet with only two destinations: "pass" and "drop". A naive approach would be a simple sorted array of 2^24 entries. Assuming IPv4, that's a 64 MB lookup table that can be searched using good old binary search. Take this as a starting point for something that definitely is possible. Said sorted array contains much redundancy; one can do _much_ better than a primitive sorted array.

Uh, the more back-of-the-envelope calculations I run, the more I believe this is entirely doable. I'd need to dust off some code I wrote a while back, but I consider 150-clock IPv4 lookups reasonable. IPv6 would be slower by a factor of four. (Predictions based on profiling performed on Pentium4-targeted assembly code custom-written for a similar purpose.)

Note that the above is slightly optimistic. It does not account for blowing out the TLB on every lookup. I'd need to review/profile that penalty. RAM use would be the biggest obstacle.

Eddy
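Eddy's naive starting point is easy to make concrete: a pass/drop decision by binary search over a sorted array of blocked IPv4 addresses. In the sketch below (C), the table holds three placeholder entries; the table he describes would be the full 2^24-entry, 64 MB array, with anything smarter compressing it from there:

/* Pass/drop by binary search over a sorted array of blocked
 * IPv4 addresses (host byte order).  The entries are placeholders;
 * Eddy's version would be a 2^24-entry array of this same shape.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

static const uint32_t drop_table[] = {  /* must stay sorted ascending */
    0xC0000201u,    /* 192.0.2.1   */
    0xC0000263u,    /* 192.0.2.99  */
    0xCB007105u,    /* 203.0.113.5 */
};

static int drop(uint32_t addr)
{
    size_t lo = 0, hi = sizeof drop_table / sizeof drop_table[0];

    while (lo < hi) {                   /* half-open binary search */
        size_t mid = lo + (hi - lo) / 2;
        if (drop_table[mid] == addr)
            return 1;                   /* listed: drop */
        if (drop_table[mid] < addr)
            lo = mid + 1;
        else
            hi = mid;
    }
    return 0;                           /* not listed: pass */
}

int main(void)
{
    struct in_addr a;

    if (inet_pton(AF_INET, "192.0.2.99", &a) == 1)
        printf("192.0.2.99 -> %s\n",
               drop(ntohl(a.s_addr)) ? "drop" : "pass");
    return 0;
}

A 2^24-entry table means at most 24 probes per lookup, so the cost is dominated by memory behavior, which is exactly the TLB caveat Eddy raises.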
Patrick W. Gilmore wrote:
I read that, but discounted it. There has been more than one single-packet compromise in the past. Not really a good idea to let packets through for a while, _then_ decide to stop them. Kinda closing the barn door after yada yada yada.
I don't disagree with this, but I'm also noting that this is not the universal fix for everything wrong with the Internet; I'll also note that there is in fact no such thing. You also don't get to discount my position, and then assault it, without giving a reasonable basis for discounting it.

One-packet exploits will compromise the host whether this system is running or not. This is the case with any security system, as there is no "one size fits all" solution. So one might say that this system is not intended to deal with such an event, and that there are already methods out there for doing so. Such services should be firewalled off from a majority of the Internet (MSSQL, SSH, RPC, anyone?). If you aren't firewalling those services which are notoriously vulnerable to a one-packet compromise, maybe I should suggest that this is the problem, and not the DNSBL-based trust system that wasn't designed to stop the attack. Network security is, as always, a practice of defense in depth.
Perhaps combine the two? Have a stateful firewall which also checks DNSBLs? I can see why that would be attractive to someone, but still not a good idea. Not to mention no DNSBL operator would let any reasonably sized network query them for every new source address - the load would squash the name servers.
I don't have a disagreement here, but zone transfers are easy to set up.
On Jan 29, 2008, at 3:28 PM, Andrew D Kirch wrote:
One-packet exploits will compromise the host whether this system is running or not. This is the case with any security system, as there is no "one size fits all" solution. [...] Network security is, as always, a practice of defense in depth.
Of course there is no One Final Solution. However, each solution necessarily must be more good than bad. This solution has _many_ bad things, each of which is more bad than the solution is good. For instance, it creates an instant and trivial vector to DDoS the name servers doing the DNSBL hosting.
I don't have a disagreement here, but zone transfers are easy to set up.
Sure they are, but zone transfers, while not as bad as individual lookups, are still a bad idea IMHO. For instance, are you sure you want your dynamic filters 30 or 60 minutes out of date? BGP was discussed, but such feeds already exist and do not require a firewall.

Anyway, as I have said multiple times now, you don't have to believe me. Please, set this up and tell us how it works. Your real-world experience will beat my mailing list post every time.

-- TTFN, patrick
PWG> Date: Tue, 29 Jan 2008 15:50:50 -0500
PWG> From: Patrick W. Gilmore
PWG> [Z]one transfers, while not as bad as individual lookups, are still
PWG> a bad idea IMHO. For instance, are you sure you want your dynamic
PWG> filters 30 or 60 minutes out of date?

As opposed to infinitely out-of-date (i.e., no filters)? Don't get me wrong; I'm none too keen on using DNS to distribute IP ACLs. I just am nitpicking that one particular point.

PWG> BGP was discussed, but such feeds already exist and do not require
PWG> a firewall.

IMHO, this is better than anything DNS-based. Using zone transfers is like using RIP. *shudder*

Eddy
On Jan 29, 2008, at 4:23 PM, Edward B. DREGER wrote:
As opposed to infinitely out-of-date (i.e., no filters)? Don't get me wrong; I'm none too keen on using DNS to distribute IP ACLs. I just am nitpicking that one particular point.
Frequently, yes. FPs can be more dangerous than FNs. Depends on your network, clients, etc.

And that's just the first reason that came to mind. There are plenty of others.

Or maybe not. Prove me wrong!

-- TTFN, patrick
PWG> Date: Tue, 29 Jan 2008 16:39:14 -0500
PWG> From: Patrick W. Gilmore
PWG> [A]re you sure you want your dynamic filters 30 or 60 minutes out
PWG> of date?

EBD> As opposed to infinitely out-of-date (i.e., no filters)?

PWG> Frequently, yes. FPs can be more dangerous than FNs.

We're dealing with more than one issue here:

* How to disseminate the information (DNSBLs, AXFR, BGP, etc.)
* How to act on it.

I'm curious what filtering method you use that never passes a packet from a "bad" host, yet never has an outdated ACL entry that blocks a recently-cleaned host. If you present your arguments to state "don't run ACLs", fine; your case is a wholly valid one for not filtering. If you contend that your position supports static ACLs versus dynamic, forget it. Static filters are even more prone to bitrot. (69/8, anyone?) Why? Because static \(.*\) requires more effort than dynamic \1.

Despite dynamic routing's non-instantaneous convergence, I doubt anyone here uses much static routing. Do packets ever get misdirected due to dynamic routing protocol failure? You bet. Do we poo-poo dynamic routing? Maybe, but we still decide it's the best overall approach.

Once one has the information, the question is how to act on it. Proposition: make the "Evil Bit" for real. (Hear me out...)

How do people deal with spam? Some block it outright. Others tag, allowing users to decide based on a numeric score. Sometimes it's based on an ACL (a DNSBL being just one way of communicating an ACL), sometimes on inspection. Maybe one firewall drops blacklisted traffic. Another might set the "Evil Bit". Perhaps inserting a new IP option would be useful. Or map "badness" to something like, oh... say... 802.1p priority.

PWG> Depends on your network, clients, etc.

Exactly. Some people use default-only routing. Others use static. People here run dynamic. All have their places. Anyone using dynamic _anything_ accepts, explicitly or implicitly, that the information may be outdated or wrong. This does not mean dynamic is invalid across the board.

Ehhhh... did I just chase a red herring? I thought we were discussing RIB/FIB methods, not whether or not anyone would want to run dynamic firewall rules.

Eddy
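A toy sketch of Eddy's tag-versus-block separation: once "badness" arrives as a numeric score, whether to drop, tag, or pass is a purely local policy choice, separable from how the score was disseminated. The thresholds below are invented for illustration:

/* Map a "badness" score to a local action.  The scoring source and
 * the thresholds are invented; the point is only that policy is
 * separable from dissemination.
 */
#include <stdio.h>

enum verdict { PASS, TAG, DROP };

static enum verdict policy(int badness)
{
    if (badness >= 80) return DROP;     /* confident: block outright */
    if (badness >= 40) return TAG;      /* uncertain: mark it and let */
    return PASS;                        /* a downstream consumer decide */
}

int main(void)
{
    const char *names[] = { "pass", "tag", "drop" };
    int scores[] = { 10, 55, 95 };
    int i;

    for (i = 0; i < 3; i++)
        printf("score %2d -> %s\n", scores[i], names[policy(scores[i])]);
    return 0;
}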
Hi,

Is not the other danger that any anti-DoS measure is likely to fail unless it is unified across a significant percentage of the Internet's ASes, so as to block the botnet client machines at the source, within their home AS? Once the army has been amassed and has escaped its home ASes, it can launch an attack against any target or bit of infrastructure that is proving effective or annoying in the disruption of the cyber criminals' commercial activities.

Or, to ask the question another way: would the low percentage of attacks on backbone infrastructure increase if the infrastructure started effectively blocking attacks rather than completing them by null routing the target? If commercial dollars are being paid to the ISP to prevent DoS, surely the ISP then becomes an extortion target as well rather than just the end customer site. In a way it's a bit similar to a protection racket, in that as long as the ISP completes attacks rather than blocks them, it is in the attackers' interests to leave the infrastructure alone to a large degree.

Anything less than a solution that sorts the botnet problem at the source AS will end up being like King Canute against a monster Hawaiian-style wave of vengeful packets. And consequently the ISPs are faced with little alternative than to go along with the status quo rather than stick their heads above the parapet and end up on the receiving end. Black hole routing is easy and effective; source identification / traffic scrubbing is expensive.

Thoughts?

Ben
On Jan 29, 2008 7:14 AM, Ben Butler <ben.butler@c2internet.net> wrote:
Or, to ask the question another way: would the low percentage of attacks on backbone infrastructure increase if the infrastructure started effectively blocking attacks rather than completing them by null routing the target? If commercial dollars are being paid to the ISP to prevent DoS...
So first off you might consider where the 'null route' is applied, and in which cases it's used versus other sorts of techniques. There are many, many cases every day of things that get null routed due to their being the destination of a DoS/DDoS attack. In those cases it's almost always a completely useless thing that the end user doesn't even care about, so just stopping the flood is more important than any other solution. The cases of larger/more-important things being attacked get handled in other, more complex ways (ACLs, mitigation platforms/scrubbers/etc.).
surely the ISP then becomes an extortion target as well rather than just the end customer site.
no, not really. Sometimes the upstream devices get packet-love, but that's not difficult to fix either... who needs their internal infrastructure reachable by the external world?

See work on infrastructure ACLs by james gill @ vzb, darrel lewis @ cisco, paul quinn @ cisco, barry greene @ cisco, and others, plus the new book by Greg Schudel @ cisco:

<http://www.ciscopress.com/bookstore/product.asp?isbn=1587053365>

Note that I haven't looked at the book, but it seems to cover some of this.
In a way it's a bit similar to a protection racket, in that as long as the ISP completes attacks rather than blocks them, it is in the attackers' interests to leave the infrastructure alone to a large degree.
or it's in their interest because their monetary flow comes across those same pipes... so turning off the intertubes is contrary to their goals. (See presentations by Team Cymru on this topic, actually.)
Black hole routing is easy and effective; source identification / traffic scrubbing is expensive.
The distinction between blackhole routing and scrubbing that you draw is overly simplistic. If you are a UUNET/VerizonBusiness customer (or Sprint or AT&T, though I can't easily find their links...):

<http://www.verizonbusiness.com/products/security/managed/#services-dos>

Yours for the low-low price of 3250/month, which is well worth it if you have an ecommerce site with any decent revenue draw. The folks at UUNET/VZB will even do things aside from a null route if you have issues and are their customer; all you have to do is call and ask them for assistance when problems arise. Some of that is described at:

<http://www.verizonbusiness.com/terms/us/products/internet/sla/>

(I had to google search this; VZ's website isn't so helpful on finding information....)

-Chris
participants (10)

- Andrew D Kirch
- Ben Butler
- Christopher Morrow
- Edward B. DREGER
- Jason J. W. Williams
- Jim Popovitch
- Joel Jaeggli
- Patrick W. Gilmore
- Tomas L. Byrnes
- Valdis.Kletnieks@vt.edu