2009 Worldwide Infrastructure Security Report available for download.
[Apologies for any duplication if you've seen this notification on other lists.]

We've just posted the 2009 Worldwide Infrastructure Security Report for download at this URL:

<http://www.arbornetworks.com/report>

This year's WWISR is based upon the broadest set of survey data collected by Arbor to date, with the number of respondents doubling from 66 to 132 and much greater input from non-USA/non-EMEA regional providers. The WWISR is based upon input from the global operational community and, as such, is unique in its focus on the operational security aspects of public-facing networks.

Many of you contributed to the survey which forms the foundation of the report; as always, we're grateful for your insight and participation, and we welcome your feedback and comments.

Thanks much!

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>

Injustice is relatively easy to bear; what stings is justice.

                -- H.L. Mencken
-----Original Message-----
From: Dobbins, Roland [mailto:rdobbins@arbor.net]
Sent: Wednesday, January 20, 2010 9:17 AM
To: NANOG list
Subject: 2009 Worldwide Infrastructure Security Report available for download.
[Apologies for any duplication if you've seen this notification on other lists.]
We've just posted the 2009 Worldwide Infrastructure Security Report for download at this URL:
<http://www.arbornetworks.com/report>
This year's WWISR is based upon the broadest set of survey data collected by Arbor to date, with the number of respondents doubling from 66 to 132 and much greater input from non-USA/non-EMEA regional providers. The WWISR is based upon input from the global operational community and, as such, is unique in its focus on the operational security aspects of public-facing networks.
Many of you contributed to the survey which forms the foundation of the report; as always, we're grateful for your insight and participation, and welcome your feedback and comments.
Thanks Roland. I'm wondering if you can clarify why 'Figure 1' only goes up to 2008 and states in key findings "This year, providers reported a peak rate of only 49 Gbps". I happen to personally recall looking at ATLAS sometime last year and seeing an ongoing attack that was orders of magnitude larger than that.

It was interesting to see the observation that DDoS attack scale growth has slowed over the past 12 months, including the authors' belief that this is a result of "the upper bounds of IP backbone network capacity (e.g., Nx10 Gbps backbone link rates, awaiting upgrades to 100 Gbps rather than 40 Gbps deployment)". It is expected that 100 Gbps will be quickly adopted this year in order to remove the inefficiencies of Nx10 Gbps LAG bundles, and 10 Gbps is likely to start being adopted at the server level. There is also already talk of Terabit Ethernet sometime around 2015. All of this leads me to believe that attack size will likely increase again as these technologies become more widely deployed.

An interesting observation was the decrease in the use of flow-based tools, and the corresponding increase in the use of things like SNMP tools, DPI, and customer calls for attack detection. Surely this must have been a factor of a larger respondent pool... I'd really like to think people aren't opting not to use flow-based tools in favor of receiving customer calls :(

Completely agree on the disturbing observation of the increase in rate-limiting as a primary mitigation mechanism for dealing with DDoS. I've seen more and more people using this as a mitigation strategy, against my advice. For anyone interested in more information on the topic, and why rate-limiting is akin to cutting your foot off, I highly recommend you take a look at the paper "Effectiveness of Rate-Limiting in Mitigating Flooding DoS Attacks" presented by Jarmo Molsa at the Third IASTED International Conference.

It's nice that the report includes respondent organization types, but what I'd really like to see is the number of attacks broken down by industry. I think this would go a long way towards allowing companies to better quantify their risk score and associated spend based on their industry.

Otherwise, really good stuff. Thanks for sharing!

Stefan Fouant, CISSP, JNCIE-M/T
www.shortestpathfirst.net
GPG Key ID: 0xB5E3803D
On Wed, 20 Jan 2010, Stefan Fouant wrote:
Completely agree on the disturbing observation of the increase in rate-limiting as a primary mitigation mechanism for dealing with DDoS. I've seen more and more people using this as a mitigation strategy, against my advice. For anyone interested in more information on the topic, and why rate-limiting is akin to cutting your foot off, I highly recommend you take a look at the paper "Effectiveness of Rate-Limiting in Mitigating Flooding DoS Attacks" presented by Jarmo Molsa at the Third IASTED International Conference.
Thanks to Arbor for collecting the report and your observations.

One thing I found extremely strange is that almost 50% report they use BCP38/Strict uRPF at peering edge, yet only about 33% use it in customer direction. (Figure 13, p20)

I wonder if peering edge refers to "drop your own addresses" or real strict uRPF (or the like)?

If not, I'm curious whether this is for real, and how on earth they're doing it, especially given that Figure 15 (p22) shows they don't implement BGP prefix filtering. If you can't filter BGP, how could you filter packets? Based on my experience, even if you filter BGP, you may not be able to filter packets except in simple scenarios.

-- 
Pekka Savola                 "You each name yourselves king, yet the
Netcore Oy                    kingdom bleeds."
Systems. Networks. Security.  -- George R.R. Martin: A Clash of Kings
Pekka Savola wrote:
On Wed, 20 Jan 2010, Stefan Fouant wrote:
Completely agree on the disturbing observation of the increase in rate-limiting as a primary mitigation mechanism for dealing with DDoS. I've seen more and more people using this as a mitigation strategy, against my advice. For anyone interested in more information on the topic, and why rate-limiting is akin to cutting your foot off, I highly recommend you take a look at the paper "Effectiveness of Rate-Limiting in Mitigating Flooding DoS Attacks" presented by Jarmo Molsa at the Third IASTED International Conference.
Thanks to Arbor for collecting the report and your observations.
Indeed.
One thing I found extremely strange is that almost 50% report they use BCP38/Strict uRPF at peering edge, yet only about 33% use it in customer direction. (Figure 13, p20)
I wonder if peering edge refers to "drop your own addresses" or real strict uRPF (or the like)?
Depends. It can mean that, dropping BOGONs, and dropping any other prefix you want your edge to discard. I would imagine that it would be difficult to use strict uRPF on a peering interface, though, as packets through that peer may be received on a different interface than the one they were sent on (in a multi-homing situation).

I do strict uRPF for any directly connected clients (SDSL, fibre, collocation etc.) that are single-homed. It's literally one command on a router interface that is connected to the switch (subnet) of aggregated clients.

For our clients that multi-home into two of our different edge gear via BGP, I use loose uRPF. This allows fail-over without packets being dropped. In some multi-homed client cases, I can get away with using strict. This is possible in situations where a client has one high-bandwidth link and one low-bandwidth link in a fail-over-only configuration. If BGP is set up correctly, the secondary link will never be used until the primary goes down. All packets are sent/received on the only interface in the network that knows about the client prefix, so strict works. If the primary fails, the secondary takes over completely, so again, strict works.

Loose uRPF allows a packet to come in on any interface, as long as some valid route to the source exists (and you can even allow the default route). This seems counter-intuitive; however, the important point to note is that once uRPF is enabled, even in loose mode, it effectively allows you to drop based on source address when combined with RTBH on any interface it is configured on.
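For reference, on Cisco IOS-style gear the two modes look roughly like this (a minimal sketch only; the interface names and descriptions are illustrative, and other vendors have equivalent knobs):

  ! Strict mode on a single-homed customer aggregation interface:
  ! a packet is accepted only if the best route back to its source
  ! points out this same interface.
  interface GigabitEthernet0/1
   description single-homed customer aggregation
   ip verify unicast source reachable-via rx

  ! Loose mode facing multi-homed BGP customers: a packet is
  ! accepted if any route to its source exists in the FIB.
  interface GigabitEthernet0/2
   description multi-homed BGP customers
   ip verify unicast source reachable-via any

In loose mode, the only sources that fail the check are those with no route at all, or whose route resolves to a null adjacency, which is exactly what makes the S/RTBH setup described below work.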
If not, I'm curious whether this is for real, and how on earth they're doing it, especially given that Figure 15 (p22) shows they don't implement BGP prefix filtering. If you can't filter BGP, how could you filter packets? Based on my experience, even if you filter BGP, you may not be able to filter packets except in simple scenarios.
This isn't about packet 'filtering', it's about 'dropping' (or sinking). Essentially, in a uRPF [S/]RTBH setup, your edge routers are configured with routes that point to a special address that is destined (eventually) to null. Usually this is automated: the routes are sent to the edge via a 'trigger' box. When a packet comes in (or attempts to go out) on an interface configured with uRPF, the system treats the null route as best path and discards (or forwards) the packet accordingly.

This setup does not require you to have ANY eBGP whatsoever, and it also works in deployments where all of your eBGP peers are sending only a default route. As long as you have iBGP to all edge devices, this setup is pretty trivial to configure. Throw in a Team Cymru route-server peering on your trigger box, and you've automated BOGON management network-wide.

I don't think I explained this very clearly (hopefully it was accurate... it is early in the morning ;). Here is a decent 'howto':

http://www.packetlife.net/blog/2009/jul/6/remotely-triggered-black-hole-rtbh...

Steve
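P.S. For concreteness, the trigger-side plumbing looks something like this on IOS-style gear (a rough sketch under stated assumptions: the ASN, tag value, and addresses are placeholders, and 192.0.2.1 is just a conventional unused next-hop drawn from TEST-NET-1):

  ! On every edge router: a static discard route for the blackhole
  ! next-hop. Loose uRPF is already enabled on the edge interfaces
  ! (ip verify unicast source reachable-via any).
  ip route 192.0.2.1 255.255.255.255 Null0

  ! On the trigger box: redistribute tagged statics into iBGP,
  ! rewriting the next-hop to the blackhole address (iBGP peers
  ! need send-community configured for the community to propagate).
  route-map RTBH-TRIGGER permit 10
   match tag 666
   set ip next-hop 192.0.2.1
   set community no-export
  !
  router bgp 64500
   redistribute static route-map RTBH-TRIGGER

  ! To drop all traffic *sourced from* 198.51.100.7 (S/RTBH),
  ! enter a single static route on the trigger box:
  ip route 198.51.100.7 255.255.255.255 Null0 tag 666

Every edge router then learns 198.51.100.7/32 with next-hop 192.0.2.1, which resolves to Null0; loose uRPF fails the source lookup, and packets from that address are dropped at the edge, no eBGP required.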
On Jan 21, 2010, at 4:34 AM, Pekka Savola wrote:
Thanks to Arbor for collecting the report and your observations.
One thing I found extremely strange is that almost 50% report they use BCP38/Strict uRPF at peering edge, yet only about 33% use it in customer direction. (Figure 13, p20)
Ahh, so, it turns out some of the colors in a few of the charts (including Figure 13) were transposed (that's what I get for just looking at the numbers in the draft), making several of the findings seem a bit strange. This has since been corrected and should clarify your concern. Thanks, Pekka (and several others), for pointing this out.

-danny
On Jan 20, 2010, at 8:32 AM, Stefan Fouant wrote:
I'm wondering if you can clarify why 'Figure 1' only goes up to 2008 and states in key findings "This year, providers reported a peak rate of only 49 Gbps". I happen to personally recall looking at ATLAS sometime last year and seeing an ongoing attack that was orders of magnitude larger than that.
That was an error in the chart (which has since been corrected); it should have illustrated that 2009 respondents indicated 49 Gbps was the largest observed attack. FWIW, I've seen empirical evidence supporting much larger attacks (~82 Gbps), and the Akamai folks recently indicated they'd seen attacks on the order of 120 Gbps towards a single target. However, these attacks were NOT expressly reflected in survey feedback, and were therefore not included in the report.
An interesting observation was the decrease in the use of flow-based tools, and the corresponding increase in the use of things like SNMP tools, DPI, and customer calls for attack detection. Surely this must have been a factor of a larger respondent pool... I'd really like to think people aren't opting not to use flow-based tools in favor of receiving customer calls :(
Yep, I think this is simply an artifact of a larger respondent pool size, with many smaller respondents being represented.

-danny
On Jan 22, 2010, at 8:08 AM, Danny McPherson wrote:
Yep, I think this is simply an artifact of a larger respondent pool size, with many smaller respondents being represented.
Correct, as noted in the text, the change in survey demographics appears to be the cause of this shift.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>

Injustice is relatively easy to bear; what stings is justice.

                -- H.L. Mencken
participants (5)
- Danny McPherson
- Dobbins, Roland
- Pekka Savola
- Stefan Fouant
- Steve Bertrand