We have some DDoS-sensitive customers asking us to refer them to the best ISPs for "in-the-core" DDoS defense. Other than UUnet (hi Chris!) and MFN, I'm not aware of any ISPs in North America developing a reputation for consistent DDoS defense. Could folks contact me either off-list or on-list?

It seems that large content providers and Tier2/3 bandwidth buyers would do well to collaborate on group RFPs for this sort of thing, to send the message to ISPs that it is something worth investing in (dare I say productizing?). While UUnet's detection/blocking is great, it would be wonderful to see some more intelligent filtering of DDoS traffic, a la Riverhead or a similar approach that doesn't completely blackhole victim IPs.

Cheers,
-Lane
Equinix
On Tue, Jul 29, 2003 at 04:33:28PM -0700, Lane Patterson wrote: [ obnoxious text wordwrapped :) ]
We have some DDoS-sensitive customers asking us to refer them to the best ISPs for "in-the-core" DDoS defense. Other than UUnet (hi Chris!) and MFN, I'm not aware of any ISPs in North America developing a reputation for consistent DDoS defense. Could folks contact me either off-list or on-list?
It seems that large content providers and Tier2/3 bandwidth buyers would do well to collaborate on group RFPs for this sort of thing, to send the message to ISPs that it is something worth investing in (dare I say productizing?). While UUnet's detection/blocking is great, it would be wonderful to see some more intelligent filtering of DDoS traffic, a la Riverhead or a similar approach that doesn't completely blackhole victim IPs.
Well, there are a few things/issues here. One is the "security" of such filtering. As many times as it's come up here saying "Filter your customers, it's important," how many people out there have a strict policy for filtering them? Would you want these same customers and providers that cannot get the filtering right in the first place to have the ability to accidentally (or intentionally) leak a blackhole route to your larger network? Yes, there is the ability to log BGP updates for accountability, among other things, but the more serious issue is that people are not doing effective filtering [of announcements] in the first place.

As far as I can tell these days, the US depends on the Internet as a utility: always-on, and with (for the most part) sufficient interconnection that the choice among the top few providers isn't so much a technical decision as a financial one. (There is no need to connect to MCI, Sprint and UUNet each to avoid the peering congestion points, as in the past.) Equinix itself is demonstrating this with the "change providers monthly" service that you offer. I think it will be some time before there is adoption of this across most networks.

We want people to contact our security team instead of relying on "blackhole and forget" type solutions. If someone abuses the PSTN or other networks, they will eventually get their service terminated. If people abuse their access by launching DoS attacks, we need to catch them and get their access terminated. It's a bit harder to trace than the PSTN (or other networks), but I feel it is worth doing.

- Jared

-- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
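A guarded version of such a customer-triggered blackhole service, for contrast with "leak a route and hope," might look roughly like the sketch below (Cisco IOS syntax; the community value and prefixes are hypothetical). The point is that the blackhole community is honored only for space the customer is registered to announce, so a leaked or fat-fingered announcement cannot null-route anyone else's prefixes:

    ! Accept the blackhole community only for /32s inside the
    ! customer's own registered block.
    ip community-list 1 permit 65000:666
    ip prefix-list CUST-ROUTES permit 203.0.113.0/24
    ip prefix-list CUST-BLACKHOLE permit 203.0.113.0/24 ge 32
    !
    route-map CUST-IN permit 10
     match ip address prefix-list CUST-BLACKHOLE
     match community 1
     set ip next-hop 192.0.2.1
     set community no-export
    route-map CUST-IN permit 20
     match ip address prefix-list CUST-ROUTES
    !
    ! 192.0.2.1 is statically pointed at the bit bucket on every router,
    ! so anything whose next-hop resolves to it is discarded.
    ip route 192.0.2.1 255.255.255.255 Null0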
At 10:58 AM 30/07/2003 -0400, Jared Mauch wrote:
If someone abuses the PSTN or other networks, they will eventually get their service terminated. If people abuse their access by launching DoS attacks, we need to catch them and get their access
Gee, wouldn't that be nice. Having personally dealt with one that had ~500 hosts involved on several dozen networks, I can confirm that of all the repeated pleas for help to said networks to track down the controlling party, I had a grand total of ONE (yes, 1 as in one above zero) who actually responded with anything beyond the auto-responders.... And that was to let me know that the user in question had already formatted their hard drive before the admin could see what was on the machine and who might have been controlling the machine.

It took several _weeks_ for all the attacking hosts to be killed off, with several reminder messages to various networks. So I don't hold much optimism for actually tracking down the actual attacker.

---Mike
terminated. It's a bit harder to trace than the PSTN (or other networks), but I feel it is worth doing.
- Jared
-- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
On Wed, Jul 30, 2003 at 02:43:16PM -0400, Mike Tancsa wrote:
At 10:58 AM 30/07/2003 -0400, Jared Mauch wrote:
If someone abuses the PSTN or other networks, they will eventually get their service terminated. If people abuse their access by launching DoS attacks, we need to catch them and get their access
Gee, wouldn't that be nice. Having personally dealt with one that had ~500 hosts involved on several dozen networks, I can confirm that of all the repeated pleas for help to said networks to track down the controlling party, I had a grand total of ONE (yes, 1 as in one above zero) who actually responded with anything beyond the auto-responders.... And that was to let me know that the user in question had already formatted their hard drive before the admin could see what was on the machine and who might have been controlling the machine.
It took several _weeks_ for all the attacking hosts to be killed off, with several reminder messages to various networks. So I don't hold much optimism for actually tracking down the actual attacker.
While I can have sympathy for this situation, you removed my argument about the "DoS and forget". Let's say I am running www.example.com. I have it load-shared across a series of 5-10 machines, and they all get DoS attacked via some worm, etc. (a la www1.whitehouse.gov) with a large set of traffic. I can't just deem that IP unusable on my ARIN justification and have my providers absorb the cost of the traffic at zero cost to me or them. (Well, unless they're getting the traffic on a customer link and want to continue billing at that bandwidth overage rate ;-) ) The router ports my upstream has invested in (for peering) and the circuits for their network have a cost.

If an attack lasts 10 minutes, yes, the blackhole is easy to remove, but what if it is coded to follow DNS entries, honor TTLs, and continue to pound on devices? You can't just submit a route/form/whatnot to your provider and have them leave a null0/discard route in indefinitely.

I'm sorry you had poor luck tracking them down, but without the providers putting in the access controls necessary to prevent route-leak misconfiguration, I don't want to think about the instability you (or others) are speaking of introducing if there is the ability to distribute a null0 route to your upstream and accidentally leak it. (Sorry, LINX members, but ..) You should see the number of people who post to the LINX ops list a month saying "whoops, we leaked routes, can you clear your max prefix counters?" Imagine someone accidentally leaking your routes to their upstream and tagging them with the community due to misconfiguration.

- Jared
terminated. It's a bit harder to trace than the PSTN (or other networks), but I feel it is worth doing.
- Jared
-- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
-- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
At 03:19 PM 30/07/2003 -0400, Jared Mauch wrote:
On Wed, Jul 30, 2003 at 02:43:16PM -0400, Mike Tancsa wrote:
At 10:58 AM 30/07/2003 -0400, Jared Mauch wrote:
If someone abuses the PSTN or other networks, they will eventually get their service terminated. If people abuse their access by launching DoS attacks, we need to catch them and get their access
Gee, wouldn't that be nice. Having personally dealt with one that had ~500 hosts involved on several dozen networks, I can confirm that of all the repeated pleas for help to said networks to track down the controlling party, I had a grand total of ONE (yes, 1 as in one above zero) who actually responded with anything beyond the auto-responders.... And that was to let me know that the user in question had already formatted their hard drive before the admin could see what was on the machine and who might have been controlling the machine.
It took several _weeks_ for all the attacking hosts to be killed off, with several reminder messages to various networks. So I don't hold much optimism for actually tracking down the actual attacker.
While I can have sympathy for this situation, you removed my argument about the "DoS and forget".
I understand the point you are making, but I am speaking just to the side comment you made, "we need to catch them and get their access." I totally agree with you. But based on my recent experiences with organizational responses, it seems NO ONE agrees with it in practice. All the discussion around DDoSes seems to center on ways of coping with them, or mitigating the effects and not making 'the solutions worse than the problem.' However, there does not seem to be enough discussion and effort going into catching and prosecuting the people doing it. I would be happy with at least the "catching" part.

I recall one of our users was involved in a DoS once a few years back when the "giant pings" could crash MS boxes. The fact that his perceived anonymity was removed was enough to keep him from repeating his attacks....

---Mike
On Wed, 30 Jul 2003, Mike Tancsa wrote:
I recall one of our users was involved in a DoS once a few years back when the "giant pings" could crash MS boxes. The fact that his perceived anonymity was removed was enough to keep him from repeating his attacks....
That's the heart of the problem. Anyone who's owned enough boxes can sit there happily running a DDoS anonymously against a target because:

1) The OS/software/default settings for a lot of internet-connected machines are weak, making it easy to attack from multiple locations.

2) A lot of networks have no customer or egress filtering, which makes DDoS traffic a lot more difficult to trace because it generally uses faked source addresses.

If these issues are addressed then it becomes a lot harder to remain anonymous, and starting DDoS attacks against targets that can trace you becomes a lot less attractive.

Cheers, Rich
On Wed, 30 Jul 2003 variable@ednet.co.uk wrote:
On Wed, 30 Jul 2003, Mike Tancsa wrote:
I recall one of our users was involved in a DoS once a few years back when the "giant pings" could crash MS boxes. The fact that his perceived anonymity was removed was enough to keep him from repeating his attacks....
If these issues are addressed then it becomes a lot harder to remain anonymous, and starting DDoS attacks against targets that can trace you becomes a lot less attractive.
Sure, trace my attacks to the linux box at UW, I didn't spoof the flood and you can prove I did the attacking how? You can't, because I and 7 other hackers are all fighting each other over ownership of the poor UW student schlep's computer...

The problem isn't the network, nor the filtering/lack-of-filtering; it's a basic end host security problem. Until that is resolved, the ability of attackers to own boxes in remote locations and use them for malfeasance will continue to haunt us. I would guess that the other owners of the machines attacking Mike (assuming they got the emails he sent... big assumption) probably said: "Great, another person getting attacked from that joker's win2k machine, hurray :(" and moved on about their business. They know that they can't get the end user to secure their machine, and they know that if they get him/her to reload the OS or 'clean' it of the 'virus' the problem will arise anew within 17 minutes :(

I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
Cheers,
Rich
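For scale, a quick back-of-the-envelope check of the numbers in Chris's last paragraph: 1000 hosts x 2 packets/sec x 1500 bytes x 8 bits/byte = 24 Mbit/s of flood -- roughly fifteen T1s (1.544 Mbit/s each) and more than half a T3 (~45 Mbit/s). Even small, default-size pings from that many unspoofed hosts are enough to swamp a T1.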
At 10:37 PM 30/07/2003 +0000, Christopher L. Morrow wrote:
Sure, trace my attacks to the linux box at UW, I didn't spoof the flood and you can prove I did the attacking how?
You can at least TRY to see where the controlling traffic stream is originating from. I.e., if crap is coming out of box X, all the effort is spent on dealing with the spew coming from X through clever filtering and null routing, rather than on trying to figure out who is controlling X. Good grief, is it really that difficult to put on an ACL to log inbound TCP setup connections to the attacking host?

"Proof" in a legal sense is probably impossible if it's some kid in Kiev, and highly cost-prohibitive if it's some kid in Boston and you are in New York. But you know what, the odds are it is from a western country, and odds are it's not some politically motivated attack; it's some kid emboldened by the anonymity of the Internet, pissed off that someone questioned his manhood on IRC, who decides to take it out via some ego-enlarging attack.

In the cases we have dealt with where it was one of our customers, contacting the parents and explaining that what was being done was against the law was enough to stop the kid from continuing. Even when the attacker was an adult, talking to the person and explaining it's against our AUP and against the law was, in our cases, enough to stop the person. It's amazing how compliant and timid darksith2999@hushmail.com becomes when you talk to Joe-Brown@we-know-where-you-live.

Are all these incidents bored teenage kids? No. But I would put money on it that the majority are. Really, how many of the very clever hackers you know are involved in DDoS attacks?
You can't, because I and 7 other hackers are all fighting each other over ownership of the poor UW student schlep's computer...
Great, so of the 7 inbound streams, what effort is it to identify the IP address? In our case:

    ipfw add 20 count log tcp from any to x.x.x.x setup

Will it always work? No. But it will catch more attackers than clever routing and filtering, as that just copes with the issue and does nothing to deal with it.
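A slightly fuller sketch of that approach, for a FreeBSD box in the path of the compromised host (the zombie address 198.51.100.7 and the rule numbers are placeholders; logging assumes the usual ipfw verbose/syslog setup):

    # Count and log TCP connection attempts *to* the compromised box --
    # the controlling session has to reach it somehow:
    ipfw add 20 count log tcp from any to 198.51.100.7 setup
    # The control channel is often an outbound IRC connection from the
    # bot itself, so log what it initiates as well:
    ipfw add 21 count log tcp from 198.51.100.7 to any setup
    # Review the counters, then the logged setups in syslog
    # (commonly /var/log/security on FreeBSD):
    ipfw -a list
    grep ipfw /var/log/security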
The problem isn't the network, nor the filtering/lack-of-filtering; it's a basic end host security problem.
I would say all have some responsibility. It's not just an end user problem, and it's not just a network operator problem. I would say a DDoS would violate everyone's AUP on this list, no? If you choose not to enforce your AUP, how are you not responsible? This is like the cops saying, "People are going to drive drunk and do stoooopid things. We can't stop them from doing this, so we give up."
Until that is resolved, the ability of attackers to own boxes in remote locations and use them for malfeasance will continue to haunt us. I would guess that the other owners of the machines attacking Mike (assuming they got the emails he sent...
I sent email to the listed abuse contacts first. If that bounced (as it did with several Korean networks) I contacted the AS or RADB contacts. I even contacted the APNIC registrar to inform them that all contacts bounced for one of the Korean ISPs. I then asked a Korean friend to look around the website for a "real person" and emailed that address. But the majority of the infected hosts were (surprise, surprise) in the largest networks, e.g. AT&T, TW, Comcast, colo providers, and other resi broadband providers in Japan, Korea and Canada. Not because they have the lion's share of dumb users, but because they have the lion's share of users, period. Almost all had auto-responders saying "if spam, email here, if network abuse, email here"... If it was a different address, I then re-sent the complaints to the address instructed.
big assumption) probably said: "Great, another person getting attacked from that joker's win2k machine, hurray :(" and moved on about their business.
We don't do this. If a customer host is infected with a virus/worm or is used in an attack, we contact the customer. If they don't do anything or choose to ignore us, we cut them off.
I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
And kids will continue to attack / cause problems with impunity when there are no consequences for their actions. If network operators would enforce their AUPs, I think we would go a long way to reduce these types of headaches. This starts with putting *some* effort into identifying the controlling source. ---Mike
] Sure, trace my attacks to the linux box at UW, I didn't spoof the flood
] and you can prove I did the attacking how? You can't, because I and 7
] other hackers are all fighting each other over ownership of the poor UW
] student schlep's computer...

Only seven? Must be a lame box. :)

-- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
On Wed, 30 Jul 2003, Rob Thomas wrote:
] Sure, trace my attacks to the linux box at UW, I didn't spoof the flood
] and you can prove I did the attacking how? You can't, because I and 7
] other hackers are all fighting each other over ownership of the poor UW
] student schlep's computer...
Only seven? Must be a lame box. :)
it was at UW and that damned computer security guy, old Mr. What's-His-Name-Dietrich was watching :)
On Wed, 30 Jul 2003, Christopher L. Morrow wrote:
Sure, trace my attacks to the linux box at UW, I didn't spoof the flood and you can prove I did the attacking how? You can't, because I and 7 other hackers are all fighting each other over ownership of the poor UW student schlep's computer...
You're quite right. This only means we'll be able to: 1) Stop the attack more quickly. 2) Alert the admins of the box that it's owned so that they can fix it and begin tracing how it happened.
I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
Agreed, we all (network providers, router vendors, software vendors and end users) need to be working together to solve this problem. There is no magic bullet. Rich
CLM> Date: Wed, 30 Jul 2003 22:37:21 +0000 (GMT)
CLM> From: Christopher L. Morrow

CLM> The problem isn't the network, nor the filtering /
CLM> lack-of-filtering; it's a basic end host security problem.

Beyond basic filtering, it's a whack-a-mole to deal with rogue systems. Until the pain of having such a system is a sufficient barrier (or the reward for being good is sufficient motivation), will there be change? Who should be held accountable for vulnerable boxen? IANAL, but automobile vendors have recall notices...

Eddy

-- Brotsman & Dreger, Inc. - EverQuick Internet Division Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 785 865 5885 Lawrence and [inter]national Phone: +1 316 794 8922 Wichita

_________________________________________________________________
DO NOT send mail to the following addresses : blacklist@brics.com -or- alfra@intc.net -or- curbjmp@intc.net Sending mail to spambait addresses is a great way to get blocked.
On Sat, 2 Aug 2003, E.B. Dreger wrote:
CLM> Date: Wed, 30 Jul 2003 22:37:21 +0000 (GMT) CLM> From: Christopher L. Morrow
CLM> The problem isn't the network, nor the filtering /
CLM> lack-of-filtering; it's a basic end host security problem.
Beyond basic filtering, it's a whack-a-mole to deal with rogue systems. Until the pain of having such a system is a sufficient
unless the rogue systems are secure out of the box... not every OS is, but certainly there has been progress in this arena. Take simple examples like OpenBSD and RedHat Linux (or most other Linuxes, really), and some non-free OSes have also adopted a more 'secure'-by-default configuration recently.
barrier (or reward for being good is sufficient motivation), will there be change? Who should be held accountable for vulnerable boxen?
I believe the vendor should, but my opinion matters not :) The lawyers and congress folks (or someone like that) need to get a little more mad about their 'critical infrastructure' and how vulnerable it is due to negligence and incompetence, or at least a criminal level of naivety...
IANAL, but automobile vendors have recall notices...
mandated by federal regulations inside the US (at least)... perhaps you want this for vendors also?
CLM> Date: Sat, 2 Aug 2003 02:45:29 +0000 (GMT)
CLM> From: Christopher L. Morrow

CLM> EBD> Who should be held accountable for vulnerable boxen?
CLM>
CLM> I believe the vendor should, but my opinion matters not :)

I agree. It stinks when cutting code, knowing that _some_ competitor is slinging out crap... they're cutting corners, and won't be held accountable -- at least in the short term. This hurts the entire industry. Sort of like deaggregating routes, not helping track down and shut down spammers and abusers, et cetera... cut corners, cut costs, and hurt the entire industry.

CLM> The lawyers and congress folks (or someone like that) need
CLM> to get a little more mad about their 'critical
CLM> infrastructure' and how vulnerable it is due to negligence
CLM> and incompetence, or at least a criminal level of naivety...

Exactly.

CLM> > IANAL, but automobile vendors have recall notices...
CLM>
CLM> mandated by federal regulations inside the US (at least)...
CLM> perhaps you want this for vendors also?

Something like that. Notification currently is decent, but lacks teeth. I think vendors and admins should be required to follow certain procedures to qualify for liability limitations.

Eddy
I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
Do you believe backbone networks should do nothing?
On Mon, Aug 04, 2003 at 05:28:07PM -0400, bdragon@gweep.net wrote:
I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
Do you believe backbone networks should do nothing?
I'm not sure what you are saying here; backbones do do something. The problem is that it's easy to fill up a T1. *really* easy. Just grab a few smurf amps and you can do it in a few seconds if you can send spoofed traffic. Or compromise a machine in a colo and type ping -f <foo>. The backbones can't do much about this: if someone is within their burstable (or purchased) bandwidth, how are they to know that this traffic is not legitimate? There will always be "i've got bigger pipes than you" issues such as this.

So you need hosts (and routers) to be secured such that they can't be compromised. The *nix installations have been moving towards this over time. Note that RedHat no longer allows inbound connections by default in RH9 on anything; it uses iptables to drop all this traffic. Much different from the 3.0.3 days, where you got your INN server, mars-nwe, etc. all installed, so you had a whole plethora of things that could be compromised, as compared to now. The *BSD unices have also been securing themselves slowly over time as well; bind and sendmail no longer run as root very long in their default configurations (other than to bind to the ports), and there are other limitations being added as well. I won't speak for Washington State based companies and their default security profiles and what (little) has been done to shift those during the same timeframe.

I'm just hoping that people do change the mentality as follows: you have to know how to turn the service on to open the ports. This tends to mean that you know what you're doing in the first place, or have done it on purpose and (might) have an idea of the security implications of enabling such a service. While this may not always hold true, it does possibly shift some of the liability onto the end-user: you enabled it, you got rooted via it, you should have known to keep it updated. This also means that if you don't do anything, you by default are not listening on ports 135-139, 445, etc. to get compromised, winpopup spam, etc. It would allow the enterprise people to enable things as necessary when they do their default template installs as well... and everyone becomes happy.

- jared

-- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
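A minimal sketch of that default-closed stance in iptables terms -- an illustration of the idea, not RH9's literal shipped ruleset:

    # Drop unsolicited inbound connections by default; replies to
    # traffic the host itself initiated are still allowed.
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Nothing listens to the world -- no 135-139/445, no forgotten
    # INN server -- until the user opens a port deliberately, e.g.:
    # iptables -A INPUT -p tcp --dport 22 -j ACCEPT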
On Mon, Aug 04, 2003 at 05:28:07PM -0400, bdragon@gweep.net wrote:
I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
Do you believe backbone networks should do nothing?
I'm not sure what you are saying here; backbones do do something. The problem is that it's easy to fill up a T1. *really* easy.
I was asking about Chris's use of "having end networks implement proper source filtering" implying that backbones should not do so.
On Mon, 4 Aug 2003 bdragon@gweep.net wrote:
On Mon, Aug 04, 2003 at 05:28:07PM -0400, bdragon@gweep.net wrote:
I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
Do you believe backbone networks should do nothing?
I'm not sure what you are saying here; backbones do do something. The problem is that it's easy to fill up a T1. *really* easy.
I was asking about Chris's use of "having end networks implement proper source filtering" implying that backbones should not do so.
There are many cases in which the backbone can't determine the 'proper' traffic an edge is sending in. Not to mention the problems of filtering on an edge device with hundreds or thousands of interfaces. The proper and simple place for this filtering is as close to the end device as possible. Backbones just aren't made to filter traffic; edge networks are.
chris@UU.NET ("Christopher L. Morrow") writes:
There are many cases in which the backbone can't determine the 'proper' traffic an edge is sending in.
i'd like to discuss these, or see them discussed. networks have edges, even if some networks are "edge networks" and some are "backbone networks." bcp38 talks about various kinds of "loose" rpf, for example not accepting a source for which there's no corresponding nondefault route.
Not to mention the problems of filtering on an edge device with hundreds or thousands of interfaces. The proper and simple place for this filtering is as close to the end device as possible. Backbones just aren't made to filter traffic; edge networks are.
so, the problem is transitive. you might have a multihomed customer whose network spans some of the same peerography as yours, and if you both use hot potato there will be path asymmetry, such that your route back to a source might be through pop A while they deliver that source's traffic to you at pop B. your only recourse is to get them to filter at their edge, which you hope is less ambiguous than yours. but they might have the same situation with their downstream. and you're not requiring them to do edge filtering as a contract/peering term, nor are you requiring them to require their downstreams to do so. this means the problem goes from "transitive" to "laundered" in about one AS hop or so. i don't consider this a healthy situation, and i'd like to hear you list the kinds of rpf you know of and why none can be used on a "backbone". -- Paul Vixie
On Tue, 5 Aug 2003, Paul Vixie wrote:
chris@UU.NET ("Christopher L. Morrow") writes:
There are many cases in which the backbone can't determine the 'proper' traffic an edge is sending in.
i'd like to discuss these, or see them discussed. networks have edges, even if some networks are "edge networks" and some are "backbone networks." bcp38 talks about various kinds of "loose" rpf, for example not accepting a source for which there's no corresponding nondefault route.
Sure, RPF (or something akin to it) is included in BCP38... though there are plenty of equipment vendors whose equipment isn't capable of performing even loose RPF :( Some folks are saddled with such hardware. However, at the LAN edge, almost every piece of hardware is capable of filtering with a very simple ACL (or uRPF, even in strict mode). The default configuration placed on CPE should have this included. This is the right place for these restrictions; beyond the LAN edge, the decisions about routing become much more complex.
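For a hypothetical customer LAN holding 203.0.113.0/24, that "very simple ACL" is just (Cisco IOS syntax):

    ! BCP38 at the LAN edge: permit only sources inside the
    ! customer's own block, and log anything spoofed.
    access-list 110 permit ip 203.0.113.0 0.0.0.255 any
    access-list 110 deny   ip any any log
    !
    interface FastEthernet0/0
     description customer LAN
     ip access-group 110 in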
Not to mention the problems of filtering on an edge device with hundreds or thousands of interfaces. The proper and simple place for this filtering is as close to the end device as possible. Backbones just aren't made to filter traffic; edge networks are.
so, the problem is transitive. you might have a multihomed customer whose
Yes, it's transitive.
but they might have the same situation with their downstream. and you're not requiring them to do edge filtering as a contract/peering term, nor are you requiring them to require their downstreams to do so. this means the
Certainly... I don't write contracts or make sales... though I'm sure someone in the sales department, or likely 'contracts' or 'legal', would be happy to entertain this idea. I can say that the technology is, and has been for 2 years, included in basic customer CPE configs when they are sent out by our install group (for UUNET at least). Other providers might do the same; I've got little visibility into that arena.
problem goes from "transitive" to "laundered" in about one AS hop or so. i don't consider this a healthy situation, and i'd like to hear you list the kinds of rpf you know of and why none can be used on a "backbone".
1) By access-list (strict, this is called by some?):

    ip verify unicast reverse-path <100-199>

Without knowing all the networks the customer has, this has problems :( You don't HAVE to know all the networks your customer owns -- some they might not want transiting your network :(

2) By interface (strict, I'd call this too):

    ip verify unicast source reachable-via rx

This has the same problems as 1... again, perhaps your customer doesn't want to return transit traffic across your network for this?

3) Loose mode:

    ip verify unicast source reachable-via any

This is more workable, though some providers use 1918 space internally, so this won't help those networks, nor will it help if you are using that dead IP space for other purposes :( It's helpful for blackholing sources and getting backscatter though :)

Note: all uRPF variants have the same problem on some platforms -- places where uRPF is done as a process-switched solution are a problem (E0 cards on Cisco 12000s, for example). There are still A LOT of problematic hardware platforms out there, and as near as I can tell this is only really supported on Cisco and Juniper gear, no?

-Chris
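For reference, the three variants above as they would sit in an interface configuration (Cisco syntax; the interface names and the exception-ACL number are placeholders, and uRPF on Cisco requires CEF to be enabled):

    interface Serial1/0
     ! (1) strict uRPF with an exception ACL: packets failing the check
     ! are still permitted if ACL 100 permits them
     ip verify unicast reverse-path 100
    interface Serial1/1
     ! (2) strict uRPF, interface form: the source must be reachable
     ! back out the interface the packet arrived on
     ip verify unicast source reachable-via rx
    interface Serial1/2
     ! (3) loose uRPF: the source need only exist in the routing table
     ip verify unicast source reachable-via any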
On 5 Aug 2003, Paul Vixie wrote:
i'd like to discuss these, or see them discussed. networks have edges, even if some networks are "edge networks" and some are "backbone networks." bcp38 talks about various kinds of "loose" rpf, for example not accepting a source for which there's no corresponding nondefault route.
When I proposed reverse-path filtering in the first place I stated that loose RPF is applicable to multi-homed networks which do not transit. http://www.cctec.com/maillists/nanog/historical/9609/msg00321.html http://www.cctec.com/maillists/nanog/historical/9609/msg00406.html --vadim
On Mon, 4 Aug 2003 bdragon@gweep.net wrote:
On Mon, Aug 04, 2003 at 05:28:07PM -0400, bdragon@gweep.net wrote:
I'm all for raising the bar on attackers and having end networks implement proper source filtering, but even with that, 1000 NT machines pinging 2 packets per second is still enough to destroy a T1 customer, and likely, with 1500-byte packets, a T3 customer as well. You can't stop this without addressing the host security problem...
Do you believe backbone networks should do nothing?
I'm not sure what you are saying here; backbones do do something. The problem is that it's easy to fill up a T1. *really* easy.
I was asking about Chris's use of "having end networks implement proper source filtering" implying that backbones should not do so.
There are many cases in which the backbone can't determine the 'proper' traffic an edge is sending in. Not to mention the problems of filtering on an edge device with hundreds or thousands of interfaces. The proper and simple place for this filtering is as close to the end device as possible. Backbones just aren't made to filter traffic; edge networks are.
Certain "Backbone Networks" _are_ the edge (dialup, single-homed customers, web-hosting) and yet still don't do anything. loose RPF is available on all but the most crippled gear from the major vendors, which I wouldn't want to go advertising that I had nothing but crippled equipment. Certain "Backbone Networks" require their customers to provide them lists of networks, which could certainly be used with a contact leadtime and customer notice for filling in Strict+Acl. Also, you mentioned RFC1918 as it related to loose RPF. Vendor J does linerate acls. Vendor C (with the compiled acls option) does as close to linerate as that gear is ever likely to do. The "my gear can't do these things" excuse is getting quite threadbare at this point. It comes down to not wanting to do these things, and not wanting to do these things just isn't acceptable. As Paul stated, there are requirements one can make of peers and customers. There are requirements one can make of vendors. As some Shoe company has said, "Get out there and _do_ something"
On Tue, 5 Aug 2003 bdragon@gweep.net wrote:
There are requirements one can make of vendors.
These have been made, several times :) In fact there is an IETF working group pushing these requirements now; Mr. Bush could provide the details that have slipped my addled brain.
As some Shoe company has said, "Get out there and _do_ something"
This is also the case, things are being done for most networks...
There are requirements one can make of vendors. These have been made, several times :) In fact there is an IETF working group pushing these requirements now; Mr. Bush could provide the details that have slipped my addled brain.
it is not a wg. but there is a draft being actively worked, see draft-jones-opsec-00.txt.
As some Shoe company has said, "Get out there and _do_ something" This is also the case, things are being done for most networks...
and for those who are not, darwin is a worthy read randy
Randy Bush wrote:
There are requirements one can make of vendors. These have been made, several times :) In fact there is an IETF working group pushing these requirements now; Mr. Bush could provide the details that have slipped my addled brain.
it is not a wg. but there is a draft being actively worked, see draft-jones-opsec-00.txt.
Closing in on the -01 draft... the target was this week, but sleep and USENIX Security (often incompatible) have conspired to slow it down. If you're interested, pull the current draft and subscribe to the mailing list:

    echo "subscribe opsec" | mail majordomo@ops.ietf.org

I'm currently integrating IETF BOF and mailing list feedback, but once -01 is out, I would like feedback from nanog (don't spend *too* many cycles on -00; major changes/additions/section renumbering are coming in -01 "soon").

Thanks,
---George Jones
Hi, NANOGers. Ooooo, you just knew I'd have to chime in eventually. :)

] 1) The OS/software/default settings for a lot of internet-connected
] machines are weak, making it easy to attack from multiple locations.

Yep, quite true. Vulnerable hosts are a commodity, not a scarce resource. There are 728958 entries in my hacked device database since 01 JAN 2003 that attest to this fact.

] 2) A lot of networks have no customer or egress filtering, which makes
] DDoS traffic a lot more difficult to trace because it generally uses
] faked source addresses.

I've tracked 1787 DDoS attacks since 01 JAN 2003. Of that number, only 32 used spoofed sources. I rarely see spoofed attacks now. When a miscreant has 140415 bots (the largest botnet I've seen this year), spoofing the source really isn't a requirement. :|

Filtering the bogons does help, and everyone should perform anti-spoofing in the appropriate places. It isn't, however, a silver bullet.

Thanks, Rob. -- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
Filtering the bogons does help, and everyone should perform anti-spoofing in the appropriate places. It isn't, however, a silver bullet.
it's necessary but not sufficient. but if we knew the source addresses were authentic, then some pressure on the RIRs to make address block holders reachable would yield entirely new echelons of accountability. with the current anonymity of ddos sources, it's not possible to file a class action lawsuit against suppliers of the equipment, or software, or services which make highly damaging ddos's a fact of life for millions of potential class members.

so please focus on "anti-spoofing"'s *necessity* and not on the fact that by itself it won't be sufficient. "anti-spoofing" will enable solutions which are completely beyond consideration at this time. (we'll know the tide has turned when BCP38 certifications for ISPs are available from the equivalent of "big 8" ("big 2" now?) accounting firms, and these certifications are prerequisite to getting BGP set up.)

-- Paul Vixie
I agree with Paul's position on anti-spoofing; without that, you are fighting a losing battle.

Henry R Linneweh

Paul Vixie <vixie@vix.com> wrote:
Filtering the bogons does help, and everyone should perform anti-spoofing in the appropriate places. It isn't, however, a silver bullet.
it's necessary but not sufficient. but if we knew the source addresses were authentic, then some pressure on the RIRs to make address block holders reachable would yield entirely new echelons of accountability. with the current anonymity of ddos sources, it's not possible to file a class action lawsuit against suppliers of the equipment, or software, or services which make highly damaging ddos's a fact of life for millions of potential class members. so please focus on "anti-spoofing"'s *necessity* and not on the fact that by itself it won't be sufficient. "anti-spoofing" will enable solutions which are completely beyond consideration at this time. (we'll know the tide has turned when BCP38 certifications for ISPs are available from the equivalent of "big 8" ("big 2" now?) accounting firms, and these certifications are prerequisite to getting BGP set up.) -- Paul Vixie
Filtering the bogons does help, and everyone should perform anti-spoofing in the appropriate places. It isn't, however, a silver bullet. it's necessary but not sufficient.
anti-spoofing is useful, but vastly insufficient, and hence not necessary
randy
anti-spoofing eliminates certain avenues of attack allowing one to focus on remaining avenues, and hence (as Vix stated) is necessary but not sufficient.
Filtering the bogons does help, and everyone should perform anti-spoofing in the appropriate places. It isn't, however, a silver bullet. it's necessary but not sufficient. anti-spoofing is useful, but vastly insufficient, and hence not necessary anti-spoofing eliminates certain avenues of attack allowing one to focus on remaining avenues, and hence (as Vix stated) is necessary but not sufficient.
it turns 1% of the technical problem into a massive social business problem which, even if it was solvable (which it practically isn't), would also be addressed by technical solutions where no spoofing is involved. but it would provide a lot of fun and soapboxes for wannabe net police and vigilantes. randy
Randy Bush wrote:
anti-spoofing eliminates certain avenues of attack allowing one to focus on remaining avenues, and hence (as Vix stated) is necessary but not sufficient.
it turns 1% of the technical problem into a massive social business problem which, even if it was solvable (which it practically isn't), would also be addressed by technical solutions where no spoofing is involved.
Spoofed packets are harder to trace to the source than non-spoofed packets, and knowing where a malicious packet originates is very important to the process of trying to stop it. Anyone without anti-spoof filtering has no interest in managing their network, keeping it secure, and assisting the Internet as a whole.

Without spoofing, one could take a list of 5,000 IP addresses involved in an attack and say, "These are either compromised or direct attackers," and issue reports to the correct people (with a few scripts). With spoofing, there is no reliable way of knowing if a host is compromised, is the attacker, or is just another IP being spoofed. In such cases, one has to contact each IP owner and find out if spoof protection is enabled. If it is, then that party needs to look into the problem. If not, then it's just another waste of time.

-Jack
On Mon, Aug 04, 2003 at 04:59:53PM -0500, Jack Bates wrote:
one has to contact each IP owner and find out if spoof protection is enabled.
it's worse than that. If they have it enabled (e.g., 10.0.0.0/24 has it enabled) but nobody else does, it allows everyone else to spoof from that prefix, causing large sets of complaints to filter into their mailbox. If we're able to authenticate the sources, then we can presume abuse reports are authentic (aside from address space hijacking issues).

it all comes down to filtering, filtering, filtering.

announcement filtering, anti-spoof filtering, peer filtering.

If you're not doing this, you *SHOULD* be. I know it's hard to do these things in the current business environment. Those of you that can, please take the time to do this. It will make the lives of the rest of us much easier when tracing attacks back.

For those of you that are doing IPv6 deployments, might I suggest you also take the time to do the same? I know that Cisco has v6 u-rpf support already.

- Jared

-- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
it all comes down to filtering, filtering, filtering.
announcement filtering, anti-spoof filtering, peer filtering.
If you're not doing this, you *SHOULD* be. I know it's hard to do these things in the current business environment. Those of you that can, please take the time to do this. It will make the lives of the rest of us much easier when tracing attacks back.
Also, if you can't do it everywhere, doing it where you _can_ is preferable to not doing anything at all. -- BD.
Hi, NANOGers.

] Also, if you can't do it everywhere, doing it where you _can_ is
] preferable to not doing anything at all.

Indeed, every little bit helps. We will win these battles by degrees, folks, not through a single panacea. So, with that said, I have to make a shameless plug for the Bogon Page. It makes filtering reasonably easy and automated. If there is something else we at Team Cymru can do to help you filter out the nasty packets, please don't hesitate to ask!

<http://www.cymru.com/Bogons/index.html>

I am on a soapbox, but that's because I'm short. :)

Thanks, Rob. -- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
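For illustration, the first few entries of such a filter, written as a static Cisco ACL -- deliberately incomplete, since the authoritative and current list is the Bogon Page itself (unallocated space changes as IANA assigns it, and static copies go stale):

    access-list 101 deny   ip 0.0.0.0 0.255.255.255 any
    access-list 101 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 101 deny   ip 127.0.0.0 0.255.255.255 any
    access-list 101 deny   ip 169.254.0.0 0.0.255.255 any
    access-list 101 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 101 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 101 permit ip any any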
Hi, NANOGers.

] For those of you that are doing IPv6 deployments, might I suggest
] you also take the time to do the same? I know that Cisco has v6 u-rpf
] support already.

It's "shameless plug and solicitation of feedback day" here at Team Cymru. :) We have put together a very rough beta of an IPv6 bogon page. We could really use some feedback, suggestions, and witty comments. You will find the BETA page at the following URL:

<http://www.cymru.com/Documents/bogonv6-list.html>

The long-term goal is to provide this data in the same manner that we provide the IPv4 data, e.g. through HTML, text, DNS, and BGP peering.

Thanks! Rob, for Team Cymru. -- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
On Tue, Aug 05, 2003 at 07:25:47AM +0300, Hank Nussbacher wrote:
On Mon, 4 Aug 2003, Jared Mauch wrote:
For those of you that are doing IPv6 deployments, might I suggest you also take the time to do the same? I know that Cisco has v6 u-rpf support already.
but not netflow as far as i remember. -hank
I've heard of Cisco having EFT IPv6 netflow images available someplace. - jared -- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
On Mon, 4 Aug 2003, Jack Bates wrote:
Randy Bush wrote:
anti-spoofing eliminates certain avenues of attack allowing one to focus on remaining avenues, and hence (as Vix stated) is necessary but not sufficient.
it turns 1% of the technical problem into a massive social business problem which, even if it was solvable (which it practically isn't), would also be addressed by technical solutions where no spoofing is involved.
Spoofed packets are harder to trace to the source than non-spoofed packets, and knowing where a malicious packet originates is very important to the
this is patently incorrect: www.secsup.org/Tracking/ has some information you might want to review. Tracking spoofed attacks is in fact EASIER than non-spoofed attacks, especially if your network has a large 'edge'.
On Tue, 5 Aug 2003, Christopher L. Morrow wrote:
Spoofed packets are harder to trace to the source than non-spoofed packets, and knowing where a malicious packet originates is very important to the
this is patently incorrect: www.secsup.org/Tracking/ has some information you might want to review. Tracking spoofed attacks is in fact EASIER than non-spoofed attacks, especially if your network has a large 'edge'.
Errr... you don't need to _track_ non-spoofed attacks - you _know_ where the source is. Instead of going box to box back to the source (most likely across several providers) you can immediately go to _their_ provider. --vadim
On Tue, 5 Aug 2003, Christopher L. Morrow wrote:
Spoofed packets are harder to trace to the source than non-spoofed packets, and knowing where a malicious packet originates is very important to the
this is patently incorrect: www.secsup.org/Tracking/ has some information you might want to review. Tracking spoofed attacks is in fact EASIER than non-spoofed attacks, especially if your network has a large 'edge'.
Errr... you don't need to _track_ non-spoofed attacks - you _know_ where the source is. Instead of going box to box back to the source (most likely across several providers) you can immediately go to _their_ provider.
so long as you are sure they aren't spoofed, yes. The point I mis-made was that tracking spoofed attacks back to your edge is quicker, since in many cases the non-spoofed attacks come from 'everywhere', so blocking traffic becomes a null route very quickly :( (unless the upstreams from your edge device can absorb the load and the protocol/ports being flooded are not critical to the business of the box being hammered).
At 07:02 PM 05/08/2003 +0000, Christopher L. Morrow wrote:
so long as you are sure they aren't spoofed, yes.
A recent post by Rob Thomas said, "I've tracked 1787 DDoS attacks since 01 JAN 2003. Of that number, only 32 used spoofed sources. I rarely see spoofed attacks now." That's about 2%. Of the few attacks directed at us and originating from our customers, that generally jibes. What number are you seeing? ---Mike
On Tue, 5 Aug 2003, Mike Tancsa wrote:
At 07:02 PM 05/08/2003 +0000, Christopher L. Morrow wrote:
so long as you are sure they aren't spoofed, yes.
A recent post by Rob Thomas said, "I've tracked 1787 DDoS attacks since 01 JAN 2003. Of that number, only 32 used spoofed sources. I rarely see spoofed attacks now."
That's about 2%. Of the few attacks directed at us and originating from our customers, that generally jibes. What number are you seeing?
More and more, there is less and less spoofing; it's just not required, and it causes more damage with less effort :( Why spoof when you have 1000 machines pumping 1 packet per second? (or 10)
More and more, there is less and less spoofing; it's just not required, and it causes more damage with less effort :( Why spoof when you have 1000 machines pumping 1 packet per second? (or 10)
leaving the spoofing option open for future generations of attacks, rather than having a witch-hunt and tracking down and upgrading every insecure edge, is just about the worst thing we could do. because when an attacker wants an extra edge, they'll add spoofing to their attack profile, and the core's immune system will be totally unprepared. knowing this, and knowing that spoofing isn't actually necessary right now, the current generation of attackers would be well advised to stop spoofing for a while so that nobody makes any serious attempt to plug the hole. (and, it sounds like that strategy might already be working.) could someone here who can write win32 apps, and someone else who can write cocoa apps, please volunteer short executables that will try to spoof a few packets through some well known server, and then report as to whether the current computer/firewall/cablemodem/isp/core permitted this or not? isc would be happy to host the server component of this, as long as source code for the executables is available under a bsd style copyright, and the executables are released without any fee. this is so the community can gather compelling evidence for the witch-hunt. (i expect we'd have to come up with a "web button" campaign to brand isp's who dtrt. sort of like the old squid-era "cache now!" thing.) -- Paul Vixie
On Wed, Aug 06, 2003 at 12:58:19AM +0000, Paul Vixie wrote:
could someone here who can write win32 apps, and someone else who can write cocoa apps, please volunteer short executables that will try to spoof a few packets through some well known server, and then report as to whether the current computer/firewall/cablemodem/isp/core permitted this or not? isc would be happy to host the server component of this, as long as source code for the executables is available under a bsd style copyright, and the executables are released without any fee.
How would the spoofing program, or its user, be able to tell if it was successful? Unless I'm very confused, the definition of spoofing is that the return packets aren't going to come back to you. I can imagine a packet format where the real source address was in the data, but with no authentication this would itself be subject to abuse. You'd need a little protocol:

    Volunteer                       Server
    real-source --> server
                <-- back to real source with ip to fake, cookie
    fake-source --> server with cookie
                <-- back to real source with result as a courtesy

Doing this from behind a NAT would be difficult.

-- Barney Wolff http://www.databus.com/bwresume.pdf I'm available by contract or FT, in the NYC metro area or via the 'Net.
They have existed in the past; it was how many an IRC server was hacked. It's just not easy to accomplish, but there are many hacker tools to do this still available, some with better capabilities at this than others. Also, you could have 2 IP addresses on the same host on different interfaces, e.g. 10.0.0.2 and 10.0.0.3, and use 10.0.0.2 while spoofing 10.0.0.3 as the source; since you can listen on both interfaces, you can determine if the packet arrived on the wrong interface.

jason

On 5 Aug 2003 at 21:31, Barney Wolff wrote:
On Wed, Aug 06, 2003 at 12:58:19AM +0000, Paul Vixie wrote:
could someone here who can write win32 apps, and someone else who can write cocoa apps, please volunteer short executables that will try to spoof a few packets through some well known server, and then report as to whether the current computer/firewall/cablemodem/isp/core permitted this or not? isc would be happy to host the server component of this, as long as source code for the executables is available under a bsd style copyright, and the executables are released without any fee.
How would the spoofing program, or its user, be able to tell if it was successful? Unless I'm very confused, the definition of spoofing is that the return packets aren't going to come back to you.
I can imagine a packet format where the real source address was in the data, but with no authentication this would itself be subject to abuse. You'd need a little protocol:
Volunteer                       Server
real-source --> server
            <-- back to real source with ip to fake, cookie
fake-source --> server with cookie
            <-- back to real source with result as a courtesy
Doing this from behind a NAT would be difficult.
-- Barney Wolff http://www.databus.com/bwresume.pdf I'm available by contract or FT, in the NYC metro area or via the 'Net.
On Wed, 6 Aug 2003, Paul Vixie wrote:
More and more, there is less and less spoofing; it's just not required, and it causes more damage with less effort :( Why spoof when you have 1000 machines pumping 1 packet per second? (or 10)
leaving the spoofing option open for future generations of attacks, rather than having a witch-hunt and tracking down and upgrading every insecure edge, is just about the worst thing we could do. because when an attacker wants an extra edge, they'll add spoofing to their attack profile, and the core's immune system will be totally unprepared.
I don't believe I ever said that the edges shouldn't filter... did I?
] I don't believe I ever said that the edges shouldn't filter... did I?

Nope. I've always heard you say quite the opposite - the edges should filter. :)

-- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
On Wed, Aug 06, 2003 at 12:58:19AM +0000, Paul Vixie quacked:
could someone here who can write win32 apps, and someone else who can write cocoa apps, please volunteer short executables that will try to spoof a few packets through some well known server, and then report as to whether the current computer/firewall/cablemodem/isp/core permitted this or not? isc would be happy to host the server component of this, as long as source code for the executables is available under a bsd style copyright, and the executables are released without any fee.
If anyone wants this, I have a unix client and server that does the basics of the testing Paul's suggesting. I used it to test for spoofability from a bunch of my nodes; I don't claim it's something you want to open up to cable users as-is. :) The code has only been tested on FreeBSD. YMMV. BSD license. No attempt at real accounting or security. But maybe it'll get someone off the ground. :) If you have compilation problems, try ripping out the ltconfig and using automake to install the right version for your own computer (automake --add-missing).

http://eep.lcs.mit.edu/spooftest-dist.tar.gz

-Dave (spoof now!)

-- work: dga@lcs.mit.edu me: dga@pobox.com MIT Laboratory for Computer Science http://www.angio.net/ I do not accept unsolicited commercial email. Do not spam me.
Hi, NANOGers.

] leaving the spoofing option open for future generations of attacks,
] rather than having a witch-hunt and tracking down and upgrading every
] insecure edge, is just about the worst thing we could do.

When I first looked at this problem back in March 2001, I did a study of one often-attacked web site. The data showed that 66.85% of all the source addresses hitting the site were *obvious* bogons, e.g. RFC1918, unallocated prefixes, etc. That is 66.85% of all naughty packets that this site never should have received. What was the total percentage of spoofed source packets? That was anyone's guess. You can see this in a presentation I did entitled "60 Days of Basic Naughtiness":

<http://www.cymru.com/Presentations/60Days.zip>

Since then things have changed in many ways, but the mitigation of spoofing, be it bogon or otherwise, is an improvement. It takes another tool out of their toolbox. We win this battle by degrees.

Thanks, Rob. -- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
Filtering the bogons does help, and everyone should perform anti-spoofing in the appropriate places. It isn't, however, a silver bullet.

it's necessary but not sufficient.

anti-spoofing is useful, but vastly insufficient, and hence not necessary

anti-spoofing eliminates certain avenues of attack, allowing one to focus on remaining avenues, and hence (as Vix stated) is necessary but not sufficient.
it turns 1% of the technical problem into a massive social business problem which, even if it was solvable (which it practically isn't), would also be addressed by technical solutions where no spoofing is involved.
but it would provide a lot of fun and soapboxes for wannabe net police and vigilantes.
randy
What is your solution which addresses the 100%? 99%? 50%? What problems does anti-spoofing create?
On Wed, 30 Jul 2003, Rob Thomas wrote:
I've tracked 1787 DDoS attacks since 01 JAN 2003. Of that number, only 32 used spoofed sources. I rarely see spoofed attacks now.
Do you have any ideas as to why that is? Is it due to more providers doing source filtering? It wouldn't make sense for attackers to become less sophisticated unless they became more difficult to catch for other reasons (e.g. botnets getting bigger). Rich
I would say that because backdoored hosts are easily available in large quantities, spoofing does not make sense and usually alarms various systems more quickly than packets from legitimate addresses.

Pete

----- Original Message -----
From: <variable@ednet.co.uk>
To: "Rob Thomas" <robt@cymru.com>
Cc: "NANOG" <nanog@merit.edu>
Sent: Thursday, July 31, 2003 4:17 PM
Subject: Re: WANTED: ISPs with DDoS defense solutions
On Wed, 30 Jul 2003, Rob Thomas wrote:
I've tracked 1787 DDoS attacks since 01 JAN 2003. Of that number, only 32 used spoofed sources. I rarely see spoofed attacks now.
Do you have any ideas as to why that is? Is it due to more providers doing source filtering? It wouldn't make sense for attackers to become less sophisticated unless they became more difficult to catch for other reasons (e.g. botnets getting bigger).
Rich
I take it folks haven't started implementing RFC3514 yet; that should solve all these issues....

Steve

On Thu, 31 Jul 2003, Petri Helenius wrote:
I would say that because backdoored hosts are easily available in large quantities, spoofing does not make sense and usually alarms various systems more quickly than packets from legitimate addresses.
Pete
----- Original Message -----
From: <variable@ednet.co.uk>
To: "Rob Thomas" <robt@cymru.com>
Cc: "NANOG" <nanog@merit.edu>
Sent: Thursday, July 31, 2003 4:17 PM
Subject: Re: WANTED: ISPs with DDoS defense solutions
On Wed, 30 Jul 2003, Rob Thomas wrote:
I've tracked 1787 DDoS attacks since 01 JAN 2003. Of that number, only 32 used spoofed sources. I rarely see spoofed attacks now.
Do you have any ideas as to why that is? Is it due to more providers doing source filtering? It wouldn't make sense for attackers to become less sophisticated unless they became more difficult to catch for other reasons (e.g. botnets getting bigger).
Rich
Hi, Rich.

] Do you have any ideas as to why that is?

The anti-spoofing filtering, while not ubiquitous, has had an effect. The increase in the size of botnets is another reason. The fact that the number of vulnerable hosts has reached commodity level is perhaps the primary reason. The loss of 10K bots often introduces only the most minor of delays. :(

Regarding sophistication: I never make the mistake of believing the enemy is dumb. I also do not believe the enemy will go further than what is necessary to accomplish the mission. Just enough is good enough.

Thanks, Rob. -- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
1) The OS/software/default settings for a lot of internet connected machines are weak, making it easy to attack from multiple locations.
I'll start looking for this to happen when Microsoft manages to release an OS version which does not contain a remotely exploitable flaw before the boxes hit the store shelf. Remember, security is not a process, it's a lifestyle. Pete
1) The OS/software/default settings for a lot of internet connected machines are weak, making it easy to attack from multiple locations.
I'll start looking for this to happen when Microsoft manages to release an OS version which does not contain a remotely exploitable flaw before the boxes hit the store shelf.
lots of late night pondering tonight. the anti-nat anti-firewall pure-end-to-end crowd has always argued in favour of "every host for itself" but in a world with a hundred million unmanaged but reprogrammable devices is that really practical? if *all* dsl and cablemodem plants firewalled inbound SYN packets and/or only permitted inbound UDP in direct response to prior valid outbound UDP, would rob really have seen a ~140Khost botnet this year? -- Paul Vixie
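What Paul describes is roughly the minimal stateful filter consumer NAT boxes already implement. A toy sketch of just the bookkeeping, with a hypothetical fixed-size flow table; a real implementation would live in the forwarding path and handle hash collisions, timeouts, and fragments properly:

    /* Toy sketch of the edge policy above: drop unsolicited inbound TCP
     * SYNs, and admit inbound UDP only if the subscriber recently sent
     * outbound UDP to that same remote address and port.
     */
    #include <stdio.h>
    #include <time.h>

    #define FLOW_TABLE_SIZE 1024
    #define UDP_IDLE_SECS   30

    struct flow {                     /* one outbound UDP flow we saw */
        unsigned int   remote_ip;
        unsigned short remote_port, local_port;
        time_t         last_outbound;
    };

    static struct flow table[FLOW_TABLE_SIZE];

    static struct flow *slot(unsigned int ip, unsigned short rp, unsigned short lp)
    {
        return &table[(ip ^ rp ^ lp) % FLOW_TABLE_SIZE];  /* toy hash */
    }

    /* Call for every outbound UDP packet from the subscriber. */
    static void note_outbound_udp(unsigned int ip, unsigned short rp, unsigned short lp)
    {
        struct flow *f = slot(ip, rp, lp);
        f->remote_ip = ip; f->remote_port = rp; f->local_port = lp;
        f->last_outbound = time(NULL);
    }

    /* Inbound decision: 1 = pass, 0 = drop. */
    static int pass_inbound(int tcp_syn, int udp, unsigned int src_ip,
                            unsigned short sport, unsigned short dport)
    {
        if (tcp_syn)
            return 0;                 /* no unsolicited inbound SYNs */
        if (udp) {
            struct flow *f = slot(src_ip, sport, dport);
            return f->remote_ip == src_ip && f->remote_port == sport &&
                   f->local_port == dport &&
                   time(NULL) - f->last_outbound <= UDP_IDLE_SECS;
        }
        return 1;                     /* other protocols: out of scope here */
    }

    int main(void)
    {
        /* Addresses are plain host-order integers, purely illustrative. */
        note_outbound_udp(0x0a000001, 53, 4321);  /* we sent a DNS query */
        printf("DNS reply passes:   %d\n", pass_inbound(0, 1, 0x0a000001, 53, 4321));
        printf("random inbound UDP: %d\n", pass_inbound(0, 1, 0x0a000002, 1434, 1434));
        printf("inbound SYN:        %d\n", pass_inbound(1, 0, 0x0a000003, 80, 80));
        return 0;
    }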
Paul Vixie wrote:
lots of late night pondering tonight.
the anti-nat anti-firewall pure-end-to-end crowd has always argued in favour of "every host for itself" but in a world with a hundred million unmanaged but reprogrammable devices is that really practical?
The most popular applications today either prefer or require bidirectional connectivity. Peer-to-peer traffic is about half of the total, and there can be only so many "corporate sponsored" SuperNodes. Also, games and some other applications, like SIP and other VoIP stuff, need to be able to connect to the remote host. Obviously you can engineer around all this, but then, fixing the host is also "just software".
if *all* dsl and cablemodem plants firewalled inbound SYN packets and/or only permitted inbound UDP in direct response to prior valid outbound UDP, would rob really have seen a ~140Khost botnet this year?
Sure. One recent remote exploit requires just an embedded MIDI file on a web page which MS's browser will be happy to download and "execute". Or did you think that the NAT box would allow only text-based browsing and provide HTTP-to-Gopher translation? While you are at it, make sure all email clients are safe and immune to viruses. Pete
On 31 Jul 2003, Paul Vixie wrote:
the anti-nat anti-firewall pure-end-to-end crowd has always argued in favour of "every host for itself" but in a world with a hundred million unmanaged but reprogrammable devices is that really practical?
Not everything can be hidden behind a firewall, particularly in this world of increasingly mobile and transient connectivity. Besides, firewalls only protect against outsiders, whereas most damaging attacks are from insiders. What we need is a new programming paradigm, capable of actually producing secure (and, yes, reliable) software. C and its progeny (and the "program now, test never" lifestyle) must go. I'm afraid it'll take laws which would actually make software makers pay for bugs and security vulnerabilities in shipped code to make such a paradigm shift a reality. --vadim
What we need is a new programming paradigm, capable of actually producing secure (and, yes, reliable) software. C and its progeny (and the "program now, test never" lifestyle) must go. I'm afraid it'll take laws which would actually make software makers pay for bugs and security vulnerabilities in shipped code to make such a paradigm shift a reality.
Blaming the tools for the mistakes programmers make is like saying "guns kill people" when the truth is that people kill people with guns. We've code running where the core parts are C, with a track record better than the "utopian" five nines so many people mistakenly look for. However, since improvements are always welcome, please recommend tools which would allow us to progress "above and beyond" C and it's deficencies. Pete
On Thu, 31 Jul 2003, Petri Helenius wrote:
What we need is a new programming paradigm, capable of actually producing secure (and, yes, reliable) software. C and its progeny (and the "program now, test never" lifestyle) must go. I'm afraid it'll take laws which would actually make software makers pay for bugs and security vulnerabilities in shipped code to make such a paradigm shift a reality.
Blaming the tools for the mistakes programmers make is like saying "guns kill people" when the truth is that people kill people with guns.
We've code running where the core parts are C, with a track record better than the "utopian" five nines so many people mistakenly look for.
However, since improvements are always welcome, please recommend tools which would allow us to progress "above and beyond" C and it's deficencies.
We digress, but... private deployment of software written in C is very different from a major public release, especially so when included with source code. Steve
On Thu, 31 Jul 2003, Petri Helenius wrote:
What we need is a new programming paradigm, capable of actually producing secure (and, yes, reliable) software. C and its progeny (and the "program now, test never" lifestyle) must go. I'm afraid it'll take laws which would actually make software makers pay for bugs and security vulnerabilities in shipped code to make such a paradigm shift a reality.
Blaming the tools for the mistakes programmers make is like saying "guns kill people" when the truth is that people kill people with guns.
Yep, it is people who choose tools and methods which produce code which is guaranteed to be unreliable and insecure - simply because those tools allow one to be lazy and cobble things together fast without much design or planning.
We've code running where the core parts are C, with a track record better than the "utopian" five nines so many people mistakenly look for.
A real programmer can write a FORTRAN program in any language. The problem is that even the best programmers make mistakes. Many of those mistakes (particularly, security-related - such as not checking for buffer overflows) can be virtually eliminated by the right tools. As for "code running" - in the course of my current project I had to write code interacting with ciscos - and immediately found a handful of bugs (some of them serious) in the supposedly stable and working code which has hundreds of thousands of users. I'm afraid you're confusing code running stably in a particular environment with good-quality code. (Excuse me for being rude - but my notion of reliable code comes from my early programming experience in an organization which produced systems controlling high-energy industrial processes - where an average computer crash causes immediate deaths of those unlucky enough to be around the controlled object, and prison terms for the manufacturer's management).
However, since improvements are always welcome, please recommend tools which would allow us to progress "above and beyond" C and it's deficencies.
May I suggest Algol-68, for example? Or any other language which actually supports boundary checks in arrays and strings, not added as an afterthought? Or using CPUs and OSes which won't allow executing code from stack and data segments? Or doing event-driven programming instead of practically undebuggable multithreading? There's no market[*] for higher-quality software - therefore, there's no pressure to improve tools and methods. If anything, the trend is to use more and more languages lacking strong typing and full of implicit conversions, specifically designed for "rapid prototyping" aka quick and dirty hackery - all of which was known to be dangerous for decades. --vadim [*] "No market" means that quality is not a differentiator because it is impossible to evaluate quality prior to purchase, and after purchase manufacturers are shielded from any responsibility for actually delivering on promises by the license language.
Vadim Antonov wrote:
On Thu, 31 Jul 2003, Petri Helenius wrote:
What we need is a new programming paradigm, capable of actually producing secure (and, yes, reliable) software. C and its progeny (and the "program now, test never" lifestyle) must go. I'm afraid it'll take laws which would actually make software makers pay for bugs and security vulnerabilities in shipped code to make such a paradigm shift a reality.
Blaming the tools for the mistakes programmers make is like saying "guns kill people" when the truth is that people kill people with guns.
Yep, it is people who choose tools and methods which produce code which is guaranteed to be unreliable and insecure - simply because those tools allow one to be lazy and cobble things together fast without much design or planning.
There is nothing in C which guarantees that code will be unreliable or insecure. C has the advantage of power and flexibility. It does no hand holding, so any idiot coder claiming to be a programmer can slap together code poorly. This is the fault of the programmer, and not the language. The syntax for C is just fine, and since any language is nothing more than syntax, C is a workable language. There are libraries out there for handling arrays with sanity checks. The fact that people don't use them is their own fault. For that matter, one can easily write their own. I don't know how many times I have gotten a vacant expression when mentioning the word flowchart, which is nothing more than the visual form of what any programmer should have going through their head (and on paper if they really want to limit mistakes). What I'd give to see a detailed flowchart for sendmail. I'd hang it on my walls (as I'm sure it'd take more than one). <snip>
A real programmer can write a FORTRAN program in any language. The problem is that even the best programmers make mistakes. Many of those mistakes (particularly, security-related - such as not checking for buffer overflows) can be virtually eliminated by the right tools.
Write a small program in C and then write it in Perl. Have the program open a 1.4G syslog file and run a tight loop reading in one line at a time, scanning for sendmail log entries, parsing the line, and writing out to a file the datetime, envelope_from, nrcpts, msgid. Your program is halfway to actually being useful for something. But that should be far enough. Time both programs. For what it's worth, sorry Perl took so long. If a programmer can write a process in any language, then naturally the programmer should choose the language which provides the most flexibility, performance, and diversity; or the right tool. -Jack
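For the curious, a sketch of the C half of Jack's exercise, streaming the file a line at a time with fgets(). The from=/nrcpts=/msgid= keys follow common sendmail from-line output, but field layouts vary between sendmail versions, so verify against your own logs:

    /* Stream a syslog file, keep only sendmail "from" entries, and emit
     * the timestamp, envelope sender, recipient count, and message id.
     */
    #include <stdio.h>
    #include <string.h>

    /* Copy the value following "key" up to the next comma into out. */
    static int field(const char *line, const char *key, char *out, size_t max)
    {
        const char *p = strstr(line, key);
        size_t n = 0;
        if (!p) return 0;
        p += strlen(key);
        while (*p && *p != ',' && *p != '\n' && n + 1 < max)
            out[n++] = *p++;
        out[n] = '\0';
        return 1;
    }

    int main(int argc, char **argv)
    {
        FILE *in = argc > 1 ? fopen(argv[1], "r") : stdin;
        char line[4096], from[512], nrcpts[32], msgid[512];

        if (!in) { perror("fopen"); return 1; }
        while (fgets(line, sizeof(line), in)) {
            if (!strstr(line, "sendmail[") || !strstr(line, "from="))
                continue;
            /* syslog timestamp is the first 15 columns: "Jul 31 21:09:34" */
            if (field(line, "from=", from, sizeof(from)) &&
                field(line, "nrcpts=", nrcpts, sizeof(nrcpts)) &&
                field(line, "msgid=", msgid, sizeof(msgid)))
                printf("%.15s\t%s\t%s\t%s\n", line, from, nrcpts, msgid);
        }
        if (in != stdin) fclose(in);
        return 0;
    }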
On Fri, 1 Aug 2003, Jack Bates wrote:
There is nothing in C which guarantees that code will be unreliable or insecure.
Lack of real strong typing and of built-in var-size strings (so the compiler can actually optimize string ops), plus uncontrollable pointer operations, is enough to guarantee that any complicated program will have buffer-overflow vulnerabilities.
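The canonical example, for anyone who hasn't been bitten: nothing in the language stops the first function below, and the second is safe only because the programmer remembered to bound the copy — exactly the discipline Vadim says cannot be relied on at scale.

    /* The classic C buffer overflow: the language happily lets
     * vulnerable() write past the end of buf, because neither the array
     * nor the pointer carries a length.  The fix is pure programmer
     * discipline, not anything the compiler enforces.
     */
    #include <stdio.h>
    #include <string.h>

    void vulnerable(const char *input)
    {
        char buf[16];
        strcpy(buf, input);           /* no bound: overflows on long input */
        printf("got: %s\n", buf);
    }

    void careful(const char *input)
    {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);  /* bounded copy...          */
        buf[sizeof(buf) - 1] = '\0';           /* ...explicitly terminated */
        printf("got: %s\n", buf);
    }

    int main(void)
    {
        const char *attack = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";  /* 32 bytes */
        careful(attack);      /* safe: silently truncates               */
        vulnerable(attack);   /* undefined behaviour: smashes the stack */
        return 0;
    }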
C has the advantage of power and flexibility.
So does assembler - way more than C.
It does no hand holding, so any idiot coder claiming to be a programmer can slap together code poorly. This is the fault of the programmer, and not the language.
Presumably, a non-idiot can produce ideal code in significant quantities. May I politely inquire whether you ever wrote anything bigger than 10k lines - because anyone who did knows for sure that no program is ideal, and that humans forget, make mistakes, and cannot hold the entire project in mind, and that our minds tend to see how things are supposed to be, not how they are - making overlooking silly mistakes a certainty. Some languages help to catch mistakes. Some help them to stay unnoticed, until a hacker kid with spermotoxicosis and too much free time comes poking around.
The syntax for C is just fine, and since any language is nothing more than syntax, C is a workable language.
I'm afraid you're mistaken - a language is a lot more than syntax. Syntax is easy, the semantics is hard. To my knowledge, only one group ever attempted to formally define semantics of a real programming language - and what they produced is 300-something pages of barely readable "Algol-68: The Revised Report" filled with statements in a context-dependent grammar. All _syntax_ for the same language fits in 6 pages. C is a workable language, but it is not close (by far) to a language which would incorporate support for known best practices for large-scale software engineering. C++ is somewhat better, but it fails horribly in some places, particularly when you want to write reusable code (hint: STLport.com was hosted on one of my home boxes for some time :) Java is overly restrictive and has no support for generic programming (aka templates); besides, the insistence on garbage collection makes it nearly useless for any high-performance stuff. Anyway, my point is not that there is an ideal language which everyone must use, but rather that the existing ones are inadequate, and no serious effort is being spent on getting them better (or even getting existing better research languages into the field). The lack of effort is simply a consequence of lack of demand.
There are libraries out there for handling arrays with sanity checks. The fact that people don't use them is their own fault.
Overhead. To get reasonable performance on boundary-checked arrays you need the compiler to do deeper optimization than is possible with calling library routines (or even inlining them - because the semantics of procedure call is restrictive).
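Both points are visible in a few lines. A sketch of the library approach Jack mentions, showing the overhead Vadim objects to: every access pays a comparison, and the check sits behind a call boundary the compiler may or may not see through.

    /* A checked-array wrapper: each access pays a bounds comparison,
     * and unless the calls are inlined the compiler cannot hoist or
     * eliminate the checks the way a language-level construct could.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct checked_array {
        int   *data;
        size_t len;
    };

    static struct checked_array ca_new(size_t len)
    {
        struct checked_array a;
        a.data = calloc(len, sizeof(int));
        a.len  = a.data ? len : 0;
        return a;
    }

    static int ca_get(const struct checked_array *a, size_t i)
    {
        if (i >= a->len) {                    /* the per-access check */
            fprintf(stderr, "bounds violation: %zu >= %zu\n", i, a->len);
            abort();
        }
        return a->data[i];
    }

    static void ca_set(struct checked_array *a, size_t i, int v)
    {
        if (i >= a->len) {
            fprintf(stderr, "bounds violation: %zu >= %zu\n", i, a->len);
            abort();
        }
        a->data[i] = v;
    }

    int main(void)
    {
        struct checked_array a = ca_new(4);
        ca_set(&a, 3, 42);
        printf("%d\n", ca_get(&a, 3));
        ca_get(&a, 4);    /* caught here; a plain int[4] would read garbage */
        free(a.data);
        return 0;
    }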
For that matter, one can easily write their own. I don't know how many times I have gotten a vacant expression when mentioning the word flowchart, which is nothing more than the visual form of what any programmer should have going through their head (and on paper if they really want to limit mistakes).
I don't use flowcharts - they're less compact than text, so they hinder comprehension of complex pieces of code (it is a well-known fact that splitting text onto separate pages which need to be flipped back and forth significantly degrades speed and accuracy of comprehension - check any textbook on cognitive psychology). There were many graphical programming projects (this is a perennial mania in programming tool-smith circles); none of them yielded any significant improvement in productivity or quality.
What I'd give to see a detailed flowchart for sendmail. I'd hang it on my walls (as I'm sure it'd take more than one).
Sendmail is a horrible kludge, and, frankly, I'm amazed that it is still being supplied as a default MTA with Unix-like OS-es.
Write a small program in C and then write it in Perl. ... Time both programs. For what it's worth, sorry Perl took so long.
Perl is interpreted, C is compiled. In fact, Perl is worse than C when it comes to writing reliable programs, for obvious reasons. If anything, I wouldn't advocate using any of the new-fangled hack-and-run languages for anything but writing 10-line scripts.
If a programmer can write a process in any language, then naturally the programmer should choose the language which provides the most flexibility, performance, and diversity; or the right tool.
A professional programmer will choose a language which lets him do the required job with minimal effort. Since quality is not generally a project requirement in this industry (for reasons I already mentioned) the result is predictable - use of languages which allow quick and dirty programming; getting stuff to do something fast, so it can be shipped, and screw the user, who is by now well-conditioned to do the three-finger salute instead of asking for refund. --vadim
Vadim Antonov wrote:
Lack of real strong typing and of built-in var-size strings (so the compiler can actually optimize string ops), plus uncontrollable pointer operations, is enough to guarantee that any complicated program will have buffer-overflow vulnerabilities.
Typing can be enforced if the programmer chooses to.
So does assembler - way more than C.
I agree. I love both.
Presumably, a non-idiot can produce ideal code in significant quantities. May I politely inquire whether you ever wrote anything bigger than 10k lines - because anyone who did knows for sure that no program is
Of course, and yes.
ideal, and that humans forget, make mistakes, and cannot hold the entire project in mind, and that our minds tend to see how things are supposed to be, not how they are - making overlooking silly mistakes a certainty.
Correct. It cannot be held in mind, which is why a model must be fashioned and enforced so that the programmer minimizes mistakes. There are many tools available to help one do this, and it's not too difficult to write your own. In some cases, it's common sense.
C is a workable language, but it is not close (by far) to a language which would incorporate support for known best practices for large-scale software engineering. C++ is somewhat better, but it fails horribly in
How do you figure? Best Practices is what you do with what you have, not what you have itself. Ingress/Egress filtering is considered a best practice. Yet it isn't performed throughout the 'net. Solid programming can be done, but if the individual(s) or company do not wish to take the time to do it right, then it will have problems.
Overhead. To get reasonable performance on boundary-checked arrays you need compiler to do deeper optimization than is possible with calling library routines (or even inlining them - because the semantics of procedure call is restrictive).
Let me get this straight. You have a language which is very low on overhead, and adding any overhead is unacceptable, despite the fact that it would still have less overhead than many other languages?
I don't use flowcharts - they're less compact than text, so they hinder comprehension of complex pieces of code (it is a well-known fact that splitting text onto separate pages which need to be flipped back and forth significantly degrades speed and accuracy of comprehension - check any textbook on cognitive psychology). There were many graphical programming projects (this is a perennial mania in programming tool-smith circles); none of them yielded any significant improvement in productivity or quality.
I imagine that they didn't. The flowchart was more of a training tool than anything. I create 3-dimensional flowcharts in my head when dealing with any process, programming or engineering. 9 times out of 10, I will beat someone else to the solution to a problem. Why? Because I know how to quickly break a process down from end to end into its tiniest pieces, rule out layers that don't apply to the problem, and quickly follow the path to where reality differs from theory.
Sendmail is a horrible kludge, and, frankly, I'm amazed that it is still being supplied as a default MTA with Unix-like OS-es.
Yeah, but a flowchart on the wall would really look cool. :P
A professional programmer will choose a language which lets him do the required job with minimal effort. Since quality is not generally a project requirement in this industry (for reasons I already mentioned) the result is predictable - use of languages which allow quick and dirty programming; getting stuff to do something fast, so it can be shipped, and screw the user, who is by now well-conditioned to do the three-finger salute instead of asking for refund.
Social issue, not technical. Change the perspective of the programmer, project manager, etc., and the same language can be used to produce quality code. -Jack
PH> Date: Thu, 31 Jul 2003 21:09:34 +0300
PH> From: Petri Helenius
PH>
PH> However, since improvements are always welcome, please
PH> recommend tools which would allow us to progress "above and
PH> beyond" C and it's deficencies.

I'll pick on you for a bit, although this applies to all too many technical people; I, too, slip up periodically. When people write "it's" instead of "its" and other such mistakes, is any programming language sufficiently idiot-proof? (Answer: Only if bugs absolutely cannot result in exploitable software.)

Veering OT a bit, C class libraries have all sorts of bells and whistles to simplify "features". Why not add classlib support for safe programming, such as bounds-checked buffers? Some of us write/use our own, anyway...

Eddy -- Brotsman & Dreger, Inc. - EverQuick Internet Division Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 785 865 5885 Lawrence and [inter]national Phone: +1 316 794 8922 Wichita _________________________________________________________________ DO NOT send mail to the following addresses : blacklist@brics.com -or- alfra@intc.net -or- curbjmp@intc.net Sending mail to spambait addresses is a great way to get blocked.
On Thu, Jul 31, 2003 at 09:09:34PM +0300, pete@he.iki.fi said: [snip]
What we need is a new programming paradigm, capable of actually producing secure (and, yes, reliable) software. C and its progeny (and the "program now, test never" lifestyle) must go. I'm afraid it'll take laws which would actually make software makers pay for bugs and security vulnerabilities in shipped code to make such a paradigm shift a reality.
Blaming the tools for the mistakes programmers make is like saying "guns kill people" when the truth is that people kill people with guns.
Pete is right. There is no tool sufficiently safe as to prevent abuse, and yet still be useful. Or more succinctly, "Nothing is foolproof to a sufficiently talented fool." -- Scott Francis || darkuncle (at) darkuncle (dot) net illum oportet crescere me autem minui
I'll start looking for this to happen when Microsoft manages to release an OS version which does not contain a remotely exploitable flaw before the boxes hit the store shelf.
If FreeBSD, OpenBSD, NetBSD, RedHat, Debian, SuSE were packaged and sold in stores, how would this be any different? Oh wait, they are packaged and sold in stores! People find remotely exploitable flaws in the aforementioned OSes during their pre-release, release, after-release, alpha, beta, gamma, delta force, and nuclear reactor stages. No one's perfect, not ISPs, not users, not software vendors, not you. So please stop telling us about the lifestyle on planet Utopia.
If FreeBSD, OpenBSD, NetBSD, RedHat, Debian, SuSE were packaged and sold in stores, how would this be any different? Oh wait, they are packaged and sold in stores!
Just comparing the OpenBSD security track record to that of any Windows release would dismiss your point.
People find remotely exploitable flaws in the aforementioned OSes during their pre-release, release, after-release, alpha, beta, gamma, delta force, and nuclear reactor stages.
It's not black and white; while no OS is perfect, there are measurable differences in how much they are exploited. Diversity would make trojan/virus writers' job harder. So diversity is good.
No one's perfect, not ISPs, not users, not software vendors, not you. So please stop telling us about the lifestyle on planet Utopia.
So by telling people to shut up you expect to make the world more secure? Right :) Pete
On Thu, 31 Jul 2003, Omachonu Ogali wrote:
So by telling people to shut up you expect to make the world more secure? Right :)
No, but merely talking about the how much the vendor sucks doesn't make them suck any less nor the users suck any more.
In some cultures shame is a powerful motivation to not behave like a jackass. In the US it doesn't seem to induce people to not cut corners on software development or lie to customers and shareholders. I find that unfortunate. joelja -- -------------------------------------------------------------------------- Joel Jaeggli Academic User Services joelja@darkwing.uoregon.edu -- PGP Key Fingerprint: 1DE9 8FCA 51FB 4195 B42A 9C32 A30D 121E -- In Dr. Johnson's famous dictionary patriotism is defined as the last resort of the scoundrel. With all due respect to an enlightened but inferior lexicographer I beg to submit that it is the first. -- Ambrose Bierce, "The Devil's Dictionary"
But in the telco world, how often do you have people's home phones trojanned and directed to 'DoS' another company? To pull that off with great magnitude, you need a whole lot of coordinated access to the physical plant, which is either impossible or extremely noticeable. But in a scenario like that, if a telco user gets their access canned, it's most likely because the telco user themselves was abusing their privileges, not getting abused by some random fool attacking another user/company via their facilities just to swing their nuts around anonymously.

But don't get it twisted, I agree with your idea of cooperation and tracking, but this is like chasing suicide bombers. You can kill a drone or two or fifty, but new ones will pop up in their place. You can kill the drone controller, but the drones will continue to execute their mission as they were doing before, but now without any method or controller to tell them to stop attacking. Not to mention, by cutting off the drone's Internet access, regular users get caught in the crosshairs of the drone hunters. At the same time, if you tell a user their computer is trojanned, but you would like to bait it to catch the culprit, they'll get worried about their personal data and either go on a formatting campaign, or abandon the computer altogether (trashing it, selling it, giving it away, etc).

I think one way to definitely help is user education. ISPs should kick out newsletters or advisories to their users, informing them of the latest scam, spam, or exploit and how to protect themselves from it, or how to determine if the user is a victim of the exploit in question. This is where telcos (with fraud departments) are usually successful; every now and then you'll get some sort of info on the latest trend to watch out for. You either get it directly from the telco, or from some other 3rd-party source that got it from the telco or another person (examples: news, community bulletins, office e-mails, etc). Too often do new users get brand spanking new Internet access, and maybe a trial version of anti-virus software, and the ISP calls it a day; then the user is left to wander through the wilderness.

Another big plus is network cooperation. Too often have attacks gone unnoticed until someone becomes a target of the DoS and then throws a fit over how no one is doing anything. (No, I'm not singling anyone out.) Granted, the general response to Slammer was better than usual, but how often do companies with small T1 customers getting smacked with 10-200Mbps get to prosecute, or even at the least identify the attacker, before, during, or after the filtering?

Let me stop now, this e-mail is way too long.
Yo Omachonu! I guess you have not read Kevin Mitnick's new book yet. Better read it before you make more statements like this. RGDS GARY --------------------------------------------------------------------------- Gary E. Miller Rellim 20340 Empire Blvd, Suite E-3, Bend, OR 97701 gem@rellim.com Tel:+1(541)382-8588 Fax: +1(541)382-8676 On Wed, 30 Jul 2003, Omachonu Ogali wrote:
But in the telco world, how often do you have people's home phones trojanned and directed to 'DoS' another company? To pull that off with great magnitude, you need a whole lot of coordinated access to the physical plant, which is either impossible or extremely noticeable.
How about quoting the excerpt in question than telling me to pick up a book that I would lose interest in after the first ten pages?
participants (25)

- Barney Wolff
- bdragon@gweep.net
- Christopher L. Morrow
- David G. Andersen
- E.B. Dreger
- Gary E. Miller
- George Jones
- Hank Nussbacher
- Henry Linneweh
- Jack Bates
- Jared Mauch
- Jason Robertson
- Joel Jaeggli
- Lane Patterson
- Mike Tancsa
- Omachonu Ogali
- Omachonu Ogali
- Paul Vixie
- Petri Helenius
- Randy Bush
- Rob Thomas
- Scott Francis
- Stephen J. Wilcox
- Vadim Antonov
- variable@ednet.co.uk