Super risky. This would be a 99% legal worry plus. Unless all the end points and networks they cross sign off on it, the risk is beyond huge.

-jim

------Original Message------
From: Jeffrey Lyon
To: nanog@merit.edu
Subject: Ethical DDoS drone network
Sent: Jan 4, 2009 10:06 PM

Say for instance one wanted to create an "ethical botnet": how would this be done in a manner that is legal, non-abusive toward other networks, and unquestionably used for legitimate internal security purposes? How does your company approach this dilemma?

Our company, for instance, has always relied on outside attacks to spot-check our security, and I'm beginning to think there may be a more user-friendly alternative.

Thoughts?

--
Jeffrey Lyon, Leadership Team
jeffrey.lyon@blacklotus.net | http://www.blacklotus.net
Black Lotus Communications of The IRC Company, Inc.

Look for us at HostingCon 2009 in Washington, DC on August 10th - 12th at Booth #401.

Sent from my BlackBerry device on the Rogers Wireless Network
Refer earlier posts. End points ('drones') would have to be legitimate endpoints, not drones on random boxes. That eliminates legal liability client-side. If the traffic is non-abusive, then I don't see the risk for the network providers in the middle either.

If it's clearly established that the source (drones) and destination (target) are all 'opted in', and there's no 'collateral damage' (in bandwidth terms or otherwise, those being the ways in which I see other parties potentially being impacted), I don't know that it's anywhere near as risky as you imply. You'd have to be careful not to trip IDS or similar for all the networks you transit, to avoid impacting others in the event of some misfired responses...

What would be an example legitimate security purpose, except perhaps to drill responses to illegitimate botnets?

Mark.

On Mon, 5 Jan 2009, deleskie@gmail.com wrote:
Super risky. This would be a 99% legal worry plus. Unless all the end points and networks they cross sign off on it the risk is beyond huge.
On Jan 4, 2009, at 9:18 PM, deleskie@gmail.com wrote:
Super risky. This would be a 99% legal worry plus. Unless all the end points and networks they cross sign off on it the risk is beyond huge.
Since when do I need permission of "networks they cross" to send data from a machine I (legitimately) own to another machine I own? If this were an FTP or other data transfer, would I have any legal issues? And if not, how is that different from load testing using a random protocol?

Before anyone jumps up & down, I know that all networks reserve the right to filter, use TE, or otherwise alter traffic passing over their infrastructure to avoid damage to the whole. But if I want to (for instance) stream a few 100 Gbps and am paying transit for all bits sent or received, since when do I have any legal worries?

You want to 'attack' yourself, I do not see any problems. And I see lots of possible benefits. Hell, just figuring out which intermediate networks cannot handle the added load is useful information.

--
TTFN,
patrick
On Jan 5, 2009, at 2:08 PM, Patrick W. Gilmore wrote:
You want to 'attack' yourself, I do not see any problems. And I see lots of possible benefits.
This can be done internally using various traffic-generation and exploit-testing tools (plenty of open-source and commercial ones available). No need to build a 'botnet', literally; it's more of a distributed test harness.

And it must be *kept* internal: using non-routable space is key, along with ensuring that application-layer effects like recursive DNS requests don't end up leaking and causing problems for others.

But before any testing is done on production systems (during maintenance windows scheduled for this type of testing, naturally), it should all be done on airgapped labs first, IMHO.

And prior to any testing of this sort, it makes sense to review the architecture(s), configuration(s), et al. of the elements to be tested in order to ensure they incorporate the relevant BCPs, then implement those which haven't yet been deployed, and *then* test.

In general, I've found that folks tend to get excited about things like launching simulated attacks, setting up honeypots, and the like, because it's viewed as 'cool' and fun; the reality is that in most cases, analyzing and hardening the infrastructure and all participating nodes/elements/apps/services is a far wiser use of time and resources, even though it isn't nearly as entertaining.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile

All behavior is economic in motivation and/or consequence.
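[Moderator's note: the "keep it internal, non-routable space is key" approach above can be sketched as a simple guard in a test harness. This is an illustrative example only, not software from the thread; all names here are hypothetical.]

```python
# Hypothetical guard for an internal-only test harness: refuse to generate
# synthetic attack traffic toward anything outside private (RFC 1918) or
# loopback space, so test traffic can never leak onto the public Internet.
import ipaddress

PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),
]

def is_internal(addr: str) -> bool:
    """True if addr falls inside the lab's non-routable ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

def check_target(addr: str) -> None:
    """Raise before a single packet is sent to a routable address."""
    if not is_internal(addr):
        raise ValueError(f"refusing to target routable address {addr}")

check_target("10.1.2.3")   # fine: inside the lab
```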
On Jan 5, 2009, at 1:33 AM, Roland Dobbins wrote:
On Jan 5, 2009, at 2:08 PM, Patrick W. Gilmore wrote:
You want to 'attack' yourself, I do not see any problems. And I see lots of possible benefits.
This can be done internally using various traffic-generation and exploit-testing tools (plenty of open-source and commercial ones available). No need to build a 'botnet', literally - more of a distributed test-harness
And it must be *kept* internal; using non-routable space is key, along with ensuring that application-layer effects like recursive DNS requests don't end up leaking and causing problems for others.
We disagree. I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.
But before any testing is done on production systems (during maintenance windows scheduled for this type of testing, naturally), it should all be done on airgapped labs, first, IMHO.
Without arguing that point (and there are lots of scenarios where that is not at all necessary, IMHO), it does not change the fact that external testing can be extremely useful after "air-gap" testing.
And prior to any testing of this sort, it makes sense to review the architecture(s), configuration(s), et. al. of the elements to be tested in order to ensure they incorporate the relevant BCPs, and then implement those which haven't yet been deployed, and *then* test.
You live in a very structured world. Most people live in reality-land, where there are too many variables to control; not only is it impossible to guarantee that everything involved is strictly to BCP, but the opposite is almost certainly true. Remember, systems do not work in isolation, and when you touch other networks, weird things happen.
In general, I've found that folks tend to get excited about things like launching simulated attacks, setting up honeypots, and the like, because it's viewed as 'cool' and fun; the reality is that in most cases, analyzing and hardening the infrastructure and all participating nodes/elements/apps/services is a far wiser use of time and resources, even though it isn't nearly as entertaining.
Here we agree: Entertainment has (should have?) nothing to do with it. -- TTFN, patrick
On Mon, 5 Jan 2009, Patrick W. Gilmore wrote:
We disagree.
I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.
But before any testing is done on production systems (during maintenance windows scheduled for this type of testing, naturally), it should all be done on airgapped labs, first, IMHO.
Without arguing that point (and there are lots of scenarios where that is not at all necessary, IMHO), it does not change the fact that external testing can be extremely useful after "air-gap" testing.
Fine, test it by simulation on your end or the transit end of the pipes. Do not transmit your test sh?t data across the `net. That solves that question? :)
On Jan 4, 2009, at 11:11 PM, Gadi Evron wrote:
Fine test it by simulation on you or the transit end of the pipes. Do not transmit your test sh?t data across the `net.
How do you propose a model is built for the simulation if you can't collect data from the real world?

This is not "sh?t data". Performance testing across networks is very real and happening now. The more knowledge I have of a path, the better decisions I can make about that path.

Kris
On Sun, 4 Jan 2009, kris foster wrote:
How do you propose a model is built for the simulation if you can't collect data from the real world?
This is not "sh?t data". Performance testing across networks is very real and happening now. The more knowledge I have of a path the better decisions I can make about that path.
I am sorry for joking, I was sure we were talking about DDoS testing?
On Jan 5, 2009, at 3:39 AM, Gadi Evron wrote:
I am sorry for joking, I was sure we were talking about DDoS testing?
I've been called by more than one provider because I was "DDoS'ing" someone with traffic that someone requested. Strange how the word "DDoS" has morphed over time.

But back to your original point, how can you tell it is shit data? DDoSes frequently use valid requests or even full connections. If I send my web server port 80 SYNs, why would you complain?

Knowing whether the systems - internal _and_ external - can handle a certain load (and figuring out why not, then fixing it) is vital to many people / companies / applications. Despite the rhetoric here, it is simply not possible to "test" that in a lab. And I guarantee if you do not test it, there _will_ be unexpected problems when Bad Stuff happens.

As mentioned before, Reality Land is not clean and structured.

--
TTFN,
patrick
On Mon, 05 Jan 2009 06:53:49 EST, "Patrick W. Gilmore" said:
Knowing whether the systems - internal _and_ external - can handle a certain load (and figuring out why not, then fixing it) is vital to many people / companies / applications. Despite the rhetoric here, it is simply not possible to "test" that in a lab. And I guarantee if you do not test it, there _will_ be unexpected problems when Bad Stuff happens.
Amen to that, brother. Trust me, you definitely want to do your load testing at 2AM (or some other usually-dead time) of your own choosing, when you have the ability to pull the switch on the test almost instantly if it gets out of hand. The *last* thing you want is to get a surprise slashdotting of your web servers while the police have your entire site under lockdown. Been there, done that, it's not fun.
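[Moderator's note: the "pull the switch instantly" idea above amounts to a load driver with a built-in abort. A minimal sketch, under the assumption that failures can be observed per-probe; the names (run_load_test, probe) are hypothetical.]

```python
# A load test that monitors its own error rate over a sliding window and
# kills itself the moment responses start failing, rather than running to
# the end of the scheduled duration.
import time

def run_load_test(probe, duration_s=60, max_error_rate=0.05, window=100):
    """probe() returns True on success, False on failure."""
    results = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        results.append(probe())
        results = results[-window:]       # keep only recent probes
        failures = results.count(False)
        if len(results) == window and failures / window > max_error_rate:
            return "aborted"              # kill switch: stop immediately
    return "completed"
```

In practice probe() would be a real request against the system under test, and "aborted" would trigger whatever stops the traffic generators.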
PWG> Date: Mon, 5 Jan 2009 06:53:49 -0500
PWG> From: Patrick W. Gilmore
PWG> But back to your original point, how can you tell it is shit data?

AFAIK, RFC 3514 is the only standards document that has addressed this. I have yet to see it implemented. ;-)

Eddy
--
Everquick Internet - http://www.everquick.net/
A division of Brotsman & Dreger, Inc. - http://www.brotsman.com/
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 785 865 5885 Lawrence and [inter]national
Phone: +1 316 794 8922 Wichita
________________________________________________________________________
DO NOT send mail to the following addresses:
davidc@brics.com -*- jfconmaapaq@intc.net -*- sam@everquick.net
Sending mail to spambait addresses is a great way to get blocked.
Ditto for broken OOO autoresponders and foolish AV software backscatter.
On Jan 5, 2009, at 3:04 PM, Patrick W. Gilmore wrote:
I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.
AUPs are a big issue, here..
Without arguing that point (and there are lots of scenarios where that is not at all necessary, IMHO), it does not change the fact that external testing can be extremely useful after "air-gap" testing.
Agree completely.
You live in a very structured world.
The idea is to instantiate structure in order to reduce the chaos. ;>
Most people live in reality-land where there are too many variables to control, and not only is it impossible guarantee that everything involved is strict to BCP, but the opposite is almost certainly true.
Nothing's perfect, but one must do the basics before moving on to more advanced things. The low-hanging fruit, as it were (and of course, this is where scale becomes a major obstacle, in many cases; the fruit may be hanging low to the ground, but there can be a *lot* of it to pick).
Remember, systems do not work in isolation, and when you touch other networks, weird things happen.
One ought to get one's own house in order first, prior to looking at externalities. Agree with you 100% that they're important, but one must do what one can within one's own span of control, first.
Here we agree: Entertainment has (should have?) nothing to do with it.
Implementing BCPs is drudgery; because of this, it often receives short shrift. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile All behavior is economic in motivation and/or consequence.
On Jan 5, 2009, at 2:54 AM, Roland Dobbins wrote:
On Jan 5, 2009, at 3:04 PM, Patrick W. Gilmore wrote:
I can think of several instances where it _must_ be external. For instance, as I said before, knowing which intermediate networks are incapable of handling the additional load is useful information.
AUPs are a big issue, here..
No, they are not. AUPs do not stop me from sending traffic from my host to my host across links I am paying for.
Perhaps we are miscommunicating. You seem to think I am saying people should test externally before they know whether their internal systems work. Of course that is a silly idea.

That does not invalidate the need for external testing. Nor does it guarantee everything will be "BCP compliant", especially since "everything" includes things outside your control.

--
TTFN,
patrick
RD> Date: Mon, 5 Jan 2009 15:54:50 +0800
RD> From: Roland Dobbins
RD> AUPs are a big issue, here..

And AUPs [theoretically] set forth definitions.

Of course, there exist colo providers with "unlimited 10 Gbps bandwidth" whose AUPs read "do not use 'too much' bandwidth or we will get angry", thus introducing ambiguity regarding just _for what_ one is paying...

Perhaps "abuse" is best _operationally_ defined as "something that angers someone enough that it's at least sort of likely to cost you some money -- and maybe even a lot"?

Were the definition clear, I doubt there'd be such a long NANOG thread. (Yes, I'm feeling optimistic today.)

Eddy
FWIW, I'm primarily concerned about testing PPS loads and not brute-force bandwidth.

Best regards, Jeff
You could just troll people on IRC until you get DDoS'd. All the fun, none of the work!

-----Original Message-----
From: Jeffrey Lyon [mailto:jeffrey.lyon@blacklotus.net]
Sent: Monday, January 05, 2009 11:54 AM
To: nanog@merit.edu
Subject: Re: Ethical DDoS drone network

FWIW, I'm primarily concerned about testing PPS loads and not brute force bandwidth.

Best regards, Jeff
JL> Date: Mon, 5 Jan 2009 12:54:24 -0500
JL> From: Jeffrey Lyon
JL> FWIW, I'm primarily concerned about testing PPS loads and not brute
JL> force bandwidth.

Which underscores my point: <x> bps with minimally-sized packets is even higher pps than <x> bps with "normal"-sized packets, for any non-minimal value of "normal". Thus, the potential for breaking something that scales based on pps instead of bps _increases_ under such testing.

I've not [yet] seen an AUP that reads "customer shall maintain a minimum packet size of 400 bytes (combined IP header and payload) averaged over a moving one-hour window". ;-)

Eddy
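[Moderator's note: Eddy's bps-versus-pps point can be put in concrete numbers. A quick back-of-the-envelope calculation, ignoring framing overhead:]

```python
# At a fixed bit rate, smaller packets mean proportionally more packets
# per second: 1 Gbps of 64-byte packets is roughly 23x the pps of the
# same bit rate at 1500-byte packets (the ratio is just 1500/64).
def pps(bps: float, packet_bytes: int) -> float:
    """Packets per second for a given bit rate and packet size."""
    return bps / (packet_bytes * 8)

rate = 1_000_000_000           # 1 Gbps
small = pps(rate, 64)          # minimum-size packets: ~1.95M pps
large = pps(rate, 1500)        # full-size packets: ~83K pps
print(f"{small:,.0f} vs {large:,.0f} pps -> {small / large:.1f}x")
```

This is exactly why gear that forwards comfortably at line rate with big packets can fall over under a small-packet flood at the same bandwidth.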
FWIW, I'm primarily concerned about testing PPS loads and not brute force bandwidth.
Simple solution. Write some DDoS software that folks can install on their own machines. Make it so that the software is only triggered by commands from a device under the same administrative control, i.e. it uses a shared secret that is set up when folks install the software.

So far there are two pieces of software: one piece does the DDoSing, and the other piece controls it. You now need a third bit of software that sends DDoS requests to the controllers, and the controllers don't actually act upon such requests, but queue them until their administrators OK the DDoSing. Think of it a bit like a moderated mailing list.

If you produce that set of software, I'll bet that a lot of folks would be interested in working together to do DDoS stress testing of each other's networks, at times of their own choosing.

--Michael Dillon
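[Moderator's note: Michael's three-piece, moderated-mailing-list design can be sketched in a few lines. This is an illustrative skeleton only, not software from the thread; the class and function names are hypothetical, and a real drone would verify signatures over a network protocol.]

```python
# Sketch of the opt-in design: drones only act on commands signed with a
# shared secret established at install time, and the controller queues
# incoming test requests until an administrator explicitly approves them.
import hashlib
import hmac

SHARED_SECRET = b"set-up-at-install-time"   # per-deployment shared secret

def sign(command: bytes) -> str:
    return hmac.new(SHARED_SECRET, command, hashlib.sha256).hexdigest()

class Controller:
    """Queues test requests; nothing reaches a drone without approval."""
    def __init__(self):
        self.pending = []    # requests awaiting admin sign-off
        self.approved = []   # (command, signature) released to drones

    def request_test(self, command: bytes):
        self.pending.append(command)   # queue only: never act directly

    def admin_approve(self, command: bytes):
        if command in self.pending:
            self.pending.remove(command)
            self.approved.append((command, sign(command)))

def drone_accepts(command: bytes, signature: str) -> bool:
    """A drone runs a command only if the HMAC checks out."""
    return hmac.compare_digest(sign(command), signature)
```

The moderated-list analogy maps directly: request_test is a post to the list, admin_approve is the moderator releasing it, and the HMAC ensures a drone ignores anyone who isn't its own administrator.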
participants (11)
- deleskie@gmail.com
- Edward B. DREGER
- Gadi Evron
- Jeffrey Lyon
- kris foster
- Mark Foster
- Michael Gazzerro
- michael.dillon@bt.com
- Patrick W. Gilmore
- Roland Dobbins
- Valdis.Kletnieks@vt.edu