Ethical DDoS drone network
Say, for instance, one wanted to create an "ethical botnet": how would this be done in a manner that is legal, non-abusive toward other networks, and unquestionably used for legitimate internal security purposes? How does your company approach this dilemma?

Our company, for instance, has always relied on outside attacks to spot-check our security, and I'm beginning to think there may be a more user-friendly alternative. Thoughts?

--
Jeffrey Lyon, Leadership Team
jeffrey.lyon@blacklotus.net | http://www.blacklotus.net
Black Lotus Communications of The IRC Company, Inc.

Look for us at HostingCon 2009 in Washington, DC on August 10th - 12th at Booth #401.
I would say to roll your own binary, hardcoded to hit only one IP address, and have it held on a law-enforcement-approved network under the supervision of a qualified agent. $0.02
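A minimal sketch of that suggestion, assuming nothing beyond the Python standard library: the target, rate, and duration below are hypothetical placeholders compiled into the tool, so it cannot be repointed at an arbitrary victim at runtime.

import socket
import time

# Hypothetical sketch: a UDP stress tool whose single permitted target is
# fixed at build time. All constants are illustrative placeholders.
TARGET = ("192.0.2.10", 9)    # 192.0.2.0/24 is TEST-NET-1, reserved for docs
PACKETS_PER_SEC = 1000        # hard ceiling on send rate
PAYLOAD = b"x" * 512
DURATION_SECS = 10

def run_test() -> None:
    """Send a fixed-rate stream to the one hardcoded target, then stop."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / PACKETS_PER_SEC
    deadline = time.monotonic() + DURATION_SECS
    sent = 0
    while time.monotonic() < deadline:
        sock.sendto(PAYLOAD, TARGET)   # no runtime option to change TARGET
        sent += 1
        time.sleep(interval)
    print(f"sent {sent} packets to {TARGET[0]}:{TARGET[1]}")

if __name__ == "__main__":
    run_test()

The point of the design is that retargeting requires rebuilding and redeploying the binary, which is an auditable act rather than a runtime command.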
On Sun, Jan 4, 2009 at 6:06 PM, Jeffrey Lyon <jeffrey.lyon@blacklotus.net> wrote:
Say, for instance, one wanted to create an "ethical botnet": how would this be done in a manner that is legal, non-abusive toward other networks, and unquestionably used for legitimate internal security purposes?
Well, for starters, you would have to own (in the traditional sense) all of the hosts involved. :-)

- ferg

--
"Fergie", a.k.a. Paul Ferguson
Engineering Architecture for the Internet
fergdawgster(at)gmail.com
ferg's tech blog: http://fergdawg.blogspot.com/
On 05.01.2009 at 03:06, Jeffrey Lyon wrote:
Say, for instance, one wanted to create an "ethical botnet": how would this be done in a manner that is legal, non-abusive toward other networks, and unquestionably used for legitimate internal security purposes? How does your company approach this dilemma?
Hello,

http://mirror.informatik.uni-mannheim.de/pub/ccc/streamdump/saal3/Tag3-Saal3... and http://mirror.informatik.uni-mannheim.de/pub/ccc/streamdump/saal3/Tag3-Saal3...

Have fun!

Marc

--
Les Enfants Terribles - WWW.LET.DE
Marc Manthey, Hildeboldplatz 1a, 50672 Köln - Germany
Tel.: 0049-221-3558032 | Mobil: 0049-1577-3329231
mail: marc@let.de | jabber: marc@kgraff.net | IRC: #opencu freenode.net
PGP/GnuPG: 0x1ac02f3296b12b4d
twitter: http://twitter.com/macbroadcast | web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and certainly don't reflect those of any other entity (legal or otherwise). Please note that according to the German law on data retention, information on every electronic information exchange with me is retained for a period of six months.
On Sun, 4 Jan 2009, Jeffrey Lyon wrote:
Say, for instance, one wanted to create an "ethical botnet": how would this be done in a manner that is legal, non-abusive toward other networks, and unquestionably used for legitimate internal security purposes? How does your company approach this dilemma?
The company I work for has not approached this particular dilemma yet. I'm not sure what legitimate internal security purposes you're looking to fulfill, but I think you need to ask yourself a few questions first (not an all-inclusive list, but food for thought nonetheless):

1. What is the purpose of this legit botnet? In other words, what business objective does it achieve?

2. Do you have the people in-house to write the software, or would you be willing to take a chance on using something that exists 'in the wild'? Depending on how security-minded your shop is, your corporate security folks and legal counsel might take a dim view of running untrusted software on your internal network, especially if source code is not available. That particular monster can get out of control very quickly.

3. Do you have a sufficient number of machines under your control to populate this botnet and achieve your goals (see point 1)?

4. How will this botnet be isolated from the rest of your internal network, and would that isolation limit or even negate the botnet's usefulness?

5. If the answer to question 4 is "no isolation", how will you demonstrably control the botnet's propagation?

6. Depending on the answer to question 5, there might be regulatory compliance issues to consider (HIPAA, FERPA, GLB, SOX, internal security/privacy policies, contractual obligations, etc.).
Our company, for instance, has always relied on outside attacks to spot-check our security, and I'm beginning to think there may be a more user-friendly alternative.
Infection, even for ethical purposes, is still infection.

jms
On Sun, 4 Jan 2009 21:06:34 -0500 "Jeffrey Lyon" <jeffrey.lyon@blacklotus.net> wrote:
Say, for instance, one wanted to create an "ethical botnet": how would this be done in a manner that is legal, non-abusive toward other networks, and unquestionably used for legitimate internal security purposes? How does your company approach this dilemma?
As long as some part of the system (hosts/networks) from the bots to the target is not under your control or prepared for this sort of activity, you may not get a satisfactory answer on this. It's quite likely these days that a third party playing the unwitting participant in this botnet would find it objectionable.

Is creating and running a botnet the answer? What exactly are you trying to protect against? DDoS? There are various sorts of penetration tests and design reviews you could go through as an alternative to running a so-called "ethical" botnet. Further information on what you're trying to protect against may elicit some useful strategies.

John
A legal botnet is a distributed system you own. A legal DDoS network doesn't exist. The question is set wrong, no?
Agreed, Gadi. It wouldn't be an attack if it were ethical. Technically, that would be "load testing" or "stress testing". Might I suggest this to help? http://www.opensourcetesting.org/performance.php

On Sun, Jan 4, 2009 at 9:55 PM, Gadi Evron <ge@linuxbox.org> wrote:
A legal botnet is a distributed system you own.
A legal DDoS network doesn't exist. The question is set wrong, no?
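In that spirit, a minimal sketch of a controlled load test using only the Python standard library. The URL, worker count, and request count are hypothetical; such a tool should only ever be pointed at a host you own and have explicit permission to saturate.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: a small concurrent HTTP load test, in the
# "load testing / stress testing" sense discussed above.
TARGET_URL = "http://test.example.internal/"  # illustrative: a host you own
WORKERS = 20
REQUESTS = 500

def one_request(_: int) -> float:
    """Issue one GET and return its latency in seconds, or -1.0 on error."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            resp.read()
        return time.monotonic() - start
    except OSError:
        return -1.0

def main() -> None:
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = list(pool.map(one_request, range(REQUESTS)))
    ok = [l for l in latencies if l >= 0]
    if ok:
        print(f"{len(ok)}/{REQUESTS} succeeded, avg latency {sum(ok)/len(ok):.3f}s")
    else:
        print("all requests failed")

if __name__ == "__main__":
    main()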
On Sun, Jan 04, 2009 at 09:55:20PM -0600, Gadi Evron wrote:
A legal botnet is a distributed system you own.
A legal DDoS network doesn't exist. The question is set wrong, no?
Kind of depends on what the model is. A botnet for hire to "red-team" my network might be just the ticket. --bill
On Sun, Jan 4, 2009 at 10:27 PM, <bmanning@vacation.karoshi.com> wrote:
On Sun, Jan 04, 2009 at 09:55:20PM -0600, Gadi Evron wrote:
A legal botnet is a distributed system you own. A legal DDoS network doesn't exist. The question is set wrong, no?

Kind of depends on what the model is. A botnet for hire to "red-team" my network might be just the ticket.
You probably don't have to entirely "own" the distributed system for it to be legal. You could just control it with proper authorization.

A legal botnet is one whose deployment and operation doesn't break any laws in any of the relevant jurisdictions. The ways to achieve this are legal considerations, not technical considerations. I don't think this list is really a good place to ask a question about legality and get an answer you can rely on. You need to confer with your lawyers about how exactly your botnet can or can't be built and still be legal. This may depend on what country your botnet operates in, where you are, where your nodes are, etc. But thoroughly control and restrain every possible factor that could ever make your botnet illegal, and the result should (IMHO) be legal.

This is not an exhaustive enumeration, but some situations that often make illegal botnets illegal are:

(A) The botnet operator runs code on computers without authorization, or the botnet software exploits security vulnerabilities in victim computers to install without permission; i.e., the operator gains unauthorized access to a computer to deploy botnet nodes, or the software is a worm. This problem is avoided if you take measures to guarantee you own every node, or that you have full permission for every computer you will possibly run botnet software on, to the full extent of the botnet node's activities, and you ensure the botnet software never automatically "spreads itself" like a worm. This way, all access you gain to node PCs is authorized.

(B) The botnet node software conducts unauthorized activities after it is installed on the host PC, e.g., theft of services. Perhaps an authorized user of the PC did install the software, but they installed it for an entirely different purpose; the botnet node is hidden software, not noted in the product brochure or other prominent information about the software. This problem is avoided by making sure the person giving permission to install the software is aware of the botnet node and all of its expected activities before a node is brought up.

(C) Traffic generated by the botnet could be illegal. For example, traffic in excess of agreements you have in place, or in violation of your ISP's TOS, TOU, or AUP, may be questionable. Ethically, you need permission from the owners of the source and destination networks the botnet generates traffic on, not just the source and destination computers. For example, you have agreements for 10 gigs, but your botnet test accidentally sends 50 gigs toward your remote site, or one of the thousands of nodes saturates a shared link at its local site that belongs to someone else. An attempt to simulate a DDoS against your own network could inadvertently turn into a real DoS on someone else's network as well as yours, for example one of your providers' networks. This is best avoided by maintaining tight control over any distributed stress testing; massively distributed stress testing should be quarantined by all available means. The destination of any testing must be a computer you have permission to blow up, and the amount of traffic generated by any botnet node on its LAN needs to be acceptable. Always retain rigid controls over any traffic generated, and very strong measures to prevent an unauthorized third party from ever being able to make your nodes generate any traffic.
At a bare minimum, that means strong PKI (no MD5 or SHA-1) and digitally signed, timestamped commands for starting a test, with some mechanism to prevent unauthorized creation or replay of commands, plus multiple failsafe mechanisms to allow a test to be rapidly halted; e.g., all nodes ping a "control point" once every 30 seconds, and if two pings are dropped, the node stops in its tracks. That way you can kill a runaway botnet by unplugging your control hosts.

--
-J
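A minimal sketch of those failsafes, assuming an HMAC shared secret as a stand-in for the full PKI described above; the control-point address, key, and wire format are all hypothetical.

import hmac
import hashlib
import json
import socket
import time

# Hypothetical sketch of the failsafes described above: nodes accept only
# signed, timestamped commands and stop if the control point goes silent.
CONTROL_HOST = ("control.example.net", 9999)  # illustrative control point
SHARED_KEY = b"replace-with-a-strong-key"     # HMAC stand-in for real PKI
PING_INTERVAL = 30   # seconds between pings, per the post
MAX_MISSED = 2       # two dropped pings and the node stops in its tracks

def command_is_authentic(msg: bytes, signature: str, max_age: float = 60.0) -> bool:
    """Reject unsigned, tampered, or replayed (stale-timestamp) commands."""
    body = json.loads(msg)
    if abs(time.time() - body["timestamp"]) > max_age:
        return False  # too old: treat as a possible replay
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def ping_ok(sock: socket.socket) -> bool:
    """One heartbeat to the control point; False if it times out."""
    try:
        sock.sendto(b"ping", CONTROL_HOST)
        sock.recvfrom(16)
        return True
    except socket.timeout:
        return False

def node_main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)
    missed = 0
    while missed < MAX_MISSED:
        # any test command received here must pass command_is_authentic()
        # before the node generates a single packet of test traffic
        missed = 0 if ping_ok(sock) else missed + 1
        time.sleep(PING_INTERVAL)
    # control point unreachable: halt all traffic generation immediately,
    # so a runaway test dies when you unplug the control hosts

if __name__ == "__main__":
    node_main()

The key property is that the default state is "stop": silence from the control point halts traffic, rather than a command being required to halt it.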
There are some assumptions here. First, are you considering volumetric DDoS attacks? Second, if you plan on harvesting wild bots and using them to serve your purpose, then I don't see how this can be ethical unless they are just clients from your own network, making it less distributed; you would then have to have this in your AUP. Hmm, I really don't know what you would gain by this. Not knowing what your network looks like, but assuming you're somewhat scaled, I would think this could all be done in the lab.
TAB> Date: Mon, 5 Jan 2009 11:54:06 -0500
TAB> From: "BATTLES, TIMOTHY A (TIM), ATTLABS"
TAB> assuming you're somewhat scaled, I would think this could all be done
TAB> in the lab.

And end up with a network that works in the lab. :-)

- bw * delay
- effects of flow caching, where applicable
- jitter (esp. under load)
- packet dups and loss (esp. under load)
- packet reordering and associated side effects
- upstream/sidestream throughput (esp. under load)

No, reality is far more complex. Some things do not lend themselves to _a priori_ models, nor even "TFAR" generalizations.

Eddy
--
Everquick Internet - http://www.everquick.net/
A division of Brotsman & Dreger, Inc. - http://www.brotsman.com/
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 785 865 5885 Lawrence and [inter]national
Phone: +1 316 794 8922 Wichita
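Several of the effects on that list can at least be approximated in a lab with link emulation, even if not fully reproduced. A sketch, assuming a Linux test box with the stock tc/netem queueing discipline; the interface name and impairment values are purely illustrative.

import subprocess

# Hypothetical sketch: impose netem impairments (delay/jitter, loss,
# duplication, reordering) on a lab link via Linux tc. Values illustrative.
IFACE = "eth1"  # the lab-facing interface; adjust for your setup

def apply_impairments() -> None:
    """Add delay+jitter, loss, duplication, and reordering to IFACE."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
         "delay", "100ms", "20ms",   # 100ms base delay, +/-20ms jitter
         "loss", "1%",               # 1% random packet loss
         "duplicate", "0.5%",        # occasional duplicate packets
         "reorder", "25%", "50%"],   # 25% of packets jump the delay queue
        check=True,
    )

def clear_impairments() -> None:
    """Remove the netem qdisc, restoring the link."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    apply_impairments()
    # ... run lab traffic and measurements here ...
    clear_impairments()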
True, real-world events differ, but so do denial-of-service attacks: distribution in the network, PPS, BPS, packet type, packet size, etc., etc. So I really don't get the point of staging a real-life, do-it-yourself test. You put pieces of your network in jeopardy night after night during maintenance windows to determine what? That you're vulnerable to DDoS? We all know we are; it's just a question of what type and how much, right? So we identify our choke points. We all know them. We look at the vendor data on how much PPS a device can handle and quickly dismiss that. So what's the next step? Take the device that IS the choke point and pump it full of all different flavors until it fails. No harm, no foul, and now we have data regarding how much of what takes the device out. If the network is scaled, we now know that we have X devices that can fail if the DDoS reaches X PPS with Y packet types. What I don't get is what you would be trying to accomplish doing this on a production network. Worst case is you break something. Best case is you don't. So if the best-case scenario is reached, what have you learned? Nothing! So what do you do next, ramp it up? Seems silly.
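A sketch of the kind of lab sweep Tim describes: ramping packet counts and sizes at a device under test until it fails. This assumes the third-party scapy library and a placeholder device address; scapy is not a line-rate generator (real tests use hardware testers), but the control flow is the same, and it belongs on an isolated lab network only.

from scapy.all import IP, UDP, Raw, send  # requires the scapy package

# Hypothetical sketch: sweep a lab device with growing UDP bursts of varying
# packet sizes to find its failure point. Isolated lab networks only.
DUT = "10.0.0.1"                    # device under test (placeholder address)
SIZES = [64, 512, 1400]             # payload sizes in bytes
BURSTS = [1_000, 10_000, 50_000]    # packets per burst, ramping up

def burst(size: int, count: int) -> None:
    """Send one burst of `count` UDP packets of `size` payload bytes."""
    pkt = IP(dst=DUT) / UDP(dport=7) / Raw(load=b"x" * size)
    send(pkt, count=count, verbose=False)

for size in SIZES:
    for count in BURSTS:
        burst(size, count)
        # Poll the DUT here (ping, SNMP, console) and record whether it is
        # still forwarding; stop the sweep at the first failure.
        print(f"sent burst: {count} packets of {size}-byte payload")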
BATTLES, TIMOTHY A (TIM), ATTLABS wrote:
True, real-world events differ, but so do denial-of-service attacks: distribution in the network, PPS, BPS, packet type, packet size, etc., etc. So I really don't get the point of staging a real-life, do-it-yourself test. You put pieces of your network in jeopardy night after night during maintenance windows to determine what? That you're vulnerable to DDoS? We all know we are; it's just a question of what type and how much, right? So we identify our choke points. We all <snip>
packet types. What I don't get is what you would be trying to accomplish doing this on a production network. Worst case is you break something. Best case is you don't. So if the best-case scenario is reached, what have you learned? Nothing! So what do you do next, ramp it up? Seems silly.
I'll personally agree with you, though there are fringe cases. For example, one or more of your peers might falter before you do. While I'm sure they won't enjoy you hurting their other customers, knowing that your peer's router will crater before your expensive piece of hardware does is usually good knowledge. Since it's controlled, you can minimize the damage of testing that fact.

Another test is automatic measures and how well they perform. This may or may not be useful in a closed environment, though in a closed environment they'll definitely need to mirror the production environment, depending on what criteria the automatic measures use.

A non-forging botnet which sends packets (valid or malformed) to an accepting recipient is strictly another internet app, with a harm ratio comparable to some P2P apps. IP forging, of course, could cause unintended blowback, which could have severe legal ramifications.

That being said, I'd quit calling it a botnet. I'd call it a distributed application that stress-tests DDoS protection measures, and it's advisable to let your direct peers know when you plan to run it. They might even be interested in monitoring their equipment (or tell you up front that you'll crater their equipment).

Jack
On Jan 6, 2009, at 6:52 AM, Jack Bates wrote:
(or tell you up front that you'll crater their equipment).
This is the AUP danger to which I was referring earlier.

Also, note that the miscreants will attack intermediate systems such as routers they identify via tracerouting from multiple points to the victim - there's no way to test that externally without violating AUPs and/or various criminal statutes in multiple jurisdictions. And then there are managed-CPE and hosting scenarios, which complicate matters further.

Tim's comments about understanding the performance envelopes of all the system/infrastructure elements are spot-on - that's a primary input into design criteria (or should be). With this knowledge in hand, one can test the most important things internally. But prior to testing, one should ensure that the architecture and the element configurations are hardened with all the relevant BCPs and scaled for capacity. The main purpose of the testing would be to verify correct implementation, ensure all the failure modes have been accounted for and ameliorated to the degree possible, and serve as an opsec drill.

What I've seen over and over again is a desire to test because it's 'cool', but no desire to spend the time in the design and implementation (or re-implementation) phases to ensure that things are hardened in the first place, nor to spell out security policies and procedures, train, etc. Actual *security* (as opposed to checklisting) consists of attention to lots of tedious details, drudgery and scut-work, involving the coordination of multiple groups and the attendant politics. It isn't 'sexy', it isn't 'cool', it isn't 'fun', but it pays off at 4AM on a holiday weekend. Testing should become a priority only after one has done everything one knows to do within one's span of control, IMHO - and I've yet to run across this happy circumstance in any organization that has asked me about this kind of testing, FWIW.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile

All behavior is economic in motivation and/or consequence.
In my opinion, the real thing you can puzzle out of this kind of testing is the occasional hidden dependency. I've seen ultra-robust servers fail because a performance-monitoring application living on them was timing out in a remote query, and I've also seen devices fail well below their expected load because they were using multiple layers of encapsulation (IP over MPLS over IP over Ethernet over MPLS over Frame Relay ...) and one of the hidden middle layers was badly optimized.

The advantage of performing this DDoS-style load testing on yourself is that *you can turn it off once you experience the failure* and then go figure out why it broke when it did. This is a lot more pleasant than trying to figure it out at 2:30 in the morning with insufficient coffee.

David Barak
Need Geek Rock? Try The Franchise: http://www.listentothefranchise.com
On Jan 6, 2009, at 7:23 AM, David Barak wrote:
In my opinion, the real thing you can puzzle out of this kind of testing is the occasional hidden dependency.
Yes - but if your lab accurately reflects production, you can discover this kind of thing in the lab (and one ought to already have a lab setup which reflects production for many reasons having nothing to do with security). ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile All behavior is economic in motivation and/or consequence.
On Mon, 1/5/09, Roland Dobbins <rdobbins@cisco.com> wrote:

On Jan 6, 2009, at 7:23 AM, David Barak wrote:
In my opinion, the real thing you can puzzle out of this kind of testing is the occasional hidden dependency.
Yes - but if your lab accurately reflects production, you can discover this kind of thing in the lab (and one ought to already have a lab setup which reflects production for many reasons having nothing to do with security).
I agree - having a lab of that type is absolutely ideal. However, the ideal and the real diverge tremendously in large and mid-size enterprise networks, because most enterprises just don't have enough lab equipment to adequately model all of the possible scenarios, and including the cost of a lab in the rollout immediately doubles all capital expenditures. The types of problems that an ultra-large DoS can ferret out are the kind which *don't* show up in anything smaller than a 1:1 or 1:2 scale model.

Consider for a moment a large retail chain with several hundred or a couple thousand locations. How big a lab should they have before deciding to roll out a new network something-or-other? Should their lab be 1:10 scale? A more realistic figure is that they'll consider themselves lucky to be between 1:50 and 1:100, and that lab is probably understaffed at best. Having a dedicated lab manager is often seen as an expensive luxury, and many businesses don't have the margin to support it.

David Barak
Need Geek Rock? Try The Franchise: http://www.listentothefranchise.com
On Jan 6, 2009, at 8:01 AM, David Barak wrote:
The types of problems that the ultra-large DoS can ferret out are the kind which *don't* show up in anything smaller than a 1:1 or 1:2 scale model.
In my experience, once one has an understanding of the performance envelopes and has built a lab which contains examples of the functional elements of the system (network infrastructure, servers, apps, databases, clients, et al.), one can extrapolate pretty accurately out to orders of magnitude.

The problem is that many organizations don't do the above prior to freezing the design and initiating deployment.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile

All behavior is economic in motivation and/or consequence.
Roland Dobbins wrote:
In my experience, once one has an understanding of the performance envelopes and has built a lab which contains examples of the functional elements of the system (network infrastructure, servers, apps, databases, clients, et al.), one can extrapolate pretty accurately out to orders of magnitude.
The problem is that many organizations don't do the above prior to freezing the design and initiating deployment.
Sadly, I think money and time have a lot to do with this. Technology is a moving target, and everyone is constantly struggling to keep up while maintaining performance and security. I've seen this from software developers, too.

I'd say I've seen more outages caused by a simple command typed into a router CLI crashing the router than by DDoS traffic. Perhaps I've been lucky with the latter.

Jack
On Jan 6, 2009, at 8:45 AM, Jack Bates wrote:
Sadly, I think money and time have a lot to do with this.
Even more than this, it's a skillset and mindset issue. Many organizations don't know enough about how the underlying technologies work to understand that they need to incorporate these costs and allocate these resources as part of the project spend, nor do they think to ask around (or even use the Search Engine of Their Choice) to find out about the 'unknown unknowns'. To mount a successful defense, one must learn to think like an attacker. This seems to be a relatively rare attitude, unfortunately. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile All behavior is economic in motivation and/or consequence.
On Mon, Jan 5, 2009 at 4:11 PM, Roland Dobbins <rdobbins@cisco.com> wrote:
In my experience, once one has an understanding of the performance envelopes and has built a lab which contains examples of the functional elements of the system (network infrastructure, servers, apps, databases, clients, et al.), one can extrapolate pretty accurately out to orders of magnitude.
It's one of those things where the difference between theory and practice is smaller in theory than it is in practice, though... But yeah, sometimes things like load balancers fail, or routers run out of table space, or whatever. I've had enough enterprise customers worry about what will happen to their VPN sites if some neighborhood kid annoys his gamer buddies and gets a few Gbps of traffic knocking down their DSLAM and its upstream feeds.
The problem is that many organizations don't do the above prior to freezing the design and initiating deployment.
Back in the mid-90s I had one networking-software development customer that had a room with 500 PCs on racks, and some switches that would let them dump groups of 50 of them together with whatever server they were testing. That was a lot more impressive back then, when PCs were full-sized devices that needed keyboards and monitors (grouped on KVMs, at least), as opposed to being 1Us or blades or virtual machines.

----
Thanks; Bill

Note that this isn't my regular email account - it's still experimental so far. And Google probably logs and indexes everything you send it.
David Barak wrote:
Consider for a moment a large retail chain, with several hundred or a couple thousand locations. How big a lab should they have before deciding to roll out a new network something-or-other? Should their lab be 1:10 scale? A more realistic figure is that they'll consider themselves lucky to be between 1:50 and 1:100, and that lab is probably understaffed at best. Having a dedicated lab manager is often seen as an expensive luxury, and many businesses don't have the margin to support it.
At the very least they should have a complete mock location (from an IT perspective) in a lab: identical copies of all local servers and a carbon copy of their official template network. This is how AOL does it. Every change is tested in the mock remote site before the official template is changed and pushed out to all the production sites.

Justin
Justin Shore wrote:
At the very least they should have a complete mock location (from an IT perspective) in a lab: identical copies of all local servers and a carbon copy of their official template network. This is how AOL does it. Every change is tested in the mock remote site before the official template is changed and pushed out to all the production sites.
That's useful for testing changes to the remote site itself, but it doesn't do anything for testing changes to the entire WAN. I've seen _many_ routing problems appear in large WANs that simply can't be replicated with fewer than a hundred or even a thousand routers. The vendors may have tools to simulate such conditions, since they need them for their own QA, support, etc., but they rarely give them to customers because that would be another product they'd have to support.

S
On Jan 7, 2009, at 1:05 AM, Stephen Sprunk wrote:
I've seen _many_ routing problems appear in large WANs that simply can't be replicated with fewer than a hundred or even a thousand routers.
Users can simulate many of these conditions themselves using various open-source and commercial tools, which've been available for many years. And again, it comes back to understanding the performance envelope of one's equipment, even without simulation. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile All behavior is economic in motivation and/or consequence.
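One open-source approach along these lines, sketched under the assumption of a Linux host: cloning many lightweight 'router' nodes out of network namespaces joined by veth pairs, so routing behavior can be exercised at node counts a physical lab can't reach. All names and addresses below are illustrative.

import subprocess

# Hypothetical sketch: chain N Linux network namespaces with veth pairs as
# a cheap stand-in for "many routers" when testing routing behavior at scale.
N = 10  # number of emulated nodes; scale as the host allows

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

def build_chain(n: int) -> None:
    for i in range(n):
        sh("ip", "netns", "add", f"r{i}")
    for i in range(n - 1):
        a, b = f"veth{i}a", f"veth{i}b"
        sh("ip", "link", "add", a, "type", "veth", "peer", "name", b)
        sh("ip", "link", "set", a, "netns", f"r{i}")
        sh("ip", "link", "set", b, "netns", f"r{i + 1}")
        # number each point-to-point link out of 10.<i>.0.0/30
        sh("ip", "-n", f"r{i}", "addr", "add", f"10.{i}.0.1/30", "dev", a)
        sh("ip", "-n", f"r{i + 1}", "addr", "add", f"10.{i}.0.2/30", "dev", b)
        sh("ip", "-n", f"r{i}", "link", "set", a, "up")
        sh("ip", "-n", f"r{i + 1}", "link", "set", b, "up")
    # an open-source routing daemon would then be started inside each
    # namespace to exercise protocol behavior across the emulated topology

if __name__ == "__main__":
    build_chain(N)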
RD> Date: Wed, 7 Jan 2009 08:50:46 +0800
RD> From: Roland Dobbins
RD> > I've seen _many_ routing problems appear in large WANs that simply
RD> > can't be replicated with fewer than a hundred or even a thousand
RD> > routers.
RD> Users can simulate many of these conditions themselves using various

many != all. It appears to be a question of what incremental benefit one gains from real-world testing.

RD> open-source and commercial tools, which've been available for many
RD> years.

I think that everyone agrees: no live testing until "adequate" lab testing has been performed. The disagreement seems to be over when/if live testing is necessary, and how much.

Because it just wouldn't be a NANOG thread without analogies *grin*, I offer the following: drug certification, aircraft certification, automobile crash testing, database benchmarking. Even when a system is highly deterministic, such as a database, one still expects _real-world_ testing. Traffic flows on large networks are highly stochastic... and this includes OPNs, which I posit are futile to attempt to model.

RD> And again, it comes back to understanding the performance envelope
RD> of one's equipment, even without simulation.

Very true. If one deploys an OSPF-happy network thinking that it scales O(n), one is in for a rude shock.

Eddy
--
Everquick Internet - http://www.everquick.net/
On Jan 7, 2009, at 9:40 AM, Edward B. DREGER wrote:
Even when a system is highly deterministic, such as a database, one still expects _real-world_ testing. Traffic flows on large networks are highly stochastic... and this includes OPNs, which I posit are futile to attempt to model.
Sure. In many cases, it seems that there's a lot of talk about testing after-the-fact, with relatively little analysis performed prior-to-the-fact to inform the design, including baseline security requirements.

When one has a network/system in which the basic security BCPs haven't been implemented, it makes little sense to expend scarce resources on testing when those resources could be better employed hardening and increasing the resiliency and robustness of said network/system.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // +852.9133.2844 mobile

All behavior is economic in motivation and/or consequence.
RD> Date: Wed, 7 Jan 2009 09:48:16 +0800
RD> From: Roland Dobbins
RD> When one has a network/system in which the basic security BCPs
RD> haven't been implemented, it makes little sense to expend scarce
RD> resources testing when those resources could be better employed
RD> hardening and increasing the resiliency and robustness of said
RD> network/system.

Very true. "Hey, it really _did_ break!" is hardly a useful approach.

Your post awakened my inner cynic: perhaps there are people who look to stress-testing OPNs in hopes that the weakest link is elsewhere, so that they may point the proverbial finger instead of fixing internal problems.

#include "cost-shifting/patching,smtp-auth,spf,urpf,et-cetera.h"

Eddy
--
Everquick Internet - http://www.everquick.net/
--- On Tue, 1/6/09, Justin Shore <justin@justinshore.com> wrote:
At the very least they should have a complete mock location (from an IT perspective) in a lab: identical copies of all local servers and a carbon copy of their official template network. This is how AOL does it. Every change is tested in the mock remote site before the official template is changed and pushed out to all the production sites.
I don't disagree at all: that is a straightforward way to anticipate *most* problems. What it does not and cannot validate is whether there is a scaling issue, and that is what "live" testing does give you.

David Barak
Need Geek Rock? Try The Franchise: http://www.listentothefranchise.com
I propose that we create two Internets. One can be the "testing" Internet, and the other can be "production". To ensure that both receive adequate treatment, they can trade places every few days. If something breaks, it can be moved from "production" to "testing".

The detection of hyperbole, sarcasm, and mathematical invalidity is left as an exercise for the reader. ;-)

Eddy
--
Everquick Internet - http://www.everquick.net/
participants (17)
- BATTLES, TIMOTHY A (TIM), ATTLABS
- Bill Stewart
- bmanning@vacation.karoshi.com
- David Barak
- Edward B. DREGER
- Gadi Evron
- Jack Bates
- James Hess
- Jeffrey Lyon
- John Kristoff
- Justin M. Streiner
- Justin Shore
- macbroadcast
- Paul Ferguson
- Roland Dobbins
- Stephen Sprunk
- Zach