Re: Vulnerabilities of Interconnection
At 02:45 PM 9/5/2002 -0400, alex@yuriev.com wrote:
This obviously would be a thesis of Equinix and other colo space providers, since this is exactly the service that they provide. It won't, however, be a thesis of any major network that either already has a lot of infrastructure in place or has to be a network that is supposed to survive a physical attack.
Actually, the underlying assumption of this paper is that major networks already have large global backbones that need to interconnect in n regions. The choice between Direct Circuits and colo-based cross connects is discussed and documented with costs and tradeoffs. Surviving a major attack was not the focus of the paper...but...
When I did this research I asked ISPs how many Exchange Points they felt were needed in a region. Many said one was sufficient, that they were resilient across multiple exchange points and transit relationships, and preferred to engineer their own diversity separate from regional exchanges. A bunch said that two was the right number, each with different operating procedures, geographic locations, providers of fiber, etc., as different as possible. Folks seemed unanimous that there should not be more than two IXes in a region, and that to do so would splinter the peering population.
Bill Woodcock was the exception to this last claim, positing (paraphrasing) that peering is a local routing optimization and that many inexpensive (relatively insecure) IXes are acceptable. The loss of any one simply removes the local routing optimization; transit is always an alternative for that traffic.
A couple of physical security considerations came out of that research:

1) Consider that manholes are not always secured, providing access to metro fiber runs, while there is generally greater security within colocation environments.
This is all great, except that the same metro fiber runs are used to get carriers into the super-secure facility, and, since neither those who originate information nor those who ultimately consume it are located completely within the facility, you still have the same problem. Add to this that the diverse fibers tend to aggregate in the basement of the building that houses the facility, that multiple carriers use the same manholes for their diverse fiber, and so on.
Fine - we both agree that no transport provider is entirely protected from physical tampering if its fiber travels through insecure passageways. Note that some transport capacity into an IX doesn't necessarily travel along the same path as the metro providers', particularly for those IXes located outside a metro region. There are also a multitude of paths, proportional to the number of providers still around in the metro area, that provide alternative routes into the IX. Within an IX, therefore, is a concentration of alternative providers, and these can be used as needed in the event of a path cut.
2) It is faster to repair physical disruptions at fewer points, leveraging cutovers to the alternative providers present in the colocation IX model, as opposed to the Direct Circuit model, where provisioning additional capacity to many end points may take days or months.
This again is great in theory, unless you are talking about someone who is planning on taking out the IX not accidentally but deliberately. To illustrate this, one just needs to recall the infamous fiber cut in McLean in 1999, when a backhoe not only cut Worldcom and Level(3) circuits but somehow led to a cement truck pouring cement into a Verizon manhole that was used by Level(3) and Worldcom.
Terrorists in cement trucks?
Again, it seems more likely and more technically effective to attack internally than physically. Focus again here on the cost/benefit analysis from both the provider and disrupter perspective and you will see what I mean.
Alex
Hi, sgorman1@gmu.edu wrote:
"Again, it seems more likely and more technically effective to attack internally than physically. Focus again here on the cost/benefit analysis from both the provider and disrupter perspective and you will see what I mean."
Is there a general consensus that cyber/internal attacks are more effective/dangerous than physical attacks? Anecdotally it seems the largest Internet outages have been from physical cuts or failures.

It depends on what you consider an Internet outage. Or how you define that. IMHO. Jane
2001 Baltimore train tunnel vs. Code Red worm (see Keynote report)
1999 McLean fiber cut - cement truck
AT&T cascading switch failure
Utah fiber cut (date??)
Not sure where the MAI mess-up at MAE-East falls
Then again, this is the biased perspective of the facet I'm researching.

Secondly, it seems that problems arise from physical cuts not because of a lack of redundant paths but because of a bottleneck in peering and transit - resulting in the ripple effects seen with the Baltimore incident.
----- Original Message ----- From: "William B. Norton" <wbn@equinix.com> Date: Thursday, September 5, 2002 3:04 pm Subject: Re: Vulnerabilities of Interconnection
At 02:45 PM 9/5/2002 -0400, alex@yuriev.com wrote:
This obviously would be a thesis of Equinix and other colo space providers, since this is exactly the service that they provide. It won't, however, be a thesis of any major network that either already has a lot of infrastructure in place or has to be a network that is supposed to survive a physical attack.
Actually, the underlying assumption of this paper is that major networks already have large global backbones that need to interconnect in n regions. The choice between Direct Circuits and colo-based cross connects is discussed and documented with costs and tradeoffs. Surviving a major attack was not the focus of the paper...but...
When I did this research I asked ISPs how many Exchange Points they felt were needed in a region. Many said one was sufficient, that they were resilient across multiple exchange points and transit relationships, and preferred to engineer their own diversity separate from regional exchanges. A bunch said that two was the right number, each with different operating procedures, geographic locations, providers of fiber, etc., as different as possible. Folks seemed unanimous that there should not be more than two IXes in a region, and that to do so would splinter the peering population.
Bill Woodcock was the exception to this last claim, positing (paraphrasing) that peering is a local routing optimization and that many inexpensive (relatively insecure) IXes are acceptable. The loss of any one simply removes the local routing optimization; transit is always an alternative for that traffic.
A couple of physical security considerations came out of that research:

1) Consider that manholes are not always secured, providing access to metro fiber runs, while there is generally greater security within colocation environments.
This is all great, except that the same metro fiber runs are used to get carriers into the super-secure facility, and, since neither those who originate information nor those who ultimately consume it are located completely within the facility, you still have the same problem. Add to this that the diverse fibers tend to aggregate in the basement of the building that houses the facility, that multiple carriers use the same manholes for their diverse fiber, and so on.
Fine - we both agree that no transport provider is entirely protected from physical tampering if its fiber travels through insecure passageways. Note that some transport capacity into an IX doesn't necessarily travel along the same path as the metro providers', particularly for those IXes located outside a metro region. There are also a multitude of paths, proportional to the number of providers still around in the metro area, that provide alternative routes into the IX. Within an IX, therefore, is a concentration of alternative providers, and these can be used as needed in the event of a path cut.
2) It is faster to repair physical disruptions at fewer points, leveraging cutovers to the alternative providers present in the colocation IX model, as opposed to the Direct Circuit model, where provisioning additional capacity to many end points may take days or months.
This again is great in theory, unless you are talking about someone who is planning on taking out the IX not accidentally but deliberately. To illustrate this, one just needs to recall the infamous fiber cut in McLean in 1999, when a backhoe not only cut Worldcom and Level(3) circuits but somehow led to a cement truck pouring cement into a Verizon manhole that was used by Level(3) and Worldcom.
Terrorists in cement trucks?
Again, it seems more likely and more technically effective to attack internally than physically. Focus again here on the cost/benefit analysis from both the provider and disrupter perspective and you will see what I mean.
Alex
Is there a general consensus that cyber/internal attacks are more effective/dangerous than physical attacks? Anecdotally it seems the largest Internet outages have been from physical cuts or failures.

It depends on what you consider an Internet outage. Or how you define that. IMHO.
Let's bring this discussion to some common ground - what kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities? Alex
On Fri, 2002-09-06 at 10:01, alex@yuriev.com wrote:
What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
Keep in mind that traffic on the global Internet after N x 500 kg of explosives are simultaneously detonated will surge, directed towards major news sites. I'll go back to lurking now. Ryan
Hi Alex, alex@yuriev.com wrote:
Is there a general consensus that cyber/internal attacks are more effective/dangerous than physical attacks? Anecdotally it seems the largest Internet outages have been from physical cuts or failures.

It depends on what you consider an Internet outage. Or how you define that. IMHO.
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
N? Well, if you define N as the number of interconnect facilities, such as all the Equinix sites (and I'm not banging on Equinix, it's just where we started all this), then I think globally it wouldn't make that much difference. People in Tokyo would still be able to reach the globe and both coasts of the US. Maybe some sites in the interior of the US would be difficult to reach. I'd have to run a model to be sure, but every one of the major seven has rerouting methodologies that would recover from the loss. And I don't think they exclusively peer at Equinix. The more I think about it, the more sure I am that they don't. However, I could be wrong. Wouldn't be the first time. Jane
Alex
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
N? Well, if you define N as the number of interconnect facilities, such as all the Equinix sites
Let's say that N is 4 and they are all in the US, for the sake of the discussion.
(and I'm not banging on Equinix, it's just where we started all this) then I think globally, it wouldn't make that much difference. People in Tokyo would still be able to reach the globe and both coasts of the US.
This presumes that the networks peer with the same AS numbers everywhere in the world, which I don't think they do. The other thing to think about is that the physical transport will be affected as well. Wavelength customers will lose their paths. Circuit customers that rely on equipment located at the affected sites will lose their circuits. Alex
Hi Alex, alex@yuriev.com wrote:
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
N? Well, if you define N as the number of interconnect facilities, such as all the Equinix sites
Let's say that N is 4 and they are all in the US, for the sake of the discussion.

Which four? Makes a big difference. And there, we just got proprietary/classified. I've often wondered what difference there would be in attacking cable heads instead of colo sites. Cut off the country from everywhere. How bad would that be?
(and I'm not banging on Equinix, it's just where we started all this) then I think globally, it wouldn't make that much difference. People in Tokyo would still be able to reach the globe and both coasts of the US.
This presumes that the networks peer with the same AS numbers everywhere in the world, which I don't think they do.
Hadn't thought of that. I'm not sure then of the impact.
The other thing to think about is that the physical transport will be affected as well. Wavelength customers will lose their paths. Circuit customers that rely on equipment located at the affected sites will lose their circuits.
For individual users, it might be devastating. Overall, globally, that's a different story. Jane
Wow, nothing like jumping into the middle of a running discussion after deleting all previous messages unread :)

On Fri, 6 Sep 2002, Pawlukiewicz Jane wrote:
Hi Alex,
alex@yuriev.com wrote:
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
N? Well, if you define N as the number of interconnect facilities, such as all the Equinix sites
Let's say that N is 4 and they are all in the US, for the sake of the discussion.

Which four? Makes a big difference. And there, we just got proprietary/classified. I've often wondered what difference there would be in attacking cable heads instead of colo sites. Cut off the country from everywhere. How bad would that be?
I was under the impression that OCS/Homeland Security had already done a little study, perhaps aided by some other three-letter agencies and some telcos, for this very thing. I was also under the impression that the number of sites had to be significantly higher than 4 to do any real damage.
(and I'm not banging on Equinix, it's just where we started all this) then I think globally, it wouldn't make that much difference. People in Tokyo would still be able to reach the globe and both coasts of the US.
This presumes that the networks peer with the same AS numbers everywhere in the world, which I don't think they do.
Hadn't thought of that. I'm not sure then of the impact.
Additionally, a majority of peering, big peering, isn't on public exchanges, is it? So you'd have to find all the places that the larger providers connect to each other and perhaps target these. Even with this there are the public exchanges, so things 'should' fail over to them... Overall, I recall the outcome from the study being that the Internet was a significantly difficult target to completely kill, and even making a performance impact was somewhat difficult... I will say, though, that my memory is a bit foggy on this particular study; I didn't participate in it, and I didn't read the actual results. Any info I have on it is third-hand via a lawyer, so take all this with a grain of salt :)
The other thing to think about is that the physical transport will be affected as well. Wavelength customers will lose their paths. Circuit customers that rely on equipment located at the affected sites will lose their circuits.
For individual users, it might be devastating. Overall, globally, that's a different story.
This was about the result I heard: you can easily cut out a 'mom and pop' ISP, but cutting out a large provider is a tougher task with bombs... we already know it's possible with the right routing 'update' :(
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
N? Well, if you define N as the number of interconnect facilities, such as all the Equinix sites
Let's say that N is 4 and they are all in the US, for the sake of the discussion.

Which four? Makes a big difference. And there, we just got proprietary/classified. I've often wondered what difference there would be in attacking cable heads instead of colo sites. Cut off the country from everywhere. How bad would that be?
I was under the impression that OCS/Homeland Security had already done a little study, perhaps aided by some other three-letter agencies and some telcos, for this very thing. I was also under the impression that the number of sites had to be significantly higher than 4 to do any real damage.
That study probably came from the same people who believe that Echelon can intercept every single email sent, in addition to every phone conversation and fax. The bankruptcies of two fiber carriers showed rather clearly that those companies themselves do not know where they have what, and what depends on what.
(and I'm not banging on Equinix, it's just where we started all this) then I think globally, it wouldn't make that much difference. People in Tokyo would still be able to reach the globe and both coasts of the US.
This presumes that the networks peer with the same AS numbers everywhere in the world, which I don't think they do.
Hadn't thought of that. I'm not sure then of the impact.
Additionally, a majority of peering, big peering, isn't on public exchanges, is it? So you'd have to find all the places that the larger providers connect to each other and perhaps target these. Even with this there are the public exchanges, so things 'should' fail over to them...
Interconnect sites are not public peering. An interconnect site is simply a location where networks exchange traffic with each other.
This was about the result I heard: you can easily cut out a 'mom and pop' ISP, but cutting out a large provider is a tougher task with bombs... we already know it's possible with the right routing 'update' :(
Tell it to those whose primary facility was in one tower of WTC and backup facility in another. Alex
On Fri, 6 Sep 2002, Pawlukiewicz Jane wrote:
:would be difficult to reach. I'd have to run a model to be sure, but
:every one of the major seven have rerouting methodologies that would
:recover from the loss. And I don't think they exclusively peer at

Even if we were to model it, the best data we could get for the "Internet" would be BGP routing tables. These are also subjective views of the rest of the net. We could take a full table, map all the ASN adjacencies, and then pick arbitrary ASNs to "fail", then see who is still connected, but we are still dealing with connectivity relative to us and our peers, even 5+ AS-hops away.

I would imagine this is one of the tasks CAIDA.org is probably working on, as it seems to fall within their mission.

So even if we all agreed upon a common disaster to hypothesize on, there would be little common ground to be had, as our interpretations could only be political arguments over what is most important, because there is no technically objective view of the network to forge agreement on.

Cheers,
--
batz
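For what it's worth, the kind of experiment described above can be sketched in a few lines. This is a toy, not real route-views data: the adjacency map and AS numbers below are made up for illustration.

```python
from collections import deque

def reachable(adjacency, origin, failed):
    """BFS over AS adjacencies, skipping 'failed' ASNs; returns the
    set of ASNs still reachable from origin."""
    if origin in failed:
        return set()
    seen = {origin}
    queue = deque([origin])
    while queue:
        asn = queue.popleft()
        for neighbor in adjacency.get(asn, ()):
            if neighbor not in failed and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Toy topology: two well-meshed transit ASs (701, 1239) with stubs
# hanging off them; 64512/64513 are private-range stand-ins.
adjacency = {
    701:   {1239, 7018, 3561, 64512},
    1239:  {701, 7018, 3549, 64513},
    7018:  {701, 1239},
    3561:  {701},
    3549:  {1239},
    64512: {701},
    64513: {1239},
}

before = reachable(adjacency, 64512, failed=set())
after = reachable(adjacency, 64512, failed={701})  # "fail" AS 701
print(sorted(before - after))  # everything the single-homed stub loses
```

Here the single-homed stub loses everything when its only transit fails, while a vantage point behind 1239 keeps most of the net, which is exactly the subjective-view problem: the answer depends on where you measure from.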
Hi, batz wrote:
On Fri, 6 Sep 2002, Pawlukiewicz Jane wrote:
:would be difficult to reach. I'd have to run a model to be sure, but
:every one of the major seven have rerouting methodologies that would
:recover from the loss. And I don't think they exclusively peer at

Even if we were to model it, the best data we could get for the "Internet" would be BGP routing tables. These are also subjective views of the rest of the net. We could take a full table, map all the ASN adjacencies, and then pick arbitrary ASNs to "fail", then see who is still connected, but we are still dealing with connectivity relative to us and our peers, even 5+ AS-hops away.
I want to make sure I understand this. As I understand it, this would work regarding routing only. It would be a model with a result of ones and zeros, so to speak, meaning either you're connected or you're not. What this doesn't take into consideration, I believe, is the effect of congestion from increased news traffic and the rerouting that takes place whenever a site is lost.
I would imagine this is one of the tasks CAIDA.org is probably working on, as it seems to fall within their mission.
So even if we all agreed upon a common disaster to hypothesize on, there would be little common ground to be had, as our interpretations could only be political arguments over what is most important, because there is no technically objective view of the network to forge agreement on.
I totally agree. I think what I envision as not a huge impact would be devastating to others. That's mostly because I'm looking at it globally: if you take all routes as the denominator and the lost routes as the numerator, four colo sites, even the big ones, wouldn't have *that* much effect. Proportionally. At first. Of course, if you're a smallish ISP operator and your one peering site happens to be at one of the four sites, you're done. Jane
Cheers,
-- batz
"Jane" == Pawlukiewicz Jane <pawlukiewicz_jane@bah.com> writes:
>> Even if we were to model it, the best data we could get for
>> the "Internet" would be BGP routing tables. These are also
>> subjective views of the rest of the net. We could take a full
>> table, map all the ASN adjacencies, and then pick arbitrary
>> ASNs to "fail", then see who is still connected, but we are
>> still dealing with connectivity relative to us and our peers,
>> even 5+ AS-hops away.

Jane> I want to make sure I understand this. As I understand it,
Jane> this would work regarding routing only. It would be a model
Jane> that would have a result of ones and zeros, so to speak,
Jane> meaning either you're connected or you're not. What this
Jane> doesn't take into consideration, I believe, is the effects
Jane> of congestion regarding increased traffic due to news
Jane> traffic and rerouting that takes place whenever there is a
Jane> loss of a site.

I believe you are correct. Modelling the connectivity matrix [1] is good as a first approximation. The next thing to do would be to estimate the transition probabilities between ASi and ASj (you could do this by looking at the adjacencies one step out [2], for example. There are other methods of estimating the transition probabilities, but most are foiled by a lack of available data. You can get a pretty good adjacency map by doing table dumps from all of the route servers, looking glasses, etc.) Once you have the TP matrix, construct a vector of initial conditions to represent likely traffic sources -- i.e. the ASs containing CNN and the BBC, for example -- and look at how the traffic dissipates through an n-step Markov process [3]. This will tell you something about how heavily loaded with traffic certain ASs (the accumulation points) become, at least as a ratio to "normal", but since we have no information about the channel capacity available within each AS, it doesn't say much about actual congestion.
It will, however, suggest where congestion is likely to occur if links have not been overprovisioned by some ratio. I think ;)

The trick is in estimating the transition probabilities. I'm not sure this is a good method. Using adjacencies from one hop out assumes transit to two hops out. Using n hops out implies transit to n+1 hops, and the bigger n gets, the less accurately it will represent the real mesh, since it starts implicitly assuming symmetric transit everywhere once n is greater than, say, 4 or 5 or whatever the average AS path length is these days.

-w

[1] C_ij = 1 if i and j are connected or i == j, 0 otherwise
[2] If A_i is the number of adjacencies for ASi, then set the transition probabilities P_ij proportional to (C_ij * A_j) / Sum_k (C_ik * A_k), normalized so that Sum_j P_ij = 1.
[3] The n-step transition probabilities are (P_ij)^n

--
William Waites <ww@styx.org> +1 416-717-2661
finger ww@styx.org for PGP keys
Idiosyntactix Research Laboratories http://www.irl.styx.org
-- NetBSD: ça marche!
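As a sketch of the recipe above (toy numbers throughout; a real run would build the connectivity matrix from route server table dumps), the [1]-[3] machinery fits in a dozen lines of numpy:

```python
import numpy as np

# [1] Connectivity matrix: C[i][j] = 1 if ASi and ASj are connected
# or i == j (toy 4-AS topology, invented for illustration).
C = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)

# [2] A_i = number of adjacencies for ASi; P_ij proportional to
# C_ij * A_j, with each row normalized so Sum_j P_ij = 1.
A = C.sum(axis=1) - 1          # subtract the i == j self-entry
P = C * A                      # broadcasts A_j across each row i
P /= P.sum(axis=1, keepdims=True)

# Initial conditions: all traffic originates at AS0 (the AS hosting
# CNN, say), then dissipates through an n-step Markov process [3].
v = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(50):
    v = v @ P

print(np.round(v, 3))  # ≈ [0.385 0.385 0.115 0.115]
```

The two better-connected ASs accumulate roughly three times the mass of the stubs -- the "accumulation points" effect -- though, as noted, without per-AS capacity figures this is a load ratio relative to normal, not a congestion prediction.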
On Fri, Sep 06, 2002 at 01:55:40PM -0400, batz wrote:
On Fri, 6 Sep 2002, Pawlukiewicz Jane wrote:
:would be difficult to reach. I'd have to run a model to be sure, but
:every one of the major seven have rerouting methodologies that would
:recover from the loss. And I don't think they exclusively peer at

ASNs to "fail", then see who is still connected, but we are still dealing with connectivity relative to us and our peers, even 5+ AS-hops away.
I would imagine this is one of the tasks CAIDA.org is probably working on, as it seems to fall within their mission.
Coming up with the AS interconnection data is actually fairly easy if you parse route-views data. This obviously doesn't cover every possible interconnection that exists, but it does provide a large swath of data to review for the interconnection postulation. Looking at that data (this is an old snapshot), the top ten networks are (in #10->#1 order):

conn  ASN
----+-----
161   3356
229   1
242   2914
248   209
274   6461
277   3561
295   3549
328   7018
484   701
493   1239

- Jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
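The parse described above reduces to counting distinct neighbors per ASN in the AS_PATH attributes. A minimal sketch (the paths below are invented, not an actual route-views snapshot):

```python
from collections import defaultdict

def adjacency_counts(as_paths):
    """Count distinct neighbor ASNs seen next to each ASN in a set
    of AS_PATH strings."""
    neighbors = defaultdict(set)
    for path in as_paths:
        asns = [int(a) for a in path.split()]
        for left, right in zip(asns, asns[1:]):
            if left != right:          # ignore AS-path prepending
                neighbors[left].add(right)
                neighbors[right].add(left)
    return {asn: len(peers) for asn, peers in neighbors.items()}

# Invented table-dump excerpts; a real run would walk a full RIB.
paths = [
    "64512 701 1239",
    "64512 701 7018",
    "64513 1239 701 701 3356",   # note the prepend on 701
]

counts = adjacency_counts(paths)
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # the best-connected ASN sorts to the front
```

The same idea, run over a full table, produces a ranking like the conn/ASN snapshot quoted above.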
I'm guessing increased packet loss and latency :) Oh yeah, horrible loss of life and another blow to the economy. - Daniel Golding
alex@yuriev.com reportedly said...
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
Alex
alex@yuriev.com wrote:
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
OK, what if 60 Hudson, 25 Broadway, LINX and AMS-IX were all put out of commission? What about the major sites terminating undersea cables, in an effort to isolate the US? Or major satellite uplink points? Or all of them? -- Tim
Okay,

If we're going to go off the deep end here, how about the effect of a small-yield air burst over $importantplace? Not designed to maximize casualties/damage but rather EMP? A large number of senior military officials got that 'deer-in-the-headlights' look a few decades back when a deserter-supplied "Soviet state of the art" fighter turned out to have tube-based electronics. :)

It's not much of a stretch from crashing civilian airliners into high rises to "firing for effect" with nuclear weapons. Look at what's going on with Iraq right now.

I know, you're saying that's why the Internet was invented, to provide diverse communications even in a nuclear war. The Internet and its electronics and equipment was a much different animal when that flag was first run up the pole. I wonder if anyone has checked to see if anyone would salute today. Oh, wait, that's what this whole discussion is about, isn't it. ;)

Best regards,
_________________________
Alan Rowland

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Tim Thorne
Sent: Friday, September 06, 2002 12:58 PM
To: nanog@merit.edu
Subject: Re: Vulnerabilities of Interconnection

alex@yuriev.com wrote:
Let's bring this discussion to some common ground -

What kind of impact on the global Internet would we see should we observe the nearly simultaneous detonation of 500 kilograms of high explosives at N of the major known interconnect facilities?
OK, what if 60 Hudson, 25 Broadway, LINX and AMS-IX were all put out of commission? What about the major sites terminating undersea cables, in an effort to isolate the US? Or major satellite uplink points? Or all of them? -- Tim
*********** REPLY SEPARATOR *********** On 9/6/2002 at 1:42 PM Al Rowland wrote:
Okay,
If we're going to go off the deep end here, how about the effect of a small-yield air burst over $importantplace? Not designed to maximize casualties/damage but rather EMP? A large number of senior military officials got that 'deer-in-the-headlights' look a few decades back when a deserter-supplied "Soviet state of the art" fighter turned out to have tube-based electronics. :)
Said tube electronics were apparently more survivable against EMP effects. Or was that the point you were making? I think the real surprise was a toggle switch that Belenko said was supposed to be flipped only when told over the radio by higher headquarters. It changed the characteristics of the radar... sort of a "go to war" mode vs. the standard training mode.

An interesting, if not totally professional, evaluation of something like this is in Stephen Coonts' book "America", where terrorists take over an American nuclear submarine armed with a new type of Tomahawk warhead - an EMP warhead. One of the early targets is AOL HQ in Reston, VA (I almost cheered). Coonts has an inflated idea of what an outage there would do to the Internet... but there is a lot of other stuff fairly nearby, isn't there?

--
Jeff Shultz
Network Support Technician
Willamette Valley Internet
503-769-3331 (Stayton) 503-390-7000 (Salem)
tech@wvi.com

...most of us have as our claim to fame the ability to talk to inanimate objects and convince them they want to listen to us. -- Valdis Kletnieks in a.s.r
At 2:01 PM -0700 2002/09/06, Jeff Shultz wrote:
Said tube electronics were apparently more survivable against EMP effects. Or was that the point you were making? I think the real surprise was a toggle switch that Belenko said was supposed to be flipped only when told over the radio by higher headquarters. It changed the characteristics of the radar.... sort of a "go to war" mode vs. the standard training mode.
I wouldn't be too surprised. The Patriot has a clock problem, and can't be left turned on for an extended period of time. There are plenty of military systems everywhere in the world that have various operational issues that may not materially reduce their effectiveness in their official role, but which may make them less suitable for other roles.
An interesting, if not totally professional, evaluation of something like this is in Stephen Coonts' book "America", where terrorists take over an American nuclear submarine armed with a new type of Tomahawk warhead - an EMP warhead. One of the early targets is AOL HQ in Reston, VA (I almost cheered).
These things exist. I would be more concerned about drive-by attacks with HERF (High Energy Radio Frequency) guns, capable of generating an EMP field that can wipe out RAM on any computer device that is not suitably protected (Tempest shielding or being in a SCIF?). These things can be made relatively portable and undetectable until such time as they are turned on -- unlike nuclear devices that can be detected by Geiger counters, etc.... A drive-by with a van would be a lot easier to organize than hijacking a nuclear-equipped submarine. BTW, AOL headquarters is in Sterling, not Reston. It's not that far away, so I can understand why people not from that area would not be aware of the difference.
Coonts has an inflated idea of what an outage there would do to the internet... but there is a lot of other stuff fairly nearby, isn't there?
What do you mean by "nearby"? Do you count the "TerraPOP"? Do you count Langley? -- Brad Knowles, <brad.knowles@skynet.be> "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, Historical Review of Pennsylvania. GCS/IT d+(-) s:+(++)>: a C++(+++)$ UMBSHI++++$ P+>++ L+ !E W+++(--) N+ !w--- O- M++ V PS++(+++) PE- Y+(++) PGP>+++ t+(+++) 5++(+++) X++(+++) R+(+++) tv+(+++) b+(++++) DI+(++++) D+(++) G+(++++) e++>++++ h--- r---(+++)* z(+++)
*********** REPLY SEPARATOR *********** On 9/6/2002 at 11:26 PM Brad Knowles wrote:
At 2:01 PM -0700 2002/09/06, Jeff Shultz wrote:
Said tube electronics were apparently more survivable against EMP effects. Or was that the point you were making? I think the real surprise was a toggle switch that Belenko said was supposed to be flipped only when told over the radio by higher headquarters. It changed the characteristics of the radar.... sort of a "go to war" mode vs. the standard training mode.
I wouldn't be too surprised. The Patriot has a clock problem, and can't be left turned on for an extended period of time. There are plenty of military systems everywhere in the world that have various operational issues that may not materially reduce their effectiveness in their official role, but which may make them less suitable for other roles.
Actually I suspect it was an anti-jamming feature. Think about it.... the jammers would all be programmed based on the training mode, which presumably we would have heard before. All of a sudden this thing is broadcasting an entirely new signal... <snip>
Coonts has an inflated idea of what an outage there would do to the internet... but there is a lot of other stuff fairly nearby, isn't there?
What do you mean by "nearby"? Do you count the "TerraPOP"? Do you count Langley?
I thought that MAE-East was somewhere around there? I know that there is a fair amount of high-tech in that particular area. I don't know how far away Langley itself is.... another target was basically "The Mall" where it took out a couple of fly-by-wire Airbuses. Interesting book from a techno-thriller standpoint. Just don't confuse it with reality.<G> -- Jeff Shultz Network Support Technician Willamette Valley Internet 503-769-3331 (Stayton) 503-390-7000 (Salem) tech@wvi.com ...most of us have as our claim to fame the ability to talk to inanimate objects and convince them they want to listen to us. -- Valdis Kletnieks in a.s.r
On Fri, 06 Sep 2002 14:01:24 PDT, Jeff Shultz <jeffshul@wvi.com> said:
Coonts has an inflated idea of what an outage there would do to the internet... but there is a lot of other stuff fairly nearby, isn't there?
*You* know that a hit on 60 Hudson would probably be worse (especially considering all the OTHER stuff that would be in blast range). *I* know that. The rest of the NANOG readership knows that. However, the organization based in Reston probably has on the order of 1,500 times more subscribers than the NANOG list does... ;)
...most of us have as our claim to fame the ability to talk to inanimate objects and convince them they want to listen to us. -- Valdis Kletnieks in a.s.r
Wow - I'm famous. ;) -- Valdis Kletnieks Computer Systems Senior Engineer Virginia Tech
On Fri, 6 Sep 2002 Valdis.Kletnieks@vt.edu wrote:
On Fri, 06 Sep 2002 14:01:24 PDT, Jeff Shultz <jeffshul@wvi.com> said:
Coonts has an inflated idea of what an outage there would do to the internet... but there is a lot of other stuff fairly nearby, isn't there?
*You* know that a hit on 60 Hudson would probably be worse (especially considering all the OTHER stuff that would be in blast range). *I* know that. The rest of the NANOG readership knows that.
We had examples of that on Sep 11th and it wasn't -that- bad...
However, the organization based in Reston probably has on the order of 1,500 times more subscribers than the NANOG list does... ;)
...most of us have as our claim to fame the ability to talk to inanimate objects and convince them they want to listen to us. -- Valdis Kletnieks in a.s.r
Wow - I'm famous. ;)
famous or infamous? :) Steve
Actually, damage to the "net" could be done with relative ease. If you wanted to do some planning and a little staging work you could affect large amounts of traffic. Given recent press about large carriers moving their interconnects to a well known IX type company, all you would have to do is place some 7206VXRs (VXR == Very eXplosive Routers) in these colos. Or servers - a 5U server.... nice and big. I wonder how much damage a couple of slots in a router full of Semtex would do. Then do that in multiple colos and use an IP packet as a trigger to pop them all..... Don't forget the spare parts depots as well. PS: All the money people spend on physical stuff to keep those on the outside out would only help against overpressure and other things on the inside. The issue is that free-market companies are going to do only as much as is needed to turn a profit, and not a penny more. This isn't the 60's or the 70's when AT&T ran the infrastructure and had bunkers around the nation.... On Fri, Sep 06, 2002 at 05:41:29PM -0400, Valdis.Kletnieks@vt.edu wrote:
On Fri, 06 Sep 2002 14:01:24 PDT, Jeff Shultz <jeffshul@wvi.com> said:
Coonts has an inflated idea of what an outage there would do to the internet... but there is a lot of other stuff fairly nearby, isn't there?
*You* know that a hit on 60 Hudson would probably be worse (especially considering all the OTHER stuff that would be in blast range). *I* know that. The rest of the NANOG readership knows that.
However, the organization based in Reston probably has on the order of 1,500 times more subscribers than the NANOG list does... ;)
...most of us have as our claim to fame the ability to talk to inanimate objects and convince them they want to listen to us. -- Valdis Kletnieks in a.s.r
Wow - I'm famous. ;)
-- Valdis Kletnieks Computer Systems Senior Engineer Virginia Tech
On Friday, Sep 6, 2002, at 21:57 Europe/Stockholm, Tim Thorne wrote:
OK, what if 60 Hudson, 25 Broadway, LinX and AmsIX were all put out of commission?
To some extent - nothing for the above... if designed right. The major networks should have designed their networks to route around this. If not - they have done a poor job. For others, the exchange points should merely be a way to off-load their transit connections. However - there is a point in what you are saying: from a national point of view, the exchange points should independently take care of traffic in the case a nation is isolated. But I don't think any of the above are designed for that in the first place... - kurtis -
Yet, it is reasonable that people expect x% of their traffic to use IXs. If those IXs are gone then they will need to find another path, and may need to upgrade alternate paths. I guess the question is: at what point does one build redundancy into the network? I suspect it's a balancing act between redundancy, survival (of the network) and costs vs. revenues. Not sure I'd call it a "poor job" for not planning all possible failure modes, or for not having links in place for them. On Wed, Sep 11, 2002 at 06:00:40PM +0200, Kurt Erik Lindqvist wrote:
On fredag, sep 6, 2002, at 21:57 Europe/Stockholm, Tim Thorne wrote:
OK, what if 60 Hudson, 25 Broadway, LinX and AmsIX were all put out of commission?
To some extent - nothing for the above... if designed right. The major networks should have designed their networks to route around this. If not - they have done a poor job. For others, the exchange points should merely be a way to off-load their transit connections.
However - there is a point in what you are saying, from a national point of view - the exchange points should independently take care of traffic in the case a nation is isolated. But I don't think any of the above are designed for that in the first place...
- kurtis -
On Thu, 12 Sep 2002, John M. Brown wrote:
I guess the question is:
At what point does one build redundancy into the network?
I suspect it's a balancing act between redundancy, survival (network) and costs vs revenues.
In 1982 AT&T was still a monopoly, could spend whatever it took, and the primary threat was missiles from the Soviet Union. AT&T had ten Class 1 Regional Centers in the country. Regional Centers were the "top" of the telephone network routing hierarchy, fully connected to the other regional centers. http://www.rand.org/publications/RM/RM3097/RM3097.appb.html I don't know how AT&T came to the conclusion that 10 was the perfect compromise between cost, reliability and survivability. They had lots of smart people who knew networks working on the problem, so I'm assuming they had a decent justification to back up the choice.
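[Editor's aside] A quick back-of-the-envelope check of what "fully connected" meant for those ten centers. This is just the standard full-mesh pair count, not anything taken from the RAND memo:

```python
# Number of point-to-point trunk groups needed to fully mesh n centers:
# each unordered pair of centers gets one link, i.e. n*(n-1)/2.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

# AT&T's ten Class 1 Regional Centers, fully interconnected:
print(full_mesh_links(10))  # -> 45 trunk groups
```

Each additional center adds n-1 new links, which is one reason fully meshed cores stop scaling quickly and hierarchies get used instead.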
On Thu, 12 Sep 2002, John M. Brown wrote:
Yet, it is reasonable that people expect x% of their traffic to use IXs. If those IXs are gone then they will need to find another path, and may need to upgrade alternate paths.
I guess the question is:
At what point does one build redundancy into the network?
No, it doesn't necessarily use IXs. In the event of there being no peered path across an IX, traffic will flow from the originator to their upstream "tier 1" over a private transit link; that "tier 1" will peer with the destination's upstream "tier 1" over a private fat pipe; then that will go to the destination via their private transit link. I'm only aware of a few providers who transit across IXs, and I think the consensus is that it's a bad thing, so it tends to be just small people for whom the cost of the private link is relatively high. I suspect the catch would be that in the event of major switching nodes being taken out there would be considerable congestion on the transit links, and most likely on the private peering of the tier 1s also.
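[Editor's aside] The fallback described above is ordinary BGP route preference at work: peered routes are conventionally given a higher local preference than transit routes, so the transit path carries the traffic only once the peered path disappears. A toy sketch; the route names and local-pref values are made up for illustration, not any real provider's policy:

```python
# Toy model of peering-vs-transit fallback: routes are preferred by a
# local-pref-style value, peering higher than transit by convention.

def best_route(routes):
    """Return the available route with the highest local_pref, or None."""
    live = [r for r in routes if r["up"]]
    return max(live, key=lambda r: r["local_pref"]) if live else None

routes = [
    {"via": "IX peering",     "local_pref": 200, "up": True},
    {"via": "tier-1 transit", "local_pref": 100, "up": True},
]

print(best_route(routes)["via"])  # "IX peering" while the IX is up

routes[0]["up"] = False           # IX switch lost / BGP session down
print(best_route(routes)["via"])  # falls back to "tier-1 transit"
```

The congestion worry in the paragraph above is exactly what this model hides: the transit path is always *reachable*, but nothing guarantees it has the capacity for the traffic that shifts onto it.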
I suspect it's a balancing act between redundancy, survival (network) and costs vs revenues.
I imagine in today's capitalist world it's not so much balanced as weighted heavily towards economics and how best to not spend the cash!
not sure I'd call it a "poor job" for not planning all possible failure modes, or for not having links in place for them.
Well, the trouble is in the real world we can't have the budgets we'd like to implement our plans and end up compromising.. there's the catch. Note however that the email below is a mix of IXs and data centres, and the two are not the same. Here we are discussing IXs. I think it's a different matter if we lose a data centre, as you then risk losing the aforementioned private transit/peer links which will probably go through that location. Then you'd see more disruption. With that in mind consider last year's outage at 60 Hudson.. the main areas it affected were switching IP/calls in New York (but that was hosed anyway) and probably the next area was Europe, with lots of the East Coast cable landings going through there; I know most people I spoke to were seeing congestion and outages going to US locations. But hey, things still worked! Steve
On Wed, Sep 11, 2002 at 06:00:40PM +0200, Kurt Erik Lindqvist wrote:
On Friday, Sep 6, 2002, at 21:57 Europe/Stockholm, Tim Thorne wrote:
OK, what if 60 Hudson, 25 Broadway, LinX and AmsIX were all put out of commission?
To some extent - nothing for the above... if designed right. The major networks should have designed their networks to route around this. If not - they have done a poor job. For others, the exchange points should merely be a way to off-load their transit connections.
However - there is a point in what you are saying, from a national point of view - the exchange points should independently take care of traffic in the case a nation is isolated. But I don't think any of the above are designed for that in the first place...
- kurtis -
On Fri, 13 Sep 2002, Stephen J. Wilcox wrote:
At what point does one build redundancy into the network.
No, it doesn't necessarily use IXs. In the event of there being no peered path across an IX, traffic will flow from the originator to their upstream "tier 1" over a private transit link; that "tier 1" will peer with the destination's upstream "tier 1" over a private fat pipe; then that will go to the destination via their private transit link.
But will these links have enough spare capacity so congestion doesn't happen?
I'm only aware of a few providers who transit across IXs, and I think the consensus is that it's a bad thing, so it tends to be just small people for whom the cost of the private link is relatively high.
I apologize in advance for naming names here, but I think it is important for making my point. A while back (I think last year, but I'm not sure) the AMS-IX had a huge outage because the power failed in two of the main locations. One of the locations didn't at that time have battery or generator backed-up power (although they used three diversely routed feeds from the power company) and the other location only had batteries, which didn't last long. Nearly everything was still reachable over transit rather than peering, with only minor congestion. However, some networks got their transit in the same buildings where they connect to the AMS-IX, so both their peering and transit were gone and they were unreachable. If you think this was only true for small networks: think again. Surfnet suffered the same problem. Surfnet is one of the largest (if not _the_ largest) Dutch networks, connecting all the universities in the country at multi-gigabit speeds. However, they only connected to other networks in a single building at that time. I don't know if this is still the case. Now this is only one big network and a few small ones that suffered. However, things could have been much worse for people in the rest of the Netherlands, because even with all the rerouting going on almost all traffic still flowed through Amsterdam. So any outage in Amsterdam that takes down more than a single building would cripple the majority of Dutch networks. Obviously, something like this doesn't happen all the time, but luck has a tendency to run out from time to time. A plane crash (a 747 went down in an Amsterdam suburb 10 years ago) or a good-sized flood (lots of stuff is below sea level in NL) will do it.
I suspect the catch would be that in the event of major switching nodes being taken out there would be considerable congestion on the transit links and most likely on the private peering of the tier1's also.
I'm more worried about long distance fiber running through rural areas. Much more bang for your backhoe renting buck.
not sure I'd call it a "poor job" for not planning all possible failure modes, or for not having links in place for them.
Well the trouble is in the real world we cant have the budgets we'd like to implement our plans and end up compromising.. theres the catch.
I don't think it's just a matter of money. In 1999, I helped roll out a completely new network. EVERYTHING in it, except the ports customers connect to, had a backup. Management originally wanted to connect every location to at least three others. (We got this requirement dropped because it essentially means you're buying a third circuit that doesn't do anything useful until the two others are down; traffic engineering for both regular operation and the different failure modes is too complex.) Still, I couldn't convince them to move the second transit connection to another city where both our network and the transit network were also present in the same building. A year or so after I left I was in the building where that entire network connects to its transit network over two independent routers at both ends and the power went down and they couldn't get the generators online... Eventually the utility power came back online before the batteries were empty. All of this is on the ground floor in a place that's below sea level only a block or so from a river.
On Fri, 13 Sep 2002, Iljitsch van Beijnum wrote:
On Fri, 13 Sep 2002, Stephen J. Wilcox wrote:
At what point does one build redundancy into the network.
No, it doesn't necessarily use IXs. In the event of there being no peered path across an IX, traffic will flow from the originator to their upstream "tier 1" over a private transit link; that "tier 1" will peer with the destination's upstream "tier 1" over a private fat pipe; then that will go to the destination via their private transit link.
But will these links have enough spare capacity so congestion doesn't happen?
Well, the policy among major isps tends to be around 50% max utilisation per circuit, so they should have capacity to reroute. You're most likely to hit issues on the local isp's transit connection, which is unlikely to have the capacity to take a large amount of their peered traffic, although medium isps can probably reroute a large amount to another IXP anyway..
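[Editor's aside] The 50% rule of thumb is easy to sanity-check: if each of two circuits is held at or below half capacity, either one can absorb the other's traffic after a failure. A sketch with made-up numbers:

```python
def post_failure_utilisation(loads_mbps, capacity_mbps, failed):
    """Utilisation of each surviving circuit after one fails, assuming the
    failed circuit's traffic is spread evenly over the survivors."""
    survivors = [l for i, l in enumerate(loads_mbps) if i != failed]
    shifted = loads_mbps[failed] / len(survivors)
    return [(l + shifted) / capacity_mbps for l in survivors]

# Two 1000 Mbps circuits, each held at the 50% ceiling: after a failure
# the survivor ends up exactly full, but nothing is dropped.
print(post_failure_utilisation([500, 500], 1000, 0))  # [1.0]

# Run them at 70% instead and a single failure means congestion:
print(post_failure_utilisation([700, 700], 1000, 0))  # [1.4]
```

Which is the point made above: the headroom policy, not the existence of an alternate path, is what decides whether a reroute works.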
I'm only aware of a few providers who transit across IXs, and I think the consensus is that it's a bad thing, so it tends to be just small people for whom the cost of the private link is relatively high.
I apologize in advance for naming names here, but I think it is important for making my point.
A while back (I think last year, but I'm not sure) the AMS-IX had a huge outage because the power failed in two of the main locations. One of the locations didn't at that time have battery or generator backed up power (although they used three diversely routed inputs from the power company) and the other location only had batteries, which didn't last long.
Nearly everything was still reachable over transit rather than peering, with only minor congestion. However, some networks got their transit in the same buildings where they connect to the AMS-IX, so both their peering and transit were gone and they were unreachable. If you think this was only true for small networks: think again. Surfnet suffered the same problem. Surfnet is one of the largest (if not _the_ largest) Dutch networks, connecting all the universities in the country at multi-gigabit speeds. However, they only connected to other networks in a single building at that time. I don't know if this is still the case.
Yes, there is a large amount of that happening in London, where I'm more familiar with individual ISPs' networks.. they tend to exist in one or two locations and pass traffic through a single location because of economies on bandwidth scaling. Although I don't know of any medium/large ones like that.. I personally have always maintained multiple sites with sufficient capacity to handle the failure of another site since day one; however, perhaps I was lucky enough to be able to draw on a company with enough cash to be willing to do that. I regularly (every month or two) see something major happen at a site and on the whole things continue working just fine around it! Steve
Now this is only one big network and a few small ones that suffered. However, things could have been much worse for people in the rest of the Netherlands, because even with all the rerouting going on almost all traffic still flowed through Amsterdam. So any outage in Amsterdam that takes down more than a single building would cripple the majority of Dutch networks. Obviously, something like this doesn't happen all the time, but luck has a tendency to run out from time to time. A plane crash (a 747 went down in an Amsterdam suburb 10 years ago) or a good sized flood (lots of stuff is below sea level in NL) will do it.
I suspect the catch would be that in the event of major switching nodes being taken out there would be considerable congestion on the transit links and most likely on the private peering of the tier1's also.
I'm more worried about long distance fiber running through rural areas. Much more bang for your backhoe renting buck.
not sure I'd call it a "poor job" for not planning all possible failure modes, or for not having links in place for them.
Well the trouble is in the real world we cant have the budgets we'd like to implement our plans and end up compromising.. theres the catch.
I don't think it's just a matter of money. In 1999, I helped roll out a completely new network. EVERYTHING in it, except the ports customers connect to, had a backup. Management originally wanted to connect every location to at least three others. (We got this requirement dropped because it essentially means you're buying a third circuit that doesn't do anything useful until the two others are down; traffic engineering for both regular operation and the different failure modes is too complex.) Still, I couldn't convince them to move the second transit connection to another city where both our network and the transit network were also present in the same building.
A year or so after I left I was in the building where that entire network connects to its transit network over two independent routers at both ends and the power went down and they couldn't get the generators online... Eventually the utility power came back online before the batteries were empty. All of this is on the ground floor in a place that's below sea level only a block or so from a river.
Yet, it is reasonable that people expect x% of their traffic to use IXs. If those IXs are gone then they will need to find another path, and may need to upgrade alternate paths.
I guess the question is:
At what point does one build redundancy into the network?
No, it doesn't necessarily use IXs. In the event of there being no peered path across an IX, traffic will flow from the originator to their upstream "tier 1" over a private transit link; that "tier 1" will peer with the destination's upstream "tier 1" over a private fat pipe; then that will go to the destination via their private transit link.
I'm only aware of a few providers who transit across IXs, and I think the consensus is that it's a bad thing, so it tends to be just small people for whom the cost of the private link is relatively high.
I think you are missing one critical point - the IX in this case is not an exchange. It is a point where lots of providers have lots of gear in a highly congested area. How they connect to each other in that area does not matter. Now presume those areas are gone (as in completely gone). What is the possible impact? Alex
I suspect it's a balancing act between redundancy, survival (network) and costs vs revenues.
not sure I'd call it a "poor job" for not planning all possible failure modes, or for not having links in place for them.
It depends on your perspective, what you expect from the net, and what you see it doing for you in the future. As we move more advanced services to the net, we will also expect much more from it - also in terms of crisis. Just like the net was one of the prime sources of information during 9-11. In the event of an emergency, I would very much like to be as able to reach my bank via the net as by walking into their offices. I do agree that it is a balance, but I am not so sure that everyone has realised this. I am not even sure that all the carriers that you would expect to have this planning have it... - kurtis -
On Mon, 16 Sep 2002, Kurtis Lindqvist wrote:
I suspect it's a balancing act between redundancy, survival (network) and costs vs revenues.
not sure I'd call it a "poor job" for not planning all possible failure modes, or for not having links in place for them.
It depends on your perspective and what you expect from the net and what you see it doing for you in the future.
That's a good question. Is the net really a critical resource? If a life gets saved through involvement of the internet, it is news. Lives are saved by calling for assistance through the telephone network every day all over the world as a matter of routine. Try ranking how bad the following outages would be: power, gas (for heating and cooking), water, the phone network, radio/tv, the net. I think radio/tv and the net would have to share last place.
As we move more advanced services to the net, we will also expect much more from it
Sure, if you run telephony over IP, you'll want your IP to be as good as you need your voice service to be.
also in terms of crisis. Just like the net was one of the prime sources of information during 9-11.
It's not how much something is used, but how bad it would be to go without it. During September 11th, the phone service didn't work very well, and the internet did a lot better. I think just about anyone would have traded the latter for the former in a second.
In the event of an emergency, I would very much like to be as able to reach my bank via the net as by walking into their offices.
Yes, because banks are such a critical resource when there is an emergency...
I do agree that it is a balance, but I am not so sure that everyone has realised this. I am not even sure that all the carriers that you would expect to have this planning have it...
When your OC3 goes down because nettles have grown into the telco's A/C exhaust, you start getting cynical... And that was in the good old days when business was booming. I'm sure they're cutting corners left right and center at the moment.
That's a good question. Is the net really a critical resource? If a life gets saved through involvement of the internet, it is news. Lives are saved by calling for assistance through the telephone network every day all over the world as a matter of routine.
Well, it's more a question of what you want the Internet to become. We can't expect to have better quality than the most critical application that we run over it....
It's not how much something is used, but how bad it would be to go without it. During september 11th, the phone service didn't work very well, and the internet did a lot better. I think just about anyone would have traded the latter for the former in a second.
I think we are using the word "emergency" with different meanings here. Assume that a nation is cut off from its Internet access in one way or another due to war, natural disasters, etc. In these scenarios a non-functioning Internet might actually be a problem due to the duration of the outage.
In the event of an emergency, I would very much like to be as able to reach my bank via the net as by walking into their offices.
Yes, because banks are such a critical resource when there is an emergency...
If there was a natural disaster and I could not reach my banks office I would very much like to be able to use the on-line bank instead...
that was in the good old days when business was booming. I'm sure they're cutting corners left right and center at the moment.
Agreed - and this is where I think we have real problem.... - kurtis -
On 12:34 AM 9/16/02, Kurtis Lindqvist wrote:
Just like the net was one of the prime sources of information during 9-11.
On the morning of 9/11 I was alone in a colo in Reston VA (3000 miles from home), and I found it very difficult to get information (other than the most basic facts, planes hitting WTC and Pentagon, crashing in PA, airplanes grounded, WTC towers collapsing) on that day. Web servers were overloaded, downloads repeatedly stalled. The big thing the 'net helped with most was sending information (via email and IM) to my friends and family back home in CA, letting them know that I was OK, and to receive information (via email and IM) from those watching TV and learn second hand what they had learned from TV. My cell phone only worked intermittently, due to heavy network congestion on the cell network surrounding DC. When I finally got done at the colo and went back to my friend's house in Vienna later that afternoon, *that* is when I was finally able to learn details about what had happened during the day and see video of the WTC etc. - via TV footage. When I got back to the office, I learned that the big screen TV that had previously been located in the exercise room had been moved to the center of the office so that everyone could more easily see it, and everyone could hear it. Meanwhile, they all had high speed Internet connections to the computers sitting on their desks. Why bring in the TV if the 'net was "one of the prime sources of information"? jc
Well, they probably didn't have multicast enabled, and so couldn't get one of the many news feeds set up that day :) Regards Marshall Eubanks JC Dill wrote:
On 12:34 AM 9/16/02, Kurtis Lindqvist wrote:
Just like the net was one of the prime sources of information during 9-11.
On the morning of 9/11 I was alone in a colo in Reston VA (3000 miles from home), and I found it very difficult to get information (other than the most basic facts, planes hitting WTC and Pentagon, crashing in PA, airplanes grounded, WTC towers collapsing) on that day. Web servers were overloaded, downloads repeatedly stalled. The big thing the 'net helped with most was sending information (via email and IM) to my friends and family back home in CA, letting them know that I was OK, and to receive information (via email and IM) from those watching TV and learn second hand what they had learned from TV. My cell phone only worked intermittently, due to heavy network congestion on the cell network surrounding DC. When I finally got done at the colo and went back to my friend's house in Vienna later that afternoon, *that* is when I was finally able to learn details about what had happened during the day and see video of the WTC etc. - via TV footage.
When I got back to the office, I learned that the big screen TV that had previously been located in the exercise room had been moved to the center of the office so that everyone could more easily see it, and everyone could hear it. Meanwhile, they all had high speed Internet connections to the computers sitting on their desks. Why bring in the TV if the 'net was "one of the prime sources of information"?
jc
-- Regards Marshall Eubanks T.M. Eubanks Multicast Technologies, Inc 10301 Democracy Lane, Suite 410 Fairfax, Virginia 22030 Phone : 703-293-9624 Fax : 703-293-9609 e-mail : tme@multicasttech.com http://www.multicasttech.com Test your network for multicast : http://www.multicasttech.com/mt/ Status of Multicast on the Web : http://www.multicasttech.com/status/index.html
On 12:34 AM 9/16/02, Kurtis Lindqvist wrote:
Just like the net was one of the prime sources of information during 9-11.
The internet sucked as a means of getting information on 9/11. I spent about 20 minutes hitting every news site I could think of, and they had all tanked. I set an away msg on IM: "Internet news sucks, I'm going to watch CNN." I made it to the support center when the first tower fell. It took that long for someone to come tell me that all this stuff was going on, for me to give up on the internet and actually make it to a useful news source. When I finally did go back to my desk to work we turned on a radio that all of us could hear from our cube farm and tried to resume normal operations while keeping up to date.
From a network operations perspective, anyone who has not heard William LeFebvre's "CNN.com: Facing a World Crisis" has missed out. It talks about how the company that hosts cnn.com handled the crises and how it affected them from a network perspective. I've been unable to locate any decent transcripts/recordings of this talk, but I heard it at LISA 2001. Absolutely amazing presentation if you haven't seen it or heard it. William said they changed a lot of the way they do things at the company that hosts CNN.com since 9/11. I don't believe they were the only ones. Gerald
When I finally did go back to my desk to work we turned on a radio that all of us could hear from our cube farm and tried to resume normal operations while keeping up to date.
From a network operations perspective, anyone who has not heard William LeFebvre's "CNN.com: Facing a World Crisis" has missed out. It talks about how the company that hosts cnn.com handled the crisis and how it affected them from a network perspective. I've been unable to locate any decent transcripts/recordings of this talk, but I heard it at LISA 2001. Absolutely amazing presentation if you haven't seen it or heard it.
The company that "hosted" CNN demonstrated that for all the claims of their connectivity, it was not really there. If I recall correctly, CNN came up when a certain company from MA company-ized CNN. Alex
On Mon, 16 Sep 2002 alex@yuriev.com wrote:
When I finally did go back to my desk to work we turned on a radio that all of us could hear from our cube farm and tried to resume normal operations while keeping up to date.
From a network operations perspective, anyone who has not heard William LeFebvre's "CNN.com: Facing a World Crisis" has missed out. It talks about how the company that hosts cnn.com handled the crisis and how it affected them from a network perspective. I've been unable to locate any decent transcripts/recordings of this talk, but I heard it at LISA 2001. Absolutely amazing presentation if you haven't seen it or heard it.
The company that "hosted" CNN demonstrated that for all the claims of their connectivity, it was not really there. If I recall correctly, CNN came up when a certain company from MA company-ized CNN.
The news coverage on Sep 11th was unprecedented; I don't believe there is any similar incident in which the whole of the world (not just a region or nation) has been focused on watching the news as an event unfolds. So, to be fair, I assume CNN hadn't asked for a service that could handle that particular load; if they had, then they probably would not have been knocked off the air. And now, a year on, how many news agencies have invested in a service that can carry 1 million streams, or however many they got? I doubt any, so if we have another Sep 11th type event, don't expect anything to be different in the unicast world. Steve
Date: Mon, 16 Sep 2002 13:15:43 -0400 (EDT) From: alex
The company that "hosted" CNN demonstrated that for all the claims of their connectivity, it was not really there. If I recall correctly, CNN came up when a certain company from MA company-ized CNN.
Which brings us back to discussions of SPOF and distributed sources. The MA company in question helps demonstrate the latter. The former has been discussed recently.

As network engineers/operators, we want to make things better and more reliable. However, there are sayings to the effect of "the trouble with doing things right the first time is people do not appreciate how difficult it was" and "people would rather brag about a good towing contract than stay out of the mud".

* How many times have you heard:
  - "I don't care if I get hacked; I don't have anything important on my computer"
  - "Nobody would be interested in breaking in to us"
  - "I'm happy with my current provider because they don't go down that often"
* Ever submit a proposal for doing a job the right way, then lose the bid because someone else was cheaper in the short run (and more expensive in the long run)?
* Ever had to argue that maintaining RFC compliance really was the correct thing to do?

When a problem occurs, people get angry; only then does change become important. Until then, it's a question of "how much does it _cost_", *not* "what are we _investing_". Money talks, and a pound of cure is worth an ounce of prevention, plus carries the bragging rights of "I was a victim".

How often does a huge news event occur? The bombings a year ago? The release of the Starr report? Do people get mad at the news source, or just chalk it up to "the Internet is having troubles"? Good, fast, cheap -- pick two (and make sure at least one is "cheap").

Bottom line: When something is expected, and the lack thereof rarely bites in a painful place, proper implementation is not considered a valuable feature. The market for perfection is a very small one.

Eddy
--
Brotsman & Dreger, Inc.
- EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita
On Mon, 16 Sep 2002, Gerald wrote:
Just like the net was one of the prime sources of information during 9-11.
The internet sucked as a means of getting information on 9/11. I spent about 20 minutes hitting every news site I could think of, and they had all tanked. I set an away msg on IM: "Internet news sucks, I'm going to watch CNN."
There are several reasons why "internet news" wasn't as good as TV news:

1. Using an infrastructure that is built for many-to-many communication for few-to-many communication is problematic
2. Look at the budgets for online and TV news
3. This type of situation doesn't lend itself well to typing in the news

What the net did do was permit people to communicate while the phone network suffered from massive congestion.
William said they changed a lot of the way they do things at the company that hosts CNN.com since 9/11. I don't believe they were the only ones.
Can you name a few examples of the things they changed?
I had really hoped Bill, or someone who knew Bill and this talk could give more input on it. I found a vague summary of the whole talk: http://www.tcsa.org/lisa2001/cnn.txt
The internet sucked as a means of getting information on 9/11. I spent about 20 minutes hitting every news site I could think of, and they had all tanked. I set an away msg on IM: "Internet news sucks, I'm going to watch CNN."
3. This type of situation doesn't lend itself well to typing in the news
To answer a comment I got off-list: I was looking for images, at the least, of what was going on. These were not small (middle-of-nowhere) cities. Text on IRC or Usenet was not giving me the visuals I was looking for, and pages like slashdot were vague to begin with.
William said they changed a lot of the way they do things at the company that hosts CNN.com since 9/11. I don't believe they were the only ones.
Can you name a few examples of the things they changed?
From the link above:
Aftermath:
- volatility worse than ever before
- automate swing process ([it was] on todo list for a year)
- faster page reduction
- network redesign
- increased WAN bandwidth (if the servers could have handled the load, the WAN link would have been saturated)
- standing phone bridge reservation
- review crisis procedures

Gerald
On Mon, 16 Sep 2002, Iljitsch van Beijnum wrote:
The internet sucked as a means of getting information on 9/11. I spent about 20 minutes hitting every news site I could think of, and they had all tanked. I set an away msg on IM: "Internet news sucks, I'm going to watch CNN."
There are several reasons why "internet news" wasn't as good as TV news:
1. Using an infrastructure that is built for many-to-many communication for few-to-many communication is problematic
2. Look at the budgets for online and TV news
3. This type of situation doesn't lend itself well to typing in the news
What the net did do, was permit people to communicate while the phone network suffered from massive congestion.
I had a good experience using the Internet for news on 9/11, because I used it in a way that fit the model. I didn't bother trying to load cnn.com or whatever; rather, I sat in IRC, talking to people whom I trust to various degrees, who were in turn watching every conceivable news source available. They transcribed and summarized; some set up mp3 streams of the EMS/Police radios from DC and NYC; other people read old news sources online. People at ground zero went outside and took pictures, set up webcams, etc. I have to say that I doubt I missed anything... So sure, the internet sucks as a 1:1 replacement for TV (at least without multicast)... but so what? I think my experience was better... I wouldn't have bothered wasting my time drooling over the TV anyways... welcome to News 2.0.
On Monday, Sep 16, 2002, at 18:02 Europe/Stockholm, JC Dill wrote:
When I got back to the office, I learned that the big screen TV that had previously been located in the exercise room had been moved to the center of the office so that everyone could more easily see it, and everyone could hear it. Meanwhile, they all had high speed Internet connections to the computers sitting on their desks. Why bring in the TV if the 'net was "one of the prime sources of information"?
You actually more or less described what I meant, although I wasn't very clear. In principle, the Internet was the place where people went for information, which is exactly what you saw: the congestion and overload. My point, then, was that since this seems to be the case, perhaps we should engineer the network to meet these demands. - kurtis -
On Thu, 5 Sep 2002 sgorman1@gmu.edu wrote:
Is there a general consensus that cyber/internal attacks are more effective/dangerous than physical attacks. Anecdotally it seems the largest Internet downages have been from physical cuts or failures.
I think you have a sampling bias problem. The "biggest" national/international network disruptions have generally been the result of operator error or software error; it's not always easy to tell the difference. It may be better for carrier PR spin control to blame a software/router/switch vendor. Until recently, physical disruptions have been due to causes which don't affect the stock price, so carriers were more willing to talk about them. Carriers usually don't fire people over backhoes, hurricanes, floods, or train derailments. What does this say about the effect of an external or internal cyber-attack? Not much. Naturally occurring physical and procedural disruptions have different properties than a directed attack. Not the least is that hurricanes and trains don't read NANOG, and generally don't alter their behavior based on the "recommendations" posted. Wouldn't you prefer a good game of chess?
participants (25)
- Al Rowland
- alex@yuriev.com
- batz
- Brad Knowles
- Christopher L. Morrow
- Daniel Golding
- E.B. Dreger
- Gerald
- Greg Maxwell
- Iljitsch van Beijnum
- Jared Mauch
- JC Dill
- Jeff Shultz
- John M. Brown
- Kurt Erik Lindqvist
- Kurtis Lindqvist
- Marshall Eubanks
- Pawlukiewicz Jane
- Ryan Fox
- Sean Donelan
- sgorman1@gmu.edu
- Stephen J. Wilcox
- tim.thorne@btinternet.com
- Valdis.Kletnieks@vt.edu
- William Waites