Re: Vulnerabilities of Interconnection
It is interesting that you cite Paul Baran, since his work at RAND was arguably the first to recommend a distributed network. That work is the start of the quasi-myth that the Internet was developed to withstand a nuclear attack. If my memory serves me correctly, that was Baran's assignment, but it was not the driving force behind the creation of ARPANET; the two were unrelated.

The point is that, from a structural standpoint, there is convincing empirical evidence that the Internet is not a distributed network. It is what physicists would call a small-world or scale-free network. Most of this work has been done at the router and AS level. For example, the vast majority of routers have only a few connections, while a small minority has the vast majority of connections. The applied side of this, which I think I've posted about before (so please pardon the redundancy), is that the Internet at the AS and router level is very resilient to random failures but highly susceptible to targeted failures (a toy simulation illustrating this is sketched after the quoted message below). This has become so prominent in the literature that it has left many people asking questions about security. Just to give you a brief flavor, the findings have come from Barabasi's group at Notre Dame in 2000, Cohen's group in Israel slightly later, and Callaway in 2001. Similar findings have been reported from a spatial perspective by a group in BU's computer science department, and by the Notre Dame group as well.

This has led to a scramble to come up with new Internet topology generators - another laundry list of references and approaches - but the reported security implications have had a bumpier road. Are all these computer scientists and physicists wrong? I'm sure a good case could be made against them, but it leaves several open questions. There have been good cases made for the resilience of the status quo, and good arguments that there could be problems. The question is how much of it is political: one side looking for problems it can point fingers at in the name of homeland security, and one side denying any problem at all because it is a lot cheaper if there is no problem.

----- Original Message -----
From: Sean Donelan <sean@donelan.com>
Date: Friday, September 13, 2002 7:49 pm
Subject: Re: Vulnerabilities of Interconnection
On Fri, 13 Sep 2002 sgorman1@gmu.edu wrote:
Or you cut the lines coming into the city - e.g. Chicago has about 5 diverse routes for fiber into the city. No explosives required, and you get the same effect.
The early ideas for the ARPANET/Internet never said every point would work under all conditions. The premise was that if you destroyed (which implies something is in fact destroyed) part of the network, the surviving parts of the network could function. It said nothing about the ability of the part of the network which was destroyed to function. It may be obvious that the destroyed portion of the network will not function, but sales people don't always go out of their way to explain the concept.
The Paradox of the Secrecy About Secrecy by Paul Baran, August 1964

[...] The overall problem here is highly reminiscent of the atomic energy discussions in the 1945-55 era--only those who were not cleared were able to talk about "classified" atomic weapons. This caused security officers to become highly discomfited by the ease with which unclassified clues were being combined to deduce highly accurate versions of material residing in the classified domain. This points up a commonly recurring difference of opinion (or philosophy) between the security officer and the technically trained observer. The more technical training an individual possesses, the less confidence he seems to have of the actual value of secrecy in protecting the spread of new developments in a ripe technology. True security does not always equate to blanket unthinking secrecy. While the security value of effective secrecy can be high, we must be realistic and acknowledge the constraints of living in a free society where effective secrecy in peacetime is almost impossible. Avoiding a touchy subject by falling back on edicts rather than rationality may automatically insure the continued existence of the touchy subject. [...]
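To make the random-versus-targeted failure result described in the first message above concrete, here is a minimal simulation sketch. It is an editorial illustration, not code from either poster: a Barabasi-Albert preferential-attachment graph is assumed as a stand-in for a router-level topology (real router and AS graphs are only approximately scale-free), 10% of the nodes are removed either uniformly at random or in descending degree order, and the size of the surviving giant component is compared.

    # Minimal sketch: random vs. targeted node removal on a scale-free-ish toy graph.
    # Assumptions: Barabasi-Albert graph as a stand-in topology; 10% node loss.
    import random
    import networkx as nx

    def giant_fraction(g, n_original):
        # Fraction of the original node count still in the largest connected component.
        if g.number_of_nodes() == 0:
            return 0.0
        return len(max(nx.connected_components(g), key=len)) / n_original

    random.seed(0)
    N = 10_000
    G = nx.barabasi_albert_graph(N, 2, seed=0)  # toy stand-in for a router-level topology
    k = int(0.10 * N)                           # knock out 10% of the nodes

    # Random failures: remove nodes uniformly at random.
    g_rand = G.copy()
    g_rand.remove_nodes_from(random.sample(list(g_rand.nodes), k))

    # Targeted failures: remove the highest-degree nodes ("hubs") first.
    g_targ = G.copy()
    hubs = sorted(g_targ.degree, key=lambda nd: nd[1], reverse=True)[:k]
    g_targ.remove_nodes_from(node for node, _ in hubs)

    print("giant component after random failures:   %.2f" % giant_fraction(g_rand, N))
    print("giant component after targeted failures: %.2f" % giant_fraction(g_targ, N))

With these toy parameters the random-failure case typically leaves most nodes in one connected piece, while the targeted case fragments the graph far more - the qualitative effect reported by the Barabasi, Cohen, and Callaway papers cited above.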
On Sat, 14 Sep 2002 sgorman1@gmu.edu wrote:
The point is that, from a structural standpoint, there is convincing empirical evidence that the Internet is not a distributed network.
Due to policy constraints, the commercial Internet more closely mimics a collection of decentralized networks. However, things are more complicated because IP permits networks of any size to interconnect at almost any level, over almost any communications technology. We normally consider it an error when external traffic transits an intranet or extranet. But during a disaster, capacity is capacity.
a small minority has the vast majority of connections. The applied side of this, which I think I've posted about before (so please pardon the redundancy), is that the Internet at the AS and router level is very resilient to random failures but highly susceptible to targeted failures. This has become so prominent in the literature that it has left many people asking questions about security.
This has come up on this list and others before. I've previously commented on some of the published papers, so I'll try not to repeat everything. The missing piece from most of the previous papers I've read is: how do you find out how much unused capacity exists? BGP views and Skitter data are very good at finding used paths, but they tell you nothing about how much "shadow" capacity exists. Generally, after a disaster you'll discover new capacity exists in the Internet which you couldn't detect before the disaster. Where does that capacity come from, and why couldn't you detect it before the disaster? And how much capacity exists in the shadows?
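As a rough illustration of the measurement gap described above - again an editorial sketch under stated assumptions, not anything from the thread - the snippet below builds a toy topology, takes shortest paths from a handful of randomly chosen vantage points as a crude stand-in for BGP/Skitter-style views, and counts how many links never appear on any observed path. Those unobserved links play the role of the "shadow" capacity: present in the network, invisible to path-based measurement.

    # Sketch: how many links are invisible to path-based measurement?
    # Assumptions: toy Barabasi-Albert topology; 5 random nodes as "route collectors".
    import random
    import networkx as nx

    random.seed(1)
    G = nx.barabasi_albert_graph(2_000, 3, seed=1)    # toy stand-in topology
    collectors = random.sample(list(G.nodes), 5)      # hypothetical vantage points

    # Every link on a shortest path from some vantage point counts as "observed";
    # links that never appear are the invisible "shadow" capacity.
    observed = set()
    for vp in collectors:
        for path in nx.shortest_path(G, source=vp).values():
            observed.update(frozenset(edge) for edge in zip(path, path[1:]))

    total = G.number_of_edges()
    seen = sum(1 for u, v in G.edges if frozenset((u, v)) in observed)
    print("links in the topology:             %d" % total)
    print("links visible from vantage points: %d (%.0f%%)" % (seen, 100.0 * seen / total))
    print("links left in the shadow:          %d" % (total - seen))

The point of the sketch is only that path-derived maps systematically undercount links; how much real shadow capacity exists, and where it comes from, is exactly the open question raised above.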
This has led to a scramble to come up with new Internet topology generators - another laundry list of references and approaches - but the reported security implications have had a bumpier road. Are all these computer scientists and physicists wrong? I'm sure a good case could be made against them, but it leaves several open questions. There have been good cases made for the resilience of the status quo, and good arguments that there could be problems. The question is how much of it is political: one side looking for problems it can point fingers at in the name of homeland security, and one side denying any problem at all because it is a lot cheaper if there is no problem.
The research community comes up with a model of how the Internet works. They hypothesize various outcomes based on that model. The final section of any research report is a list of problems requiring more research :-) The operational community looks at the model, goes behind the curtain, and compares the model to their operational networks. We find the model doesn't match. The operational community comes back out from behind the curtain and tells the research community our network doesn't have that problem. And if it did, it's fixed now :-) The research community asks if it can please peek behind the curtain. The operational community says we respectfully must decline. Rinse. Repeat.

I probably won't make it to the next NANOG in Eugene, but on Sunday afternoon NANOG is hosting a research/operations forum to allow researchers to solicit feedback from the operations community. The proposal deadline is Monday, September 16.