Tony Li wrote:
A key observation here is that the point of an optical cross connect is to provide a real circuit, not a virtual one.
Tony - the way I see it, a "real" circuit is something which requires a technician to go on site to connect or disconnect it. Any circuit which can be routed under software control is a virtual circuit. In this case these wavelengths or whatever are no different from CBR frame relay or ATM circuits.

Now, if one wants to get really philosophical, it all depends on the time frame. Capacity planning software may issue orders to technicians to go and do things -- but it'll take days. If you're OK with that kind of response time, the "real" circuits are also virtual ;) The funny thing is, the change rates in the traffic engineering world are pretty close to those of human-assisted VC switching :)

--vadim
| > A key observation here is that the point of an optical cross connect is
| > to provide a real circuit, not a virtual one.
|
| Tony - the way I see it, a "real" circuit is something which requires a
| technician to go on site to connect or disconnect it.

Then I guess you haven't made a phone call since the days when we had ladies on rollerskates. ;-)

Tony
I am pleased to see that we managed to diverge from questions on OXCs to the benefits/disadvantages of TE and circuit switching. Let me put in a few comments while we are at it:

1) I believe that all of the problems that are claimed to be solved by TE can also be solved by a well-designed network architecture and a good routing protocol. Unfortunately for the Internet, development and research on IP routing protocols that are load-sensitive has come to a halt. I know that there are people working on these, including some of us at Pluris, but in general there is strong pushback on anyone who even suggests that a load-sensitive routing protocol is a better solution than TE.

2) From a management perspective, I think that it is clearly advantageous to manage a single network with no overlay topology. ATM was not even close to this, and MPLS, although closer, still does not meet the goal of the unified network. I am still trying to figure out what exactly is wrong with a combination of fast/dense/scalable routers and optical equipment without an overlay architecture. As an aside, I don't think managing on the order of 5000-10000 LSPs in a core backbone is easy at all.

Overall, I think a well-engineered and legacy-free IP network will be competitive in terms of efficiency with an overlay network; however, there are some networks out there with ATM equipment, and for those networks MPLS may make sense. Vadim, unfortunately, things are rarely all black or all white, and one does need to accommodate different demands when building a real, sellable product.

Bora Akyol
Apparently forgetful architect at Pluris ;-)

Tony Li wrote:
| > A key observation here is that the point of an optical cross connect is
| > to provide a real circuit, not a virtual one.
|
| Tony - the way I see it, a "real" circuit is something which requires a
| technician to go on site to connect or disconnect it.
Then I guess you haven't made a phone call since the days when we had ladies on rollerskates. ;-)
Tony
Bora,

| 1) I believe that all of the problems that are claimed to be solved by
| TE can also be solved by a well-designed network architecture and a good
| routing protocol. Unfortunately for the Internet, development and
| research on IP routing protocols that are load-sensitive has come to a
| halt. I know that there are people working on these, including some of
| us at Pluris, but in general there is strong pushback on anyone who even
| suggests that a load-sensitive routing protocol is a better solution
| than TE.

You should understand that that pushback is based on technical history. There have been many experiments with load-sensitive link state routing protocols. All of them, from the original days of the ARPAnet, have resulted in instability, with oscillations in the traffic matrix and high CPU loading as all of the routers rerun SPF frequently to keep up.

Personally, I believe that there is a solution buried somewhere in control theory, but the industry hasn't hit on the right human who knows enough about control theory AND routing protocols to make this work. This has been a pet peeve of mine since about 1987, and yes, every time someone says that the answer is 'more thrust', we have an educational discussion.

| 2) From a management perspective, I think that it is clearly
| advantageous to manage a single network with no overlay topology. ATM
| was not even close to this, and MPLS, although closer, still does not
| meet the goal of the unified network. I am still trying to figure out
| what exactly is wrong with a combination of fast/dense/scalable routers
| and optical equipment without an overlay architecture. As an aside, I
| don't think managing on the order of 5000-10000 LSPs in a core backbone
| is easy at all.

I don't think anyone is suggesting that managing 5000 of anything is easy. First, you don't need 5000 LSPs to perform traffic engineering -- only enough to redirect traffic away from hot spots. Second, this needs to be automated. This is a small subset of the fact that all of our network management needs automation. Otherwise, we can't possibly hope to get the operator expertise needed to continue scaling the network.

Tony
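To make the oscillation Tony describes concrete, here is a minimal sketch (with invented numbers, not any real protocol's behavior): two parallel links, a delay-based metric, and synchronized SPF. Every router sees the same measurements, so all traffic chases the currently cheapest link in lockstep; smoothing the advertised metric stretches the period of the oscillation but, with all-or-nothing path selection, does not remove it -- which is roughly why this ends up as a control-theory problem.

    def delay(load, capacity):
        # Hypothetical M/M/1-style metric: delay blows up as load nears capacity.
        utilization = min(load / capacity, 0.99)
        return 1.0 / (1.0 - utilization)

    def simulate(rounds, traffic=8.0, capacity=10.0, damping=0.0):
        # Route all traffic over whichever of two links advertises the lower
        # metric, then re-advertise. damping=0.0 flips every round; damping
        # near 1.0 merely slows the flapping, it does not converge.
        load = {"link_a": traffic, "link_b": 0.0}
        metric = {link: delay(load[link], capacity) for link in load}
        for r in range(rounds):
            best = min(metric, key=metric.get)          # synchronized SPF result
            load = {link: traffic if link == best else 0.0 for link in load}
            for link in load:                           # re-measure, then smooth
                fresh = delay(load[link], capacity)
                metric[link] = damping * metric[link] + (1.0 - damping) * fresh
            print(f"round {r}: all traffic on {best}")

    simulate(6)                # flips between link_a and link_b every round
    simulate(20, damping=0.9)  # stays put longer, then flips anyway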
Tony,

Have you reviewed the following paper? A. Khanna and J. Zinky, "The Revised ARPANET Routing Metric," SIGCOMM '89. I believe this paper discusses what was wrong with the first two incarnations of the ARPANET routing protocol, as well as solutions to them. The stats towards the end are quite interesting. Maybe Craig Partridge at BBN will know more about this; I will CC him on this email. Let me know if you can't find a copy -- I have a hard copy that I can forward to you.

Also see more comments within.

Regards,
Bora

Tony Li wrote:
Bora,
| 1) I believe that all of the problems that are claimed to be solved by
| TE can also be solved by a well-designed network architecture and a good
| routing protocol. Unfortunately for the Internet, development and
| research on IP routing protocols that are load-sensitive has come to a
| halt. I know that there are people working on these, including some of
| us at Pluris, but in general there is strong pushback on anyone who even
| suggests that a load-sensitive routing protocol is a better solution
| than TE.
You should understand that that pushback is based on technical history. There have been many experiments with load-sensitive link state routing protocols. All of them, from the original days of the ARPAnet, have resulted in instability, with oscillations in the traffic matrix and high CPU loading as all of the routers rerun SPF frequently to keep up.
Personally, I believe that there is a solution buried somewhere in control theory, but the industry hasn't hit on the right human who knows enough about control theory AND routing protocols to make this work. This has been a pet peeve of mine since about 1987, and yes, every time someone says that the answer is 'more thrust', we have an educational discussion.
If there were a desire in the community to work on such a routing protocol, we would definitely like to help. In the meantime, I will keep working on such a protocol with a small group of people here.
| 2) From a management perspective, I think that it is clearly
| advantageous to manage a single network with no overlay topology. ATM
| was not even close to this, and MPLS, although closer, still does not
| meet the goal of the unified network. I am still trying to figure out
| what exactly is wrong with a combination of fast/dense/scalable routers
| and optical equipment without an overlay architecture. As an aside, I
| don't think managing on the order of 5000-10000 LSPs in a core backbone
| is easy at all.
I don't think anyone is suggesting that managing 5000 of anything is easy. First, you don't need 5000 LSPs to perform traffic engineering -- only enough to redirect traffic away from hot spots. Second, this needs to be automated. This is a small subset of the fact that all of our network management needs automation. Otherwise, we can't possibly hope to get the operator expertise needed to continue scaling the network.
This last point is interesting: "redirecting traffic from hot spots." This is truly ad hoc; it achieves only short-term fixes and requires human intervention. I thought one of the purposes of TE was to reduce the amount of management necessary.

There is a paper from INFOCOM or SIGCOMM (I am not sure which) on the modification of OSPF metrics for TE, which formalizes what some ISPs have been doing for a while. The paper is somewhat interesting in that it is also a human-intervention method, but it does not involve MPLS at all. The paper's title is "Internet Traffic Engineering by Optimizing OSPF Weights," by Fortz and Thorup.
Tony
| This last point is interesting: "redirecting traffic from hot spots."
| This is truly ad hoc; it achieves only short-term fixes and requires
| human intervention. I thought one of the purposes of TE was to reduce
| the amount of management necessary.

You're making a large assumption. On a large network, hot spots can be determined by automated means. This has been happening in the phone network for decades.

Tony
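For what it's worth, the automated detection Tony mentions can be as simple as the following sketch; the thresholds, link names, and data shapes are invented for illustration. The idea is to flag links that stay hot across several samples (so one burst doesn't trigger a reroute) and hand the ranked list to whatever tool moves the LSPs.

    THRESHOLD = 0.80     # utilization above this counts as "hot"
    PERSISTENCE = 3      # consecutive samples required, to ignore bursts

    def hot_links(samples):
        # samples: {link_name: [utilization per interval, newest last]}
        hot = []
        for link, series in samples.items():
            recent = series[-PERSISTENCE:]
            if len(recent) == PERSISTENCE and all(u > THRESHOLD for u in recent):
                hot.append((link, sum(recent) / PERSISTENCE))
        return sorted(hot, key=lambda pair: pair[1], reverse=True)

    samples = {
        "nyc-dc":  [0.55, 0.91, 0.88, 0.93],   # persistently hot
        "dc-chi":  [0.40, 0.95, 0.42, 0.38],   # one burst, ignored
        "chi-sfo": [0.20, 0.25, 0.22, 0.24],
    }
    for link, avg in hot_links(samples):
        print(f"{link}: avg utilization {avg:.0%} over last {PERSISTENCE} samples")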
On Thu, 11 May 2000 11:36:23 -0700 Bora Akyol wrote:
There is a paper from INFOCOM or SIGCOMM (I am not sure which) on the modification of OSPF metrics for TE, which formalizes what some ISPs have been doing for a while. The paper is somewhat interesting in that it is also a human-intervention method, but it does not involve MPLS at all. The paper's title is "Internet Traffic Engineering by Optimizing OSPF Weights," by Fortz and Thorup.
I found the article by just searching for the title on http://www.google.com. The URL I found, http://smg.ulb.ac.be/Preprints/Fortz99_29.html, mentions it was in INFOCOM 2000... And oh yes, most definitely worth the read.

Has anyone started to apply these techniques to networks other than AT&T's? If so, results?

thanks,
fletcher
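For readers who don't want to chase the preprint, here is a toy rendition of the paper's core idea: treat the integer link weights as free parameters and run a local search that lowers the worst link utilization under shortest-path routing. The authors' actual heuristic is far more elaborate (ECMP splitting, a piecewise-linear link cost, hashing of visited weight settings); the topology, demands, and capacities below are invented, and ties are broken by taking a single shortest path.

    import heapq

    def shortest_path(graph, weights, src, dst):
        # Plain Dijkstra; returns one shortest path as a list of nodes.
        dist, prev, seen = {src: 0}, {}, set()
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in seen:
                continue
            seen.add(u)
            if u == dst:
                break
            for v in graph[u]:
                nd = d + weights[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return path[::-1]

    def max_utilization(graph, weights, demands, capacity):
        # Route every demand on its shortest path; report the hottest link.
        load = {edge: 0.0 for edge in weights}
        for (src, dst), volume in demands.items():
            path = shortest_path(graph, weights, src, dst)
            for edge in zip(path, path[1:]):
                load[edge] += volume
        return max(load[edge] / capacity[edge] for edge in weights)

    def local_search(graph, weights, demands, capacity, max_weight=8):
        # Greedily retry each link at every weight; keep any strict improvement.
        weights = dict(weights)
        best = max_utilization(graph, weights, demands, capacity)
        improved = True
        while improved:
            improved = False
            for edge in list(weights):
                for w in range(1, max_weight + 1):
                    trial = dict(weights)
                    trial[edge] = w
                    score = max_utilization(graph, trial, demands, capacity)
                    if score < best - 1e-9:
                        weights, best, improved = trial, score, True
        return weights, best

    # A four-node square where unit weights pile two demands onto link B-D.
    graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
    weights = {(u, v): 1 for u in graph for v in graph[u]}
    capacity = {edge: 10.0 for edge in weights}
    demands = {("A", "D"): 7.0, ("B", "D"): 6.0}

    before = max_utilization(graph, weights, demands, capacity)
    tuned, after = local_search(graph, weights, demands, capacity)
    print(f"max utilization {before:.2f} -> {after:.2f}")   # 1.30 -> 0.70

Raising one weight moves the A-to-D demand onto the idle A-C-D path, which is the whole trick: no MPLS state, just different IGP metrics.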
On Sun, 14 May 2000, Fletcher E Kittredge wrote: [OSPF traffic engineering paper, URL above]
Has anyone started to apply these techniques to networks other than AT&T's? If so, results?
I found this sentence to be quite instructive:

\begin{quote}
In connection with link failures, we note that one could pre-compute a good weight setting for each possible link failure and have them ready for loading when needed.
\end{quote}

That has some interesting operational issues, one would think, especially in large networks that are dense partial meshes, with the associated link failure scenarios.

/vijay
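A sketch of the pre-computation Vijay quotes, reusing the hypothetical local_search() and the toy topology from the sketch above. His operational worry is visible in the loop bounds: N links mean N optimizer runs and N stored weight tables to keep current, before even considering shared-risk link groups or multi-link failures.

    def precompute_failure_weights(graph, weights, demands, capacity):
        # One scenario per undirected link; a real tool would also have to
        # skip or flag failures that disconnect some demand.
        table = {}
        links = {tuple(sorted(edge)) for edge in weights}
        for a, b in sorted(links):
            dead = {(a, b), (b, a)}           # a failure kills both directions
            g = {n: [m for m in nbrs if (n, m) not in dead]
                 for n, nbrs in graph.items()}
            w = {e: wt for e, wt in weights.items() if e not in dead}
            c = {e: cp for e, cp in capacity.items() if e not in dead}
            table[(a, b)] = local_search(g, w, demands, c)
        return table

    table = precompute_failure_weights(graph, weights, demands, capacity)
    for link, (setting, worst) in sorted(table.items()):
        print(f"if {link} fails: load weights giving max utilization {worst:.2f}")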
Tony,

All the control systems I know of that allow rapid change of rate use mass (a known, fixed quantity) as a major damping component. The same goes for knowing the amount of reactant at any time in chemical processes. I see no comparable function in packet switched networks, thus the hardness of creating fast, stable control systems. Tell me what the knowns are for an ISP network and compare them to the knowns of an airplane (mass, moment, thrust, control surfaces, characteristics of air, ...). No wonder you can make an F18 turn faster than an ISP routing table. On the other hand, being wrong on an ISP routing table has a lower cost than an out-of-control F18.

How many of us are old enough to have taken an analog circuits class in college? They say build a circuit that has the following response, and it sounds so easy in class. Three or four hours of lab work later, the fact that even simple stuff is hard gets drilled in deeply.

jerry
participants (6)

- Bora Akyol
- Fletcher E Kittredge
- Jerry Scharf
- Tony Li
- Vadim Antonov
- Vijay Gill