On Mon, 9 Aug 1999, Vadim Antonov wrote:
> Now, no matter how one jumps, most congestions only last seconds.
This isn't the case when you have half your bandwidth to any particular point down. Excess capacity in other portions of the network may then be used to carry a portion of the offered load via a suboptimal path.
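
To put rough numbers on "a portion of the offered load", here is a minimal sketch in Python (every figure below is hypothetical) of what happens when half the direct bandwidth to a point goes away and the overflow spills onto a longer path:

    # Hypothetical numbers: half the direct capacity fails and the overflow
    # rides a longer, suboptimal path as far as its spare capacity allows.
    offered_load_gbps = 4.0        # traffic offered toward the remote POP (assumed)
    direct_capacity_gbps = 5.0     # normal direct-path capacity (assumed)
    backup_headroom_gbps = 3.0     # spare capacity on the longer path (assumed)

    degraded_direct = direct_capacity_gbps / 2           # one of two circuits is down

    overflow = max(0.0, offered_load_gbps - degraded_direct)
    rerouted = min(overflow, backup_headroom_gbps)       # carried via the suboptimal path
    unserved = overflow - rerouted                       # dropped or queued

    print(f"carried direct: {min(offered_load_gbps, degraded_direct):.1f} Gb/s")
    print(f"carried via backup path: {rerouted:.1f} Gb/s")
    print(f"unserved load: {unserved:.1f} Gb/s")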
> Expecting any traffic engineering mechanism to take care of these is unrealistic. A useful time scale for traffic engineering is therefore
Expecting most congestion to last only seconds is also unrealistic. In most cases there is no congestion at all: everything takes the shortest path, then some capacity is lost, and we have a problem. Expecting the physical circuit never to go down because of SONET protection and diverse routing is also a bit optimistic, since regrooming may eventually reduce your "diverse" routing to a single path. That does not fly with customers, who want their traffic moved, not excuses that the physically diverse pathing was regroomed by a telco into a non-diverse one. The MTTR of a backhoe-fade outage is long enough that TE techniques have proven to be an effective mechanism for bypassing the outage without operator intervention.
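
One way to picture "bypassing the outage without operator intervention" is plain shortest-path recomputation over whatever topology survives the cut; the toy topology, link costs and failed circuit below are all invented:

    import heapq

    def shortest_path(links, src, dst):
        """Plain Dijkstra over a dict {(a, b): cost} of bidirectional links."""
        adj = {}
        for (a, b), cost in links.items():
            adj.setdefault(a, []).append((b, cost))
            adj.setdefault(b, []).append((a, cost))
        dist, heap = {src: 0}, [(0, src, [src])]
        while heap:
            d, node, path = heapq.heappop(heap)
            if node == dst:
                return d, path
            for nxt, cost in adj.get(node, []):
                nd = d + cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt, path + [nxt]))
        return float("inf"), None

    links = {("A", "B"): 1, ("A", "C"): 5, ("C", "B"): 5}   # hypothetical topology
    print(shortest_path(links, "A", "B"))    # (1, ['A', 'B'])

    del links[("A", "B")]                    # the backhoe finds the fiber
    print(shortest_path(links, "A", "B"))    # (10, ['A', 'C', 'B']) - longer, but no operator needed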
> at least days - which can be perfectly accommodated by capacity planning in fixed topology. At these time scales traffic matrices do not change
This assumes a decent fixed topology. Historically, the market has moved faster than the predictions.
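
For what it is worth, the day-timescale check the quoted paragraph leans on amounts to routing a forecast traffic matrix over the fixed topology and seeing which links run hot; the toy topology, demands and 70% utilization threshold below are all made up:

    capacity_gbps = {("A", "B"): 2.5, ("B", "C"): 10.0, ("A", "C"): 10.0}

    # Forecast traffic matrix (src, dst) -> Gb/s; at day/week timescales this
    # changes slowly enough to plan against.
    demands_gbps = {("A", "B"): 2.0, ("A", "C"): 3.0, ("B", "C"): 4.0}

    # Physical topology matches L3 topology, so each demand rides its direct link.
    routes = {("A", "B"): [("A", "B")],
              ("A", "C"): [("A", "C")],
              ("B", "C"): [("B", "C")]}

    load = {link: 0.0 for link in capacity_gbps}
    for demand, gbps in demands_gbps.items():
        for link in routes[demand]:
            load[link] += gbps

    for link, used in sorted(load.items()):
        util = used / capacity_gbps[link]
        verdict = "order more capacity" if util > 0.70 else "ok"
        print(f"{link[0]}-{link[1]}: {util:.0%} utilized -> {verdict}")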
> Backbones which neglect the capacity planning because they can "reroute" traffic at L2 level simply cheat their customers. If they _do not_ neglect capacity planning, they do not particularly need the L2 traffic engineering facilities.
Promising local ISPs are not neglecting capacity planning. The problem is the effectively _random_ delivery of capacity, at points which are less than optimal.
> Anyway, the simplest solution (having enough capacity, and physical topology matching L3 topology) appears to be the sanest way to build a stable and manageable network. Raw capacity is getting cheap fast; engineers aren't. And there is no magic recipe for writing complex _and_ reliable software. The simpler it is, the better it works.
There is _no_ disagreement on this topic. This paragraph is correct as it stands. With the exception of partial capacity loss, it is completely in line with most people's thinking. No one actually sits down and thinks "let's effectively route our traffic so that the sum of (bit*mile) is the highest possible." That is just plain wrong. However, given real-world constraints on capacity and delivery, TE is a useful tool today.

/vijay