I have no idea how those two blank messages got out. My apologies.
Ideally, the capacity of LAN *attachments* (not the total bandwidth of the LAN switch!) should be about the same as the capacity of backbone links, multiplied by the number of backbone links attached to each backbone router.
Agreed, which is why the GIGAswitch works and why, once Cisco has a reasonable 100BaseT interface processor, a Grand Junction switch will work.
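To put rough numbers on that sizing rule: aggregate attachment capacity is just backbone link capacity times the router's backbone fanout. A back-of-envelope sketch, where the T3 links and the fanout of four are made-up figures, not anything from a real config:

```c
/* Back-of-envelope check of the sizing rule quoted above: aggregate
 * LAN attachment capacity ~= backbone link capacity * number of
 * backbone links on the router.  All figures here are illustrative. */
#include <stdio.h>

int main(void)
{
    double backbone_link_mbps = 45.0; /* hypothetical: one T3 per link */
    int    backbone_links     = 4;    /* hypothetical fanout */

    double lan_mbps = backbone_link_mbps * backbone_links;

    printf("LAN attachments should total ~%.0f Mb/s,\n", lan_mbps);
    printf("i.e. about %d 100BaseT ports\n",
           (int)((lan_mbps + 99.0) / 100.0)); /* round up to whole ports */
    return 0;
}
```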
Anyway, that boils down to the LAN switch being the single point of failure.
Statistically speaking, the things that go wrong with modern machines that have no moving parts are usually software or configuration related. Since the LAN switch is mostly hardware with a little bit of bridging and spanning tree stuff, it is a lot less complicated than the average router. I think router software and configuration errors are going to pretty much drive the failure rates for the next few years. I'm not worried about the LAN switches since there are so many worse/nearer things to worry about.
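For what it's worth, the arithmetic behind that intuition is just series availabilities multiplying. A toy version, with both availability figures invented purely for illustration:

```c
/* Toy availability arithmetic for the point above: components in
 * series multiply, so a very reliable LAN switch barely moves the
 * total next to flakier router software.  Both availability figures
 * are invented guesses, not measurements. */
#include <stdio.h>

int main(void)
{
    double a_switch = 0.9999; /* guess: simple hw, a little bridging code */
    double a_router = 0.999;  /* guess: complex sw plus config churn */
    double hrs_yr   = 24.0 * 365.0;

    printf("switch alone:   %5.1f hrs/yr down\n", (1 - a_switch) * hrs_yr);
    printf("router alone:   %5.1f hrs/yr down\n", (1 - a_router) * hrs_yr);
    printf("both in series: %5.1f hrs/yr down\n",
           (1 - a_switch * a_router) * hrs_yr);
    return 0;
}
```

Even granting the switch an order of magnitude better availability than the router, the router's software and configuration failures dominate the series total.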
All in all, the backbone router should be scalable to the point that it does not need any clustering, with redundancy provided by "internal" duplication.
Agreed. I had a conversation over coffee a few months ago and this topic came up; my conclusion was that we were in for a really rough ride, since each previous time that the Internet backbone (or the average high-end customer) needed more bandwidth, there was something available from the datacom folks to fit the bill (9.6K analog, 56K digital, 1.5M digital, 45M digital). Furthermore, the various framing and tariff issues moved right along and have made it possible to provision Internet service in ways that would have made no sense just a few years ago (witness frame relay and SMDS).

But this time, we are well and truly screwed. The datacom people have gradually moved to the "one world" model of ATM, and have put all their effort behind the mighty Cell. They thought their grand unification of voice, data, VOD, etc., was finally a way to stop dinking around with lots of incompatible systems and oddball gateways (those of you who have tried to get inter-LATA ISDN or international T1/E1 working know what I mean). So the datacom community's answer to the Internet community is "if you're not able to get what you need from T3, use ATM, which will be infinitely scalable and can be used as an end-to-end system."

Who knows? If DEC can make a 90MHz NVAX chip after some PhD somewhere "proved" that the multibyte instruction encoding made certain kinds of parallelism impossible and that the fastest VAX would run at 60MHz, and if Intel can make a sewing machine's CPU into a 64-bit monster with three kinds of virtual memory, then perhaps the ATM folks can figure out how to do all the lookasides and exceptions they need to do in their little itty bitty hundred-picosecond window. But I don't think so.

I think we are going to have to get clever, and that we have used up our bank account of times when brute force coming out of the sky can save us. This time, gentlemen, the cavalry is not coming.

As much as I despise the topic Matt and Vadim have been discussing today, I think we're looking at some kind of load sharing. Probably static load sharing with no adaptation to traffic type or flow, since, as Jerry pointed out a few weeks back, that's a slippery slope and a lot of people have died on it.
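Backing up to the ATM point for a second, it's worth putting a scale on that per-cell window. Pure arithmetic, assuming nothing beyond 53-byte cells and the standard SONET line rates:

```c
/* Scale check on the per-cell window mentioned above: the time budget
 * to handle one 53-byte ATM cell at standard SONET line rates.  Pure
 * arithmetic; line rates only, ignoring SONET framing overhead. */
#include <stdio.h>

int main(void)
{
    const char *names[]  = { "OC-3", "OC-12", "OC-48" };
    double rate_mbps[]   = { 155.52, 622.08, 2488.32 };

    for (int i = 0; i < 3; i++) {
        double ns_per_cell = 53.0 * 8.0 / rate_mbps[i] * 1000.0;
        printf("%-6s %6.0f ns per cell\n", names[i], ns_per_cell);
    }
    return 0;
}
```

Microseconds at OC-3, a couple hundred nanoseconds at OC-48; my "hundred-picosecond" was hyperbole, but the budget shrinks fast as the rates climb.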
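And since it looks like we're stuck with it, here is a minimal sketch of what static load sharing with no adaptation might look like: hash the destination address onto one of N parallel links, deterministically, so a given destination always rides the same path regardless of load or traffic type. The hash, the link count, and the addresses below are all arbitrary illustrations, not anyone's actual implementation:

```c
/* Minimal sketch of static load sharing with no adaptation: hash the
 * destination address onto one of N parallel links, so a given
 * destination always rides the same link regardless of load or
 * traffic type.  Hash, link count, and addresses are all made up. */
#include <stdio.h>
#include <stdint.h>

#define NLINKS 4 /* hypothetical: four parallel circuits */

/* Map a destination IPv4 address to a link index, deterministically. */
static int pick_link(uint32_t dst)
{
    uint32_t h = dst ^ (dst >> 16); /* crude fold of the address bits */
    return (int)(h % NLINKS);
}

int main(void)
{
    uint32_t dsts[] = { 0x80020101, 0xc0210304, 0x0a0000fe, 0x84f00102 };

    for (int i = 0; i < 4; i++)
        printf("dst %08x -> link %d\n",
               (unsigned)dsts[i], pick_link(dsts[i]));
    return 0;
}
```

One point in its favor: per-destination hashing at least keeps packets for a given conversation in order, which simple round-robin across the links would not.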