On Jan 18, 2008 4:16 PM, <michael.dillon@bt.com> wrote:
Sooner or later, somebody is going to try to apply Google's approach to hardware in a network backbone. Imagine a network backbone with no Cisco or Juniper boxes in it, just lots of commodity boxes with triple-redundancy everywhere (quintuple in NFL cities).
Michael,

There's a missing piece here. You'd need a way to go from the 1-gige interfaces that commodity hardware can keep up with to the 10-gige-plus interfaces that the backbone requires.

Suppose you build 10-gige mux/demuxes for $2000 each so that you can break the backbone data rates down to 1gbps. The mux/demux would have one 10-gige port and 12 1-gige ports. Packets received on the 1-gige ports would be transmitted on the 10-gige port in the order received. Packets received on the 10-gige port would be transmitted on the 1-gige ports in a more or less round-robin fashion. Two of the 1-gige ports would always be configured as backups with the carrier held low until a piece of equipment attached to one of the active ports failed.

You could then build a highly available 3 x 10-gige port plus 22 x 1-gige port "router" with the following components:

  3   $2000 10-gige mux/demuxes
 10   $3000 1U servers (packet forwarders, 5 gig-e ports each)
  1   $3000 1U server (BGP route manager, 2 gig-e ports)
  2   $3000 1U servers (hot spares, 5 gig-e ports each)
  2   $2000 24-port gig-e switches (interlink the 13 servers with redundancy)
 62   gig-e cables
 18   rack units

$50,000 total.

But you can start to get Cisco and Juniper routers with 3 10-gige interfaces in the neighborhood of $50k, and they neither take up 18 rack units nor consume as much electricity as those 13 servers.

On the other hand, commodity memory is cheap. You could expand those 1-gige software-based forwarders to handle 100M routes in the FIB for maybe another $10k. Since the theoretical limit for the count of prefixes /24 and shorter is less than 34M (every possible /0 through /24 taken together is 2^25 - 1, about 33.5M), that could be handy. A similar expansion in Cisco or Juniper big iron is not just expensive, it's hard.

And too, the notion of a Linux routing cluster is undeniably hot. :)

Regards,
Bill Herrin

-- 
William D. Herrin            herrin@dirtside.com  bill@herrin.us
3005 Crane Dr.               Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
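
A rough sketch of the demux-side behavior described above, as Python pseudocode: one 10-gige ingress fanned out more or less round-robin across ten active 1-gige ports, with the two standby ports promoted when equipment behind an active port fails. The class and method names are purely illustrative, not any real hardware or API; the mux direction is just the reverse, serializing whatever arrives on the 1-gige ports onto the 10-gige port in arrival order.

    # Illustrative sketch only: 12 x 1-gige ports, ten active plus two
    # standbys held with carrier low, fed from one 10-gige port.
    class Demux:
        def __init__(self, ports, spares=2):
            self.active = ports[:-spares]    # ports currently carrying traffic
            self.standby = ports[-spares:]   # backups, carrier held low
            self.cursor = 0                  # round-robin position

        def egress_for(self, packet):
            """Pick the next active 1-gige port, more or less round-robin."""
            port = self.active[self.cursor % len(self.active)]
            self.cursor += 1
            return port

        def port_failed(self, port):
            """Swap in a standby when the gear behind an active port dies."""
            if port in self.active and self.standby:
                i = self.active.index(port)
                self.active[i] = self.standby.pop(0)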
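
And a quick sanity check on the parts list above, using only the quantities and prices quoted there (the labels are just for this tally):

    parts = [
        ("10-gige mux/demux",     3, 2000),
        ("1U packet forwarder",  10, 3000),
        ("1U BGP route manager",  1, 3000),
        ("1U hot spare",          2, 3000),
        ("24-port gig-e switch",  2, 2000),
    ]
    total = sum(qty * price for _, qty, price in parts)
    print(total)  # 49000 -- roughly $50,000 once the 62 gig-e cables are in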