Re:

Vadim Antonov

9 Jan 2001, 6:32 a.m.

You mean you really have any other option when you want to interconnect a few 300 Gbps backbones? :) Both of the boxes mentioned are in the 120 Gbps range, fabric-capacity-wise. If you think that's enough, I'd like to point to the DSL deployment rate. Basing exchange points on something that is already inadequate is a horrific mistake, IMHO.

Exchange points are major choke points, given that 80% or so of traffic crosses an IXP or a bilateral private interconnection. Despite the obvious advantages of shared IXPs, the private interconnects between large backbones were a forced solution, purely for capacity reasons.

--vadim
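A rough sketch of the arithmetic behind the capacity objection, in Python. The backbone count and the reading of the ~80% figure as a per-backbone interconnect share are illustrative assumptions, not figures Antonov gives:

    # Back-of-envelope check: can a ~120 Gbps fabric interconnect "a few"
    # 300 Gbps backbones?  Counts and shares below are assumptions.
    backbones = 4                  # "a few" 300 Gbps backbones (assumed)
    backbone_gbps = 300            # per-backbone capacity (from the post)
    interconnect_share = 0.8       # assume ~80% of each backbone's traffic
                                   # crosses an IXP or private interconnect
    fabric_gbps = 120              # fabric capacity of the boxes discussed

    offered = backbones * backbone_gbps * interconnect_share
    print(f"offered ~{offered:.0f} Gbps vs. {fabric_gbps} Gbps of fabric "
          f"({offered / fabric_gbps:.0f}x oversubscribed)")
    # Even a single 300 Gbps backbone already exceeds the 120 Gbps fabric.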

On Mon, 8 Jan 2001, Daniel L. Golding wrote:

...

There are a number of boxes that can do this, or are in beta. It would be a horrific mistake to base an exchange point of any size around one of them. Talk about difficulty in troubleshooting, not to mention managing, the exchange point. Get a Foundry BigIron 4000 or a Riverstone SSR: an exchange point in a box, so to speak. The Riverstone can support the inverse-mux application nicely on its own, as can a Foundry when combined with a Tiara box (the inverse-mux idea is sketched below the signature).

Daniel Golding
NetRail, Inc.
"Better to light a candle than to curse the darkness"
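A minimal sketch of the inverse-mux application referred to above: several parallel physical links presented as one logical pipe, with each flow hashed onto one link so packets within a flow stay in order. The link count, hash, and flow key are illustrative assumptions, not the Riverstone or Tiara implementation:

    import zlib

    LINKS = 4  # e.g., four parallel circuits acting as one logical link (assumed)

    def pick_link(src_ip, dst_ip, src_port, dst_port):
        # Hash the flow's 4-tuple; the same flow always maps to the same
        # link, so its packets are never reordered across links.
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
        return zlib.crc32(key) % LINKS

    # Two BGP sessions between the same routers may ride different links,
    # but each session's packets stay on one link.
    print(pick_link("10.0.0.1", "10.0.1.1", 1024, 179))
    print(pick_link("10.0.0.1", "10.0.1.1", 2048, 179))

Per-flow hashing keeps ordering at the cost of imperfect balance; per-packet striping balances better but can reorder packets, which TCP handles poorly.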

On Mon, 8 Jan 2001, Vadim Antonov wrote:

...

There's another option for IXP architecture: virtual routers over a scalable fabric. This is the only approach that combines the capacity of inverse-multiplexed parallel L1 point-to-point links with the flexibility of L2/L3 shared-media IXPs (one reading of the idea is sketched below). The box that can do this is in field trials (though I'm not sure the current release of the software supports that functionality).

--vadim
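One way to picture the virtual-router approach, under the assumption that "virtual routers" means per-participant routing instances multiplexed onto shared hardware. The names and table layout here are invented for illustration and say nothing about the field-trial box:

    from ipaddress import ip_network, ip_address

    class VirtualRouter:
        # One participant's isolated routing instance on the shared fabric.
        def __init__(self, name):
            self.name = name
            self.routes = []  # list of (prefix, next_hop); longest match wins

        def add_route(self, prefix, next_hop):
            self.routes.append((ip_network(prefix), next_hop))

        def lookup(self, dst):
            matches = [(p, nh) for p, nh in self.routes
                       if ip_address(dst) in p]
            return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

    # Each participant sees only its own table, while the fabric's capacity
    # (and the parallel L1 links beneath it) is pooled across all of them.
    a = VirtualRouter("backbone-A")
    a.add_route("192.0.2.0/24", "peer-B-port-3")
    print(a.lookup("192.0.2.17"))  # -> peer-B-port-3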
