On Wed, 26 Oct 2005, Andre Oppermann wrote:
Blaine Christian wrote:
It does seem appropriate to consider gigabit-sized routing/forwarding table interconnects and to work on TCP performance optimization for BGP specifically, if any improvement remains. Combine those things with a chunky CPU and you are left with pushing data as fast as possible into the forwarding plane (you need speedy ASIC table updates here).
I guess you got something wrong here. Neither BGP nor TCP is, nor has ever been, a bottleneck for the subject of this discussion.
I think he's describing the initial table gather/flood and the later massaging of that into the FIB on the cards... which relates to his earlier comment that 'people still care about how fast initial convergence happens' (which is true).
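Fwiw, the TCP side of that initial flood is mostly a socket-tuning exercise rather than a protocol problem. A rough sketch in Go of the knobs I mean -- the peer address and buffer sizes are invented for illustration, and real implementations pick these per-platform:

package main

import (
	"fmt"
	"net"
	"time"
)

// Sketch: dial a BGP peer (TCP/179) and tune the socket for a fast
// initial table transfer. Address and sizes are illustrative only.
func dialPeer(addr string) (*net.TCPConn, error) {
	c, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return nil, err
	}
	tc := c.(*net.TCPConn)

	// Disable Nagle (Go's default, shown explicitly) so small UPDATE
	// packets are not held back during the initial flood.
	tc.SetNoDelay(true)

	// Large kernel buffers keep the window open over long RTTs while
	// the peer streams the full table.
	tc.SetReadBuffer(4 << 20)  // 4 MiB
	tc.SetWriteBuffer(4 << 20) // 4 MiB

	return tc, nil
}

func main() {
	c, err := dialPeer("192.0.2.1:179") // TEST-NET address, illustrative
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer c.Close()
	fmt.Println("session up, local", c.LocalAddr(), "remote", c.RemoteAddr())
}

Whether any of this still moves the needle on a modern stack is exactly the "if any improvement remains" question above.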
Another thing: it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cat's pajamas. Has anyone seen this? I suppose MBGP, with its separate tables, could be rather straightforward to parallelize in a multi-processor implementation, as opposed to one big table.
You may want to read this thread from the beginning. The problem is not the routing plane or routing protocol but the forwarding plane or ASICs
It's actually both... convergence is very, very important. Some of the conversation (which I admit I've only followed spottily) has covered this too.
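On the threading question: I haven't seen a shipping implementation do it, but sharding by MBGP table is the obvious shape, since each table's RIB is then owned by exactly one thread and needs no locks. A toy sketch in Go -- all names and types are invented for illustration, best-path selection elided:

package main

import (
	"fmt"
	"sync"
)

// Invented types; a real BGP speaker carries far more state.
type Update struct {
	Table  string // e.g. "ipv4-unicast", "vpnv4" -- the MBGP table key
	Prefix string
	// attributes elided
}

// One goroutine per table: every update for a table flows through a
// single worker, so per-table RIB state needs no locking.
func startWorkers(tables []string) (map[string]chan Update, *sync.WaitGroup) {
	var wg sync.WaitGroup
	chans := make(map[string]chan Update)
	for _, t := range tables {
		ch := make(chan Update, 1024)
		chans[t] = ch
		wg.Add(1)
		go func(table string, in <-chan Update) {
			defer wg.Done()
			rib := make(map[string]Update) // toy per-table RIB
			for u := range in {
				rib[u.Prefix] = u // best-path selection elided
			}
			fmt.Printf("table %s: %d prefixes\n", table, len(rib))
		}(t, ch)
	}
	return chans, &wg
}

func main() {
	chans, wg := startWorkers([]string{"ipv4-unicast", "ipv6-unicast", "vpnv4"})

	// Demultiplex a stream of updates to the owning table's worker.
	updates := []Update{
		{Table: "ipv4-unicast", Prefix: "198.51.100.0/24"},
		{Table: "ipv6-unicast", Prefix: "2001:db8::/32"},
		{Table: "vpnv4", Prefix: "203.0.113.0/24"},
	}
	for _, u := range updates {
		chans[u.Table] <- u
	}
	for _, ch := range chans {
		close(ch)
	}
	wg.Wait()
}

One big IPv4 unicast table is the hard case, of course; within a single table you'd have to shard by prefix hash instead, and best-path selection gets uglier.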
or whatever. Both have very different scaling properties. The forwarding plane is at a disadvantage here because it simultaneously faces growth in table size and less time to perform each lookup. With current CPUs you can handle a 2M-prefix DFZ quite well without killing the budget. For the
Really? Are you sure about that? Are you referring to the linecard CPU or the RIB->FIB creation CPU? (be it a monolithic design or a distributed design)
forwarding hardware this ain't the case, unfortunately.
This could be... I'm not sure I've seen a vendor propose the cost differentials though.
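To put some shape on the CPU-vs-ASIC point: a software lookup can spend a variable number of memory references per packet, while line-rate hardware has a fixed per-lookup time budget, so table growth hurts it first. A toy unibit LPM trie in Go -- the data structure only, not any vendor's FIB; real software FIBs use compressed multibit structures to cut memory references per lookup:

package main

import (
	"fmt"
	"net/netip"
)

// Toy unibit trie for IPv4 longest-prefix match.
type node struct {
	child [2]*node
	hop   string // next hop, "" if no route ends here
}

type trie struct{ root node }

func (t *trie) insert(p netip.Prefix, hop string) {
	n := &t.root
	addr := p.Addr().As4()
	for i := 0; i < p.Bits(); i++ {
		b := (addr[i/8] >> (7 - i%8)) & 1
		if n.child[b] == nil {
			n.child[b] = &node{}
		}
		n = n.child[b]
	}
	n.hop = hop
}

// lookup walks at most 32 bits: per-packet cost is bounded by prefix
// length, not table size, which is roughly why a 2M-prefix DFZ fits
// in a CPU's budget given enough cache.
func (t *trie) lookup(a netip.Addr) string {
	n := &t.root
	best := ""
	addr := a.As4()
	for i := 0; i < 32 && n != nil; i++ {
		if n.hop != "" {
			best = n.hop // remember longest match seen so far
		}
		b := (addr[i/8] >> (7 - i%8)) & 1
		n = n.child[b]
	}
	if n != nil && n.hop != "" {
		best = n.hop
	}
	return best
}

func main() {
	var t trie
	t.insert(netip.MustParsePrefix("198.51.100.0/24"), "ge-0/0/1")
	t.insert(netip.MustParsePrefix("198.51.0.0/16"), "ge-0/0/2")
	fmt.Println(t.lookup(netip.MustParseAddr("198.51.100.7"))) // ge-0/0/1
	fmt.Println(t.lookup(netip.MustParseAddr("198.51.7.7")))   // ge-0/0/2
}

The variable walk depth is exactly what a fixed-pipeline ASIC cannot afford at line rate, which is where the cost differential comes from.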