On Sat, 2011-03-12 at 08:00 -0500, William Herrin wrote:
> You're either building a bunch of big TCAMs or a radix trie engine with sufficient parallelism to get the same aggregate lookup rate. If there's a materially different third way to build a FIB, one that works at least as well, feel free to educate me. And while RIB churn doesn't grow in lockstep with table size, it does grow.
Radix trie traversal can be pipelined, with every step of the search done in a separate memory bank. The upper levels of the trie are small, while the lower levels contain a lot of gunk that is rarely touched, so the upper levels can be cached on-chip. FIB lookup is much easier than executing instructions the way CPUs do, precisely because packets are not dependent on each other, so you never need to stall the pipeline (as CPUs do on jumps; I'll skip the discussion of things like branch prediction and speculative execution). That didn't stop the folks at Intel from producing cheap silicon which executes instructions at astonishing speeds. Where TCAMs really shine is packet classification, but you generally don't need a huge TCAM to hold ACLs.
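To make the pipelining argument concrete, here is a rough C sketch of a fixed-stride multibit trie lookup. The names, the 8-bit stride, and the structure layout are mine for illustration, not any vendor's implementation; the point is only that each iteration of the loop is one independent memory read, which in hardware becomes one pipeline stage with its own memory bank.

#include <stdint.h>
#include <stddef.h>

#define STRIDE  8                      /* bits consumed per trie level        */
#define LEVELS  (32 / STRIDE)          /* 4 levels cover an IPv4 address      */
#define FANOUT  (1u << STRIDE)         /* 256 children per node               */

struct trie_node {
    uint32_t          next_hop;        /* 0 = no route stored at this node    */
    struct trie_node *child[FANOUT];   /* NULL where no longer prefix exists  */
};

/* Level 0 is small and hot, so it is the natural candidate for on-chip
 * SRAM; the deeper, rarely visited levels can sit in separate off-chip
 * memory banks, one per pipeline stage. */
uint32_t fib_lookup(const struct trie_node *root, uint32_t dst)
{
    const struct trie_node *n = root;
    uint32_t best = 0;                 /* best (longest) match seen so far    */

    for (int level = 0; n != NULL; level++) {
        if (n->next_hop)
            best = n->next_hop;        /* remember the longest prefix so far  */
        if (level == LEVELS)
            break;                     /* all 32 bits consumed                */
        /* One memory read per level, i.e. one pipeline stage in hardware.    */
        unsigned idx = (dst >> (32 - STRIDE * (level + 1))) & (FANOUT - 1);
        n = n->child[idx];
    }
    return best;                       /* 0 means fall back to default route  */
}

In silicon you would unroll this loop into LEVELS stages and clock a new destination address into stage 0 every cycle; because lookups for different packets don't depend on each other, there are no hazards to stall on.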
> Your favorite router manufacturer has made vague assertions about how they would build one given sufficient customer demand. So make a demand.
OFRV has a track record of producing grossly over-engineered devices, hardware-wise. I've heard a very senior hardware guy who came from OFRV claim that they do it deliberately, to raise the barriers to entry for competitors, though that doesn't make sense to me.

--vadim