On 3/12/11 5:00 AM, William Herrin wrote:
On Sat, Mar 12, 2011 at 2:17 AM, Joel Jaeggli <joelja@bogus.com> wrote:
I'm super-tired of the "but TCAMs are an expensive non-commodity part not subject to economies of scale" argument; it has been repeated ad nauseam since the RAWS workshop if not before.
You don't have to build a lookup engine around a TCAM, and in fact you can use less power by not doing so, even though you need more silicon to achieve the increased parallelism.
Hi Joel,
You're either building a bunch of big TCAMs or a radix trie engine with sufficient parallelism to get the same aggregate lookup rate.
The trie is working acceptably in 120Gb/s linecards today.
If there's a materially different 3rd way to build a FIB, one that works at least as well, feel free to educate me.
We don't need one; that's the point. Heroic measures are not in fact required. On the trie side it's the key length, O(m), that dictates the lookup time, so the worst case is bounded by the straightforward proposition "how do I forward to this /128". Having generally shorter routes would go a long way towards extending the route capacity of the device, but table size kills you on installing and updating the FIB, not on the search part.
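(To make the O(m) point concrete, here's a toy longest-prefix-match trie in Python. It's purely an illustration, not how any real FIB is laid out; hardware tries use wider strides and compressed nodes, but the worst case is still bounded by the key length, 128 bits for a /128, not by the table size.)

# Toy binary-trie longest-prefix match. Lookup walks at most one node
# per address bit, so the cost is O(key length), independent of how
# many prefixes are installed.

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # one child per bit value
        self.next_hop = None           # set if a prefix terminates here

def insert(root, prefix_bits, next_hop):
    """prefix_bits: iterable of 0/1 bits making up the prefix."""
    node = root
    for bit in prefix_bits:
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Walk at most len(addr_bits) nodes, remembering the last match."""
    node, best = root, None
    for bit in addr_bits:              # <= 128 iterations for IPv6
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children[bit]
        if node is None:
            return best
    if node.next_hop is not None:
        best = node.next_hop
    return best

# Example: a short prefix and a more specific one covering the same space.
root = TrieNode()
insert(root, [1, 0], "nh-A")
insert(root, [1, 0, 1, 1], "nh-B")
print(lookup(root, [1, 0, 1, 1, 0, 0]))   # -> "nh-B", the longest match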
And while RIB churn doesn't grow in lockstep with table size, it does grow.
It does; it just has to not grow faster than our ability to manage it. So long as that remains the case, managing the RIB remains in the domain of straightforward capacity management. Compressing RIB churn out of FIB updates, and tweaks to BGP state machines generally, is I think an area with a lot of room for innovation, but it doesn't have to involve the forwarding plane.
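(A rough sketch of what "compressing RIB churn out of FIB updates" could look like in principle: batch per-prefix changes over a short interval and install only the net result, so a prefix that flaps A->B->A inside one batch costs the FIB nothing. The function and names below are hypothetical, not any implementation's API.)

def coalesce(rib_events, current_fib):
    """rib_events: ordered (prefix, next_hop_or_None) pairs from the RIB.
       current_fib: dict of prefix -> currently installed next hop.
       Returns the minimal set of FIB operations for this batch."""
    pending = {}
    for prefix, next_hop in rib_events:
        pending[prefix] = next_hop          # later events overwrite earlier ones
    ops = []
    for prefix, next_hop in pending.items():
        if current_fib.get(prefix) == next_hop:
            continue                        # net change is zero, skip the FIB write
        ops.append(("delete" if next_hop is None else "install", prefix, next_hop))
    return ops

fib = {"192.0.2.0/24": "nh-A"}
events = [("192.0.2.0/24", "nh-B"),         # flap out...
          ("192.0.2.0/24", "nh-A"),         # ...and back: nets to a no-op
          ("198.51.100.0/24", "nh-C")]
print(coalesce(events, fib))                # -> [('install', '198.51.100.0/24', 'nh-C')]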
Either way when you boost from 1M to 10M you're talking about engineering challenges with heat dissipation and operating challenges with power consumption, not to mention more transistors.
We don't need 10 million routes today; in 3-5 years we need 2 million in the class of device that currently has 512K (a 36Mbit CAM) and has been in production for some time. That can already be done today with 64MB of RLDRAM. In the same time frame, the networks that currently need 2-million-route FIBs will need 4-5 million. To harp on RLDRAM since I'm somewhat familiar with it: clearly we need faster parts with lower power consumption, and it's timely that DDR3-derived parts should begin sampling at some point.
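(Back-of-envelope arithmetic behind those numbers. The per-entry sizes are illustrative assumptions, not any vendor's actual layout; the point is only that the same generation of commodity memory spans the gap.)

tcam_bits       = 36 * 2**20             # 36 Mbit TCAM
tcam_entry_bits = 72                      # a common 72-bit TCAM slice (assumption)
print(tcam_bits // tcam_entry_bits)       # 524288 -> the "512K" class of device

rldram_bytes     = 64 * 2**20             # 64 MB RLDRAM
trie_entry_bytes = 32                     # assumed bytes per installed prefix, trie node + next hop
print(rldram_bytes // trie_entry_bytes)   # 2097152 -> roughly the 2M-route target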
I'll be convinced it can be done for less than 2x cost when someone actually does it for less than 2x cost.
Part of the exercise is neither building nor buying the capacity before you need it. The later the needed features crystallize into silicon, the more likely they are to be usable.
Whether it's 2x cost or 1.2x cost, the point remains the same: we could have routers today that handle the terminal size of the IPv4 table without breaking the bank.
Your favorite router manufacturer has made vague assertions about how they would build one given sufficient customer demand. So make a demand.
A previous $employer crossed the 800K prefix count in FIBs on a couple of devices a while ago. I generally don't have too many cases where vendors roll their eyes and laugh hysterically when I talk about our projected FIB requirements; that's reserved for other features.
Regards, Bill Herrin