On Mon, Oct 2, 2023 at 1:18 AM Nick Hilliard <nick@foobar.org> wrote:
> The difficulty with this is that if you end up with a FIB overflow, your router will no longer route.
Hi Nick,

That depends. When the FIB gets too big, routers don't immediately die; their performance degrades, much like oversubscription anywhere else in the system. With a TCAM-based router, the least specific routes get pushed off the TCAM, out of the fast path and up to the main CPU, so the PPS (packets per second) degrades really fast. With a DRAM+SRAM cache design, the least used routes fall out of the cache. They haven't actually been pushed out of the fast path, but the fast path gets a little bit slower. The PPS still degrades, just not as sharply as on the TCAM-based router.
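If it helps to make that concrete, here's a toy model of the two behaviors. It's purely illustrative: the capacities, the relative lookup costs, and the Zipf-ish traffic skew are numbers I made up, not anything measured from real hardware.

# Toy model, not vendor code: capacities, relative costs, and traffic
# skew below are invented purely to illustrate the shape of the curves.
import random
from collections import OrderedDict

FIB_ROUTES = 1_000_000   # routes the RIB wants installed (made up)
FAST_SLOTS =   800_000   # routes the fast path can actually hold (made up)
FAST_COST  = 1           # relative cost of a fast-path lookup
PUNT_COST  = 100         # relative cost of punting a packet to the main CPU
MISS_COST  = 5           # relative cost of a DRAM lookup on an SRAM-cache miss

def skewed_route():
    # Most traffic goes to a small set of prefixes (rough Zipf-like skew).
    return int(random.paretovariate(1.2)) % FIB_ROUTES

def tcam_style(lookups):
    # Overflowing routes were evicted outright; eviction doesn't track
    # popularity, so model it as a fixed chance that any given packet
    # matches an evicted route and gets punted to the CPU every time.
    installed = FAST_SLOTS / FIB_ROUTES
    cost = sum(FAST_COST if random.random() < installed else PUNT_COST
               for _ in range(lookups))
    return lookups / cost        # relative "PPS": packets per unit of work

def cache_style(lookups):
    # DRAM FIB with an SRAM cache: a miss costs a slower DRAM lookup and
    # pulls the route into the cache, pushing out the least-recently-used.
    cache, cost = OrderedDict(), 0
    for _ in range(lookups):
        route = skewed_route()
        if route in cache:
            cache.move_to_end(route)
            cost += FAST_COST
        else:
            cost += MISS_COST
            cache[route] = True
            if len(cache) > FAST_SLOTS:
                cache.popitem(last=False)
    return lookups / cost

if __name__ == "__main__":
    n = 200_000
    print("tcam-style relative PPS: ", round(tcam_style(n), 3))
    print("cache-style relative PPS:", round(cache_style(n), 3))

The point the toy numbers make: once the table overflows, the TCAM-style box punts a fixed slice of traffic to the CPU on every packet, while the cache-style box only pays a modest DRAM penalty on misses and keeps the hot routes fast, so its curve bends instead of falling off a cliff.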
> That said, there are cases where FIB compression makes a lot of sense, e.g. leaf sites, etc. Conversely, it's not a generally appropriate technology for a dense dfz core device. It's a tool in the toolbox, one of many.
The case for FIB compression deep in the core is... not as obvious as the case near the edge, for sure. But I wouldn't discount it on any installation that has a reasonably defined notion of "upstream," as opposed to installations where the only sorts of interfaces are either lateral or downstream.

Look at it this way. Here are some numbers from last Friday's BGP report:

BGP routing table entries examined: 930281
Prefixes after maximum aggregation (per Origin AS): 353509
Deaggregation factor: 2.63
Unique aggregates announced (without unneeded subnets): 453312

Obviously, adjacent routes to the same AS aren't always going to have the same next hop. But I'll bet you that they do more often than not, even deep in the core. Even if only half of the adjacent routes from the same AS share a next hop when found deep in the core, that's still roughly 30% compression by these numbers: maximum aggregation would shed 930281 - 353509 = 576772 prefixes, and half of that is about 288k, or 31% of the table. Keep 10% slack for transients and you still have a 20% net gain in your equipment's capability versus no compression.

Regards,
Bill Herrin

--
William Herrin
bill@herrin.us
https://bill.herrin.us/
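P.S. For anyone who wants the sibling-merge idea spelled out, here's a toy sketch. It is not any vendor's implementation: the prefixes and next-hop names are invented, and it ignores the interaction with covering less-specific routes that a real FIB compressor has to handle.

# Toy sketch of sibling-merge FIB compression, not any vendor's code.
# Prefixes and next-hop names are invented; it also ignores the covering
# less-specific routes a real compressor has to account for.
import ipaddress

fib = {
    ipaddress.ip_network("192.0.2.0/25"):      "nh-A",
    ipaddress.ip_network("192.0.2.128/25"):    "nh-A",  # same next hop: mergeable
    ipaddress.ip_network("198.51.100.0/25"):   "nh-A",
    ipaddress.ip_network("198.51.100.128/25"): "nh-B",  # different next hop: keep
}

def compress(fib):
    # Repeatedly merge sibling prefixes that share a next hop into their
    # covering aggregate, until nothing more can be merged.
    changed = True
    while changed:
        changed = False
        out = dict(fib)
        for net, nh in list(out.items()):
            if net.prefixlen == 0 or net not in out:
                continue                       # already merged this pass
            parent = net.supernet()
            sibling = next(s for s in parent.subnets() if s != net)
            if out.get(sibling) == nh:         # sibling present, same next hop
                del out[net], out[sibling]
                out[parent] = nh
                changed = True
        fib = out
    return fib

for net, nh in sorted(compress(fib).items()):
    print(net, "->", nh)

Running it collapses the two 192.0.2.0/25 halves (same next hop) into a single /24 and leaves the 198.51.100.0/25 pair alone (different next hops): four entries become three.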