At 12:16 -0400 7/19/02, Richard A Steenbergen wrote:
On Fri, Jul 19, 2002 at 11:00:38AM -0400, Daniel Golding wrote:
I think we are at the point where the vast majority of backbone routers can handle 200K+ routes, at least in terms of memory. The interesting point we are getting to is that the most popular router in the world for multihoming can't handle the routing table. I'm referring to the Cisco 3640, which has largely supplanted the venerable 2501 as the low-end multihomer's edge router of choice.
With a reasonable number of features turned on (e.g. SSH, netflow, CEF), the 3640 can't handle two full views anymore, due to its 128MB memory limit. While this may be a good thing for Cisco's sales numbers, in this winter of financial discontent, I wonder how this is affecting the average customer, and what is generally being installed to replace the 3640s.
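For a rough sense of why 128MB gets tight, a back-of-envelope sketch (every per-entry figure below is an assumption for illustration, not a measured number):

  ~113k prefixes x 2 full views x ~250 bytes per BGP path + attributes  ~= 55 MB
  CEF FIB and adjacency structures for the same prefixes                ~= 15-20 MB
  IOS image, I/O buffers, netflow cache, processes                      ~= 30-40 MB

That already crowds 128MB before the table grows any further, which matches what people are seeing on the 3640.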
If a 3640 customer can't handle multiple full views, why can't they filter some junk /24s themselves? This isn't really a good enough reason for backbone providers to do the filtering.
That was my thinking also. I would imagine a lot of customers want a full route view; it's what they are paying for, especially if they are an ISP or a large multihomed customer. They should set their own filtering policies then.
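For what it's worth, the customer-side filter is only a few lines of config. A minimal IOS sketch (the peer address, ASNs, and the /22 cut-off are made up for illustration; each shop would pick its own boundary), which accepts nothing longer than a /22 and points a default at the upstream to cover whatever gets dropped:

  ip prefix-list COARSE-ONLY seq 5 permit 0.0.0.0/0 le 22
  !
  router bgp 65001
   neighbor 192.0.2.1 remote-as 65000
   neighbor 192.0.2.1 prefix-list COARSE-ONLY in
  !
  ip route 0.0.0.0 0.0.0.0 192.0.2.1

The obvious trade-off is that this throws away legitimate /24 PI space along with the junk, which is exactly why it belongs on the customer's box as their own policy rather than in the backbone.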
As for the convergence time argument, the limiting factor is CPU time, not the number of routes or amount of data exchanged (though obviously more routes == more CPU). In the core, is there really that big a difference between 93k and 113k? On the borders, how much CPU time is saved vs. how much CPU time is burned doing the filtering?
I would assume a flapping session with a large backbone would cause much higher load and stress on the router than simply carrying a large table. It's the reason why some backbones have Draconian route dampening policies, and rightly so. I would love to see some engineers from the vendors weigh in on this (did I just say that?). Most brag that they can handle large tables without a problem. A good question might be: if a large backbone started flapping 150,000 routes, what would that do to the peers? Perhaps a better issue might be the CPU usage of complex route filters on large tables, as a limitation on performance.
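For reference, the dampening knob itself is a one-liner under the BGP process. A minimal IOS sketch (these particular values happen to be the IOS defaults: 15-minute half-life, reuse at 750, suppress at 2000, 60-minute max suppress time; the stricter policies people call Draconian push the suppress threshold and max-suppress-time well past these):

  router bgp 65001
   bgp dampening 15 750 2000 60

The CPU question is really about per-update work: every flap re-walks the inbound policy (prefix-lists, route-maps) and re-runs best-path selection, so dampening trades a little state per prefix for not paying that cost over and over.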
Which leaves us with the question of, are there still MSFC1s or other devices with 128MB memory limits in these networks which are hurting at 113k? Is there actually a legitimate technical need to filter off 20k routes, or are the people doing it stuck in a mental time warp from the days when it was a necessity?
Or, is it really just people trying to do the "correct" thing? If you see "almost" no change in connectivity after removing 20k of cruft, and the very few people who are broken are the ones who needed attention called to their poor route announcing skills anyways, maybe it's a good thing for reasons other than router performance?
An interesting thought: there are probably a great many engineers on this list who have /24s at home and don't enjoy being filtered. Some of us just get tired of re-IPing our servers. dave
-- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras PGP Key ID: 0x138EA177 (67 29 D7 BC E8 18 3E DA B2 46 B3 D8 14 36 FE B6)
-- David Diaz dave@smoton.net [Email] pagedave@smoton.net [Pager] Smotons (Smart Photons) trump dumb photons