On Tue, Apr 09, 2002, Henry Yen wrote:
> I don't exactly anticipate this ever happening. My observation is that
> the scaling will happen in the router area, i.e. as more and more small
> blocks get announced out of the class A/class B space, the growing
> ability of routers to hold more routes will tend to relax the typical
> filtering policies as time goes on. In other words, by the time we
> might encounter a problem, it'll no longer be a problem.
<topic mode=rant>
Back when routers had (relatively) small CPUs and (relatively) small
amounts of RAM, I'd say filtering (and other nice things such as flap
dampening) was introduced to stop those poor little routers from dying.
But nowadays routers have lots of CPU and lots of RAM, and somehow
people equate this with "can hold/munge larger routing tables". Well,
that's only partly true. You've (practically) taken CPU and memory off
the table, but the speed of light is still the same, and the routing
protocols are still the same - so what you'll see is that "stability"
is actually a function of your network characteristics _and_ your
router, rather than mainly the router.

Transmitting 100,000 routes still takes time. Even if your time to
parse and store each update is 0, you'll still have at least the route
fill delay (how long it takes for routing information to travel from
your peer to you) and the route propagation delay (how long it takes
for your route to appear all over the internet). Since those aren't 0,
they add up - and no amount of router CPU or router memory is (solely)
going to fix that.
</topic>

2c, take with some salt, etc.

adrian

--
Adrian Chadd                  "For a successful technology, reality must
<adrian@creative.net.au>       take precedence over public relations,
                               for nature cannot be fooled" - Feynman
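P.S. A rough back-of-envelope sketch of the two delays, in Python.
Apart from the 100,000-prefix figure from above, every number in it is
an assumption pulled out of the air for illustration - bytes per
prefix, usable link throughput, RTT, AS path length - not a
measurement:

    # Back-of-envelope: how long a full-table BGP transfer and
    # Internet-wide propagation take, independent of router CPU.
    # All constants below are illustrative assumptions.

    PREFIXES = 100_000        # full-table size from the post
    BYTES_PER_PREFIX = 60     # assumed avg on-the-wire cost incl. path attributes
    LINK_BPS = 2_000_000      # assumed usable BGP session throughput, bits/sec
    RTT = 0.100               # assumed 100 ms round-trip time to the peer

    # Route fill delay: just moving the bits, with zero parse/store time.
    transfer_s = (PREFIXES * BYTES_PER_PREFIX * 8) / LINK_BPS
    fill_s = transfer_s + RTT / 2

    # Route propagation delay: each AS hop typically batches outbound
    # updates for MinRouteAdvertisementInterval (commonly a 30 s default
    # for eBGP), so worst case it stacks up per hop.
    AS_HOPS = 5               # assumed typical AS path length
    MRAI_S = 30               # seconds per hop
    propagation_worst_s = AS_HOPS * MRAI_S

    print(f"route fill: ~{fill_s:.0f} s for the full table")
    print(f"propagation: up to ~{propagation_worst_s} s across {AS_HOPS} AS hops")

With those made-up-but-plausible numbers, just shipping the table takes
~24 seconds and MRAI batching alone can add a couple of minutes - and
none of that shrinks no matter how much CPU or RAM you bolt onto the
router.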