On Tue, 10 Apr 2001, Majdi S. Abbas wrote:
> On Tue, Apr 10, 2001 at 08:27:54AM -0400, Greg Maxwell wrote:
> > The reason they don't allocate /24's is because without aggregation
> > the Internet is not scalable. Perhaps they are being too aggressive,
> > but the reasoning is sound.
> Aggregation buys time, that's it. Aggregation does not make the
> current routing methods any more scalable.
I'd be inclined to think that anything which allows a system to stretch
further without fundamental change implies increased scalability.
That's not to say it makes the system infinitely scalable, but does
anything?

Incidentally, how do people feel about using default routes to work
around the problem of routing table size on tier-2 (!) networks and
below? If all "small" edge networks pointed their default at one or
more of their upstreams, and filtered their outbound traffic to drop
anything they wouldn't want leaving their network anyway, it would be
left to the larger NSPs to deal with table size for prefixes beyond a
certain length. It doesn't really fix anything, since it reduces your
control over which path your outbound traffic takes, but I suppose at
least it makes sure the traffic goes -somewhere-. (A rough config
sketch of what I mean is at the bottom of this mail.)

On the flip side, who is actually less concerned about routing table
size: the multihomed networks on the edges, who can fall back on a
default if they want to, and who are likely carrying less traffic and
so have more resources to spare for routing, or the core networks,
which have capacity problems of their own?

Curious (and thinking aloud),

Patrick
--
Patrick Evans - Net bloke, indie kid and lemonade drinker
pre at pre dot org
tompaulin dot pre dot org
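For concreteness, here's roughly what I have in mind, as a Cisco-style
sketch. The addresses, AS numbers and interface name are all invented
for illustration; 198.51.100.0/24 stands in for the edge network's own
space, and 192.0.2.1 for the upstream's side of the link. A plain
static default would do just as well as the BGP version.

! Take only a default route from the upstream; announce only our own
! prefix. Everything else rides the default, so we never carry a full
! table.
ip prefix-list DEFAULT-ONLY permit 0.0.0.0/0
ip prefix-list OUR-SPACE permit 198.51.100.0/24
!
router bgp 64512
 neighbor 192.0.2.1 remote-as 64511
 neighbor 192.0.2.1 prefix-list DEFAULT-ONLY in
 neighbor 192.0.2.1 prefix-list OUR-SPACE out
!
! Egress filter: only traffic sourced from our own space leaves, so
! nothing we wouldn't want escaping gets out via the default anyway.
access-list 110 permit ip 198.51.100.0 0.0.0.255 any
access-list 110 deny ip any any
!
interface Serial0
 ip access-group 110 out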