On Sat, 29 Sep 2001, E.B. Dreger wrote:
> Given your point about many companies wanting to multihome, I agree
> that we can easily exceed 1M routes.
It is of course important not to underestimate the demand for multihoming. But on the other hand, after being in this business for a while, it's also very easy to overestimate it. In the absence of hard numbers, I assert that there are fewer than 10k real multihomers. If we assume that future multihomers will behave and announce only a single route each, going from 10k to 1M multihomers is a factor of 100. Even in this business, few things grow that fast... Paul's comparison to .COM is not a good one: getting an additional domain doesn't cost you any hardware, but multihoming does.
> How many _should_ want to? Most everyone. How many _do_? I don't
> have the answer.
Multihoming costs a lot of money, so I doubt we will ever see a billion multihomers (the upper limit nobody on multi6 bothered to protest against; some considered even 100M possibly too low).
> 1. PI micro-allocations (e.g. /24) aligned on /19 (for example)
> boundaries. Need more space? Grow the subnet. One advertisement,
> because the IP space is contiguous.
In practice, this already happens. If you become an ISP, you get a /20 _allocated_ even if you don't get a lot of addresses _assigned_.
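The alignment trick above can be sketched in a few lines. A minimal illustration (the /19 size matches the proposal, but the specific prefixes and growth stages are invented for this example):

```python
# Sketch: a micro-allocation that grows inside a reserved, aligned /19
# never needs a second advertisement, because every larger assignment
# is still covered by the same aggregate. Prefixes are made up.
import ipaddress

aggregate = ipaddress.ip_network("10.32.0.0/19")  # block reserved at the RIR

# The holder starts with a /24 and later grows to a /22 and a /20;
# each stage is a subnet of the aggregate, so one route suffices.
for stage in ("10.32.0.0/24", "10.32.0.0/22", "10.32.0.0/20"):
    assert ipaddress.ip_network(stage).subnet_of(aggregate)

print("one advertisement covers all growth stages:", aggregate)
```

Since every assignment stays inside the reserved block, the routing table sees exactly one prefix no matter how often the holder grows.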
> Cost: Change of policy at RIRs.
And many innocent IPv4 addresses suffer.
> 3. I'd suggest merging "best" routes according to next-hop, but the
> CPU load would probably be a snag. Flapping would definitely be a
> PITA, since it would involve aggregation/de-aggregation of netblocks.
> Maybe have a waiting period before aggregating or de-aggregating when
> a route changes; after that wait (which should be longer than the
> time required to damp the route), proceed with netblock consolidation.
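The merging idea can be sketched as repeated pairwise aggregation: whenever two sibling prefixes resolve to the same next hop, replace them with their common supernet. This is a toy model, not a real FIB-compression algorithm (which would also have to handle covering routes and the flap timers mentioned above); the routes and next-hop names are invented:

```python
# Toy sketch of next-hop-based route aggregation: two sibling prefixes
# with the same next hop collapse into their parent supernet. The
# routes and next-hop names below are invented for illustration.
import ipaddress

def aggregate_by_next_hop(routes):
    table = {ipaddress.ip_network(p): nh for p, nh in routes}
    changed = True
    while changed:
        changed = False
        # Walk longest prefixes first so merges can cascade upward.
        for prefix in sorted(table, key=lambda n: -n.prefixlen):
            if prefix not in table or prefix.prefixlen == 0:
                continue
            parent = prefix.supernet()
            left, right = parent.subnets()
            if left in table and right in table and table[left] == table[right]:
                table[parent] = table.pop(left)  # merge siblings into parent
                del table[right]
                changed = True
    return table

routes = [("10.0.0.0/25", "peer-A"), ("10.0.0.128/25", "peer-A"),
          ("10.0.1.0/24", "peer-B")]
fib = aggregate_by_next_hop(routes)
# The two /25s via peer-A collapse into a single 10.0.0.0/24 route,
# while the /24 via peer-B is left alone.
print(sorted(str(p) for p in fib))
```

The CPU-load concern is visible even in the toy version: every route change can invalidate a merge, forcing the table to be partially de-aggregated and re-scanned, which is exactly where a waiting period would help.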
It would be an interesting project to design an algorithm that takes a BGP table and produces the shortest possible FIB that forwards traffic in accordance with that BGP table. For dual-homed networks, you should always be able to drop more than half the routes and install a default instead.

But the actual size is not really the problem: with some redesign you can put 2 GB of memory in a router. Also, it should be possible to encode this information much more efficiently, maybe even to the point that a route takes only a few bytes of memory. (http://www.muada.com/projects/bitmaprouting.txt)

The real problem is processing the updates. This scales as O(N log N): more routes mean more updates, and each update also takes longer because it has to be performed on a larger table. Fortunately, BGP is pretty brain-dead in this area; see http://www.research.microsoft.com/scripts/pubs/view.asp?TR_ID=MSR-TR-2000-74 for the count-to-infinity problem that BGP inherited from RIP. Fortunate, because it means a lot of improvement should be possible.

Iljitsch van Beijnum