In a message written on Tue, Jan 25, 2011 at 05:07:16PM -0500, Valdis.Kletnieks@vt.edu wrote:
> To burn through all the /48s in 100 years, we'll have to use them up at the rate of 89,255 *per second*.
>
> That implies either *really* good aggregation, or your routers having enough CPU to handle the BGP churn caused by 90K new prefixes arriving on the Internet per second. Oh, and hot-pluggable memory, you'll need another terabyte of RAM every few hours. At that point, running out of prefixes is the *least* of your worries.
If you were allocating individual /48s, perhaps. But see, I'm a cable company, and I want a /48 per customer. I have a couple of hundred thousand customers per pop, so I need a /30 per pop. Oh, and I have a few hundred pops, and I need to be able to aggregate regionally, so I need a /24 per region. By my calculations I just used 16M /48s, and it took me about 60 seconds to write that paragraph. That's about 279,620 per second, so I'm well above your rate. (A quick sanity check of both rates follows below my signature.)

To be serious for a moment, the problem isn't that we don't have enough /48s, but that humans are really bad at thinking about numbers this big. We're going from a very constrained world with limited aggregation (IPv4) to a world that seems very unconstrained, and building in a lot of aggregation. Remember, the very first IPv6 addressing proposals had a fully structured address space and only 4096 ISPs at the top of the chain! If we aggregate poorly, we can absolutely blow through all the space, stranding it in all sorts of new and interesting ways.

-- 
    Leo Bicknell - bicknell@ufp.org - CCIE 3440
    PGP keys at http://www.ufp.org/~bicknell/
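For anyone who wants to check the arithmetic above, here's the back-of-envelope in Python. The only assumption is 365-day years, which is what reproduces the 89,255 figure; everything else comes straight from the thread.

seconds_per_century = 100 * 365 * 24 * 3600   # 3,153,600,000

# Valdis's rate: every /48 in the 128-bit space, spent over a century.
total_48s = 2 ** 48                           # 281,474,976,710,656
print(total_48s // seconds_per_century)       # 89255 per second

# My rate: a /30 per pop holds 2^(48-30) /48s, a /24 per region holds
# 2^(48-24) -- the "16M" -- and I handed them out in ~60 seconds.
per_pop_48s = 2 ** (48 - 30)                  # 262,144, i.e. a couple hundred thousand customers
regional_48s = 2 ** (48 - 24)                 # 16,777,216
print(regional_48s // 60)                     # 279620 per second

Both outputs match the figures quoted above.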