Daniel Senie wrote:
There are basically two issues: the forwarding table and BGP processing. Information in the forwarding table needs to be found *really* fast. Fortunately, it's possible to create data structures that make this achievable, to all intents and purposes, regardless of the size of the table. However, memory is a concern here, as you only have a few hundred nanoseconds to look up something in the routing table at 10 Gbps speeds.
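To put rough numbers on that claim, here's a back-of-the-envelope sketch in Python. The packet sizes are ones I picked for illustration; the point is only the shape of the arithmetic:

    # Back-of-the-envelope: per-packet lookup budget at 10 Gbps (10GigE).
    # The 20 bytes of overhead are the Ethernet preamble plus inter-frame gap.
    LINE_RATE_BPS = 10e9

    def lookup_budget_ns(frame_bytes, overhead_bytes=20):
        """Nanoseconds available per packet when running at line rate."""
        wire_bits = (frame_bytes + overhead_bytes) * 8
        return wire_bits / LINE_RATE_BPS * 1e9

    print(lookup_budget_ns(64))   # minimum-size frames: ~67 ns
    print(lookup_budget_ns(300))  # a rough "typical mix" size: ~256 ns

Worst case, with minimum-size frames, the budget is well under 100 ns; with a more typical packet mix it works out to a few hundred, which is where the figure above comes from.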
This is a solvable problem. Hardware lookups are quite sufficient. Forwarding tables stored in line cards can be aggregated to the extent the data permits. Any router with 10GigE interfaces whose operator actually cares about filling such pipes will have advanced hardware forwarding technology, and a price tag to support the development of same.
The bottom line in this discussion is cost. Technology can be had to do many things that are physically possible, but as you get closer to the limits of the physics, the costs go up. Further, the marginal costs (i.e., $ per packet per second) go up quite rapidly. If the size of the FIB grows faster than Moore's law (and BTW, for memory it's more like 2x every 3 years), then the costs go nuts, as you have to scale up using hardware parts that can't keep up. That also adds complexity. All of these costs end up factored into the cost of routers, which the ISPs must in turn factor into the cost of providing service if they are to stay in business.

The problem is that the cost of a decision to advertise a global prefix is borne by users around the globe. If there were a way to reallocate these costs to the site that decided to be multihomed, then the economics of the situation would balance. If advertising a single prefix cost, say, US $10K/yr, people would start making more rational decisions that didn't pollute the global table.
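To make the compounding argument concrete, here's a quick sketch. The memory growth rate comes from the 2x-every-3-years figure above; the FIB growth rate is a hypothetical number chosen purely for illustration:

    # Compound-growth sketch: memory density doubling every 3 years
    # (per the figure above, ~26%/yr) versus a FIB growing at a
    # hypothetical 40%/yr. Both rates are for illustration only.
    mem_growth = 2 ** (1.0 / 3.0)  # ~1.26x per year
    fib_growth = 1.40              # hypothetical table growth

    gap = 1.0
    for year in range(1, 11):
        gap *= fib_growth / mem_growth
        print(year, round(gap, 2))
    # The table outgrows the memory by ~3x over a decade, and the gap
    # keeps compounding -- which is why the costs "go nuts".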
Even 10 years ago it was evident that the routing table structures chosen by different manufacturers had significantly different performance characteristics. As there is no single data structure that defines the storage of this information, it may follow that there is no single formula for the impact of scaling.
In fact, almost all implementations now use some form of radix trie.
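For the curious, here's a minimal sketch of longest-prefix match in a binary trie. This is illustrative only; real FIBs use compressed trie variants, TCAMs, or other hardware assists:

    # Minimal binary trie doing longest-prefix match, illustrative only.
    # Production FIBs use compressed tries, TCAMs, or other hardware.
    class TrieNode:
        def __init__(self):
            self.children = [None, None]  # branch on the next address bit
            self.next_hop = None          # set if a prefix terminates here

    def insert(root, prefix_bits, next_hop):
        """prefix_bits is a string such as '110' (the prefix's leading bits)."""
        node = root
        for b in prefix_bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.next_hop = next_hop

    def lookup(root, addr_bits):
        """Walk the address bits, remembering the last prefix matched."""
        node, best = root, None
        for b in addr_bits:
            node = node.children[int(b)]
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop
        return best

    # Two toy prefixes (shown as leading bits): the more specific one wins.
    root = TrieNode()
    insert(root, '11', 'A')
    insert(root, '110', 'B')
    print(lookup(root, '1101'))  # prints 'B'

Note that the lookup cost depends on the address length, not on the number of prefixes in the table, which is why the structure scales "to all intents and purposes" regardless of table size.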
Over the past several years, the CPUs in routers have lagged considerably behind the fastest on the market. I suspect there's a fair bit of headroom at present between the route processing engines in core routers and the fastest CPUs currently for sale. As such, I have to wonder just how much growth we could absorb right now while still staying within the capabilities of today's available processors. Also consider that CPU power is far from the only issue: higher-speed memory continues to be developed, along with higher-speed bus architectures. System performance is made up of many factors.
Do you really want to keep all routers in the world on the CPU growth curve? Do you really want to pay the cost of replacing all of that hardware every time Intel comes out with a new processor? Again, yes, this is technically possible, but it comes with a cost. In an ideal world, the cost of running the routing subsystem would be linear in the amount of transit bandwidth at each hop, and nothing else. Unfortunately, the reality is that table growth and prefix flap drive costs up faster than that, and ISPs are being squeezed between costs and prices. In the long run, these costs will be passed on to the end user, or all of the ISPs will end up out of business.
Look out above! The sky is falling.
Not at all. It will be propped up by router prices. ;-)
I'm just saying we should be very conservative in allowing irreversible changes to unscalable aspects of IPv6.
I'd sure like to see a lot more thorough analysis than what you provided above before reaching that conclusion. History has certainly not sided with you. Back in the mid-1990s, we were told routers wouldn't scale, so we needed MPLS. While MPLS has found useful roles in the network, it wasn't needed as a replacement for IPv4 routing in the core. Several companies, including some startups, figured out ways to route packets quite quickly.
In the long run, I'd rather provide the ability to offer the services needed. This permits the companies looking for those services to flourish and helps the economies of the world. While there are challenges to be addressed, I believe those challenges will be well met by the equipment marketplace, and that innovation will also help the economies of the world. Artificial restraint does not result in expanded services or product innovations. If I had a way to vote on this, I'd vote to let the markets work.
Letting the markets work is a fine thought, but there are a few issues it will not address. The global DFZ routing table is a common resource, shared and polluted equally by everyone around the world. In a purely free-market world, Adam Smith suggests that everyone will act in their own best interests and pollute until the environment is no longer useful. This is frequently known as the "tragedy of the commons". In such situations, we normally install other mechanisms to ensure that pollution is constrained, either economically or through regulation. If you look at the way the phone network works, for example, adding a new area code to the NANP is painful because it means that all of the phone switches have to be updated, and so the phone network routing table is a regulated entity.

In the decentralized world of the Internet, we have a bigger problem: there is no clear entity that can impose the necessary regulatory pressures, and there is no commercial pressure either. All we can do is ask people to be good Internet citizens and to act locally for the global good. The challenge, of course, is that this is in almost no one's immediate best interest.

My preferred solution at this point is for the UN to take over management of the entire Internet and to issue a policy of one prefix per country. This will have all sorts of nasty downsides for national providers and folks that care about optimal routing, but it's the only way that I can see that will allow the Internet to continue to operate over the long term.

Tony