Alan,
......... Yakov Rekhter is rumored to have said:
]
] So, perhaps we should just look at the total amount of IP address space
] advertised by a provider in its routing advertisements, then divide
] this amount by the number of routes the provider advertises, and
] see whether the resulting number meets the goal.
]
But what is the goal?
One goal is to come up with a metric to measure how efficiently the routing system works (especially wrt aggregation), and how well individual providers manage to aggregate. This would allow us to look at parts of the system (e.g. individual providers) and see which of those parts need improvement. Also, if we had such a metric, we could look at how various mechanisms/incentives influence it.
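To make this concrete, here is a minimal sketch of that per-provider metric in Python, assuming we can extract (provider, prefix-length) pairs from a routing table dump; the input format and the provider names are hypothetical.

from collections import defaultdict

def aggregation_metric(routes):
    """Per provider: total IPv4 address space advertised, divided by the
    number of routes advertised.  Bigger is better (more addresses
    covered per forwarding-table entry).  Note that overlapping prefixes
    would be double-counted; a real tool would have to filter them out."""
    space = defaultdict(int)   # provider -> addresses advertised
    count = defaultdict(int)   # provider -> routes advertised
    for provider, prefix_len in routes:
        space[provider] += 2 ** (32 - prefix_len)
        count[provider] += 1
    return {p: space[p] / count[p] for p in space}

# Example: provider A announces one aggregated /16; provider B announces
# the same amount of address space as 256 separate /24s.
routes = [("A", 16)] + [("B", 24)] * 256
print(aggregation_metric(routes))   # {'A': 65536.0, 'B': 256.0}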
My capitalist nature says that the amount of address space one has should not be an issue. I'm not terribly sure how that enters into the metric. I'd be in favor of something that directly associates 'goodness' or 'cost' with the number of IP nodes one could route, or with the ratio of routes to nodes.
Ideally, as you suggested, it would be really nice to have a metric that tells us how efficiently the routing *and* addressing system works wrt providing routes to actual hosts, rather than to blocks of addresses (after all, the purpose of the routing system is to provide connectivity to hosts, not just to host-less addresses). Moreover, if we had such a metric, we might be able to come up with mechanisms/incentives that truly promote a scalable Internet - mechanisms/incentives that would both (a) drive towards more efficient address space utilization (thus imposing back pressure on consumption of one finite resource - IP address space), and (b) drive towards more route aggregation (thus imposing back pressure on consumption of another finite resource - forwarding tables).

In practice, getting this metric requires some way of knowing the number of hosts per prefix. We don't have any technology to do this, so we rely on a "simplifying assumption" - we assume that the amount of address space an Internet Registry allocates to a site on average reflects the number of hosts within the site. I am well aware of all the traps associated with this "simplifying assumption", but at the moment it is all we have.

So, to sum things up: if we measure how efficiently we route wrt address space (which we can do today), and if we augment this data with the "simplifying assumption", we can at least get some *rough* approximation of the "ideal" metric.
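In code, that rough approximation might look like the following sketch (again Python). Both the allocation sizes and the density factor are hypothetical stand-ins: the "simplifying assumption" only says that allocated space on average reflects host counts, not what the actual proportionality is.

def approx_hosts_per_route(allocations, num_routes, assumed_density=1.0):
    """Approximate the 'ideal' metric (hosts served per forwarding-table
    entry): per-site registry allocation sizes stand in for host counts,
    scaled by a hypothetical density factor, then divided by the number
    of routes it takes to reach those sites."""
    approx_hosts = sum(allocations) * assumed_density
    return approx_hosts / num_routes

# Example: sites allocated a /24, a /20, and a /16, all reachable via
# 10 routes; assume on average half of each allocation is populated.
print(approx_hosts_per_route([2**8, 2**12, 2**16], 10, 0.5))   # 3494.4

Yakov.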