Also, some of the original motivations behind CIDR start to go out the window when you have enough IP space to hand out huge chunks ahead of immediate need. Who cares about efficient utilization, or "but I only need a /35 and you gave me a whole /32, I'm wasting so much space", when there is no space shortage? Just allocate enough space that you will never need to upgrade; that does more good than trying to predict or restrict your usage and then creating more routing entries later when you need more space.
The original motivation behind this conversation stems from some long-held concerns that the address plan for IPv6 may not encompass our vision of a network that serves a device-dense world for many decades to come. http://www.potaroo.net/ispcol/2005-07/ipv6size.html contains one description of this concern.

At issue here are two somewhat different views of the deployment scenarios of the network. One view is to use the larger address space to make deployment incredibly simple, and eliminate some of the overhead we have in IPv4 in our efforts to make the IPv4 address space last. So the IPv6 address plan says /48s to end sites with no assessment of end-site address utilization. The IPv6 address policy says use an HD Ratio of 0.8 to assess the requirement for additional address allocations. The IPv6 address policy says use a minimum initial allocation of a /32. Looking at these parameters, and making some pretty rough calculations of what a device-dense world would look like, it would appear that this plan could accommodate an end-site population of between 50 and 200 billion - not comfortably, but it would fit. What if we have underestimated this population? In this view, the resolution is best expressed in RFC 3177: "Therefore, if the analysis does one day turn out to be wrong, our successors will still have the option of imposing much more restrictive allocation policies."

The other view is that by then the installed base will be so large (up to tens of billions of end sites) that any form of adjustment of the IPv6 address space will be extremely difficult, if not impossible. Within this view, the motivation is to set up an address plan that encompasses a margin for error in our assessments of future IPv6 network address requirements, while attempting to preserve as much of the essential simplicity of the address plan as possible. To this end, policy proposals to adjust the HD Ratio to 0.94 have been put forward in APNIC, ARIN and RIPE this year, and the reaction from the addressing communities to this proposal has been largely positive. But there is still the lingering doubt (at least personally) that we really don't know how big, and for how long, we need to rely on IPv6, and the 3 bits (roughly one order of magnitude) you are likely to get from this change in the HD Ratio may not be enough.

From this doubt came the second part of the proposal: to define a second end-site allocation point, namely a /56. This part of the proposal was not well received by any of these three policy fora. The reaction to this /56 end-site allocation point has been twofold: first, that it impacts the existing deployed base of IPv6; and second, and perhaps more fundamentally, that it is not the RIR's role to define what an end-site allocation may be within any particular ISP, and this definition of /48, /56 and /64 looks very reminiscent of the old classful address plan of IPv4 from over a decade ago. So it appears that this approach of simply defining another end-site allocation point does not have broad acceptance, and there is a clear message to go back and think some more about what the issue is here and how best to address it.

The real issue here is NOT the definition of allocation points per se. The address plan used by an ISP ultimately falls within the business scope of the ISP itself.
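As a rough sketch of the arithmetic behind these HD Ratio figures (assuming the RFC 3194 formulation, with the /48 taken as the end-site assignment unit; the helper name below is purely illustrative and not from any RIR policy text), the utilization threshold for a /32 works out as follows:

# Rough sketch only: HD Ratio utilization threshold per RFC 3194,
# counting /48 end-site assignments inside an allocation.

def hd_threshold(alloc_prefix, hd_ratio, site_prefix=48):
    total_sites = 2 ** (site_prefix - alloc_prefix)  # /48s in the allocation
    return total_sites ** hd_ratio                   # threshold = total ** HD

for hd in (0.8, 0.94):
    t = hd_threshold(32, hd)
    print("/32 at HD=%.2f: ~%d of %d /48s (%.1f%%)"
          % (hd, t, 2 ** 16, 100.0 * t / 2 ** 16))

# approximate output:
#   /32 at HD=0.80: ~7131 of 65536 /48s (10.9%)
#   /32 at HD=0.94: ~33689 of 65536 /48s (51.4%)

In other words, the higher ratio simply requires denser use of each allocation before a further allocation is made, which is where the extra headroom in the address plan comes from.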
The central issue, if you accept the premise that end-site allocations are up to the ISPs, is how to define the algorithm to be used by the RIRs to assess whether an existing allocation has been "fully utilized" in terms of end-site address allocations. So what algorithm could be used to assess address utilization in a manner that would provide _reasonable_ incentives to use the address space in a way that preserves IPv6's future utility across many decades to come (and encompasses a pretty large margin for error in our very imperfect views of the future)?

For me that's at the heart of this discussion, and of course I'd be very interested to hear what ideas others may have on this topic.

regards,

Geoff