I don't respond to many of these threads, but I have to say I've contested this one too, only to have it beaten into my head that a /64 is "appropriate".. it still hasn't stuck, but unfortunately RFCs for other protocols now depend on the blocks being /64s.. It's a waste; even if we're "planning for the future", no single house needs a /64 sitting on its LAN.. or at least none I can sensibly think of o_O. Ryan On Fri, Sep 27, 2013 at 12:57 AM, <bmanning@vacation.karoshi.com> wrote:
Yup. Seen/heard all that. Even tooted that horn for a while. /64 is an artificial boundary - many/most IANA/RIR delegations are in the top /32, which is functionally the same as handing out traditional /16s. Some RIR clients are "bigger" and demand more, so they get the v6 equivalent of /14s or smaller. It's the _exact_ same model as v4 in the previous decade, with all the waste in the bottom /64.
It's tilting at windmills, but most of the community has "drunk the Kool-Aid" on wasteful v6 assignment. What a horrific legacy to hand to our children (and yes, it will come to that soon).
/bill
On Thu, Sep 26, 2013 at 01:18:50PM -0700, Darren Pilgrim wrote:
On 9/26/2013 1:07 PM, joel jaeggli wrote:
On Sep 26, 2013, at 12:29 PM, Darren Pilgrim <nanog@bitfreak.org> wrote:
On 9/26/2013 1:52 AM, bmanning@vacation.karoshi.com wrote:
sounds just like folks in 1985, talking about IPv4...
The foundation of that, though, was ignorance of address space exhaustion. IPv4's address space was too small for such large thinking.
The first discussion I could find about IPv4 runout in the email archives is circa 1983.
IPv6 is far more than large enough to support such allocation policies.
There are certain tendencies towards profligacy that might prematurely raise the question of IPv6 exhaustion, and we should be on guard against them; allocating enough /48s as part of direct assignments is probably not one of them.
That's just it, I really don't think we actually have an exhaustion risk with IPv6. IPv6 is massive beyond massive. Let me explain.
We have this idea of the "/64 boundary". All those nifty automatic addressing things rely on it. We now have two generations of hardware and software that would more or less break if we did away with it. In essence, we've translated an IPv4 /32 into an IPv6 /64. Not great, but still quite large.
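As a rough illustration of what that translation means (a quick Python sketch of my own, standard library only; the documentation prefix 2001:db8:1234:5678::/64 stands in for a real LAN):

    import ipaddress

    ipv4_total = 2 ** 32                     # every IPv4 /32, i.e. one host address each
    ipv6_64s = 2 ** 64                       # every possible IPv6 /64 prefix

    print(f"IPv4 addresses:    {ipv4_total:,}")
    print(f"IPv6 /64 prefixes: {ipv6_64s:,}")
    print(f"ratio:             {ipv6_64s // ipv4_total:,}x")

    # SLAAC's 64-bit interface identifiers are what pin the boundary at /64:
    lan = ipaddress.ip_network("2001:db8:1234:5678::/64")
    print(f"host bits left in {lan}: {128 - lan.prefixlen}")

Even with the bottom 64 bits written off, the "network half" still numbers 2^64 possible /64s - the square of the entire IPv4 address space.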
Current science says Earth can support ten billion humans. If we let the human population grow to three times that theoretical upper limit, a /64 for each human works out to roughly 2^35 /64's. If we're generous with Earth's carrying capacity, call it 2^36.
If we handed out /48's instead, so each human could give a /64 to each of their devices, the total is still only about 2^52 /64's. Those /48's would number existence at a rate of one /64 per human, one /64 per device, and a 65535:1 device-to-human ratio. That means we could allocate 4000::/3 just for Earth's humans and devices and never need another block for that purpose.
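Working that out explicitly (a back-of-the-envelope Python sketch; the population figures are just the assumptions above, nothing more precise):

    import math

    humans = 3 * 10_000_000_000              # three times a ten-billion carrying capacity
    generous = 2 ** 36                       # round up generously, as above

    for label, n in (("~30 billion humans", humans), ("generous 2^36 humans", generous)):
        slash64s_full = n * 2 ** 16          # a /48 each, every /64 inside it used
        print(f"{label}: one /64 each = 2^{math.log2(n):.1f}, "
              f"full /48 each = 2^{math.log2(slash64s_full):.1f} /64s")

    # 4000::/3 holds 2^(48-3) = 2^45 /48 prefixes; how much of it gets handed out?
    slash48s_in_slash3 = 2 ** (48 - 3)
    print(f"/48s available in 4000::/3: {slash48s_in_slash3:,}")
    print(f"fraction handed out (30e9): {humans / slash48s_in_slash3:.4%}")
    print(f"fraction handed out (2^36): {generous / slash48s_in_slash3:.4%}")

Either way, a /48 per human consumes well under a quarter of one percent of that single /3.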
That's assuming a very high utilisation ratio, of course, but really no worse than IPv4 is currently. The problem isn't allocation density, but router hardware. We need room for route aggregation and other means of compartmentalisation. Is a 10% utilisation rate sparse enough? At 10% utilisation, keeping the allocations to just 4000::/3, we'd need less than a single /60 for all those /48's. If 10% isn't enough, we can go quite a bit farther:
- 1% utilisation would fit all those /48's into a /62.
- A full /64 of those /48's would be 0.2% utilisation.
- 0.1%? We'd have to steal a bit and hand out /47's instead.
- /47 is ugly. At /52, we'd get 0.024% (one per 4096).
That's while maintaining a practice of one /64 per human or device, with 65535 devices per human. Introduce one /64 per subnet and sub-ppm utilisation is possible: that would be like giving a site a /44 (2^20 /64's) and the site only ever using the ::/64 of it.
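To put numbers on those per-site ratios (a small sketch; the metric here - /64s in use over /64s handed out - is my shorthand for "utilisation", assuming a single used /64 per assignment):

    # Per-prefix utilisation: /64s actually used divided by /64s handed out.
    def utilisation(prefix_len: int, used_64s: int = 1) -> float:
        """Fraction of a prefix's /64 subnets that are in use."""
        return used_64s / 2 ** (64 - prefix_len)

    for plen in (48, 52, 47, 44):
        print(f"one /64 used out of a /{plen}: {utilisation(plen):.6%}")

A /52 with one live /64 comes out at about 0.024%, and a /44 at roughly 0.95 ppm, which is where the sub-ppm figure comes from.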
Even with sloppy, sparse allocation policies and allowing limitless human and device population growth, we very likely can not exhaust IPv6.