On 9 feb 2011, at 21:26, William Herrin wrote:
You're kidding, right? How long did it take exactly to get where we are now with IPv6? 18, 19 years?
Tech like carrier NAT theoretically yields address reclamation in excess of 80%.
Most IPv4 space is unused anyway, but it's not being reclaimed much despite that. (How many IP addresses does the US federal government need? Few people would guess ~10 /8s, especially since many of those aren't even lit up.)
* The next protocol should really be designed to support interoperability with the old one from the bottom up. IPv6 does not.
That's because it's not the headers that are incompatible (the protocol translation is ok even though it could have been a bit better) but the addresses.
No, it's because decisions were made to try to abandon the old DFZ table along with IPv4 and institute /64 as a standard subnet mask. But for those choices, you could directly translate the IPv4 and IPv6 headers back and forth, at least until one of the addresses topped 32 bits. The transition to IPv6 could be little different than the transition to 32-bit AS numbers -- a nuisance, not a crisis. You recompile your software with the new IN_ADDR size and add IP header translation to the routers, but there's no configuration change, no new commands to learn, etc.
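(To make the "recompile with the new IN_ADDR size" idea concrete: code that already uses the address-family-agnostic resolver calls never hard-codes the address size, so in that world a rebuild really would be most of the work. A rough sketch of such a client, my own illustration and not anything from Bill's post:

  /* Rough sketch: a client that never looks inside the address,
   * so the size of the address is the resolver's problem,
   * not the application's. */
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <string.h>
  #include <unistd.h>

  int connect_to(const char *host, const char *port)
  {
      struct addrinfo hints, *res, *ai;
      int fd = -1;

      memset(&hints, 0, sizeof hints);
      hints.ai_family = AF_UNSPEC;       /* IPv4 or IPv6, whatever resolves */
      hints.ai_socktype = SOCK_STREAM;

      if (getaddrinfo(host, port, &hints, &res) != 0)
          return -1;

      for (ai = res; ai != NULL; ai = ai->ai_next) {
          fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
          if (fd < 0)
              continue;
          if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
              break;                     /* connected */
          close(fd);
          fd = -1;
      }
      freeaddrinfo(res);
      return fd;                         /* -1 on failure */
  }

Plenty of deployed code isn't written that way, of course, which is part of the problem.)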
It's not that simple. But I agree on one thing: it could have been better.

What needs to be done to move from IPv4 to IPv6 is three things:

1. the headers
2. the addresses
3. the applications

Today, only IPv6 applications can use IPv6 addresses, and only when IPv6 applications use IPv6 addresses will there be IPv6 headers.

It would have been better to roll out the headers first. That would have meant changes to routers, because routers touch the headers. But if translating between IPv6 with 32-bit addresses and IPv4 can be done with low overhead (which is more or less the case today, BTW), then it would have been possible to upgrade from IPv4 to IPv6 subnet by subnet. This way, there wouldn't have had to be any dual stack, and once people had deployed IPv6 they would have kept using it. Today, and especially some years ago, IPv6 would/will often atrophy after the initial deployment because of lack of use.

Apps could have been upgraded the same way they have been now, the only difference being that using an IPv4-mapped IPv6 address would mean generating an IPv6 packet, not an IPv4 packet as is done today.

Once 128-bit addresses are in use, going from an IPv6 subnet to an IPv4 subnet becomes problematic, but this can be solved with stateful translation, so most stuff keeps working anyway. And remember, servers and apps that can't work through a stateful translator can still get 32-bit addresses, so everything keeps working without trouble.

However, it's easy to come up with all of this in 2011. In 1993 the world was very different, and many things we take for granted today were still open questions then. It's very illuminating to read up on the mailing list discussions from back then. People complained about IPv6 a decade or half a decade ago, too. Back then there may have been a chance to come up with a different protocol as a successor to IPv4, but it's way too late for that now.

Anyway, most non-legacy applications support 128-bit addresses now, and we have a pretty decent NAT64 spec sitting in the RFC editor queue (even if I do say so myself), so it's just a matter of sitting tight until we're through the painful part of the transition.
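(As an aside, the address-mapping side of NAT64 is simple enough to show in a few lines: the IPv4 destination gets embedded in the low 32 bits of a /96 translation prefix, 64:ff9b::/96 being the well-known one. A minimal sketch, using a documentation address as a stand-in for an IPv4-only server:

  /* Minimal sketch, my illustration: synthesize an IPv6 address for an
   * IPv4-only destination by putting the 32-bit IPv4 address in the
   * low bits of the well-known 64:ff9b::/96 translation prefix. */
  #include <arpa/inet.h>
  #include <string.h>
  #include <stdio.h>

  int main(void)
  {
      struct in_addr  v4;
      struct in6_addr v6;
      char buf[INET6_ADDRSTRLEN];

      inet_pton(AF_INET,  "192.0.2.33", &v4);   /* IPv4-only server   */
      inet_pton(AF_INET6, "64:ff9b::",  &v6);   /* well-known prefix  */

      memcpy(&v6.s6_addr[12], &v4, 4);          /* last 32 bits = IPv4 */

      inet_ntop(AF_INET6, &v6, buf, sizeof buf);
      printf("%s\n", buf);                      /* 64:ff9b::c000:221  */
      return 0;
  }

The DNS64/NAT64 machinery does the same embedding on behalf of IPv6-only hosts, plus the stateful translation of the packets themselves.)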