On December 28, 2017 at 17:48 owen@delong.com (Owen DeLong) wrote:
My worry is when pieces of those /64s get allocated for some specific use or non-allocation. For example: hey, ITU, here's half our /64s, it's only fair... and their allocations aren't generally available (e.g., available only to national-level providers, as is their mission).
Why would anyone give the ITU such an allocation? Worst I could imagine would be giving them a /3 same as IANA, but more likely IANA would issue them /12s like any other RIR. Even with that, since that would relieve the RIRs of responsibility for national providers, I think it still works out ok.
Give? That makes it sound so voluntary, which likely it will be presented as. It's just an example of some not completely unlikely near-future scenario. It is true that if someone were to get a hypothetical ITU allocation they probably wouldn't need an RIR allocation so mostly zero-sum in that case. Unless policies developed (from ITU) which amounted to hoarding or very "wasteful" (by someone's definition) usage.
So the problem isn't for someone who holds a /64 any more than people who are ok w/ whatever IPv4 space they currently hold.
It's how one manages to run out of new /64s to allocate, just as we have with, say, IPv4 /16s. If you have one or more /16s and that's enough space for your operation then not a problem. If you need a /16 (again IPv4) right now, that's likely a problem.
There’s quite a bit of difference between 65536 /16s and 18 quintillion /64s.
That's where 128 bits starts to feel smaller and 2^128 addresses a little superfluous if you can't get /bits.
Again, even you haven’t presented a credible scenario in which we deplete the /64 inventory at IANA, let alone the inventory the IETF holds in reserve that hasn’t been released to IANA. (IANA holds most of a /3, of which it has issued a little more than 5 of the 512 /12s it contains; the IETF is holding all of five /3 blocks and most of another two that haven’t yet been designated as unicast.)
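The counts being traded back and forth above can be checked with quick arithmetic (a sketch; the "5 issued /12s" figure is taken from the message itself, not from a live registry query):

```python
# Quick arithmetic behind the figures in the thread.

slash16s_v4 = 2 ** 16            # IPv4 /16s: 65,536
slash64s_v6 = 2 ** 64            # IPv6 /64s: "18 quintillion"
print(f"{slash64s_v6:,}")        # 18,446,744,073,709,551,616

# Each additional prefix bit doubles the block count, so a /3 holds
# 2^(12-3) = 512 /12s.
slash12s_per_slash3 = 2 ** (12 - 3)
issued = 5                       # "a little more than 5" /12s issued so far
print(f"{issued / slash12s_per_slash3:.2%} of the first /3 issued")
```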
This all started as more of a hypothetical, that 128 bits isn't all that huge other than when one starts talking about 2^128 (or 2^64, whatever.) I'll admit the POTS system has been very conservative about expanding their number space even as service became more ubiquitous, including the growth of mobile service which generally fits right into the POTS number schemes. (quoting the rest just for completeness, no more from me.)
My wild guess is if we'd just waited a little bit longer to formalize IPng we'd've more seriously considered variable length addressing with a byte indicating how many octets in the address even if only 2 lengths were immediately implemented (4 and 16.) And some scheme to store those addresses in the packet header, possibly IPv4 backwards compatible (I know, I know, but here we are!)
Unlikely. Variable length addressing in fast switching hardware is “difficult” at best. Further, if you use only an octet (which is what I presume you meant by byte) to set the length of the variable-length address, you have a fixed maximum address length of 255 or 256 octets, depending on whether you interpret 0 as 256 or as invalid, unless you create other reserved values for that byte.
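A minimal sketch of the length-octet scheme under discussion (the wire format and the choice to read 0 as 256 are hypothetical, purely to make the ambiguity concrete):

```python
def parse_addr(buf: bytes) -> bytes:
    """Read a variable-length address prefixed by a one-octet length.

    Hypothetical format: with only one octet for the length, the value 0
    must either be invalid or be repurposed; here it encodes 256, giving
    a hard ceiling of 256 octets per address.
    """
    length = buf[0] or 256           # convention chosen here: 0 means 256
    addr = buf[1:1 + length]
    if len(addr) != length:
        raise ValueError("truncated address field")
    return addr

# A 16-octet (IPv6-sized) address parses under the same format:
assert len(parse_addr(bytes([16]) + bytes(16))) == 16
# And the 0-as-256 convention yields the maximum length:
assert len(parse_addr(bytes([0]) + bytes(256))) == 256
```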
I was thinking 256 (255, 254 probably at most) octets of address, not bits.
One could effect sub-octet (e.g., 63 bits) addresses in other ways when needed, as we do now, inside a 128 bit (anything larger than 63) address field.
I think that 256 octet addressing would be pretty unworkable in modern hardware, so you’d have to find some way of defining and then over time changing what the maximum allowed value in that field could be.
Yes, it would be space to grow. For now we might say that a core router is only obliged to route 4 or 16 octets.
But if the day came when we needed 32 octets it wouldn't require a packet redesign, only throwing some "switch" that says ok we're now routing 4/16/32 octet addresses for example.
Probably a single router command or two on a capable router.
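In software terms, the "switch" being proposed might look something like the following (purely illustrative; no real router OS exposes such a knob, and the hardware objections in the reply still stand):

```python
# Illustrative only: a forwarding check keyed to a configurable set of
# accepted address lengths, standing in for the hypothetical "switch".
allowed_addr_lens = {4, 16}          # octets a core router must route today

def routable(addr: bytes) -> bool:
    return len(addr) in allowed_addr_lens

assert routable(bytes(16))
assert not routable(bytes(32))

# "Throwing the switch" when 32-octet addresses become necessary:
allowed_addr_lens.add(32)
assert routable(bytes(32))
```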
This reflects a gross misunderstanding of how fast switching hardware works today.
Now you’ve got all kinds of tables and data structures in all kinds of software that either need to pre-configure for the maximum size or somehow dynamically allocate memory on the fly for each session, and possibly more frequently than that.
That's life in the fast lane! What can I say, the other choice is we run out of address space. One would hope there would be some lead-in time to any expansion, probably years of warning that it's coming.
Well... I’m betting the first /3 lasts at least 50-100 years. If that’s true, the other 5+ should provide plenty of lead time for that so long as we get cracking once we exhaust the first /3.
Or we have to implement IPvN (N > 6) with new packet designs which is almost certainly even more painful.
Meh. Not necessarily. It would, for example, be nice to have a destination ASN field in the packet header or some other provision for locator/ID split.
At least that variable length field would be a warning that one day it might be larger than 16 octets and it won't take 20+ years next time.
That’s very optimistic to say the least.
You don’t have to dig very deep into the implementation details of variable length addressing to see that, even today, 20 years after the decision was made, it’s not a particularly useful answer.
It's only important if one tends to agree that the day may come in the foreseeable future when 16 octets is not sufficient.
Nope. If you make variable part of the spec, then you require responsible manufacturers to implement variable even if it’s unlikely that the variable will change within the life of the hardware.
One only gets choices, not ideals: a) run out of address space; b) redesign the packet format entirely; or c) use a variable length address which might well be sufficient for 100 years.
If we run out of ipv6 addresses in less than 100 years, I will be very surprised.
Can you provide a credible scenario under which that happens?
Each would have their trade-offs.
And we'd've been all set, up to 256 bytes (2K bits) of address.
Not really. There’s a lot of implementation detail in there, and I don’t think you’re going to handle 2 Kbit addresses very well on a machine with 32K of RAM and 2MB of flash (e.g., ESP8266-based devices and many other IoT platforms).
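A back-of-the-envelope check of that constraint (the 32 KiB figure is nominal, taken from the message, and everything else a real device needs is ignored):

```python
# How far 2 Kbit addresses go on an ESP8266-class device.
addr_bytes = 2048 // 8          # a 2 Kbit address is 256 bytes
ram_bytes = 32 * 1024           # nominal 32 KiB of RAM
# Ignoring code, stack, buffers, and every other data structure:
print(ram_bytes // addr_bytes)  # 128 bare addresses exhaust RAM
```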
Today's smart phones are roughly as powerful as ~20 year old multi-million dollar supercomputers. No one thought that would happen either.
Not entirely true.
But as I said it comes down to choices. Running out of address space is not very attractive either as we have seen.
Meh... I think actually running out will be an improvement over where we are today.
Owen
If wishes were horses...but I think what I'm saying here will be said again and again.
Not likely… At least not by anyone with credibility.
Too many people answering every concern with "do you have any idea how many addresses 2^N is?!?!" while drowning out "do you have any idea how small that N is?"
We may, someday, wish we had gone to some value of N larger than 128, but I seriously doubt it will occur in my lifetime.
I'll hang onto that comment :-)
Owen
--
        -Barry Shein

Software Tool & Die    | bzs@TheWorld.com             | http://www.TheWorld.com
Purveyors to the Trade | Voice: +1 617-STD-WRLD       | 800-THE-WRLD
The World: Since 1989  | A Public Information Utility | *oo*