I find it a bit interesting to follow this thread... There was a discussion in March where Douglas Fischer shared this picture, which shows that Amazon is already using 240/4 space internally: https://pasteboard.co/JRHNVKw.png I have heard the same from other sources, too (I am not an AWS customer, so I won't try to verify it). It is therefore probably nearly impossible to get it publicly routable. In my opinion it is like RFC 1918, the 100.64/10 CGN space, or the reserved test networks: do whatever you want with it internally, but you won't get it into the DFZ. The problem is not getting it routable, but getting it usable in a way that you could assign it to customers without causing massive support problems.

On 18.11.2021 at 21:54, John Gilmore wrote:
Steven Bakker <steven.bakker@ams-ix.net> wrote:
The ask is to update every IP stack in the world (including validation, equipment retirement, reconfiguration, etc)...

This raises a great question.
Is it even *doable*? What's the *risk*? What will it *cost* to upgrade every node on the Internet? And *how long* might it take?
We succeeded in upgrading every end-node and every router in the Internet in the late '90s and early 2000s, when we deployed CIDR. It was doable. We know that because we did it! (And if we hadn't done it, the Internet would not have scaled to world scale.)
So today, if we decide that unicast use of the 268 million addresses in 240/4 is worth doing, we can upgrade every node. If we do, we might as well support unicast on the other 16 million addresses in 0/8, the 16 million in 127/8, and the roughly 16 million reserved for 4.2BSD's pre-standardized subnet broadcast address that nobody has used since 1985. And take a hard look at another hundred million addresses in the vast empty multicast space that have never been assigned by IANA to anybody or anything. Adding the address blocks around the edges makes sense: you only have to upgrade everything once, and the 268 million addresses become closer to 400 million formerly wasted addresses. That would be worth half again as much to end users, compared to just doing 240/4!
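For a back-of-the-envelope sense of that arithmetic, the sketch below just adds up the block sizes mentioned above (the last two entries are the rough estimates from this message, not precisely worked-out allocations):

    # Rough sizes of the formerly reserved blocks discussed above.
    blocks = {
        "240/4 (class E)":            2 ** (32 - 4),   # 268,435,456
        "0/8":                        2 ** (32 - 8),   #  16,777,216
        "127/8":                      2 ** (32 - 8),   #  16,777,216
        "lowest address per subnet":  16_000_000,      # rough estimate from the text
        "unassigned multicast":       100_000_000,     # rough estimate from the text
    }

    for name, count in blocks.items():
        print(f"{name:>26}: {count:>12,}")
    print(f"{'total':>26}: {sum(blocks.values()):>12,}")   # roughly 418 million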
That may not be worth it to you. Or to your friends. But it would be useful to a lot of people -- hundreds of millions of people who you may never know. People who didn't get IP addresses when they were free, people outside the US and Europe, who will be able to buy and use them in 5 or 10 years, rather than leaving them unused and rotting on the vine.
We already know that making these one-line patches is almost risk-free. 240/4 unicast support is in billions of nodes already, without trouble. Linux, Android, macOS, iOS, and Solaris all started supporting unicast use of 240/4 in 2008! Most people -- even most people in NANOG -- didn't even notice. 0/8 unicast has been in Linux and Android kernels for multiple years, again with no problems. Unicast use of the lowest address in each subnet has recently landed in Linux and NetBSD (see the drafts for specifics). If anyone knows of security issues that we haven't addressed in the drafts, please tell us the details! There has been some arm-waving about a need to update firewalls, but most of these addresses have been usable as unicast on LANs and private networks for more than a decade, and nobody's reported any firewall vulnerabilities to CERT.
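To make the remaining work concrete: plenty of software still classifies 240/4 as reserved rather than as ordinary unicast, and the "patch" is essentially to relax that one check. A quick illustration using Python's standard ipaddress module (just one library's view, not the kernel code itself):

    import ipaddress

    # Many stacks and libraries still special-case the old class E block.
    for text in ("240.0.0.1", "203.0.113.1", "8.8.8.8"):
        ip = ipaddress.IPv4Address(text)
        print(f"{text:<12} is_reserved={ip.is_reserved}")
    # Only 240.0.0.1 is flagged; treating it as ordinary unicast means
    # dropping it from that reserved set.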
Given the low risk, the natural way for these unicast extensions to roll out is to simply include them in new releases of the various operating systems and router OSes that implement the Internet protocols. It is already happening; we're just asking that the process be adopted universally, which is why we wrote Internet-Drafts for the IETF. Microsoft Windows is the biggest laggard; it drops any packet whose destination OR SOURCE address is in 240/4. When standards said 240/4 was reserved for what might become future arcane (variable-length, anycast, 6to4, etc.) addressing modes, that made sense. It doesn't make sense in 2021. IPv4 is stable and won't be inventing any new addressing modes. The future is here, and all it wants out of 240/4 is more unicast addresses.
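The test such a filter applies is tiny; membership in 240/4 is just "the top four bits of the address are all ones". A sketch of that check, for illustration only (not any particular vendor's code):

    import socket
    import struct

    def in_240_4(dotted_quad: str) -> bool:
        """True if the address falls in 240.0.0.0/4 (historic class E)."""
        (addr,) = struct.unpack("!I", socket.inet_aton(dotted_quad))
        return (addr & 0xF0000000) == 0xF0000000

    print(in_240_4("240.0.0.1"))    # True  -- dropped by stacks that still filter class E
    print(in_240_4("239.255.0.1"))  # False -- multicast, outside 240/4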
By following the normal OS upgrade path, the cost of upgrading is almost zero. People naturally upgrade their OSes every few years. They replace their server or laptop with a more capable one that has the latest OS. Laggards might take 5 or 10 years. People's home WiFi routers break, or are upgraded to faster models, or they change ISPs and throw the old one out, every 3 to 5 years. A huge proportion of end-users get automatic over-the-net upgrades, via an infrastructure that had not yet been built for consumers during the CIDR transition. "Patch Tuesday" could put some or all of these extensions into billions of systems at scale, for a one-time fixed engineering and testing cost.
We have tested major routers, and none so far require software updates to enable most of these addresses (except for the lowest address per subnet). At worst, the ISP would have to turn off or reconfigure a bogon filter with a config setting. Also, many "Martian address" bogon lists are centrally maintained (e.g. https://team-cymru.com/community-services/bogon-reference/ ) and can easily be updated. We have found no ASIC IP implementations that hardwire in assumptions about specific IP address ranges. If you know of any, please let us know; otherwise, let's let that strawman rest.
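For a sense of how mechanical that filter change is, here is a toy version of a bogon check (the prefix list is a hypothetical excerpt, not the actual Team Cymru feed); de-bogonizing 240/4 amounts to removing one entry from it:

    import ipaddress

    # Hypothetical excerpt of a bogon list of the kind routers import.
    BOGONS = [ipaddress.ip_network(p) for p in (
        "10.0.0.0/8", "100.64.0.0/10", "192.168.0.0/16", "240.0.0.0/4",
    )]

    def is_bogon(prefix: str) -> bool:
        net = ipaddress.ip_network(prefix)
        return any(net.subnet_of(bogon) for bogon in BOGONS)

    print(is_bogon("240.1.2.0/24"))   # True today; False once 240.0.0.0/4 is dropped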
Our drafts don't propose to choose between public and private use of the newly usable unicast addresses (so the prior subject line that said "unicast public" was incorrect). Since the kernel and router implementation is the same in either case, we're trying to get those fixed first. There will be plenty of years and plenty of forums (NANOG, IETF, ICANN, IANA, and the RIRs) in which to wrestle the public-vs-private questions to the ground and make community decisions on actual allocations. But if we don't fix the kernels and routers first, none of those decisions would be implementable.
Finally, as suggested by David Conrad, there is a well-understood process for "de-bogonizing" an address range on the global Internet, once support for it exists in OSes. Cloudflare used it on 1.1.1.1; RIPE used it on 128.0/16 and on 2a10::/12. You introduce a global BGP route for some part of the range, stand up a server on it, and use various distributed measurement testbeds to see who can reach that server. When chunks of the Internet can't, an engineer figures out where the blockage is, and communicates with that ISP or vendor to resolve the issue. Lather, rinse, and repeat for a year or more, until reachability is "high enough".
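From a single vantage point, that reachability test is nothing more than "can I reach a server numbered out of the block". A minimal sketch, assuming a hypothetical test server address (a real campaign aggregates this from many vantage points on distributed testbeds):

    import socket

    TEST_SERVER = ("240.1.2.3", 443)   # hypothetical server inside the de-bogonized range

    def reachable(host_port, timeout=3.0) -> bool:
        """One vantage point's data point: can we open a TCP connection?"""
        try:
            with socket.create_connection(host_port, timeout=timeout):
                return True
        except OSError:
            return False

    # A measurement campaign collects this from many networks, then chases
    # down and contacts the ISPs or vendors where the connection fails.
    print("reachable" if reachable(TEST_SERVER) else "blocked somewhere on the path")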
Addresses that later end up allocated to private address blocks would never need 100% global reachability, but global testing would still help to locate low-volume OS implementations that might need to be updated. Addresses bought to number retail cellphones need not be as reachable as ones used on public-facing servers, etc. The beauty of a market for IP addresses, rather than one-size-fits-all allocation models, is that ones with different reachability can sell for different prices, at different times, into different niches where they can be put to use.
John