Re: IPv4 address shortage? Really?
Christopher Morrow <morrowc.lists@gmail.com> wrote:
Todd Underwood would love your solution! Props!
I'm sure he would :) Though I can't claim credit for the idea... it's way too old; so old, in fact, that many people have forgotten all about it.

Mark Andrews <marka@isc.org> wrote:
This has been thought of before, discussed and rejected.
Of course it was Discussed and Rejected. I fall to my knees and beg forgiveness from those On High who bless us with Their Infinite Wisdom and Foresight. How could I presume to challenge Their Divine Providence? Mea culpa, mea maxima culpa.

...well, kind of. What you don't mention is that it was thought to be ugly and rejected solely on aesthetic grounds. Which is somewhat different from being rejected because it cannot work.

Now, I'd be the first to admit that using LSRR as a substitute for straightforward address extension is ugly. But so is iBGP, CIDR/route aggregation, running interior routing over CLNS, and (God forbid, for it is ugly as hell) NAT. Come to think of it, dual stack is even uglier. At least with the LSRR-based approach you can still talk to legacy hosts without building, and indefinitely maintaining, a completely new parallel legacy routing infrastructure.

Scott W Brim <scott.brim@gmail.com> wrote:
There are a number of reasons why you want IP addresses to be globally unique, even if they are not globally routed.
And do you have it now? The last time I checked, NAT was all over the place. Ergo, global address uniqueness (if defined as having unique interface address labels) is not necessary for practical data networking.

In fact, looking at two or more steps in the source route taken together as a single address gives you exactly what you want - global uniqueness - as long as you take care to alternate disjoint address spaces along the path and designate one of those spaces (the existing publicly routable space) as the root from which addressing starts.

Bill Manning <bmanning@vacation.karoshi.com> wrote:
just a bit of renumbering...
Ah, that's nice, but I don't propose expanding the use of NAT. Or renumbering on a massive scale. In fact, I want to remind people that NAT was never a necessity. It's a temporary fix which gave IPv4 a lot of extra mileage, and it became popular precisely because it didn't break networking too much while allowing folks to keep using the existing stuff. The real problem with NAT is called "P2P" (and I think it will become important enough to be the death of NAT).

Jima <nanog@jima.tk> wrote:
This seems like either truly bizarre trolling,
I guess you haven't been around NANOG (and networking) very long, or you'd be careful about calling me a troll :) What I want is to remind people that with a little bit of lateral thinking we can get a lot more mileage out of good old IPv4. Its death has been predicted many times already. (Let me remember... there was that congestion collapse, then it was the routing table overwhelming the IGPs, then there was that shortage of class Bs and routing tables outgrowing RAM in ciscos, then there was a heated battle over IP address ownership, and there was the Big Deal about n^2 growth of the iBGP mesh.) I don't remember what the deal was with Bob Metcalfe and his (presumably eaten) hat. Something about Moore's Law?
or the misguided idea of someone who's way too invested in IPv4 and hasn't made any necessary plans or steps to implement IPv6.
"Too invested in IPv4"? Like, the Internet and everybody on it? You know, I left the networking soapbox years ago, and I couldn't care less about the religious wars over the best ways to shoot oneself in the foot. The reason I moved to different pastures was sheer boredom. The last interesting development in networking technology was when some guy figured out that you can shuffle IP packets around faster than you can convert a lambda from photons to electrons - and thus showed that there's no technological limitation to the bandwidth of Internet backbones.
you'd have to overhaul software on many, many computers, routers, and other devices. (Wait, why does this sound familiar?)
You probably missed the whole point - which is that, unlike the dual-stack solution, using LSRR leverages existing, installed, and paid-for infrastructure.
too bad we don't have a plan that could be put into action sooner
The cynical old codgers like yours truly predicted that the whole IPv6 saga would come to precisely that - back when it was beginning. The reason is called the Second System Effect, of which IPv6 is a classic example. A truly workable and clean solution back then would have been to simply add more bits to IPv4 addresses (that's what options are for). Alas, a lot of people thought it would be very neat to replace the whole piston engine with a turbine powerplant instead of limiting themselves to changing the spark plugs and continuing on the way to the real job (namely, making moving bits from place A to place B as cheap and fast as possible).

Now, we don't have a problem of running out of IPv4 addresses - NAT takes care of that, and will continue to do so for many more years. We do have a problem of having no way[*] to initiate connections to boxes hidden behind NATs. Which is bad for P2P applications. To solve this problem, three things need to be done:

1) create network infrastructure which can haul packets with extended addresses (be it IPv4 or IPv6), and extend the network stacks in the OSes to support this transport;
2) create a way to map domain names to those addresses;
3) get applications to work with the extended addresses.

You also need a clean migration path from the legacy to the new architecture (which IPv6 lacks), just to stay in the realm of the practical. 1 and 3 are seriously expensive. 2 is relatively easy, as DNS has provisions for expansion which were actually used (instead of reinventing the wheel, thankfully). Issue #1 can be sidestepped by the use of LSRR, as I suggested. #2 and #3 are, to a considerable extent, already solved by the people implementing IPv6. Now, listen carefully: you don't have to use IPv6 packets to use IPv6 addresses.
In fact, all we need to get the benefits of extended addressing without going to the trouble of implementing native or tunneled IPv6 networks is a simple shim (either within kernels or within language libraries) which takes an IPv6 address/socket and translates it into an IPv4 socket with the appropriate LSRR option information attached. (A small subset of IPv6 addresses could be set aside for the purpose... one /32 or such; just so you can still have IPv6 in all its glory if you wish.) The entire amount of development for a Unix-like OS is about 500 lines of code (that's my educated guess, based on the gut feeling developed while I was hacking various Unix kernels for a living since v6).

My question to the community - is it interesting enough? I.e., should I spend my time writing an I-D on IPv4 Extended Addressing, or will I be dismissed as a troll for my trouble?

--vadim

[*] chownat exceeds even my tolerance for ugliness, though the exploit itself is an elegant use of a quasi-paradox from probability theory.
...well, kind of. What you don't mention is that it was thought to be ugly and rejected solely on the aesthetic grounds. Which is somewhat different from being rejected because it cannot work.
Now, I'd be first to admit that using LSRR as a substitute for straightforward address extension is ugly. But so is iBGP, CIDR/route aggregation, running interior routing over CLNS, and (God forbid, for it is ugly as hell) NAT.
No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path. It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On Tue, 08 Mar 2011 07:37:27 EST, Steven Bellovin said:
No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path. It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment.
Steve, you of all people should remember the other big reason why: pathalias tended to do Very Bad Things like violating the Principle of Least Surprise if there were two distinct nodes both called 'turtlevax' or whatever. That, and if you think BGP convergence sucks, imagine trying to run pathalias for a net the size of the current Internet. :)
On Mar 8, 2011, at 8:32:59 AM, Valdis.Kletnieks@vt.edu wrote:
On Tue, 08 Mar 2011 07:37:27 EST, Steven Bellovin said:
No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path. It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment.
Steve, you of all people should remember the other big reason why:
pathalias tended to do Very Bad Things like violating the Principle of Least Surprise if there were two distinct nodes both called 'turtlevax' or whatever. That, and if you think BGP convergence sucks, imagine trying to run pathalias for a net the size of the current Internet. :)
It wouldn't -- couldn't -- work that way. Leaving out longer paths (for many, many reasons) and sticking to 64-bit addresses, every host would have a 64-bit address: a gateway and a local address. For multihoming, there might be two or more such pairs. (Note that this isn't true loc/id split, since the low-order 32 bits aren't unique.) There's no pathalias problem at all, since we don't try to have a unique turtlevax section. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On Tue, 08 Mar 2011 08:43:53 EST, Steven Bellovin said:
It wouldn't -- couldn't -- work that way. Leaving out longer paths (for many, many reasons) and sticking to 64-bit addresses, every host would have a 64-bit address: a gateway and a local address. For multihoming, there might be two or more such pairs. (Note that this isn't true loc/id split, since the low-order 32 bits aren't unique.) There's no pathalias problem at all, since we don't try to have a unique turtlevax section.
Sticking to 64-bit won't work, because some organizations *will* try to dig themselves out of an RFC1918 quagmire and get reachability to "the other end of our private net" by applying this 4 or 5 times to get through the 4 or 5 layers of NAT they currently have. And then some other dim bulb will connect one of those 5 layers to the outside world...
On Mar 8, 2011, at 11:21:09 AM, Valdis.Kletnieks@vt.edu wrote:
On Tue, 08 Mar 2011 08:43:53 EST, Steven Bellovin said:
It wouldn't -- couldn't -- work that way. Leaving out longer paths (for many, many reasons) and sticking to 64-bit addresses, every host would have a 64-bit address: a gateway and a local address. For multihoming, there might be two or more such pairs. (Note that this isn't true loc/id split, since the low-order 32 bits aren't unique.) There's no pathalias problem at all, since we don't try to have a unique turtlevax section.
Sticking to 64-bit won't work, because some organizations *will* try to dig themselves out of an RFC1918 quagmire and get reachability to "the other end of our private net" by applying this 4 or 5 times to get through the 4 or 5 layers of NAT they currently have. And then some other dim bulb will connect one of those 5 layers to the outside world...
Those are just a few of the "many, many reasons" I alluded to... The "right" fix there is to define AA records that only have pairs of addresses. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On 3/8/11 2:32 PM, Valdis.Kletnieks@vt.edu wrote:
On Tue, 08 Mar 2011 07:37:27 EST, Steven Bellovin said:
No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path. It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment.

Steve, you of all people should remember the other big reason why:
pathalias tended to do Very Bad Things like violating the Principle of Least Surprise if there were two distinct nodes both called 'turtlevax' or whatever. That, and if you think BGP convergence sucks, imagine trying to run pathalias for a net the size of the current Internet. :)
No, no. That was Mel Pleasant and me, the RABID REROUTERs. And people weren't all THAT surprised.

But beyond that, I've actually done some analysis on doing nearly just that. If you think about it, there are about 300,000 entries, and this is not beyond the capacity of an O(n log n) algorithm like, for instance, Dijkstra in the modern world. And before you say "Ew! SPF for interdomain", we had precisely the same debate for IGPs back in 1990 or so. The only big difference is that exposing policy in SPF isn't that desirable.

And quite frankly, the idea has gone around a few times; the one that remains in my head was TRIAD, which was work done by Gritter and Cheriton.

Eliot
On Tue, 2011-03-08 at 07:37 -0500, Steven Bellovin wrote:
...well, kind of. What you don't mention is that it was thought to be ugly and rejected solely on the aesthetic grounds. Which is somewhat different from being rejected because it cannot work.
No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path.
Let me get this right... an important factor in the architectural decision was that the then-current OFRV implementation of a router was buggy-by-design? Worse, when having a choice between something which already worked (slow as it was - the IPv4 options) and something which didn't exist at all (the new L3 frame format), the chosen one was the thing which didn't exist. Any wonder it took so long to get IPv6 into any shape resembling working?
It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment.
Not really. The DNS change is trivial; and if a 64-bit extended IPv4 address were chosen (instead of a new address family), 80% of applications would only need to be recompiled with a different header file having long long instead of int in s_addr. Most of the rest would only need a change in a data type and maybe in custom address-to-string formats. Compare that with the try-one-address-family-and-if-that-fails-try-another logic which you need to build into every app with the dual-stack approach. Do you remember the mighty trouble with changing from 32-bit to 64-bit file sizes (off_t) in Linux? No? That's the point.

Valdis.Kletnieks@vt.edu wrote:
Steve, you of all people should remember the other big reason why: pathalias tended to do Very Bad Things like violating the Principle of Least Surprise
As the guy who implemented a country-wide domain-name e-mail router over UUCP, I remember this issue pretty well. In any case, it is not applicable if you structure the 32-bit address spaces into a tree. Which maps very nicely onto the real-life Internet topology.

Steven Bellovin wrote:
And then some other dim bulb will connect one of those 5 layers to the outside world...
A dim bulb has infinite (and often much subtler) ways of screwing up routing in his employer's network. Protecting against idiots is the weakest argument I've ever heard for an architectural design. (Now, I don't deny the value of designing UIs and implementation logic in a way which helps people avoid mistakes... how could I, having been doing the GPS Z to SQL just a few hours ago, in IMC :)

So. You pretty much confirmed my original contention that the choice was made not because of the technical merits of the LSRR or IPv4 extended address option, but merely because people wanted to build a beautifully perfect Network Two - at the expense of compatibility and ease of transition.

Well, I think IPv4 will outlive IPv6 for precisely this reason. Real-life users don't care about what's under the hood - but they do care that the stuff they used to have working keeps working. And real-life net admins will do whatever it takes to keep the users happy - even if it is ugly as hell.

--vadim
On Wed, 09 Mar 2011 03:34:18 PST, Vadim Antonov said:
Steven Bellovin wrote:
And then some other dim bulb will connect one of those 5 layers to the outside world...
Broken attribution alert - I wrote that, not Steve.
A dim bulb has infinite (and often much subtler) ways of screwing routing in his employer's network. Protecting against idiots is the weakest argument I ever heard for architectural design.
Yes, a dim bulb can do other things. That doesn't mean it's OK to simply ignore totally predictable failure modes. Consider BGP - what happens when some dim bulb manages to create a routing loop? What would have happened if the BGP designers had said "We're not going to worry about this because there's other things the dim bulb can do to hose himself"?
participants (4)

- Eliot Lear
- Steven Bellovin
- Vadim Antonov
- Valdis.Kletnieks@vt.edu