Hi,

Vint Cerf kindly sent through some more explanation.

Regards,
Mark.

Begin forwarded message:

Date: Sat, 3 Apr 2010 08:17:28 -0400
From: Vint Cerf <vint@google.com>
To: Mark Smith <nanog@85d5b20a518b8f6864949bd940457dc124746ddc.nosense.org>
Cc: Andrew Gray <3356@blargh.com>, NANOG List <nanog@nanog.org>
Subject: Re: legacy /8

When the Internet design work began, there were only a few fairly large networks around. ARPANET was one. The Packet Radio and Packet Satellite networks were still largely nascent. Ethernet had been implemented in one place: Xerox PARC. We had no way to know whether the Internet idea was going to work. We knew that the NCP protocol was inadequate for lossy network operation (think: PRNET and Ethernet in particular). This was a RESEARCH project. We assumed that national-scale networks were expensive, so there would not be too many of them. And we certainly did not think there would be many built for a proof of concept. So 8 bits seemed reasonable.

Later, with local networks becoming popular, we shifted to the class A-D address structure, and when class B was near exhaustion, the NSFNET team (I think specifically Hans-Werner Braun, but perhaps others also) came up with CIDR and the use of masks to indicate the size of the "network" part of the 32-bit address structure.

By 1990 (7 years after the operational start of the Internet and 17 years since its basic design), it seemed clear that the 32-bit space would be exhausted, and the long debate about IPng, which became IPv6, began. CIDR slowed the rate of consumption through more efficient allocation of network addresses, but now, in 2010, we face imminent exhaustion of the 32-bit structure and must move to IPv6.

Part of the reason for not changing to a larger address space sooner had to do with the fact that there were a fairly large number of operating systems in use, and every one of them would have had to be modified to run a new TCP and IP protocol. So the "hacks" seemed the more convenient alternative.

There had been debates during 1976 about address size, and proposals ranged from 32 bits to 128 bits to variable-length address structures. No convergence appeared and, as the program manager at DARPA, I felt it necessary to simply declare a choice. At the time (1977), it seemed to me wasteful to select 128 bits, and variable-length address structures led to a lot of processing overhead per packet to find the various fields of the IP packet format. So I chose 32 bits.

vint

On Fri, Apr 2, 2010 at 10:42 PM, Mark Smith <nanog@85d5b20a518b8f6864949bd940457dc124746ddc.nosense.org> wrote:
On Fri, 02 Apr 2010 15:38:26 -0700 Andrew Gray <3356@blargh.com> wrote:
Jeroen van Aart writes:
Cutler James R wrote:
I also just got a fresh box of popcorn. I will sit by and wait
I honestly am not trying to be a troll. It's just that every time I glance over the IANA IPv4 Address Space Registry I feel rather annoyed about all those /8s that were assigned back in the day, apparently without realising we might run out.
It was explained to me that many companies with /8s use it for their internal network and migrating to 10/8 instead is a major pain.
You know, I've felt the same irritation before, but one thing I am wondering - and perhaps some folks around here have been around long enough to know - what was the original thinking behind doing those /8s?
I understand that they were A classes and assigned to large companies, etc., but was it just not believed there would be more than 126(-ish) of these entities at the time? Or was it thought we would move on to a larger address space before we did? Or was it that things were just more free-flowing back in the day? Why were A classes even created? RFC 791 at least doesn't seem to provide much insight as to the 'whys'.
That's because RFC 791 is a long way from the original design assumptions of the Internet Protocols.
"A Protocol for Packet Network Intercommunication", Vinton G. Cerf and Robert E. Kahn, 1974, says -
"The choice for network identification (8 bits) allows up to 256 distinct networks. This size seems sufficient for the foreseeable future."
That view seems to have persisted up until at least RFC 761, January 1980, which still specified the single 8-bit network, 24-bit node address format. RFC 791, September 1981, introduced classes. So somewhere within that period it was recognised that 256 networks wasn't going to be enough. I'm not sure why the 32-bit address size was retained at that point - maybe it was because there would be a significant performance loss in handling addresses larger than what was probably the most common host word size at the time.
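For concreteness, here's a rough sketch in Python (the function names are mine, not from any RFC) of the two layouts: the fixed 8-bit network / 24-bit node split, and the leading-bit class rules that replaced it. It also shows where the "126(-ish)" figure for class A comes from.

    # Addresses are treated as plain 32-bit integers.

    def split_rfc761(addr):
        """RFC 761 layout: fixed 8-bit network number, 24-bit node field."""
        return addr >> 24, addr & 0x00FFFFFF

    def classify_rfc791(addr):
        """RFC 791 layout: the leading bits of the first octet pick the class."""
        first_octet = addr >> 24
        if first_octet < 128:   # 0xxxxxxx: class A - 7 network bits, 24 host bits;
            return "A"          # 128 values minus reserved 0 and 127 leaves 126 usable.
        if first_octet < 192:   # 10xxxxxx: class B - 14 network bits, 16 host bits.
            return "B"
        if first_octet < 224:   # 110xxxxx: class C - 21 network bits, 8 host bits.
            return "C"
        if first_octet < 240:   # 1110xxxx: class D - multicast.
            return "D"
        return "E"              # 1111xxxx: reserved.

    addr = (18 << 24) | 5          # 18.0.0.5, inside the old 18/8
    print(split_rfc761(addr))      # (18, 5)
    print(classify_rfc791(addr))   # A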
If you start looking into the history of IPv4 addressing - and arguably why it is so hard to understand and teach compared to other protocols such as Novell's IPX, AppleTalk, etc. - you'll find that everything added to extend the use of IP (classes, subnets, classless addressing) while avoiding increasing the address size past 32 bits is a series of very neat hacks. IPv4 is a 1970s protocol that has had to cope with dramatic and unforeseen success. It's not a state-of-the-art protocol any more, and shouldn't be used as an example of how things should be done today (at a minimum, I think later protocols like Novell's IPX and AppleTalk are far better candidates). It is, however, a testament to how successfully something can be hacked over time to continue to work far, far beyond its original design parameters and assumptions.
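And here's the classless piece of that series of hacks, sketched the same way (again Python, names mine): under CIDR the "network" part is just however many leading bits an explicit prefix length says it is, so the class table above stops mattering.

    def mask_from_prefix(prefix_len):
        """Build a 32-bit netmask from a prefix length, e.g. /13 -> 0xFFF80000."""
        return (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF

    def network_of(addr, prefix_len):
        """The network part is simply the address ANDed with the mask."""
        return addr & mask_from_prefix(prefix_len)

    addr = (172 << 24) | (20 << 16) | (1 << 8) | 7   # 172.20.1.7
    print(hex(mask_from_prefix(13)))                 # 0xfff80000, i.e. 255.248.0.0
    print(hex(network_of(addr, 13)))                 # 0xac100000, i.e. 172.16.0.0

The same 32 bits are carried in every packet throughout; only the interpretation of where "network" ends has changed, which is exactly why these retrofits could be deployed without touching the packet format.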
(IMO, if you want to understand the design philosophies of IPv6, you're better off studying IPX and AppleTalk than using your IPv4 knowledge. I think IPv6 is a much closer relative to those protocols than it is to IPv4. For example, router anycast addresses were implemented and used in AppleTalk.)
Possibly Vint Cerf might be willing to clear up some of these questions about the origins of IPv4 addressing.
Regards, Mark.