Re: Using /126 for IPv6 router links
On 1/23/2010 9:47 PM, Owen DeLong wrote:
64 bits is enough networks that if each network was an almond M&M, you would be able to fill all of the great lakes with M&Ms before you ran out of /64s.
Did somebody once say something like that about Class C addresses?
The number of /24s in all of IPv4 would only cover 70 yards of a football field (in a single layer of M&Ms). Compared to filling the full three-dimensional volume of all five Great Lakes, I am hoping you can see the vast difference in the comparison.
Of course--I was asking about the metaphorical message implying "More than we can imagine ever needing". I remember a day when 18 was the largest number of computers that would ever be needed.

--
"Government big enough to supply everything you need is big enough to take everything you have."
Remember: The Ark was built by amateurs, the Titanic by professionals.
Requiescas in pace o email Ex turpi causa non oritur actio Eppure si rinfresca
ICBM Targeting Information: http://tinyurl.com/4sqczs http://tinyurl.com/7tp8ml
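For anyone who wants to sanity-check the scale behind the two candy metaphors, a quick back-of-the-envelope sketch follows. It deliberately avoids guessing at M&M dimensions or lake volumes and just compares the raw network counts, which is where the real difference lives:

```python
# Rough scale comparison between the /24 and /64 metaphors above.
# No candy or lake measurements here -- just the raw network counts.

num_slash24 = 2 ** 24   # /24 networks in all of IPv4 (~16.8 million)
num_slash64 = 2 ** 64   # /64 networks in all of IPv6 (~1.8e19)

ratio = num_slash64 // num_slash24  # = 2**40, about 1.1 trillion

print(f"/24s in IPv4: {num_slash24:,}")
print(f"/64s in IPv6: {num_slash64:,}")
print(f"IPv6 has {ratio:,} times as many /64s as IPv4 has /24s")
```

Whatever candy you pick, the jump from a single-layer football field to anything lake-sized is really the jump from 2^24 to 2^64: a factor of 2^40.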
On Sat, 23 Jan 2010 22:04:31 CST, Larry Sheldon said:
I remember a day when 18 was the largest number of computers that would ever be needed.
First off, it was 5, not 18. :)

Second, there's not much evidence that TJ Watson actually said it.
http://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquote

Third, given that IBM had already been shipping accounting units with limited plugboard programmability (the model 405) for almost a decade at that point, it's reasonable to conclude that TJ was intentionally and specifically talking about high-end "if you have to ask you can't afford it" systems. And if you look at the Top500 list now, 65 years later, it's still true - there's always 2-5 boxes that are *way* out in front, then a cluster in spots 5-20 or so, and then a *really* long tail on the way down to #500.

http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV4006.html
On 1/24/2010 10:03 AM, Valdis.Kletnieks@vt.edu wrote:
On Sat, 23 Jan 2010 22:04:31 CST, Larry Sheldon said:
I remember a day when 18 was the largest number of computers that would ever be needed.
First off, it was 5, not 18. :)
Second, there's not much evidence that TJ Watson actually said it.
http://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquote
I think the 18 was a UNIVAC blunder (I don't remember who supposedly said it). Given their corporate history, I can believe it.
Third, given that IBM had already been shipping accounting units with limited plugboard programmability (the model 405) for almost a decade at that point, it's reasonable to conclude that TJ was intentionally and specifically talking about high-end "if you have to ask you can't afford it" systems. And if you look at the Top500 list now, 65 years later, it's still true - there's always 2-5 boxes that are *way* out in front, then a cluster in spots 5-20 or so, and then a *really* long tail on the way down to #500.
http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV4006.html
It may surprise some folks to learn that there were several computer makers--IBM was not the first, not the best, and not the stupidest.
On Jan 23, 2010, at 8:04 PM, Larry Sheldon wrote:
On 1/23/2010 9:47 PM, Owen DeLong wrote:
64 bits is enough networks that if each network was an almond M&M, you would be able to fill all of the great lakes with M&Ms before you ran out of /64s.
Did somebody once say something like that about Class C addresses?
The number of /24s in all of IPv4 would only cover 70 yards of a football field (in a single layer of M&Ms). Compared to filling the full three-dimensional volume of all five Great Lakes, I am hoping you can see the vast difference in the comparison.
Of course--I was asking about the metaphorical message implying "More than we can imagine ever needing".
I remember a day when 18 was the largest number of computers that would ever be needed.
Do not make the mistake of assuming that just because I support using IPv6 as designed (at least for now) I am too young to remember those things myself.

While I wasn't born early enough to remember T.J. Watson's projected demand for 18 computers in the first person, I am quite familiar with the quote and the environment that fostered it. I am also familiar with the history of the internet and its 8-bit address precursor.

Yes, your point about demand expanding beyond expectation is well taken. However, I believe that the scale of the IP address space will accommodate at least a couple of orders of magnitude of expansion beyond any anticipated amount of address space demand. Further, the current IPv6 addressing scheme does come with a safety valve if people like me turn out to be wrong. If we're wrong, it will only affect 1/8th of the address space, and we can do something different with the other nearly 7/8ths, possibly setting a 5-10 year horizon for renumbering out of the first 1/8th into more condensed addressing schemes so that the original 1/8th isn't wastefully allocated.

Finally, we come to another key difference between IPv4 and IPv6, which is one of its best features and one of the things that has created the greatest controversy among legacy IPv4 holders: there is no IPv6 globally routable unicast space which is not issued by an RIR under contract with the recipient. Unlike in IPv4, where the ability to reclaim addresses (whether abandoned, underutilized, or otherwise) is murky at best, all IPv6 addresses are subject to a nominal annual fee and a contract which allows the RIRs to maintain proper stewardship over them.

If I were designing IPv6 today, would I reserve 1/2 the bits for the host address? No, I wouldn't do that. However, I do think there is benefit to a fixed-size host field. That said, the design we have is the design we have; it's too late to make fundamental changes prior to deployment. Stack implementations all have the ability to adapt to non-fixed-size networks if necessary down the road, but, for now, /64s are the best way to avoid surprises and move forward.

Owen
On Sun, 24 Jan 2010 08:57:17 -0800 Owen DeLong <owen@delong.com> wrote:
On Jan 23, 2010, at 8:04 PM, Larry Sheldon wrote:
On 1/23/2010 9:47 PM, Owen DeLong wrote:
64 bits is enough networks that if each network was an almond M&M, you would be able to fill all of the great lakes with M&Ms before you ran out of /64s.
Did somebody once say something like that about Class C addresses?
The number of /24s in all of IPv4 would only cover 70 yards of a football field (in a single layer of M&Ms). Compared to filling the full three-dimensional volume of all five Great Lakes, I am hoping you can see the vast difference in the comparison.
Of course--I was asking about the metaphorical message implying "More than we can imagine ever needing".
I remember a day when 18 was the largest number of computers that would ever be needed.
Do not make the mistake of assuming that just because I support using IPv6 as designed (at least for now) I am too young to remember those things myself.
While I wasn't born early enough to remember T.J. Watson's projected demand for 18 computers in the first person, I am quite familiar with the quote and the environment that fostered it. I am also familiar with the history of the internet and its 8-bit address precursor.
Yes, your point about demand expanding beyond expectation is well taken. However, I believe that the scale of the IP address space will accommodate at least a couple of orders of magnitude expansion beyond any anticipated amount of address space demand. Further, the current IPv6 addressing scheme does come with a safety valve if people like me turn out to be wrong. If we're wrong, it will only affect 1/8th of the address space and we can do something different with the other nearly 7/8ths, possibly setting a 5-10 year horizon for renumbering out of the first 1/8th into more condensed addressing schemes so that the original 1/8th isn't wastefully allocated.
Finally, we come to another key difference between IPv4 and IPv6 which is one of its best features and one of the things that has created the greatest controversy among legacy IPv4 holders. There is no IPv6 globally routable unicast space which is not issued by an RIR under contract with the recipient. Unlike in IPv4 where the ability to reclaim addresses (whether abandoned, underutilized, or otherwise) is murky at best, all IPv6 addresses are subject to a nominal annual fee and a contract which allows the RIRs to maintain proper stewardship over them.
If I were designing IPv6 today, would I reserve 1/2 the bits for the host address? No, I wouldn't do that.
Actually, from what Christian Huitema says in his "IPv6: The New Internet Protocol" book, the original IPv6 address size was 64 bits, derived from Steve Deering's Simple Internet Protocol proposal. IIRC, they doubled it to 128 bits to specifically have 64 bits as the host portion, to allow for autoconfiguration.
However, I do think there is benefit to a fixed-size host field. That said, the design we have is the design we have; it's too late to make fundamental changes prior to deployment. Stack implementations all have the ability to adapt to non-fixed-size networks if necessary down the road, but, for now, /64s are the best way to avoid surprises and move forward.
Owen
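The 64-bit host portion Mark mentions is exactly what makes stateless autoconfiguration via modified EUI-64 interface identifiers possible (per RFC 4291): a 48-bit MAC is padded to 64 bits and the universal/local bit is flipped. A minimal sketch of that derivation follows; the MAC address used is made up for illustration:

```python
def mac_to_eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a 48-bit MAC.

    Insert 0xFFFE between the OUI and NIC halves of the MAC, then
    flip the universal/local bit (0x02 of the first byte).
    """
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group the 8 bytes into four 16-bit hextets, IPv6-style.
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:04x}" for i in range(0, 8, 2))

# Example with a made-up MAC address:
print(mac_to_eui64_interface_id("00:1b:44:11:3a:b7"))
# -> 021b:44ff:fe11:3ab7
```

With a 64-bit host field guaranteed, a host can form its interface identifier locally and append it to any advertised /64 prefix without asking anyone for an address.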
On Jan 24, 2010, at 4:45 PM, Mark Smith wrote:
Actually, from what Christian Huitema says in his "IPv6: The New Internet Protocol" book, the original IPv6 address size was 64 bits, derived from Steve Deering's Simple Internet Protocol proposal. IIRC, they doubled it to 128 bits to specifically have 64 bits as the host portion, to allow for autoconfiguration.
Actually, Scott Bradner and I share most of the credit (or blame) for the change from 64 bits to 128.

During the days of the IPng directorate, quite a number of different alternatives were considered. At one point, there was a compromise proposal known as the "Big 10" design, because it was propounded at the Big Ten Conference Center near O'Hare. One feature of it was addresses of length 64, 128, 192, or 256 bits, determined by the high-order two bits. That deal fell apart for reasons I no longer remember; SIPP was the heir apparent at that point. Scott and I pushed back, saying that 64 bits was too few to allow for both growth and for innovative uses of the address. We offered 128 bits as a compromise; it was accepted, albeit grudgingly. The stateless autoconfig design came later.

--Steve Bellovin, http://www.cs.columbia.edu/~smb
On Sun, 24 Jan 2010 17:01:21 EST, Steven Bellovin said:
Actually, Scott Bradner and I share most of the credit (or blame) for the change from 64 bits to 128.
During the days of the IPng directorate, quite a number of different alternatives were considered. At one point, there was a compromise proposal known as the "Big 10" design, because it was propounded at the Big Ten Conference Center near O'Hare. One feature of it was addresses of length 64, 128, 192, or 256 bits, determined by the high-order two bits. That deal fell apart for reasons I no longer remember;
I don't remember the details of Big 10, but I do remember the general objection to variable-length addresses (cf. some of the OSI-influenced schemes) was the perceived difficulty of building an ASIC to do hardware handling of the address fields at line rate. Or was Big 10 itself the compromise to avoid dealing with variable-length NSAP-style addresses ("What do you mean, the address can be between 7 and 23 bytes long, depending on bits in bytes 3, 12, and 17?" :)
On Jan 24, 2010, at 6:26 PM, Valdis.Kletnieks@vt.edu wrote:
On Sun, 24 Jan 2010 17:01:21 EST, Steven Bellovin said:
Actually, Scott Bradner and I share most of the credit (or blame) for the change from 64 bits to 128.
During the days of the IPng directorate, quite a number of different alternatives were considered. At one point, there was a compromise proposal known as the "Big 10" design, because it was propounded at the Big Ten Conference Center near O'Hare. One feature of it was addresses of length 64, 128, 192, or 256 bits, determined by the high-order two bits. That deal fell apart for reasons I no longer remember;
I don't remember the details of Big 10, but I do remember the general objection to variable-length addresses (cf. some of the OSI-influenced schemes) was the perceived difficulty of building an ASIC to do hardware handling of the address fields at line rate. Or was Big 10 itself the compromise to avoid dealing with variable-length NSAP-style addresses ("What do you mean, the address can be between 7 and 23 bytes long, depending on bits in bytes 3, 12, and 17?" :)
Precisely. The two bits could feed into a simple decoder that would activate one of four address handlers; depending on your design, they could all run in parallel, with only the output of one of them used. There were only four choices, all a multiple of 8 bytes.

But my goal is not to revisit the design issues, but rather to clarify the historical record.

--Steve Bellovin, http://www.cs.columbia.edu/~smb
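The decoder Steve describes can be sketched in a few lines. This is purely illustrative of the idea as recounted here, not any shipped protocol, and the exact bit-to-length mapping (00 through 11 in ascending order) is an assumption:

```python
def big10_address_length(first_byte: int) -> int:
    """Return the address length in bits implied by the high-order two
    bits of the first address byte, under the "Big 10" scheme described
    above. Assumed mapping: 00 -> 64, 01 -> 128, 10 -> 192, 11 -> 256.
    """
    selector = (first_byte >> 6) & 0b11  # extract the top two bits
    return (selector + 1) * 64           # four fixed choices, each a multiple of 64

print(big10_address_length(0b00000000))  # 64
print(big10_address_length(0b01000000))  # 128
print(big10_address_length(0b11111111))  # 256
```

Four fixed choices, all multiples of 8 bytes, are trivial for hardware to dispatch on, which is precisely the contrast with NSAP-style addresses whose length depends on bits scattered throughout the address.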
On Sun, 24 Jan 2010 18:41:18 -0500 Steven Bellovin <smb@cs.columbia.edu> wrote:
On Jan 24, 2010, at 6:26 PM, Valdis.Kletnieks@vt.edu wrote:
On Sun, 24 Jan 2010 17:01:21 EST, Steven Bellovin said:
Actually, Scott Bradner and I share most of the credit (or blame) for the change from 64 bits to 128.
During the days of the IPng directorate, quite a number of different alternatives were considered. At one point, there was a compromise proposal known as the "Big 10" design, because it was propounded at the Big Ten Conference Center near O'Hare. One feature of it was addresses of length 64, 128, 192, or 256 bits, determined by the high-order two bits. That deal fell apart for reasons I no longer remember;
I don't remember the details of Big 10, but I do remember the general objection to variable-length addresses (cf. some of the OSI-influenced schemes) was the perceived difficulty of building an ASIC to do hardware handling of the address fields at line rate. Or was Big 10 itself the compromise to avoid dealing with variable-length NSAP-style addresses ("What do you mean, the address can be between 7 and 23 bytes long, depending on bits in bytes 3, 12, and 17?" :)
Precisely. The two bits could feed into a simple decoder that would activate one of four address handlers; depending on your design, they could all run in parallel, with only the output of one of them used. There were only four choices, all a multiple of 8 bytes.
But my goal is not to revisit the design issues, but rather to clarify the historical record.
I think there's a lot of value in knowing why things are the way they are. It's common enough to see things that at face value appear to be overly complicated, e.g. classes or subnets in IPv4 compared to IPX's simple, flat network numbers, or appear unrealistic or "ridiculous", like IPv6's 128-bit addresses. However, I've found that once I know the problem that was trying to be solved, and what options there were to solve it, I usually understand why the particular solution was chosen, and most of the time agree with it.

The value I got out of Christian's book was not in going through the mechanisms of IPv6, but his perspective on what options there were to solve certain problems, and why the choices were made.

Regards,
Mark.
During the days of the IPng directorate, quite a number of different alternatives were considered. At one point, there was a compromise proposal known as the "Big 10" design, because it was propounded at the Big Ten Conference Center near O'Hare. One feature of it was addresses of length 64, 128, 192, or 256 bits, determined by the high-order two bits. That deal fell apart for reasons I no longer remember; SIPP was the heir apparent at that point. Scott and I pushed back, saying that 64 bits was too few to allow for both growth and for innovative uses of the address. We offered 128 bits as a compromise; it was accepted, albeit grudgingly. The stateless autoconfig design came later.
--Steve Bellovin, http://www.cs.columbia.edu/~smb
This historical record finally made me understand why we have up to /128 prefixes with /128 addresses, instead of what would best suit stateless autoconfig: up to /64 prefixes with /128 addresses.

Rubens
participants (6)
-
Larry Sheldon
-
Mark Smith
-
Owen DeLong
-
Rubens Kuhl
-
Steven Bellovin
-
Valdis.Kletnieks@vt.edu