Using IPv6 with prefixes shorter than a /64 on a LAN
The subject says it all... anyone with experience with a setup like this ? I am particularly wondering about possible NDP breakage. cheers! Carlos -- -- ========================= Carlos M. Martinez-Cagnazzo http://www.labs.lacnic.net =========================
as a test case, i built a small home network out of /120. works just fine. my home network has been native IPv6 for about 5 years now, using a /96 and IVI.

some thoughts:
disable RD/RA/ND.
none of the DHCPv6 code works like DHCP, so I re-wrote client and server code so that it does.
static address assignment is a good thing for services like DNS/HTTP.
secure dynamic update is your friend.

summary - it's not easy, vendors don't want to help. but it can be done.

--bill

On Mon, Jan 24, 2011 at 10:59:59AM -0200, Carlos Martinez-Cagnazzo wrote:
The subject says it all... anyone with experience with a setup like this ?
I am particularly wondering about possible NDP breakage.
cheers!
Carlos
-- -- ========================= Carlos M. Martinez-Cagnazzo http://www.labs.lacnic.net =========================
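A rough sketch of the kind of /120 layout bill describes, using Python's ipaddress module. The 2001:db8::/120 prefix and the host roles below are invented for illustration (documentation prefix), not his actual configuration; the point is simply that a /120 leaves 256 addresses, like an IPv4 /24, so static assignment of infrastructure addresses is straightforward.

import ipaddress

# A /120 LAN: 256 addresses, comparable to an IPv4 /24.
lan = ipaddress.ip_network("2001:db8:0:1::/120")
print(lan.num_addresses)  # 256

# Static assignments for infrastructure, as suggested for DNS/HTTP.
assignments = {
    "router": lan[1],
    "dns":    lan[2],
    "www":    lan[3],
}
for role, addr in assignments.items():
    print(f"{role:6} {addr}")

# Anything outside SLAAC's /64 assumption has to be configured statically
# or through a DHCPv6 implementation that hands out specific addresses.
print(lan[1] in lan)  # True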
bmanning@vacation.karoshi.com (bmanning) writes:
as a test case, i built a small home network out of /120. works just fine. my home network has been native IPv6 for about 5 years now, using a /96 and IVI.
some thoughts. disable RD/RA/ND. none of the DHCPv6 code works like DHCP, so I re-wrote client and server code so that it does. static address assignment is a good thing for services like DNS/HTTP. secure dynamic update is your friend.
summary - it's not easy, vendors don't want to help. but it can be done.
Right - /64 is an assumption that's hardcoded in many places. But it does work.
Doing a little introspection, I found myself realizing that one of the most bothersome aspects of the /64 boundary (for me, just speaking for myself here) is exactly that: the tendency toward hardcoding boundaries.

C.

On Mon, Jan 24, 2011 at 12:26 PM, Phil Regnauld <regnauld@nsrc.org> wrote:
bmanning@vacation.karoshi.com (bmanning) writes:
as a test case, i built a small home network out of /120. works just fine. my home network has been native IPv6 for about 5 years now, using a /96 and IVI.
some thoughts. disable RD/RA/ND. none of the DHCPv6 code works like DHCP, so I re-wrote client and server code so that it does. static address assignment is a good thing for services like DNS/HTTP. secure dynamic update is your friend.
summary - it's not easy, vendors don't want to help. but it can be done.
Right - /64 is an assumption that's hardcoded in many places.
But it does work.
Every time I see this question it's usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.

That said, any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though, and you're often crippling the advantages of IPv6. But in terms of "best practice" it is indeed 64-bit for every network, with the option of 126-bit prefixes for link networks and 128-bit loopback addresses.

You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier. The entire point of IPv6 having a 128-bit address space was to facilitate this and put an end to having to determine the network prefix length based on an (often incorrect) estimate of the number of hosts the network will need to accommodate. Use of 126-bit prefixes for point-to-point connections (link networks) is acceptable; use of 127-bit prefixes should be avoided, as outlined in RFC 3627.

So it really comes down to keeping it simple. Remember, we're dealing with exponentials here. A 64-bit address space isn't twice as large as a 32-bit address space; it's roughly 4.2 billion times larger. The 340 undecillion (that's 340 with 36 zeros after it) unique identifiers available with a 128-bit address space is blatantly excessive only if you don't factor in that the host segment is always intended to be 64 of those bits.

So if conservation of address space isn't the logic behind using a smaller prefix, then what is? Most people tunnel-vision on the fact that Stateless Address Auto-Configuration requires that a 64-bit prefix be advertised to work and assume that the best way to disable it is to use a prefix length other than 64. While you could do this, a far better way is simply not to announce the prefix with the Autonomous bit set to true. Every IPv6 implementation respects the value of the "A" bit on a prefix advertisement and will not use stateless configuration if it is not set, just as outlined in the RFC.

Another thing to consider is that most processors today lack operations for values larger than 64 bits. By separating the host and network segments at the 64-bit boundary you may be able to take advantage of performance optimizations that make the distinction between the two (and significantly reduce the cost of routing decisions, contributing to lower latency).

Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or widespread problem. The general feeling is that there is simply too much address space for it to be done in any reasonable amount of time, and there is almost nothing to be gained from it.

But yes, basic NDP, routing, forwarding, etc. should work fine with anything shorter than 126. I'm just not really sure of the logic behind doing so.

On Mon, Jan 24, 2011 at 7:59 AM, Carlos Martinez-Cagnazzo <carlosm3011@gmail.com> wrote:
The subject says it all... anyone with experience with a setup like this ?
I am particularly wondering about possible NDP breakage.
cheers!
Carlos
-- -- ========================= Carlos M. Martinez-Cagnazzo http://www.labs.lacnic.net =========================
-- Ray Soucy Epic Communications Specialist Phone: +1 (207) 561-3526 Networkmaine, a Unit of the University of Maine System http://www.networkmaine.net/
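A small illustration of the 64/64 split Ray describes (the prefix routes, the interface identifier names the host). The prefix and interface identifier below are made up; this only shows how an address decomposes at the 64-bit boundary, not how SLAAC or the RA "A" flag is implemented.

import ipaddress

prefix = ipaddress.ip_network("2001:db8:cafe:1::/64")  # routing half (made up)
iid = 0x021B21FFFE3A4B5C                               # 64-bit interface identifier (made up)

addr = ipaddress.ip_address(int(prefix.network_address) | iid)
print(addr)  # 2001:db8:cafe:1:21b:21ff:fe3a:4b5c

# The address splits cleanly at the 64-bit boundary:
as_int = int(addr)
print(hex(as_int >> 64))          # network half
print(hex(as_int & (2**64 - 1)))  # host (interface identifier) half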
On Mon, Jan 24, 2011 at 1:53 PM, Ray Soucy <rps@maine.edu> wrote:
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem. The general feeling is that there is simply too much address space for it to be done in any reasonable amount of time, and there is almost nothing to be gained from it.
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a target's /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND cache, depending on the specific ND cache implementation and how big it is, etc. Routers can also act as amplifiers, DDoSing every host within a multicast ND directed solicitation group (and THAT is even assuming a correctly functioning switch that's limiting the multicast traffic).

Add to it the assumption that every router gets certain things right (like everything correctly decrementing TTLs as assumed in RFC 4861 section 11.2 in order for hosts to detect off-link RA/ND messages and guard themselves against them); in these ways it's certainly at least somewhat worse than ARP.

If you're able to bring down, or severely limit, a site by sending a couple thousand PPS towards the /64 it's on, or by varying the upper parts of the /64 to flood all the hosts with multicast traffic while simultaneously flooding the router's LRU ND cache, well, that's a cheap and easy attack and it WILL be used, and that can be done with the protocols working as designed, at least from my reading. Granted, I don't have an IPv6 lab to test any of this, but I'd be willing to bet this exact scenario is readily and easily possible. It already is with ARP tables (and it DOES happen; it's just harder to make happen with ARP and IPv4 since the space is so small, especially when compared to a /64). IPv6 ND caches/tables aren't going to be anywhere near big enough to handle a single /64's worth of hosts, and if they're any significant amount smaller then it'd be trivial to cause a DoS by sweeping the address space. It would depend on the ND table limits/sizes, any implementation-specific timers and garbage collection, and some other details I don't have, but I bet it'd be a really small flow in the scheme of things to completely stomp out a /64. Someone I'm sure knows more about the implementations, and I'm betting this has been brought up before about IPv6/ND...

So I pretty strongly disagree with your statement. Repetitively sweeping an IPv6 network to DoS/DDoS the ND protocol, thereby flooding the ND cache/LRUs, could be extremely effective and, if not paid serious attention, will cause serious issues.
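A back-of-the-envelope sketch of the neighbor-cache concern above. The cache size, sweep rate, and hold time for incomplete entries are assumptions chosen purely for illustration; real routers vary widely.

# Assumed figures, not measurements of any particular router.
nd_cache_entries = 64_000   # neighbor-cache capacity (assumed)
scan_rate_pps    = 2_000    # sweep rate aimed at the target /64 (assumed)
incomplete_hold  = 3        # seconds an INCOMPLETE entry is retried/held (assumed)

# Each probe to an unused address can create an INCOMPLETE cache entry on the router.
seconds_to_fill = nd_cache_entries / scan_rate_pps
print(f"cache full after ~{seconds_to_fill:.0f} s of sweeping")  # ~32 s

# Entries sitting in the cache at any instant under a sustained sweep.
in_flight = scan_rate_pps * incomplete_hold
print(f"~{in_flight} incomplete entries held at any moment")

# Attacker bandwidth for 64-byte probes -- tiny by DDoS standards.
print(f"attack traffic: ~{scan_rate_pps * 64 * 8 / 1e6:.1f} Mbit/s")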
On 24/01/2011 22:41, Michael Loftis wrote:
On Mon, Jan 24, 2011 at 1:53 PM, Ray Soucy<rps@maine.edu> wrote:
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem. The general feeling is that there is simply too much address space for it to be done in any reasonable amount of time, and there is almost nothing to be gained from it.
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a target's /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND cache, depending on the specific ND cache implementation and how big it is, etc. Routers can also act as amplifiers, DDoSing every host within a multicast ND directed solicitation group (and THAT is even assuming a correctly functioning switch that's limiting the multicast traffic).
Add to it the assumption that every router gets certain things right (like everything correctly decrementing TTLs as assumed in RFC 4861 section 11.2 in order for hosts to detect off-link RA/ND messages and guard themselves against them); in these ways it's certainly at least somewhat worse than ARP.
If you're able to bring down, or severely limit, a site by sending a couple thousand PPS towards the /64 it's on, or by varying the upper parts of the /64 to flood all the hosts with multicast traffic while simultaneously flooding the router's LRU ND cache, well, that's a cheap and easy attack and it WILL be used, and that can be done with the protocols working as designed, at least from my reading. Granted, I don't have an IPv6 lab to test any of this, but I'd be willing to bet this exact scenario is readily and easily possible. It already is with ARP tables (and it DOES happen; it's just harder to make happen with ARP and IPv4 since the space is so small, especially when compared to a /64). IPv6 ND caches/tables aren't going to be anywhere near big enough to handle a single /64's worth of hosts, and if they're any significant amount smaller then it'd be trivial to cause a DoS by sweeping the address space. It would depend on the ND table limits/sizes, any implementation-specific timers and garbage collection, and some other details I don't have, but I bet it'd be a really small flow in the scheme of things to completely stomp out a /64. Someone I'm sure knows more about the implementations, and I'm betting this has been brought up before about IPv6/ND...
So I pretty strongly disagree with your statement. Repetitively sweeping an IPv6 network to DoS/DDoS the ND protocol, thereby flooding the ND cache/LRUs, could be extremely effective and, if not paid serious attention, will cause serious issues.
Yes.... This is an issue for point-to-point links, but using a longer prefix (/126 or similar) has been suggested as a mitigation for this sort of attack. I would assume that in the LAN scenario where you have a /64 for your internal network you would have some sort of stateful firewall sitting in front of the network to stop any uninitiated sessions. This therefore stops any hammering of the ND cache etc. The argument then is that the number of packets hitting your firewall / bandwidth starvation would be the alternative line of attack for a DoS/DDoS, but that is a completely different issue.
On 1/25/2011 10:58 AM, Patrick Sumby wrote:
I would assume that in the LAN scenario where you have a /64 for your internal network you would have some sort of stateful firewall sitting in front of the network to stop any uninitiated sessions. This therefore stops any hammering of the ND cache etc. The argument then is that the number of packets hitting your firewall / bandwidth starvation would be the alternative line of attack for a DoS/DDoS, but that is a completely different issue.
There are many IPv4 networks that don't implement firewall rules for subnets which contain servers. DDoS mitigation is handled differently. It would not be unexpected for these networks to do the same with IPv6. Jack
On Jan 26, 2011, at 12:44 AM, Jack Bates wrote:
DDoS mitigation is handled differently.
Concur 100%. Also note that firewalls don't provide any sort of useful DDoS protection, marketing claims aside, so reaction tools such as S/RTBH, et. al. are required to protect stateful firewalls in front of client access LANs, and everything behind said stateful firewalls, from DDoS. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Jan 25, 2011, at 8:58 AM, Patrick Sumby wrote:
On 24/01/2011 22:41, Michael Loftis wrote:
On Mon, Jan 24, 2011 at 1:53 PM, Ray Soucy<rps@maine.edu> wrote:
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem. The general feeling is that there is simply too much address space for it to be done in any reasonable amount of time, and there is almost nothing to be gained from it.
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a target's /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND cache, depending on the specific ND cache implementation and how big it is, etc. Routers can also act as amplifiers, DDoSing every host within a multicast ND directed solicitation group (and THAT is even assuming a correctly functioning switch that's limiting the multicast traffic).
I love this term... "repetitively sweeping a target's /64."

Seriously? Repetitively sweeping a /64? Let's do the math...

2^64 = 18,446,744,073,709,551,616 IP addresses.

Let's assume that few networks would escape being DoS'd by a 1,000 PPS storm coming in, so that's a reasonable cap on our scan rate.

That means sweeping a /64 takes 18,446,744,073,709,551 seconds (rounded down).

There are 86,400 seconds per day.

18,446,744,073,709,551 / 86,400 = 213,503,982,334 days.

Rounding a year down to 365 days, that's 584,942,417 years to sweep the /64 once.

If we increase our scan rate to 1,000,000 packets per second, it still takes us 584,942 years to sweep a /64.

I don't know about you, but I do not expect to live long enough to sweep a /64, let alone do so repetitively.

Owen
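A quick check of the arithmetic above, nothing more:

addresses = 2 ** 64
for pps in (1_000, 1_000_000):
    years = addresses / pps / 86_400 / 365
    print(f"{pps:>9} pps -> {years:,.0f} years per pass")
# 1,000 pps     -> 584,942,417 years
# 1,000,000 pps -> 584,942 years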
On Tue, 25 Jan 2011 13:42:29 -0500, Owen DeLong <owen@delong.com> wrote:
Seriously? Repetitively sweeping a /64? Let's do the math... ...
We've had this discussion before... If the site is using SLAAC, then that 64-bit target is effectively 48 bits. And I can make a reasonable guess at 24 of those bits (esp. if I've seen the address of even one of the machines).

--Ricky
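A sketch of why a SLAAC (EUI-64) interface identifier is a much smaller target than 64 random bits, as Ricky notes: the middle 16 bits are always ff:fe, and the top 24 bits are an OUI that can often be guessed. The MAC address below is made up.

def eui64_iid(mac: str) -> int:
    b = bytes(int(x, 16) for x in mac.split(":"))
    # Flip the universal/local bit and insert ff:fe in the middle.
    b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return int.from_bytes(b, "big")

print(hex(eui64_iid("00:1b:21:3a:4b:5c")))  # 0x21b21fffe3a4b5c

# Unknown bits if the attacker knows the host uses EUI-64: 48 (the MAC).
print(f"{2**48:,} candidates with EUI-64")
# ...and only 24 once the OUI is guessed from one observed address.
print(f"{2**24:,} candidates with a known OUI")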
----- Original Message -----
On Tue, 25 Jan 2011 13:42:29 -0500, Owen DeLong <owen@delong.com> wrote:
Seriously? Repetitively sweeping a /64? Let's do the math... ...
We've had this discussion before...
If the site is using SLAAC, then that 64bit target is effectively 48bits. And I can make a reasonable guess at 24 of those bits. (esp. if I've seen the address of even one of the machines.)
I wouldn't say you could assume that because one machine is from a particular manufacturer, they are all the same. I would say you could certainly limit a scan to a set list of well-known 24-bit IDs (say ~100 or so?), but that would still take at least a couple of days to scan.

Could there not be something implemented in the firewall to prevent an incoming scan from causing an issue with ND? If you block all incoming traffic by default, why would the router try to do ND on an address that is not allowed?

-Randy
On Tue, 25 Jan 2011 16:32:59 -0500 "Ricky Beam" <jfbeam@gmail.com> wrote:
On Tue, 25 Jan 2011 13:42:29 -0500, Owen DeLong <owen@delong.com> wrote:
Seriously? Repetitively sweeping a /64? Let's do the math... ...
We've had this discussion before...
If the site is using SLAAC, then that 64bit target is effectively 48bits. And I can make a reasonable guess at 24 of those bits. (esp. if I've seen the address of even one of the machines.)
All you're really pointing out is that "security" is a relative term.

A lot of these threads devolve into a waste of time because they're discussing the pros and cons of a single, possible security mechanism without considering it in context ("possible" because if it ends up having no or very little security value it isn't really a "security mechanism" at all). The value of a security mechanism can only be judged in the context of both what threats it mitigates and whether those threats are ones that are common and likely in the context it might be used in. Security is a weakest-link problem, so the first thing that needs to be done is to identify the weakest links, before worrying about how to fix them.

So what threat are people trying to prevent? Address scanning is only a means to an end - so what is the "end"? Only once that is defined can it be worked out whether address scanning is a likely method attackers will use, and whether preventing address scanning is then an effective mitigation.

Regards,
Mark.
(Top-posting because the whole message is context. Oh, and I'm lazy.)

I do indeed love it when people break out IPv6 addressing as "there's so many addresses, we'll never ever go through them!" Sure, if they're only used as end-point identifiers.

Say you want to crack open that 64k port space into something bigger, because p2p becomes so widespread and ingrained in our society that 64k ports per IP becomes silly. So we say, break off another 16 bits and have a host listen not on a /128, but on a /112. Cool, 4 billion "ports". That fixes the port space.

Then someone comes along with a bright idea. "Hi!" she says, "Since hosts are already listening on a /112 of space (and thus all those pesky ND cache problems have been fixed!), we can start allocating cloud identifiers on people's hosts, so each cloud application instance gets a separate address prefix; thus any given host can run multiple cloud instances!" Let's call that a 32-bit address space, because I bet a 16-bit "cloud ID" doesn't scale. A 16-bit cloud identifier takes it down to a /96; a 32-bit cloud identifier takes it down to a /80.

Cool. Now you've got all these end hosts happily doing p2p between each other over a 16-bit extended port space, then running p2p and other apps inside a 32-bit cloud identifier so they can both run their own distributed apps/VMs (e.g. Diaspora) and donate/sell/whatever their clock cycles to others.

What did that just do to your per-site /64? The /64 you have no hope of ever seeing a user use up? It just turned that /64 into the equivalent of a /112 (16 bits of port space, 32 bits of cloud identifier space). What's the next killer app that'll chew up more of your IPv6 space?

I'm all for IPv6. And I'm all for avoiding conjecture and getting to the task at hand. But simply assuming that the IPv6 address space will forever remain that - only unique host identifiers - I think is disingenuous at best. :-)

Adrian

On Tue, Jan 25, 2011, Owen DeLong wrote:
I love this term... "repetitively sweeping a target's /64".
Seriously? Repetitively sweeping a /64? Let's do the math...
2^64 = 18,446,744,073,709,551,616 IP addresses.
Let's assume that few networks would not be DOS'd by a 1,000 PPS storm coming in so that's a reasonable cap on our scan rate.
That means sweeping a /64 takes 18,446,744,073,709,551 sec. (rounded down).
There are 86,400 seconds per day.
18,446,744,073,709,551 / 86,400 = 213,503,982,334 days.
Rounding a year down to 365 days, that's 584,942,417 years to sweep the /64 once.
If we increase our scan rate to 1,000,000 packets per second, it still takes us 584,942 years to sweep a /64.
I don't know about you, but I do not expect to live long enough to sweep a /64, let alone do so repetitively.
Owen
-- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support - - $24/pm+GST entry-level VPSes w/ capped bandwidth charges available in WA -
From: Adrian Chadd
Sent: Tuesday, January 25, 2011 8:37 PM
To: Owen DeLong
Cc: nanog@nanog.org
Subject: Re: Using IPv6 with prefixes shorter than a /64 on a LAN
(Top-posting because the whole message is context. Oh, and I'm lazy.)
I do indeed love it when people break out IPv6 addressing as "there's so many addresses, we'll never ever go through them!"
Sure, if they're only used as end-point identifiers.
Yeah, at some point v6 IP addresses might be used for something completely different. For example, rather than using a cookie to balance through a load balancer to get back to a server in a "sticky session", maybe you are redirected directly to an IP address on the server that represents your session. The IP address could be provisioned dynamically on the server as required, the user hits the main URL and is "redirected" to the unique IP address representing their session. If you have a 64-bit address, each active session can easily be given its own unique IP. I can see requirements at some point for servers to be able to handle thousands of IP addresses per interface.
On Jan 25, 2011, at 8:47 PM, George Bonser wrote:
From: Adrian Chadd Sent: Tuesday, January 25, 2011 8:37 PM To: Owen DeLong Cc: nanog@nanog.org Subject: Re: Using IPv6 with prefixes shorter than a /64 on a LAN
(Top-posting because the whole message is context. Oh, and I'm lazy.)
I do indeed love it when people break out IPv6 addressing as "there's so many addresses, we'll never ever go through them!"
Sure, if they're only used as end-point identifiers.
Yeah, at some point v6 IP addresses might be used for something completely different. For example, rather than using a cookie to balance through a load balancer to get back to a server in a "sticky session", maybe you are redirected directly to an IP address on the server that represents your session. The IP address could be provisioned dynamically on the server as required, the user hits the main URL and is "redirected" to the unique IP address representing their session.
There isn't a web farm big enough for that not to still work within a /64. Since a web farm network would be a /64 anyway, this isn't an increase in the consumption of IPv6 addresses.
If you have a 64-bit address, each active session can easily be given its own unique IP. I can see requirements at some point for servers to be able to handle thousands of IP addresses per interface.
Many already can. Owen
On Jan 26, 2011, at 11:37 AM, Adrian Chadd wrote:
But simply assuming that the IPv6 address space will forever remain that - only unique host identifiers - I think is disingenuous at best. :-)
I think 'disingenuous' is too strong a word - 'overly optimistic' better reflects the position, IMHO. ;> In addition to all the extremely valid use-cases you outline, there's also the concept of one-time-use prefixes which likely will end up being used at the molecular level in manufacturing/supply-chain applications, lifetime assignments to individuals as a matter of citizenship which will be retired upon their deaths/disenfranchisement, nanite communications used to do things like clean plaque out of people's arteries in lieu of angioplasty, and a whole host of new applications we haven't even dreamed of, yet. The supreme irony of this situation is that folks who're convinced that there's no way we can even run out of addresses often accuse those of us who're plentitude-skeptics of old-fashioned thinking; whereas there's a strong case to be made that those very same vocal advocates of the plentitude position seem to be assuming that the assignment and consumption of IPv6 addresses (and networking technology and the Internet in general) will continue to be constrained by the current four-decade-old paradigm into the foreseeable future. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Wed, 26 Jan 2011 11:53:23 +0700 Roland Dobbins <rdobbins@arbor.net> wrote:
On Jan 26, 2011, at 11:37 AM, Adrian Chadd wrote:
But simply assuming that the IPv6 address space will forever remain that - only unique host identifiers - I think is disingenuous at best. :-)
I think 'disingenuous' is too strong a word - 'overly optimistic' better reflects the position, IMHO.
;>
In addition to all the extremely valid use-cases you outline, there's also the concept of one-time-use prefixes which likely will end up being used at the molecular level in manufacturing/supply-chain applications, lifetime assignments to individuals as a matter of citizenship which will be retired upon their deaths/disenfranchisement, nanite communications used to do things like clean plaque out of people's arteries in lieu of angioplasty, and a whole host of new applications we haven't even dreamed of, yet.
The supreme irony of this situation is that folks who're convinced that there's no way we can even run out of addresses often accuse those of us who're plentitude-skeptics of old-fashioned thinking; whereas there's a strong case to be made that those very same vocal advocates of the plentitude position seem to be assuming that the assignment and consumption of IPv6 addresses (and networking technology and the Internet in general) will continue to be constrained by the current four-decade-old paradigm into the foreseeable future.
The correct assumption is that most people will try, and usually succeed, at following the specifications, as that is what is required to successfully participate in a protocol (any protocol, not just networking ones). IPv4 history has shown that most people will. People who argue against current IPv6 address use projections are doing so with an unstated assumption that most people won't follow the specifications. Once you make that assumption, then anything at all can be used as an example to create FUD about running out of addresses, including the equally valid example that people will close their eyes and bash the number pad when entering IPv6 prefix or address information. The only way to prevent absolutely the misconfiguration of protocol parameters is to not make them configurable. Pretty much impossible to do with networking prefixes or addresses.
On Jan 26, 2011, at 12:33 PM, Mark Smith wrote:
The correct assumption is that most people will try, and usually succeed, at following the specifications, as that is what is required to successfully participate in a protocol (any protocol, not just networking ones). IPv4 history has shown that most people will.
Specification <> application, as in new applications. And, no, I don't think that 'most people will' - I've seen enough foolishness with regards to IPv4 misaddressing over the last quarter-century (pre- and post-CIDR) to share your optimism in that regard. ;> ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Wed, 26 Jan 2011 12:49:13 +0700 Roland Dobbins <rdobbins@arbor.net> wrote:
On Jan 26, 2011, at 12:33 PM, Mark Smith wrote:
The correct assumption is that most people will try, and usually succeed, at following the specifications, as that is what is required to successfully participate in a protocol (any protocol, not just networking ones). IPv4 history has shown that most people will.
Specification <> application, as in new applications.
And, no, I don't think that 'most people will' - I've seen enough foolishness with regards to IPv4 misaddressing over the last quarter-century (pre- and post-CIDR) to share your optimism in that regard.
The Internet works most of the time, doesn't it? I think that is evidence that most people get it right most of the time, and that misaddressing has minimal if any effect because it is ignored as non-compliant with the Internet's protocols (both implementation and operational ones). Usually the consequences of misaddressing are limited to those who've performed it.

Mark
On Jan 25, 2011, at 9:49 PM, Roland Dobbins wrote:
On Jan 26, 2011, at 12:33 PM, Mark Smith wrote:
The correct assumption is that most people will try, and usually succeed, at following the specifications, as that is what is required to successfully participate in a protocol (any protocol, not just networking ones). IPv4 history has shown that most people will.
Specification <> application, as in new applications.
And, no, I don't think that 'most people will' - I've seen enough foolishness with regards to IPv4 misaddressing over the last quarter-century (pre- and post-CIDR) to share your optimism in that regard.
Is there IPv4 brokenness in the world? Sure. Is the majority of IPv4 deployed in the world done so in a broken manner? I think that's a stretch. Most people try and usually succeed at implementing IPv4 at least reasonably in line with the specifications. Owen
On Wed, 2011-01-26 at 11:53 +0700, Roland Dobbins wrote:
On Jan 26, 2011, at 11:37 AM, Adrian Chadd wrote:

The supreme irony of this situation is that folks who're convinced that there's no way we can even run out of addresses often accuse those of us who're plentitude-skeptics of old-fashioned thinking; whereas there's a strong case to be made that those very same vocal advocates of the plentitude position seem to be assuming that the assignment and consumption of IPv6 addresses (and networking technology and the Internet in general) will continue to be constrained by the current four-decade-old paradigm into the foreseeable future.
Both positions are wrong, but the plenitudinists are more right :-) As long as we allow ourselves to be limited in our thinking by numbers (which are infinite by their very nature), we will be - well, limited in our thinking. So let's get rid of the limitation in our minds. IPv6 provides *effectively* unlimited address space, even if it's only "for now". So let's USE it that way. Let's unlearn our limited thinking patterns. Let's go colonise infinity. And if we need to fix it in a few decades, so what? Nothing is forever. As Mark Twain suggested, let's "live like it's heaven on earth". Regards, K. PS: I saw a great t-shirt recently, ideal for your next IPv6 conference: "The time for action is past - now is the time for senseless bickering". -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Karl Auer (kauer@biplane.com.au) +61-2-64957160 (h) http://www.biplane.com.au/kauer/ +61-428-957160 (mob) GPG fingerprint: DA41 51B1 1481 16E1 F7E2 B2E9 3007 14ED 5736 F687 Old fingerprint: B386 7819 B227 2961 8301 C5A9 2EBC 754B CD97 0156
Figure I'll throw my 2 cents into this.

The way I read the RFCs, IPv6 is not IP space; it's network space. Unless I missed it last time I read through them, the RFCs do not REQUIRE hardware/software manufacturers to support VLSM beyond /64. Autoconfiguration is the name of the game for the IPv6 guys. Subsequently, while using longer prefixes is possible currently, I'd never deploy it because it could be removed from code without mention.

Because of the autoconfiguration piece, I consider IPv6 to be NETWORK space, rather than IP space like IPv4. I'm issued a /48, which can be comprised of 65,536 /64 networks, not some silly number of hosts, which can't all exist anyway because they'd be duplicates of each other (MAC address = host identifier).

Anyway, that's how I see the question that started this whole thing. I'd suggest using link-local and RFC 4193 for internal routing and your public space for things that need public access or need to be accessed publicly. Just because they SAY there's infinite space (like they said about IPv4) doesn't mean we have to be stupid and wasteful with our space.

-C

If I've misread, or completely missed an RFC, I apologize.
On Jan 31, 2011, at 9:35 PM, eric clark wrote:
Figure I'll throw my 2 cents into this.
The way I read the RFCs, IPv6 is not IP space. Its network space. Unless I missed it last time I read through them, the RFCs do not REQUIRE hardware/software manufacturers to support VLSM beyond /64. Autoconfigure the is the name of the game for the IPv6 guys.
You misread them. SLAAC is not supported beyond /64. VLSM support for static configuration is required.
Subsequently, while using longer prefixes is possible currently, I'd never deploy it because it could be removed from code without mention.
Correct... Just because you can does not mean it is a good idea.
Because of the AutoConfigure piece, I consider IPv6 to be NETWORK Space, rather than IP Space like IPv4. I'm issued a /48 which can be comprised of 65536 /64 networks, not some silly number of hosts, which can't exist because they are all duplicates of each other (MAC address = host identifier)
There is a valid point in that you should not be using autoconfigure or ND on point-to-point links.
Anyway, that's how I see the question that started this whole thing, I'd suggest using link local and RFC 4193 for internal routing and your public space for things that need public access or need to be accessed publicly.
Link Local is not routable, even internally. It's LINK local.

In my opinion, RFC 4193 is just a bad idea and there's no benefit to it vs. GUA. Just put a good stateful firewall in front of your GUA. I mean, really, how many things do you have that don't need access to/from the internet? Maybe your printers and a couple of appliances. The rest... All those TiVOs, Laptops, Desktops, iPads, etc. all need public addresses anyway, so why bother with the ULA?
Just because they SAY there's infinite space (like they said about IPv4) doesn't mean we have to be stupid and wasteful with our space.
Supplying every end site with a /48 of global address space is neither stupid nor wasteful. It's a good design with some nice future-proofing and some very nice features available if people take better advantage of the capabilities offered as we move forward. Just because it's more than you can imagine using today does not mean that it is more than you will ever imagine using. I'm very happy that I have a /48 at home and I look forward to making better use of it as the Consumer Electronics vendors start to catch on that the internet is being restored to full functionality for end users.

Owen
In my opinion, RFC 4193 is just a bad idea and there's no benefit to it vs. GUA. Just put a good stateful firewall in front of your GUA.
I mean, really, how many things do you have that don't need access to/from the internet. Maybe your printers and a couple of appliances.
The rest... All those TiVOs, Laptops, Desktops, iPads, etc. all need public addresses anyway, so, why bother with the ULA?
Because the ULA addressing is free, not that hard, and provides an extra layer of protection to prevent vandals from using up your printer ink or turning your fridge on defrost during the night. And some networks will have a lot more stuff that could use an extra layer of protection like that, for instance SCADA networks.
Supplying every end site with a /48 of global address space is neither stupid or wasteful. It's a good design with some nice future-proofing and some very nice features available if people take better advantage of the capabilities offered as we move forward.
Just because it's more than you can imagine using today does not mean that it is more than you will ever imagine using. I'm very happy that I have a /48 at home and I look forward to making better use of it as the Consumer Electronics vendors start to catch on that the internet is being restored to full functionality for end users.
Agreed. /48 is good for even the smallest home user living in a one bedroom apartment. They may not fully exploit it, but at the same time, they should not be treated as second class citizens when there is enough IPv6 address wealth to share around. --Michael Dillon
On Jan 31, 2011, at 10:26 PM, Michael Dillon wrote:
In my opinion, RFC 4193 is just a bad idea and there's no benefit to it vs. GUA. Just put a good stateful firewall in front of your GUA.
I mean, really, how many things do you have that don't need access to/from the internet. Maybe your printers and a couple of appliances.
The rest... All those TiVOs, Laptops, Desktops, iPads, etc. all need public addresses anyway, so, why bother with the ULA?
Because the ULA addressing is free, not that hard, and provides an extra layer of protection to prevent vandals from using up your printer ink or turning your fridge on defrost during the night.
Well, 2 out of 3 isn't bad, I suppose, but, do you really get even that? ULA addressing is free, except for the costs imposed by using it instead of GUA in most circumstances. I'll give you 0.5 for this one. ULA addressing is not that hard. Neither is GUA. In fact they both pose exactly the same difficulty. So, though I have to grant that it isn't that hard, you failed to show how this fact gives it any advantage over GUA. Additionally, it does create additional difficulties since you now need to maintain two address spaces instead of just one. So, since it's harder than GUA, but, still not that hard, I'll give you 0.5 for that one, too. The last one is specious at best. The stateful firewall provides all the protection there. The ULA doesn't really provide any because if the FW is compromised, you just bounce the print requests off of one of the hosts that has GUA+ULA. Sorry, 0 points here. So, let's see... 0.5+0.5+0 = 1.0 -- Nope, not even 2 out of 3.
And some networks will have a lot more stuff that could use an extra layer of protection like that, for instance SCADA networks.
If there were an extra layer of protection, sure, but, since there actually isn't, no joy there. If you want to isolate your SCADA network so it doesn't have anything on it that talks to the internet, then, ULA could be just fine, but, in that case, GUA or Link Local may be equally fine with all the same protections and less hassle if you decide to change the policy later.
Supplying every end site with a /48 of global address space is neither stupid nor wasteful. It's a good design with some nice future-proofing and some very nice features available if people take better advantage of the capabilities offered as we move forward.
Just because it's more than you can imagine using today does not mean that it is more than you will ever imagine using. I'm very happy that I have a /48 at home and I look forward to making better use of it as the Consumer Electronics vendors start to catch on that the internet is being restored to full functionality for end users.
Agreed. /48 is good for even the smallest home user living in a one bedroom apartment. They may not fully exploit it, but at the same time, they should not be treated as second class citizens when there is enough IPv6 address wealth to share around.
Well, I'd give /48s even to studios, small lofts, dorm rooms, and any internet-connected janitorial closets in multi-tenant buildings. I see no reason to draw the line at one-bedroom apartments.

Owen
On 2/1/2011 12:03 AM, Owen DeLong wrote:
The rest... All those TiVOs, Laptops, Desktops, iPads, etc. all need public addresses anyway, so, why bother with the ULA?
I think ULA is still useful for home networks. If the home router guys properly generate the ULA dynamically, it should stop conflicts within home networking. There's something to be said for internal services which ULA can be useful for, even when you do fall off the net.

Jack
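For what it's worth, the dynamic ULA generation Jack mentions has a concrete recipe in RFC 4193 (section 3.2.2): hash a timestamp and an EUI-64, keep the least-significant 40 bits as the Global ID, and prepend fd00::/8. A rough sketch follows, with a made-up MAC address; a real router implementation should follow the RFC text exactly.

import hashlib
import ipaddress
import time

def ula_prefix(mac: str) -> ipaddress.IPv6Network:
    mac_bytes = bytes(int(x, 16) for x in mac.split(":"))
    eui64 = bytes([mac_bytes[0] ^ 0x02]) + mac_bytes[1:3] + b"\xff\xfe" + mac_bytes[3:]
    key = time.time_ns().to_bytes(8, "big") + eui64
    global_id = hashlib.sha1(key).digest()[-5:]            # low 40 bits of the digest
    prefix_int = (0xFD << 120) | (int.from_bytes(global_id, "big") << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

print(ula_prefix("00:1b:21:3a:4b:5c"))  # e.g. fdxx:xxxx:xxxx::/48, different on every run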
On Feb 1, 2011, at 7:04 AM, Jack Bates wrote:
On 2/1/2011 12:03 AM, Owen DeLong wrote:
The rest... All those TiVOs, Laptops, Desktops, iPads, etc. all need public addresses anyway, so, why bother with the ULA?
I think ULA is still useful for home networks. If the home router guys properly generate the ULA dynamically, it should stop conflicts within home networking. There's something to be said for internal services which ULA can be useful for, even when you do fall off the net.
Jack
I prefer persistent GUA over ULA for that. Owen
On Feb 1, 2011, at 9:39 AM, Jack Bates wrote:
On 2/1/2011 11:29 AM, Owen DeLong wrote:
I prefer persistent GUA over ULA for that.
I do too, though for simple zeroconf devices, I'd prefer ULA over link local. Given that it's not an either or situation, I fully support ULA existing.
Jack
Given the vast probability for abuse of ULA becoming de facto GUA later, I don't support ULA existing, as the benefits are vastly overwhelmed by the potential for abuse.
On 2/1/2011 3:23 PM, Owen DeLong wrote:
Given the vast probability for abuse of ULA becoming de facto GUA later, I don't support ULA existing, as the benefits are vastly overwhelmed by the potential for abuse.

If the world wants ULA to become the de facto GUA, no amount of arm twisting and bullying will stop it.
There are many cases where ULA is a perfect fit, and to work around it seems silly and reduces the full capabilities of IPv6. I fully expect to see protocols and networks within homes which will take full advantage of ULA. I also expect to see hosts which don't talk to the public internet directly and never need a GUA. Jack
On Feb 1, 2011, at 2:58 PM, Jack Bates wrote:
On 2/1/2011 3:23 PM, Owen DeLong wrote:
Given the vast probability for abuse of ULA becoming de facto GUA later, I don't support ULA existing, as the benefits are vastly overwhelmed by the potential for abuse.

If the world wants ULA to become the de facto GUA, no amount of arm twisting and bullying will stop it.
Right... It's a toxic chemical. No matter how much we may end up wishing we could, we probably can't uninvent it at this point. Regardless, I won't encourage and will actively discourage its use.
There are many cases where ULA is a perfect fit, and to work around it seems silly and reduces the full capabilities of IPv6. I fully expect to see protocols and networks within homes which will take full advantage of ULA. I also expect to see hosts which don't talk to the public internet directly and never need a GUA.
I guess we can agree to disagree about this. I haven't seen one yet. Owen
On 2/1/2011 5:14 PM, Owen DeLong wrote:
I guess we can agree to disagree about this. I haven't seen one yet.
If my coffee maker did have an IP address, I expect it to get all its updates from a central house store, not directly from the manufacturer over the net. I see no reason my appliances need global access; just access to their local controller.

And yes, I am working on building my IPv6 coffee maker. :P

jack
On Feb 1, 2011, at 3:25 PM, Jack Bates wrote:
On 2/1/2011 5:14 PM, Owen DeLong wrote:
I guess we can agree to disagree about this. I haven't seen one yet.
If my coffee maker did have an IP address, I expect it to get all its updates from a central house store, not directly from the manufacturer over the net. I see no reason my appliances need global access; just access to their local controller.
Again, we can agree to disagree. I want GUA even for the things that don't need global access because I can change my mind later. There's no advantage to building that decision into the address, rather than making it in the filter policy of the routers and/or firewalls on the network.
And yes, I am working on building my IPv6 coffee maker. :P
Someone has to, I suppose. Owen
On Tue, Feb 01, 2011 at 03:14:57PM -0800, Owen DeLong wrote:
On Feb 1, 2011, at 2:58 PM, Jack Bates wrote:
There are many cases where ULA is a perfect fit, and to work around it seems silly and reduces the full capabilities of IPv6. I fully expect to see protocols and networks within homes which will take full advantage of ULA. I also expect to see hosts which don't talk to the public internet directly and never need a GUA.
I guess we can agree to disagree about this. I haven't seen one yet.
What would your recommended solution be then for disconnected networks? Every home user and enterprise user requests GUA directly from their RIR/NIR/LIR at a cost of hundreds of dollars per year or more?
On Tue, Feb 1, 2011 at 3:38 PM, Chuck Anderson <cra@wpi.edu> wrote:
On Tue, Feb 01, 2011 at 03:14:57PM -0800, Owen DeLong wrote:
On Feb 1, 2011, at 2:58 PM, Jack Bates wrote:
There are many cases where ULA is a perfect fit, and to work around it seems silly and reduces the full capabilities of IPv6. I fully expect to see protocols and networks within homes which will take full advantage of ULA. I also expect to see hosts which don't talk to the public internet directly and never need a GUA.
I guess we can agree to disagree about this. I haven't seen one yet.
What would your recommended solution be then for disconnected networks? Every home user and enterprise user requests GUA directly from their RIR/NIR/LIR at a cost of hundreds of dollars per year or more?
You might be asking the wrong person for advice or reasoning. Horses for courses. ULAs have a place. Cameron
On 2/1/11, Chuck Anderson <cra@wpi.edu> wrote:
What would your recommended solution be then for disconnected networks? Every home user and enterprise user requests GUA directly from their RIR/NIR/LIR at a cost of hundreds of dollars per year or more?
A typical home user will have a /56 of GUA, or maybe a /48 with some ISPs. Anybody who knows enough to figure out how to set a ULA can figure out a /64 from their GUA space that's not being auto-assigned by one of their various home routers. So if that's the way you want to do things, it won't cost you or your ISP anything. If your ISP is only assigning you a /64 of GUA, that's another story. -- ---- Thanks; Bill Note that this isn't my regular email account - It's still experimental so far. And Google probably logs and indexes everything you send it.
On Feb 1, 2011, at 5:37 PM, Bill Stewart wrote:
On 2/1/11, Chuck Anderson <cra@wpi.edu> wrote:
What would your recommended solution be then for disconnected networks? Every home user and enterprise user requests GUA directly from their RIR/NIR/LIR at a cost of hundreds of dollars per year or more?
A typical home user will have a /56 of GUA, or maybe a /48 with some ISPs. Anybody who knows enough to figure out how to set a ULA can figure out a /64 from their GUA space that's not being auto-assigned by one of their various home routers. So if that's the way you want to do things, it won't cost you or your ISP anything.
If your ISP is only assigning you a /64 of GUA, that's another story.
If your ISP assigns you less than a /48, ask them to fix it. If they refuse, get a new ISP or use that ISP to connect a tunnel to someone who will give you a /48. Owen
On Tue, 01 Feb 2011 17:37:55 PST, Bill Stewart said:
A typical home user will have a /56 of GUA, or maybe a /48 with some ISPs. Anybody who knows enough to figure out how to set a ULA can figure out a /64 from their GUA space that's not being auto-assigned by one of their various home routers. So if that's the way you want to do things, it won't cost you or your ISP anything.
Your local home network topology may not allow easy choice of a /64 if you have multiple actual subnets (if it all fit into one /64, why did you need/want that /56 or /48 in the first place?)
On Feb 1, 2011, at 3:38 PM, Chuck Anderson wrote:
On Tue, Feb 01, 2011 at 03:14:57PM -0800, Owen DeLong wrote:
On Feb 1, 2011, at 2:58 PM, Jack Bates wrote:
There are many cases where ULA is a perfect fit, and to work around it seems silly and reduces the full capabilities of IPv6. I fully expect to see protocols and networks within homes which will take full advantage of ULA. I also expect to see hosts which don't talk to the public internet directly and never need a GUA.
I guess we can agree to disagree about this. I haven't seen one yet.
What would your recommended solution be then for disconnected networks? Every home user and enterprise user requests GUA directly from their RIR/NIR/LIR at a cost of hundreds of dollars per year or more?
For a completely disconnected network, I don't care what you do, use whatever number you want. There's no need to coordinate that with the internet in any way. For a network connected to a connected network, either get GUA from an RIR or get GUA from the network you are connected to or get GUA from some other ISP/LIR. There are lots of options. I'd like to see RIR issued GUA get a lot cheaper. I'd much rather see cheap easy to get RIR issued GUA than see ULA get widespread use. Owen
Disconnected networks have a bothersome tendency to get connected at some point (I have been severely bitten by this in the past), so while I agree that there is no need to coordinate anything globally, an RFC 1918-like definition would be nice (if we are not going to use ULAs, that is).

cheers!

Carlos

On Tue, Feb 1, 2011 at 11:59 PM, Owen DeLong <owen@delong.com> wrote:
On Feb 1, 2011, at 3:38 PM, Chuck Anderson wrote:
On Tue, Feb 01, 2011 at 03:14:57PM -0800, Owen DeLong wrote:
On Feb 1, 2011, at 2:58 PM, Jack Bates wrote:
There are many cases where ULA is a perfect fit, and to work around it seems silly and reduces the full capabilities of IPv6. I fully expect to see protocols and networks within homes which will take full advantage of ULA. I also expect to see hosts which don't talk to the public internet directly and never need a GUA.
I guess we can agree to disagree about this. I haven't seen one yet.
What would your recommended solution be then for disconnected networks? Every home user and enterprise user requests GUA directly from their RIR/NIR/LIR at a cost of hundreds of dollars per year or more?
For a completely disconnected network, I don't care what you do, use whatever number you want. There's no need to coordinate that with the internet in any way.
For a network connected to a connected network, either get GUA from an RIR or get GUA from the network you are connected to or get GUA from some other ISP/LIR.
There are lots of options.
I'd like to see RIR issued GUA get a lot cheaper. I'd much rather see cheap easy to get RIR issued GUA than see ULA get widespread use.
Owen
-- -- ========================= Carlos M. Martinez-Cagnazzo http://www.labs.lacnic.net =========================
On Wed, Feb 2, 2011 at 5:07 PM, Carlos Martinez-Cagnazzo <carlosm3011@gmail.com> wrote:
Disconnected networks have a bothersome tendency to get connected at some point (I have been severely bitten by this in the past), so while I agree that there is no need to coordinate anything globally, an RFC 1918-like definition would be nice (if we are not going to use ULAs, that is)
If possible, I would argue to go further than that. Every couple of years, interconnecting organizations that used 1918 space on the back end and later turned out to need to talk to each other *and had 1918 usage conflicts* has been part of my painful world. 1918 defined both a useful private range and a space anyone could expand into if standard v4 allocations weren't enough and you weren't trying to directly route those systems. A lot of people used "useful private range" as a cover for "expanding into". Push people to get proper public assigned v6 allocations for private use going forwards. Many of them will need to interconnect them later. We know better now, and we won't exhaust anything doing so. Globally allocated != globally routed. -- -george william herbert george.herbert@gmail.com
Our classified networks aren't ever going to be connected to anything but themselves either, and they need sane local addressing. Some of them are a single room with a few machines, some of them are entire facilities with hundreds of machines, but none of them are going to be talking to a router or anything upstream, as neither of those exists on said networks.

Jamie

-----Original Message-----
From: Chuck Anderson [mailto:cra@WPI.EDU]
Sent: Tuesday, February 01, 2011 6:39 PM
To: nanog@nanog.org
Subject: Re: Using IPv6 with prefixes shorter than a /64 on a LAN

On Tue, Feb 01, 2011 at 03:14:57PM -0800, Owen DeLong wrote:
On Feb 1, 2011, at 2:58 PM, Jack Bates wrote:
There are many cases where ULA is a perfect fit, and to work around it seems silly and reduces the full capabilities of IPv6. I fully expect to see protocols and networks within homes which will take full advantage of ULA. I also expect to see hosts which don't talk to the public internet directly and never need a GUA.
I guess we can agree to disagree about this. I haven't seen one yet.
What would your recommended solution be then for disconnected networks? Every home user and enterprise user requests GUA directly from their RIR/NIR/LIR at a cost of hundreds of dollars per year or more?
On Wed, Feb 2, 2011 at 08:11, Jamie Bowden <jamie@photon.com> wrote:
Our classified networks aren't ever going to be connected to anything but themselves either, and they need sane local addressing. Some of them are a single room with a few machines, some of them are entire facilities with hundreds of machines, but none of them are going to be talking to a router or anything upstream, as neither of those exist on said networks.
Correct me if I am wrong, but won't Classified networks get their addresses IAW the DoD IPv6 Addressing Plan (using globals)?

/TJ
That's if you're on a DoD classified network that spans multiple facilities (as a contractor we only get access to certain ones, and only certain hosts are allowed to access them). Self-contained networks are our problem.

Jamie

-----Original Message-----
From: TJ [mailto:trejrco@gmail.com]
Sent: Thursday, February 03, 2011 10:39 AM
To: NANOG
Subject: Re: Using IPv6 with prefixes shorter than a /64 on a LAN

On Wed, Feb 2, 2011 at 08:11, Jamie Bowden <jamie@photon.com> wrote:
Our classified networks aren't ever going to be connected to anything but themselves either, and they need sane local addressing. Some of them are a single room with a few machines, some of them are entire facilities with hundreds of machines, but none of them are going to be talking to a router or anything upstream, as neither of those exist on said networks.
Correct me if I am wrong, but won't Classified networks get their addresses IAW the DoD IPv6 Addressing Plan (using globals)?

/TJ
On Thursday, February 03, 2011 10:39:28 am TJ wrote:
Correct me if I am wrong, but won't Classified networks get their addresses IAW the DoD IPv6 Addressing Plan (using globals)?
'Classified' networks are not all governmental. HIPAA requirements can be met with SCIFs, and those need 'classified' networks.

Here, we have some control networks that one could consider 'classified' in the access-control sense of the word; that is, even if a host is allowed access it must have a proven need to access, and such access needs supervision by another host. This type of network is used here for our large antenna controls, which need to be network accessible on-campus, but such access must have two points of supervision (one of which is an actual person), with accessing hosts not allowed to access other networks while accessing the antenna controller.

This has been an interesting network design problem, and it turns traditional 'stateful' firewalling on its ear, as the need is to block access when certain connections are open and permit access otherwise. It's made somewhat easier since wireless access is not an option (it interferes with the research done with the antennas), and wireless APs and cell cards are actively hunted down, as well as passively hindered with shielding in the areas which have network access to the antenna controllers.

It's a simple matter of protecting assets that would cost millions to replace if the controllers were given errant commands, or if the access to those controllers were to be hacked.
On 26/01/2011 09:44 p.m., Karl Auer wrote:
So let's get rid of the limitation in our minds. IPv6 provides *effectively* unlimited address space, even if it's only "for now". So let's USE it that way. Let's unlearn our limited thinking patterns. Let's go colonise infinity. And if we need to fix it in a few decades, so what? Nothing is forever.
You must be kiddin'... You're considering going through this mess again in a few decades? Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
On 03/02/2011 10:07 a.m., Rob Evans wrote:
You must be kiddin'... You're considering going through this mess again in a few decades?
I'm mildly surprised if you think we're going to be done with *this* mess in a few decades.
I fully agree. But planning/expecting to go through this mess *again* is insane. -- I hope the lesson has been learned, and we won't repeat history. Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
On Thu, Feb 3, 2011 at 3:17 PM, Fernando Gont <fernando@gont.com.ar> wrote:
On 03/02/2011 10:07 a.m., Rob Evans wrote:
You must be kiddin'... You're considering going through this mess again in a few decades?
I'm mildly surprised if you think we're going to be done with *this* mess in a few decades.
I fully agree. But planning/expecting to go through this mess *again* is insane. -- I hope the lesson has been learned, and we won't repeat history.
There is not yet a consensus understanding of what the problems are; that's a prerequisite to avoiding repeats. IPv4 was patched (well enough) to handle all the problems it encountered, until we hit address exhaustion. Some of the next couple of decades' problems may require another new protocol, hitting a non-address-exhaustion problem. That new problem could come out of various topology changes, inherent mobility, lots of other things. It could even come from address management (we won't likely exhaust 128 bits, but could hit configurations we can't route). Or from out of left field. -- -george william herbert george.herbert@gmail.com
On Thu, Feb 03, 2011 at 08:17:11PM -0300, Fernando Gont wrote:
I'm mildly surprised if you think we're going to be done with *this* mess in a few decades.
I fully agree. But planning/expecting to go through this mess *again* is insane. -- I hope the lesson has been learned, and we won't repeat history.
Given http://weblog.chrisgrundemann.com/index.php/2009/how-much-ipv6-is-there/ it is pretty clear the allocation algorithms have to change, or the resource is just as finite as the one we ran out of yesterday.
On 2/4/2011 5:03 AM, Eugen Leitl wrote:
Given http://weblog.chrisgrundemann.com/index.php/2009/how-much-ipv6-is-there/ it is pretty clear the allocation algorithms have to change, or the resource is just as finite as the one we ran out yesterday.
That's not what the author says. It says, IPv6 is only somewhere in the range of 16 million to 17 billion times larger than IPv4. Let's be realistic. A /32 (standard small ISP) is equiv to an IPv4 single IP. A /28 (medium ISP) is equiv to an IPv4 /28. A /24 (high medium, large ISP) is equiv to an IPv4 /24. A /16 (a huge ISP) is equiv to an IPv4 /16. Get the picture? So, I currently route a /16 worth of deaggregated IPv4 address space (sorry, allocation policy fault, not mine). There is NEVER a time that I will be allocated an IPv6 /16 from ARIN. Heck, the most I'll ever hope for is the current proposal's nibble boundary which might get me to a /24. I'll never talk to ARIN again after that. Jack
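A quick way to see the scale behind the analogy above (a minimal sketch: it assumes the currently delegated global unicast range 2000::/3 and simply counts prefixes):

def blocks_in_prefix(container_len: int, block_len: int) -> int:
    # Number of /block_len prefixes inside one /container_len prefix.
    return 2 ** (block_len - container_len)

ipv4_addresses = 2 ** 32
for isp_block in (32, 28, 24, 16):          # the "small ISP" .. "huge ISP" sizes above
    count = blocks_in_prefix(3, isp_block)  # how many fit in 2000::/3
    print(f"/{isp_block} allocations available in 2000::/3: {count:,}")
print(f"total IPv4 addresses, for comparison: {ipv4_addresses:,}")

Even with a /24 as the unit of ISP allocation, roughly two million of them fit in 2000::/3, which is the sense in which an IPv6 /32 or /24 plays the role that a single address or /24 played in IPv4.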
On Fri, Feb 04, 2011 at 08:28:53AM -0600, Jack Bates wrote:
On 2/4/2011 5:03 AM, Eugen Leitl wrote:
Given http://weblog.chrisgrundemann.com/index.php/2009/how-much-ipv6-is-there/ it is pretty clear the allocation algorithms have to change, or the resource is just as finite as the one we ran out yesterday.
That's not what the author says. It says, IPv6 is only somewherein the range of 16 million to 17 billion times larger than IPv4.
presuming you don't adhere to the guidelines that insist on the bottom 64 bits being used as a "MAC" address and the top 32 bits being used as an RIR identifier. in reality, IPv6 (as specified by many IETF RFCs and as implemented in lots of code bases) only has 32 usable bits... just like IPv4.
Let's be realistic. A /32 (standard small ISP) is equiv to an IPv4 single IP. A /28 (medium ISP) is equiv to an IPv4 /28. A /24 (high medium, large ISP) is equiv to an IPv4 /24. A /16 (a huge ISP) is equiv to an IPv4 /16. Get the picture?
sho'huff. the real question is, how will you manage your own 32bits of space? this is a change from the old v4 world, when the question was, how will you manage your (pre CIDR) 8bits (or 16bits, or 24bits) of space?
Jack
I suspect that many people will do stupid things in managing their bits - presuming that there is virtually infinite 'greenfield' and when they have "pissed in the pool" they can just move on to a new pool. the downside... renumbering is never easy - even with/especially with IPv6. --bill
On 2/4/2011 10:50 AM, bmanning@vacation.karoshi.com wrote:
I suspect that many people will do stupid things in managing their bits - presuming that there is virtually infinate 'greenfield' and when they have "pissed in the pool" they can just move on to a new pool. the downside... renumbering is never easy - even with/especially with IPv6.
The problem is, they'll be restricted by RIR guidelines. I know that existing and future IPv6 allocation proposals for ARIN (our immediate concern) are based more on end site counts, and a limitation of /48 per end site without very good justification. Jack
On Feb 4, 2011, at 8:50 AM, bmanning@vacation.karoshi.com wrote:
On Fri, Feb 04, 2011 at 08:28:53AM -0600, Jack Bates wrote:
On 2/4/2011 5:03 AM, Eugen Leitl wrote:
Given http://weblog.chrisgrundemann.com/index.php/2009/how-much-ipv6-is-there/ it is pretty clear the allocation algorithms have to change, or the resource is just as finite as the one we ran out yesterday.
That's not what the author says. It says, IPv6 is only somewherein the range of 16 million to 17 billion times larger than IPv4.
presuming you don't adhere to the guidelines that insist on the bottom 64 bits being used as a "MAC" address and the top 32 bits being used as an RIR identifier.
1. The top 12 bits identify the RIR, not the top 32.
2. The bits somewhere between 12 and 32 identify the LIR or ISP.
3. Those facts do not reduce the available number of network numbers.
Yes, EUI-64 could be argued to impinge on that, except that EUI-64 only addresses the lower 64 bits. The upper 64 bits provide about 17 billion times as many network numbers as there are host numbers in IPv4. That's without accounting for the fact that ~25% of the IPv4 addresses are unusable as unicast host addresses.
in reality, IPv6 (as specified by many IETF RFCs and as implemented in lots of codes bases) only has 32 usable bits... just like IPv4.
Um, in reality, no, that is NOT the case.
Let's be realistic. A /32 (standard small ISP) is equiv to an IPv4 single IP. A /28 (medium ISP) is equiv to an IPv4 /28. A /24 (high medium, large ISP) is equiv to an IPv4 /24. A /16 (a huge ISP) is equiv to an IPv4 /16. Get the picture?
sho'huff. the real question is, how will you manage your own 32bits of space? this is a change from the old v4 world, when the question was, how will you manage your (pre CIDR) 8bits (or 16bits, or 24bits) of space?
Among others. Owen
In message <4D4C0D25.70408@brightok.net>, Jack Bates writes:
On 2/4/2011 5:03 AM, Eugen Leitl wrote:
Given http://weblog.chrisgrundemann.com/index.php/2009/how-much-ipv6-is-there/ it is pretty clear the allocation algorithms have to change, or the resource is just as finite as the one we ran out yesterday.
That's not what the author says. It says, IPv6 is only somewherein the range of 16 million to 17 billion times larger than IPv4.
And the author gets it wrong.
Let's be realistic. A /32 (standard small ISP) is equiv to an IPv4 single IP.
No, a /48 is equivalent to a single IP. You lose a little bit with small ISPs as their minimum is a /32 and supports up to 64000 customers. The bigger ISPs don't get to waste address space. And if a small ISP is getting space from a big ISP it also needs to maintain good usage ratios.
A /28 (medium ISP) is equiv to an IPv4 /28. A /24 (high medium, large ISP) is equiv to an IPv4 /24. A /16 (a huge ISP) is equiv to an IPv4 /16. Get the picture?
So, I currently route a /16 worth of deaggregated IPv4 address space (sorry, allocation policy fault, not mine). There is NEVER a time that I will be allocated an IPv6 /16 from ARIN. Heck, the most I'll ever hope for is the current proposal's nibble boundary which might get me to a /24. I'll never talk to ARIN again after that.
Jack
-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On 2/4/2011 5:11 PM, Mark Andrews wrote:
No, a /48 is equivalent to a single IP.
You loose a little bit with small ISPs as their minimum is a /32 and supports up to 64000 customers. The bigger ISPs don't get to waste addresses space. And if a small ISP is getting space from a big ISP it also needs to maintain good usage ratios.
Read the rest of what I said again. In the layout I used, a /32 is a /32. a /28 is a /28. Yet when you look at what is being assigned in IPv6 and you look at what we assign in IPv4, it's pretty laughable. It took years for me to get to a /16 of IPv4; where a /16 of IPv4 is small change for many large providers. In IPv6, a /16 is well out of my league and much larger than many large providers will ever need.
A /28 (medium ISP) is equiv to an IPv4 /28. A /24 (high medium, large ISP) is equiv to an IPv4 /24. A /16 (a huge ISP) is equiv to an IPv4 /16. Get the picture?
So, I currently route a /16 worth of deaggregated IPv4 address space (sorry, allocation policy fault, not mine). There is NEVER a time that I will be allocated an IPv6 /16 from ARIN. Heck, the most I'll ever hope for is the current proposal's nibble boundary which might get me to a /24. I'll never talk to ARIN again after that.
Jack
In message <4D4C8AF8.1030703@brightok.net>, Jack Bates writes:
On 2/4/2011 5:11 PM, Mark Andrews wrote:
No, a /48 is equivalent to a single IP.
You loose a little bit with small ISPs as their minimum is a /32 and supports up to 64000 customers. The bigger ISPs don't get to waste addresses space. And if a small ISP is getting space from a big ISP it also needs to maintain good usage ratios.
Read the rest of what I said again. In the layout I used, a /32 is a /32. a /28 is a /28. Yet when you look at what is being assigned in IPv6 and you look at what we assign in IPv4, it's pretty laughable.
It took years for me to get to a /16 of IPv4; where a /16 of IPv4 is small change for many large providers. In IPv6, a /16 is well out of my league and much larger than many large providers will ever need.
A /16 of IPv4 is a /32 of IPv6 if you were only delivering 1 address per customer. If you were delivering /28's to customers, that /16 is equivalent to a /36. /32s get assigned to ISPs. Those ISPs assign /48s downstream. The only place where that doesn't happen is ISP to ISP assignments (resellers). /48s get assigned to everybody else. The whole internet has shifted a minimum of 16 bits to the right. In many cases it will be 32 bits to the right. If ISPs only give out /56 then the shift is 24 bits. I used to work for CSIRO. Their /16's, which they got back in the late '80s, will now be /48's. Mark
A /28 (medium ISP) is equiv to an IPv4 /28. A /24 (high medium, large ISP) is equiv to an IPv4 /24. A /16 (a huge ISP) is equiv to an IPv4 /16. Get the picture?
So, I currently route a /16 worth of deaggregated IPv4 address space (sorry, allocation policy fault, not mine). There is NEVER a time that I will be allocated an IPv6 /16 from ARIN. Heck, the most I'll ever hope for is the current proposal's nibble boundary which might get me to a /24. I'll never talk to ARIN again after that.
Jack
-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
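A minimal sketch of the "shifted 16 bits to the right" arithmetic above: an ISP handing each customer site a /48 out of a /32 serves exactly as many sites as an ISP handing each customer a single IPv4 address out of a /16, and /56 assignments make the shift 24 bits.

def sites(alloc_len: int, per_site_len: int) -> int:
    # Number of per-site assignments that fit in one allocation.
    return 2 ** (per_site_len - alloc_len)

print(sites(32, 48))   # /48s in an IPv6 /32        -> 65536
print(sites(16, 32))   # addresses in an IPv4 /16   -> 65536
print(sites(32, 56))   # /56s in an IPv6 /32        -> 16777216 (a 24-bit shift)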
On 2/4/2011 6:45 PM, Mark Andrews wrote:
I used to work for CSIRO. Their /16's which were got back in the late 80's will now be /48's.
That's why I didn't try doing any adjustments of X is the new /32. The whole paradigm changes. Many ISPs devote large amounts of space to single corporate network sites. Those sites will now have a single /48. On the other hand, we currently give /32 to residential customers. They also are getting a /48. Which is why the only way to consider address usage from an ISP and RIR perspective is by how it is handed to a standard ISP of a given size. Originally, ARIN was being overly restrictive and it was "/32 for every ISP". They have loosened up, and will continue to do so (including ISP to ISP) as future proposals come to fruition. So from an ISP perspective, you have to consider your total IPv6 allocation size (within the first 32 bits of IPv6) in comparison to your total IPv4 allocations summed. From what I can tell, on average, all ISPs are shifting between 8 and 16 bits to the right from their total IPv4 size depending on their primary customer type (residential ISPs shift less than ISPs that primarily only service corporations). Jack
In message <4D4CA1B1.5060002@brightok.net>, Jack Bates writes:
On 2/4/2011 6:45 PM, Mark Andrews wrote:
I used to work for CSIRO. Their /16's which were got back in the late 80's will now be /48's.
That's why I didn't try doing any adjustments of X is the new /32. The whole paradigm changes.
So why the ~!#! are you insisting on comparing IPv4 allocations with IPv6 allocations.
Many ISPs devote large amounts of space to single corporate network sites. Those sites will now have a single /48. On the other hand, we currently give /32 to residential customers. They also are getting a /48.
Which is why the only way to consider address usage from an ISP and RIR perspective is by how it is handed to a standard ISP of a given size.
There are two sizes. Those that fit into a /32 and those that don't. The latter ones have to justify their allocations.
Originally, ARIN was being overly restrictive and it was "/32 for every ISP". They have loosened up, and will continue to do so (including ISP to ISP) as future proposals come to fruition. So from an ISP perspective, you have to consider your total IPv6 allocation size (within the first 32 bits of IPv6) in comparison to your total IPv4 allocations summed.
No. You need to compare it to the number of customer sites. If you have 1 customer with wires going to two locations thats two /48's.
From what I can tell, on average, all ISPs are shifting between 8 and 16 bits to the right from their total IPv4 size depending on their primary customer type (residential ISPs shift less than ISPs that primarily only service corporations).
Residential ISPs shift 16 bits (48-32=16). You shift less if you have fewer than 64000 customer sites and don't get address space from a larger ISP. Commercial ISPs shift more as what were multiple addresses at one site become a single /48. Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On 2/5/2011 6:47 AM, Mark Andrews wrote:
So why the ~!#! are you insisting on comparing IPv4 allocations with IPv6 alocations.
Because that is where the comparison must be made, at the RIR allocation size/rate level.
There are two sizes. Those that fit into a /32 and those that don't. The latter ones have to justify their allocations.
Yeah, tell that to the fee schedules.
No. You need to compare it to the number of customer sites. If you have 1 customer with wires going to two locations thats two /48's.
That's definitely the wrong way to look at it. Sure that's related to justification to an RIR to get an allocation, but ISPs will end up with much more flexible address space.
Residential ISPs shift 16 bits (48-32=16). You shift less if you have less than 64000 customers sites and don't get address space from a larger ISP. Commercial ISPs shift more as what was multiple address at one sites becomes 1 /48.
64,000 customer sites isn't required to receive more than a /32 (unless a single router makes up your entire network). Well, I currently have a /30, which is a 14 bit shift right from my /16. (30-16=14). In the near future I expect to be somewhere between a /24 and a /28, which is an 8 to 12 bit shift right from my IPv4 /16 allocation. Still, that is a considerable number of bits we'll have left when the dust settles and the RIR allocation rate drastically slows. Jack
In message <4D4D5FFC.6020905@brightok.net>, Jack Bates writes:
On 2/5/2011 6:47 AM, Mark Andrews wrote:
So why the ~!#! are you insisting on comparing IPv4 allocations with IPv6 alocations.
Because that is where the comparison must be made, at the RIR allocation size/rate level.
There are two sizes. Those that fit into a /32 and those that don't. The latter ones have to justify their allocations.
Yeah, tell that to the fee schedules.
No. You need to compare it to the number of customer sites. If you have 1 customer with wires going to two locations thats two /48's.
That's definitely the wrong way to look at it. Sure that's related to justification to an RIR to get an allocation, but ISPs will end up with much more flexible address space.
Residential ISPs shift 16 bits (48-32=16). You shift less if you have less than 64000 customers sites and don't get address space from a larger ISP. Commercial ISPs shift more as what was multiple address at one sites becomes 1 /48.
64,000 customer sites isn't required to receive more than a /32 (unless a single router makes up your entire network).
No, but you still need to have reserved growth space sensibly. A /32 for a town of 3000 is overkill. Let's assume you are serving home customers, so you were at 1 address per customer. You still size your pops based on expected customers and having some growth room without having to renumber. n customers requires an f(n)-sized block of space. The only difference with IPv6 is f(n) << 80 bits to support /48's instead of single addresses. Expected growth rates in customers don't change because you are suddenly dealing with IPv6.
Well, I currently have a /30, which is a 14 bit shift right from my /16. (30-16=14).
And did you change the amount of growth space you allowed for each pop? Were you already constrained in your IPv4 growth space and just restored your desired growth margins?
In the near future I expect to be somewhere between a /24 and a /28, which is an 8 to 12 bit shift right from my IPv4 /16 allocation.
Only if you can serve all those customers from that /16. You are then not comparing apples to apples. You are comparing a net with no growth space (IPv4) to one with growth space (IPv6).
Still, that is a considerable number of bits we'll have left when the dust settles and the RIR allocation rate drastically slows.
Jack -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On 2/5/2011 7:01 PM, Mark Andrews wrote:
And did you change the amount of growth space you allowed for each pop? Were you already constrained in your IPv4 growth space and just restored your desired growth margins?
Growth rate has nothing to do with it. ARIN doesn't allow for growth in initial assignments. No predictions, no HD-Ratio, and definitely no nibble alignments. Current policy proposal hopes to fix a lot of that.
In the near future I expect to be somewhere between a /24 and a /28, which is an 8 to 12 bit shift right from my IPv4 /16 allocation. Only if you can serve all those customers from that /16. You are then not comparing apples to apples. You are comparing a net with no growth space (IPv4) to one with growth space (IPv6).
Not sure I get ya here. I am comparing apples to apples. ARIN gives me a /16 of space. There are the same number of /16's in IPv4 as IPv6. However, in IPv6, they will allocate a /24 at most to me, and I will never exceed that. This shift of 8+ bits is the gains we get shifting from IPv4 to IPv6. Jack
In message <4D4DF75E.1040109@brightok.net>, Jack Bates writes:
On 2/5/2011 7:01 PM, Mark Andrews wrote:
And did you change the amount of growth space you allowed for each pop? Were you already constrained in your IPv4 growth space and just restored your desired growth margins?
Growth rate has nothing to do with it. ARIN doesn't allow for growth in initial assignments. No predictions, no HD-Ratio, and definitely no nibble alignments.
Current policy proposal hopes to fix a lot of that.
In the near future I expect to be somewhere between a /24 and a /28, which is an 8 to 12 bit shift right from my IPv4 /16 allocation. Only if you can serve all those customers from that /16. You are then not comparing apples to apples. You are comparing a net with no growth space (IPv4) to one with growth space (IPv6).
Not sure I get ya here. I am comparing apples to apples. ARIN gives me a /16 of space. There are the same number of /16's in IPv4 as IPv6. However, in IPv6, they will allocate a /24 at most to me, and I will never exceed that. This shift of 8+ bits is the gains we get shifting from IPv4 to IPv6.
An IPv4 /16 supports 64000 potential customers. An IPv6 /32 supports 64000 potential customers. Either you have changed the customer estimates or changed the growth space allowances or were using NAT or .... You don't suddenly need 256 times the amount of space overnight, all other things being equal. About the only thing I can think of is you need to advertise 256 routes and you are asking for extra blocks to get around poorly thought out filtering policies. A routing slot is a routing slot. It really doesn't matter if that slot has a /32 or a /40 or a /48 in it. They are equally expensive. If ISPs were being honest and matching IPv4 to IPv6 filtering, the filters would be set at /40, not /32. By setting the filters to /32 you force the small ISP to ask for up to 256 times as much address space as they need with absolutely no benefits to anyone just to get a routing slot that won't be filtered. What's really needed is to separate the routing slot market from the address allocation market. Mark
Jack -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On 2/5/2011 8:40 PM, Mark Andrews wrote:
A IPv4 /16 supports 64000 potential customers. A IPv6 /32 supports 64000 potential customers. Either you have changed the customer estimates or changed the growth space allowances or were using NAT or ....
You don't suddenly need 256 times the amount of space overnight all other things being equal. About the only thing I can think of is you need to advertise 256 routes and you are asking for extra blocks to get around poorly thought out filtering policies.
What filtering policies? My allocation was based on customers per terminating router, 1 route per terminating router. A /32 was nowhere near enough. The reason a /16 works today is because I have a routing table that looks like swiss cheese and a 95%+ utilization rate. 9 /40 (equiv of 9 /24 IPv4 DHCP pools for residential DSL) networks don't fall on a bit boundary. Nibble would make things even easier, but to say I have to run multiple routes to a pop and squeeze things in as tight as possible is insane. Justifications DO allow for some amount of aggregation in numbering plans.
If ISPs were being honest and matching IPv4 to IPv6 filtering the filters would be set a /40 not /32. By setting the filters to /32 you force the small ISP to ask for up to 256 times as much address space as they need with absolutely no benefits to anyone just to get a routing slot that won't be filtered.
Actually, many router policies, as discussed previously on the list, support /48. Routing policies don't force the /32, and a current proposal to ARIN even supports a small ISP getting a /36, hopefully at a lower cost.
What's really needed is seperate the routing slot market from the address allocation market.
I agree that inter-AS routing needs to change, though that still has nothing to do with address allocation itself. Sizes of allocations were chosen to allow for growth. The ISPs don't get near the wiggle room that corporations and end users get in address assignment currently. When analyzing exhaustion rate of IPv6, like IPv4, you have to view it at the RIR allocation level. In this case, across the board, we will see a minimum of an 8 bit shift in allocations, and often 12-16 bits (what's to the right of the allocation bits doesn't matter when we consider exhaustion rates, so long as what's to the right is appropriately utilized and justified by community standards before another request is handled by the RIR). Jack
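To make the nibble-boundary point from a few paragraphs up concrete, here is a small sketch (one plausible reading of nibble-aligned planning, not ARIN policy text): a pop plan rounds its pop-index bits up to a whole number of bits, or to a multiple of four if nibble alignment is required.

import math

def covering_prefix(pop_count: int, per_pop_len: int, align: int = 1) -> int:
    # Shortest prefix that holds pop_count blocks of /per_pop_len, with the
    # pop-index bits rounded up to a multiple of `align`.
    index_bits = math.ceil(math.log2(pop_count)) if pop_count > 1 else 0
    index_bits = math.ceil(index_bits / align) * align
    return per_pop_len - index_bits

print(covering_prefix(9, 40))            # 9 pops of /40, bit-aligned:    /36
print(covering_prefix(9, 40, align=4))   # 9 pops of /40, nibble-aligned: /36
print(covering_prefix(20, 40, align=4))  # 20 pops: /35 bit-aligned, /32 nibble-aligned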
In message <4D4E1C5D.20407@brightok.net>, Jack Bates writes:
On 2/5/2011 8:40 PM, Mark Andrews wrote:
A IPv4 /16 supports 64000 potential customers. A IPv6 /32 supports 64000 potential customers. Either you have changed the customer estimates or changed the growth space allowances or were using NAT or ....
You don't suddenly need 256 times the amount of space overnight all other things being equal. About the only thing I can think of is you need to advertise 256 routes and you are asking for extra blocks to get around poorly thought out filtering policies.
What filtering policies? My allocation was based on customers per terminating router, 1 route per terminating router. A /32 was nowhere near enough. The reason a /16 works today is because I have a routing table that looks like swiss cheese and a 95%+ utilization rate. 9 /40 (equiv of 9 /24 IPv4 DHCP pools for residential DSL) networks don't fall on a bit boundary. Nibble would make things even easier, but to say I have to run multiple routes to a pop and squeeze things in as tight as possible is insane. Justifications DO allow for some amount of aggregation in numbering plans.
Rationalising to power of 2 allocations shouldn't result in requiring 256 times the space you were claiming with the 8 bits of shift on average. A couple of bits will allow that.
If ISPs were being honest and matching IPv4 to IPv6 filtering the filters would be set a /40 not /32. By setting the filters to /32 you force the small ISP to ask for up to 256 times as much address space as they need with absolutely no benefits to anyone just to get a routing slot that won't be filtered.
Actually, many router policies, as discussed previously on the list, support /48. Routing policies don't force the /32, and a current proposal to ARIN even supports a small ISP getting a /36, hopefully at a lower cost.
What's really needed is seperate the routing slot market from the address allocation market.
I agree that inter-AS routing needs to change, though that still has nothing to do with address allocation itself. Sizes of allocations were chosen to allow for growth. The ISPs don't get near the wiggle room that corporations and end users get in address assignment currently.
When analyzing exhaustion rate of IPv6, like IPv4, you have to view it at the RIR allocation level. In this case, across the board, we will see a minimum of an 8 bit shift in allocations, and often 12-16 bits (what's to the right of the allocation bits doesn't matter when we consider exhaustion rates, so long as what's to the right is appropriately utilized and justified by community standards before another request is handled by the RIR).
You need to look very closely at any ISP that only shifts 8 bits going from IPv4 to IPv6; something dodgy is probably going on. This is not to say it is deliberately dodgy. Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On 2/5/2011 11:57 PM, Mark Andrews wrote:
Rationalising to power of 2 allocations shouldn't result in requiring 256 times the space you were claiming with the 8 bits of shift on average. A couple of bits will allow that.
I didn't claim 8 bit average (if I accidentally did, my apologies). I claimed a minimum of 8 bits. Somewhere between 12 and 16 is more likely. However, with new ARIN proposals, we will see shorter shifts (yet still over 8 bit shifts) as it does nibble allocations for everything (pop assignments nibble aligned, ISP allocations nibble aligned, ISP to ISP reallocation policies). It treats utilization as a 75% bar with nibble alignments to allow for proper growth at the ISP level. So for me, my /30 will at least expand out to a /28, though I will have to reanalyze the pop allocations with the new rules, as it's possible I may bump to a /24 (if I end up expanding to a /27 of actual current usage).
You need to look very closely at any ISP that only shifts 8 bits going from IPv4 to IPv6, something dodgy is probably going on. This is not to say it is deliberately dodgy.
Currently, I agree. It should be between 12 and 16 normally. However, new policy proposals are designed in such a way that the bit shift may only be 8. However, this also depends on the ISP. As ISPs do look towards dropping v4 in priority, they will also look at redesigning some of their pop layouts. This is actually a case for me. Due to growth and utilization issues on IPv4, I've concentrated pops into supernodes to better utilize the v4 I have (95% utilization of pools which cover much larger customer sets, versus a bunch of smaller utilized pools which have less utilization rates as the pops don't grow at the same rate). However, I have areas this is reaching a critical point, and the IPv6 model is dividing up into smaller pop nodes. Since I don't have address space concerns for IPv6, structuring the network and customer termination into a better layout is more appropriate. What's more, in most cases, I can accomplish v4 supernodes and v6 separation at the same time; and I'll see the benefits as more customers shift to actual v6 connectivity. Jack
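A rough sketch of the 75%-utilization, nibble-aligned growth rule described above (my reading of the idea, not the proposal's actual text): once the current block is more than 75% used, the allocation rounds up to the next nibble-aligned prefix length.

def next_allocation(current_len: int, used_fraction: float, bar: float = 0.75) -> int:
    if used_fraction <= bar:
        return current_len              # still under the bar, keep the block
    candidate = current_len - 1         # need more space: shorten the prefix...
    while candidate % 4 != 0:           # ...until it lands on a nibble boundary
        candidate -= 1
    return candidate

print(next_allocation(30, 0.60))  # /30 stays a /30
print(next_allocation(30, 0.80))  # /30 expands to a /28
print(next_allocation(27, 0.80))  # /27 of actual usage expands to a /24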
In message <4D4EB93E.6000409@brightok.net>, Jack Bates writes:
On 2/5/2011 11:57 PM, Mark Andrews wrote:
Rationalising to power of 2 allocations shouldn't result in requiring 256 times the space you were claiming with the 8 bits of shift on average. A couple of bits will allow that.
I didn't claim 8 bit average (if I accidentally did, my apologies). I claimed a minimum of 8 bits. Somewhere between 12 and 16 is more likely. However, with new ARIN proposals, we will see shorter shifts (yet still over 8 bit shifts) as it does nibble allocations for everything (pop assignments nibble aligned, ISP allocations nibble aligned, ISP to ISP reallocation policies). It treats utilization as a 75% bar with nibble alignments to allow for proper growth at the ISP level.
Why would a pop need to be nibble aligned? Customers should be nibble aligned as it requires a delegation in the DNS. A pop doesn't need a delegation in the DNS. Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
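Mark's DNS point can be made concrete: ip6.arpa delegation happens one hexadecimal digit (one nibble) per label, so only nibble-aligned prefixes map onto a clean reverse-zone cut. A small sketch using Python's standard ipaddress module, with the RFC 3849 documentation prefix standing in for a customer /48:

import ipaddress

def reverse_zone(prefix: str) -> str:
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen % 4 != 0:
        raise ValueError("not nibble aligned: no clean ip6.arpa zone cut")
    labels = net.network_address.reverse_pointer.split(".")  # 32 nibbles + ip6 + arpa
    keep = net.prefixlen // 4                                 # nibbles covered by the prefix
    return ".".join(labels[32 - keep:])

print(reverse_zone("2001:db8:1234::/48"))
# -> 4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa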
On Feb 5, 2011, at 9:40 PM, Mark Andrews wrote:
What's really needed is seperate the routing slot market from the address allocation market.
Bingo! In fact, having an efficient market for obtaining routing of a given prefix, combined with IPv6's vast identifier space, could actually satisfy the primary goals that we hold for a long-term scalable address architecture, and enable doing it in a highly distributed, automatable fashion:

Aggregation would be encouraged, since use of non-aggregatable address space would entail additional costs. These costs might be seen as minimal for some organizations that desire addressing autonomy, but others might decide that treating their address space as portable and routable results in higher cost than is desired. Decisions about changing prefixes with ISPs can be made based on a rational tradeoff of costs, rather than in a thicket of ISP and registry policies.

Conservation would actually be greatly improved, since address space would only be sought after because of the need for additional unique identifiers, rather than obtaining an address block of a given size to warrant implied routability. In light of IPv6's vast address space, it actually would be possible to provide minimally-sized but assured unique prefixes automatically via nearly any mechanism (i.e. let your local user or trade association be a registry if they want).

With a significantly reduced policy framework, Registration could be fully automated, with issuance being as simple as assuring the right level of verification of requester identity. (You might even get rid of this, if you can assure that ISPs obtain clear identity of clients before serving them, but that would preclude any form of reputation systems based on IP address prefix such as we have in use today...)

Just think: the savings in storage costs alone (from the reduction in address policy-related email on all our mailing lists) could probably fund the system. :-)

Oh well, one project at a time... /John
On 2/6/11 8:00 AM, John Curran wrote:
On Feb 5, 2011, at 9:40 PM, Mark Andrews wrote:
What's really needed is seperate the routing slot market from the address allocation market.
Bingo! In fact, having an efficient market for obtaining routing of a given prefix, combined with IPv6 vast identifier space, could actually satisfy the primary goals that we hold for a long-term scalable address architecture, and enable doing it in a highly distributed, automatable fashion:
So assuming this operates on a pollution model, the victims of routing table bloat are compensated by the routing table polluters for the use of the slots which they have to carry. So I take the marginal cost of the slots that I need, subtract the royalties I receive from the other participants, and if I'm close to the mean number of slots per participant then it nets out to zero. Routing table growth continues but with some illusion of fairness and the cost of maintaining an elaborate system which no-one needs. Yay?
Aggregation would be encouraged, since use of non-aggregatable address space would entail addition costs. These costs might be seen as minimal for some organizations that desire addressing autonomy, but others might decide treating their address space portable and routable results in higher cost than is desired. Decisions about changing prefixes with ISPs can be made based on a rational tradeoff of costs, rather than in a thicket of ISP and registry policies.
Conservation would actually be greatly improved, since address space would only be sought after because of the need for additional unique identifiers, rather than obtaining an address block of a given size to warrant implied routability. In light of IPv6's vast address space, it actually would be possible to provide minimally-sized but assured unique prefixes automatically via nearly any mechanism (i.e. let your local user or trade association be a registry if they want)
With a significantly reduced policy framework, Registration could be fully automated, with issuance being as simple as assurance the right level of verification of requester identity (You might even get rid of this, if you can assure that ISPs obtain clear identity of clients before serving them but that would preclude any form of reputation systems based on IP address prefix such as we have in use today...)
Just think: the savings in storage costs alone (from the reduction in address policy-related email on all our mailing lists) could probably fund the system. :-)
Oh well, one project at a time... /John
On Feb 6, 2011, at 12:15 PM, Joel Jaeggli wrote:
So assuming this operates on a pollution model the victims of routing table bloat are compensated by the routing table pollutors for the use of the slots which they have to carry. so I take the marginal cost of the slots that I need subtract the royalities I recieve from the other participants and if I'm close to the mean number of slots per participant then it nets out to zero.
Routing table growth continues but with some illusion of fairness and the cost of maintaining an elaborate system which no-one needs.
One hopes that the costs of consuming routing table slots create backpressure to discourage needless use, and that the royalties received offset the costs of carrying any additional routing table slots. Note that our present system lacks both consistent backpressure on consumption of routing table slots and compensation for carrying additional routes. /John p.s. While I do believe there would be a net benefit, it also should be noted that there is no apparent way to transition to such a model in any case, i.e., it could have been done that way from the beginning, but a large scale economic reengineering effort at this point might be impossible.
On 2/6/11 9:32 AM, John Curran wrote:
One hopes that the costs of consuming routing table slots creates backpressure to discourage needless use, and that the royalities receive offset the costs of carrying any additional routing table slots.
Note that our present system lacks both consistent backpressure on consumption of routing table slots and compensation for carrying additional routes.
The costs of carrying routes are unevenly distributed. When I have to carry 2 million routes in my FIB on a few hundred 120Gb/s line cards, it's a bit different than someone with a software router who just has to make sure they have 4GB of RAM... That has very attractive properties along some dimensions, e.g. the cost at the margin of connecting a new participant to the internet is rather low.
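A back-of-the-envelope sketch of the uneven-cost point above; the per-entry sizes and the card count are illustrative assumptions, not measurements of any real platform:

routes = 2_000_000

# Software router: one copy of the FIB in ordinary DRAM.
bytes_per_entry_sw = 64                       # assumed
print(f"software FIB: {routes * bytes_per_entry_sw / 2**30:.1f} GiB of DRAM")

# Hardware router: every line card carries its own copy in forwarding memory.
line_cards = 200                              # "a few hundred" cards, assumed
bytes_per_entry_hw = 32                       # assumed (TCAM plus companion SRAM)
print(f"hardware FIBs: {routes * bytes_per_entry_hw * line_cards / 2**30:.1f} GiB total")

The asymmetry is less about the raw byte counts than about where they live: commodity DRAM in the software case, specialized per-line-card forwarding memory replicated across every card in the hardware case.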
On Sun, Feb 6, 2011 at 11:15 AM, Joel Jaeggli <joelja@bogus.com> wrote:
So assuming this operates on a pollution model the victims of routing table bloat are compensated by the routing table pollutors for the use of the slots which they have to carry. so I take the marginal cost of
In this case the "victims" are the other ASes, and the "polluters" are the ASes that announce lots of blocks. However, the pollution is also something the "victims" need, so it's not really pollution at all; unless an excessive number of slots are used, it's more "meat" than trash, and the pollution model doesn't exactly make sense. The basic announced routes for connectivity are not like pollution. They are more like fertilizer... nutrients that are absolutely essential when utilized in appropriate quantities, but harmful in excessive quantity. And if too many use them in excessive quantity, then polluted runoff is released as a side-effect.

There is an assumption that waste is so rampant that a per-slot cost would lead to efficiency, and no loss of connectivity or stability, but there appears to be a lack of data validating the suggestion. Private "routing slot markets" could be a huge can of worms... and we thought peering spats were bad. Some $BIG_DSL_PROVIDER is going to refuse to pay some $BIG_HOSTING_PROVIDER (or anyone else) for their routing slots; they will know that their size makes it too unpalatable for anyone else on the market to _not give them_ the slots, even if they are one of the larger polluters with numerous announcements. Other providers simply can't afford to be the provider whose customers can't reach $HIGHLY_POPULAR_WEB_SITE. If $BIG_HOSTING_PROVIDER's routers do not have $BIG_DSL_PROVIDER's routes, $BIG_HOSTING_PROVIDER's customers will scream, and jump ship for $other_hosting_provider that has $big_dsl_provider routability. There will be other $BIG_COMPANYs that likewise have a superior negotiating position, and no one else will be in a position to discard their routes when they refuse to pay, or negotiate a price that reflects their superior position (rather than one reflecting the cost of their excessive use of routing slots).

So first of all... if there's buying and selling of routing slots, a "market", it cannot be a voluntary market, or it will simply lead to a chaotic situation where numerous big providers get free routes, and everyone else has to pay the big providers extortionate/disproportionately high fees because they _have to have those slots_ due to so many of their hosting customers requiring $big_provider connectivity.

To set up a market in the first place... you need to know:
How is the number of slots that will be on the market determined?
Who gets to initialize the market, i.e. create and sell paid 'routing slots', and what will give them the power to enforce that all users of routing slots buy from them?
Are these one-time purchases... or do 'routing slot' purchases incur maintenance fees?
How would the 'ownership' of a slot get verified when a route is announced?
How is it decided how much cost 'repayment' each AS gets?
Who is going to make sure each AS fully populates their table with each routing slot they have been paid to fill, and does not populate any slots that were not purchased by the originator on the open market?

The idea of 'ownership' of other people's things (slots on other people's routers) generally requires the AS owning those things to sell them. That would suggest each announcer of routes having to go to each AS and pay each AS a fee for slots on their routers --- not only would it be expensive, but the communication overhead required would be massive. So clearly any market would need to be centralized; transactions would need to happen through one entity. One buy/sell transaction for a routing slot on _all_ participating ASes.
Seems like a tall order
the slots that I need subtract the royalities I recieve from the other participants and if I'm close to the mean number of slots per participant then it nets out to zero.
-- -JH
On Sun, Feb 6, 2011 at 12:15 PM, Joel Jaeggli <joelja@bogus.com> wrote:
On 2/6/11 8:00 AM, John Curran wrote:
On Feb 5, 2011, at 9:40 PM, Mark Andrews wrote:
What's really needed is seperate the routing slot market from the address allocation market.
Bingo! In fact, having an efficient market for obtaining routing of a given prefix, combined with IPv6 vast identifier space, could actually satisfy the primary goals that we hold for a long-term scalable address architecture, and enable doing it in a highly distributed, automatable fashion:
So assuming this operates on a pollution model the victims of routing table bloat are compensated by the routing table pollutors for the use of the slots which they have to carry. so I take the marginal cost of the slots that I need subtract the royalities I recieve from the other participants and if I'm close to the mean number of slots per participant then it nets out to zero.
Hi Joel, It couldn't and wouldn't work that way. Here's how it could work:

Part 1: The Promise. If paid to carry a particular route (consisting of one specific network and netmask, no others) then, barring a belief that a particular received route announcement is fraudulent, a given AS:
A. Will announce that route to each neighbor AS which pays for Internet access if received from any neighbor AS, unless the specific neighbor AS has asked to receive a restricted set of routes.
B. Will announce that route to every neighbor AS if received from any neighbor AS who pays for Internet access, unless the specific neighbor AS has asked to receive a restricted set of routes.
C. Will not ask any neighbor AS to filter the given route or any superset from the list of routes offered on that connection.
D. Will assure sufficient internal carriage of the route within the AS's network to reasonably meet responsibilities A, B and C, and extend the route or a sufficient covering route to every non-BGP customer of the network.

Part 2: The Payment. Each AS who wishes to do so will offer to execute The Promise for any set of networks/netmasks requested by the legitimate origin AS, for a reasonable and non-discriminatory (RAND) price selected by the AS based on reasonable estimates of the routing slot costs. The preceding notwithstanding, an AS may determine that a particular route or AS is not eligible for carriage at any price due to violations of that AS's terms of service. If such is determined, the AS will not accept payment for carrying the route and will refund any payments made for service during the period in which carriage is not made. Needless to say, the origin AS with two routers can offer a RAND fee to carry routes, but not many will take them up on it. They'll have to carry the route or institute a default route if they want to remain fully connected. The folks who will get paid are the ones who collectively are the backbone, where you, as the origin, can't afford for your route not to be carried. These are, of course, the same folks who are presently the victims of routing pollution, who pay the lion's share of the $2B/yr routing slot costs yet have little choice but to carry the routes.

Part 3: The Arbiter. One or several route payment centers collect the RAND offerings and make them available to origin ASes in bulk sets. You write one check each month to the Arbiter and he collects your routes with his other customers and makes the appropriate Payments for the Promises.

Part 4: The Covering Routes. ARIN and the other RIRs auction the rights to offer a covering route for particular /8's. The winner announces the whole /8 but gets to break the RAND rule in the Payment for covered routes. An origin AS can still choose to have everybody carry his routes. But he can also choose to have just the paid paths to the AS with the Covering Route carried, or some fraction of ASes that includes those paid paths. Or he could buy transit tunnels from the Covering Route AS anchored to PA addresses from his individual ISPs. Or he could do a mix of the two. Regardless, the origin AS ends up with full reachability without needing his explicit route to be carried the breadth of the Internet. Note that I use the term "auction" very loosely. The winner could be the qualified AS willing to pay a fixed nominal fee and promise the lowest carriage fee / 95th percentile tunnel transit fee. At any rate, you get a healthy potential for route aggregation through payment selection by the origin AS.
If more precise routing is worth the money, they'll pay the slot cost. If not, they'll rely on the covers. Or if it works out that a router costs $5M because it has to carry 10M routes, who cares as long as you're being paid what it costs?
Yay?
Yay! Regards, Bill Herrin -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
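For a sense of the sums involved in the payment model above, a rough sketch using the $2B/yr figure from the message and the approximate 2011 route and ASN counts cited elsewhere in the thread; the even split across carrying ASes is a simplifying assumption, not part of the proposal:

annual_industry_cost = 2_000_000_000   # USD/yr, figure quoted above
dfz_routes = 300_000                   # approximate 2011 IPv4 table size
carrying_ases = 30_000                 # approximate number of active ASNs

per_route = annual_industry_cost / dfz_routes
per_route_per_as = per_route / carrying_ases
print(f"~${per_route:,.0f}/yr industry-wide per announced route")
print(f"~${per_route_per_as:,.2f}/yr per route per carrying AS (even split)")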
What's really needed is seperate the routing slot market from the address allocation market.
Bingo! In fact, having an efficient market for obtaining routing of a given prefix, combined with IPv6 vast identifier space, could actually satisfy the primary goals that we hold for a long-term scalable address architecture, and enable doing it in a highly distributed, automatable fashion
This is not unlike the oft-made comment that if you could just charge a fraction of a cent for every mail message, there would be no spam problem. They're both bad ideas that just won't go away. Here are some thought experiments:

1) You get a note from the owner of jidaw.com, a large ISP in Nigeria, telling you that they have two defaultless routers so they'd like a share of the route fees. Due to the well known fraud problem in Nigeria, please pay them into the company's account in the Channel Islands. What do you do? (Helpful hint: there are plenty of legitimate reasons for non-residents to have accounts in the Channel Islands. I have a few.)

2) Google says here's our routes, we won't be paying anything. What do you do?

2a) If you insist no pay, no route, what do you tell your users when they call and complain?

2b) If you make a special case for Google, what do you do when Yahoo, AOL, and Baidu do the same thing?

I can imagine some technical backpressure, particularly against networks that don't aggregate their routes, but money? Forget about it, unless perhaps you want to mix them into the peering/transit negotiations. Regards, John Levine, johnl@iecc.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. http://jl.ly
1) You get a note from the owner of jidaw.com, a large ISP in Nigeria, telling you that they have two defaultless routers so they'd like a share of the route fees. Due to the well known fraud problem in Nigeria, please pay them into the company's account in the Channel Islands. What do you do? (Helpful hint: there are plenty of legitimate reasons for non-residents to have accounts in the Channel Islands. I have a few.)
If I peer with them or sell them transit or buy transit from them then we have a reason to talk, otherwise, not so much.
2) Google says here's our routes, we won't be paying anything. What do you do?
There's a cost to taking the routes from Google, and a benefit to having those routes. As long as the benefit exceeds the cost, no worries.
2a) If you insist no pay, no route, what do you tell your users when they call and complain?
2b) If you make a special case for Google, what do you do when Yahoo, AOL, and Baidu do the same thing?
Back to the cost/benefit balance above.
I can imagine some technical backpressure, particularly against networks that don't aggregate their routes, but money? Forget about it, unless perhaps you want to mix them into the peering/transit negotiations.
I think the only way it works, presuming anyone wanted to do it, is as a property of transit and peering. If I buy transit from you and want to send you a mess of routes, you might charge me more for my transit on account of that. Perhaps I get one free prefix announcement per x amount of bandwidth I am buying? If we are peering then prefix balance might join traffic balance as a way to think about whether the arrangement is good for both peers. All of these arrangements occur between directly peering or transit providing neighbors. If I buy transit from you, I expect you to pay any costs needed to get my routes out to the world (and probably to charge me accordingly).
On 2/6/2011 3:16 PM, John Levine wrote:
I can imagine some technical backpressure, particularly against networks that don't aggregate their routes, but money? Forget about it, unless perhaps you want to mix them into the peering/transit negotiations.
On the other hand, the ESPN3 extortion worked quite well. Jack
Hi everyone, Responding to multiple messages here: On 2/6/11 10:16 PM, John Levine wrote:
What's really needed is seperate the routing slot market from the address allocation market. Bingo! In fact, having an efficient market for obtaining routing of a given prefix, combined with IPv6 vast identifier space, could actually satisfy the primary goals that we hold for a long-term scalable address architecture, and enable doing it in a highly distributed, automatable fashion
Indeed as John Curran may recall, there was a presentation at either the BGPD/CIDRD or ALE working group at an IETF meeting by a gentleman from Bell Labs on the idea of a routing slot market, back about 15 years ago. I thought it was a great presentation, but a number of factors came into play, and chief amongst them was that it would require substantial "cooperation" amongst service providers to refuse to carry customer routes, and that just wasn't happening. Have times changed since then?
This is not unlike the oft made comment that if you could just charge a fraction of a cent for every mail message, there would be no spam problem. They're both bad ideas that just won't go away.
Here's some thought experiments:
1) You get a note from the owner of jidaw.com, a large ISP in Nigeria, telling you that they have two defaultless routers so they'd like a share of the route fees. Due to the well known fraud problem in Nigeria, please pay them into the company's account in the Channel Islands. What do you do? (Helpful hint: there are plenty of legitimate reasons for non-residents to have accounts in the Channel Islands. I have a few.)
2) Google says here's our routes, we won't be paying anything. What do you do?
2a) If you insist no pay, no route, what do you tell your users when they call and complain?
2b) If you make a special case for Google, what do you do when Yahoo, AOL, and Baidu do the same thing?
I can imagine some technical backpressure, particularly against networks that don't aggregate their routes, but money? Forget about it, unless perhaps you want to mix them into the peering/transit negotiations.
These are great questions. Approaches that do away with this scarcity seem more feasible. Eliot
It would help if we weren't shipping the routing equivalent of the pre DNS /etc/hosts all over the network (it's automated, but it's still the equivalent). There has to be a better way to handle routing information than what's currently being done. The old voice telephony guys built a system that built SVCs on the fly from any phone in the world to any other phone in the world; it (normally) took less than a second for it to do it between any pair of phones under the NANPA, and only slightly longer for international outside the US and Canada. There have to be things to be learned from there. Jamie -----Original Message----- From: John Curran [mailto:jcurran@istaff.org] Sent: Sunday, February 06, 2011 11:00 AM To: Mark Andrews Cc: NANOG list Subject: What's really needed is a routing slot market (was: Using IPv6 withprefixes shorter than a /64 on a LAN) On Feb 5, 2011, at 9:40 PM, Mark Andrews wrote:
What's really needed is seperate the routing slot market from the address allocation market.
Bingo! In fact, having an efficient market for obtaining routing of a given prefix, combined with IPv6 vast identifier space, could actually satisfy the primary goals that we hold for a long-term scalable address architecture, and enable doing it in a highly distributed, automatable fashion: Aggregation would be encouraged, since use of non-aggregatable address space would entail addition costs. These costs might be seen as minimal for some organizations that desire addressing autonomy, but others might decide treating their address space portable and routable results in higher cost than is desired. Decisions about changing prefixes with ISPs can be made based on a rational tradeoff of costs, rather than in a thicket of ISP and registry policies. Conservation would actually be greatly improved, since address space would only be sought after because of the need for additional unique identifiers, rather than obtaining an address block of a given size to warrant implied routability. In light of IPv6's vast address space, it actually would be possible to provide minimally-sized but assured unique prefixes automatically via nearly any mechanism (i.e. let your local user or trade association be a registry if they want) With a significantly reduced policy framework, Registration could be fully automated, with issuance being as simple as assurance the right level of verification of requester identity (You might even get rid of this, if you can assure that ISPs obtain clear identity of clients before serving them but that would preclude any form of reputation systems based on IP address prefix such as we have in use today...) Just think: the savings in storage costs alone (from the reduction in address policy-related email on all our mailing lists) could probably fund the system. :-) Oh well, one project at a time... /John
On Mon, Feb 7, 2011 at 9:25 AM, Jamie Bowden <jamie@photon.com> wrote:
It would help if we weren't shipping the routing equivalent of the pre DNS /etc/hosts all over the network (it's automated, but it's still the equivalent). There has to be a better way to handle routing information than what's currently being done.
Hi Jamie, Consensus in the routing research arena is that it's a layer boundary problem. Layer 4/5 (TCP, various UDP-based protocols) intrudes too deeply into layer 3. Sessions are statically bound at creation to the layer 3 address. Unlike the dynamic MAC to IP bindings (with ARP), the TCP to IP bindings can't change during the potentially long-lived session. Thus route proliferation is needed to maintain them. Much better routing protocols are possible, but you first either have to break layer 3 in half (with a dynamic binding between the two halves that renders the lower half inaccessible to layer 4) or you have to redesign TCP with dynamic bindings to the layer 3 address. Ideas like LISP take the former approach. Ideas like SCTP and Multipath TCP take the latter. The deployment prospects are not promising. Modest improvements like FIB compression are in the pipeline for DFZ routing, but don't expect any earth shattering improvements. Regards, Bill Herrin -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
On 2/7/2011 10:30 AM, William Herrin wrote:
Ideas like LISP take the former approach. Ideas like SCTP and Multipath TCP take the latter. The deployment prospects are not promising.
I'm rusty on LISP, but I believe it was designed to solve the DFZ problem itself, while SCTP and Multipath TCP solve issues such as being able to change the layer 3 address on an existing connection (supporting rapid renumbering and multipath failover/load balancing utilizing multiple layer 3 addresses, one per path). In an ideal world, we'd be using both. Jack
On Feb 7, 2011, at 8:30 AM, William Herrin wrote:
On Mon, Feb 7, 2011 at 9:25 AM, Jamie Bowden <jamie@photon.com> wrote:
It would help if we weren't shipping the routing equivalent of the pre DNS /etc/hosts all over the network (it's automated, but it's still the equivalent). There has to be a better way to handle routing information than what's currently being done.
Hi Jamie,
Consensus in the routing research arena is that it's a layer boundary problem. Layer 4/5 (TCP, various UDP-based protocols) intrudes too deeply into layer 3. Sessions are statically bound at creation to the layer 3 address. Unlike the dynamic MAC to IP bindings (with ARP) the TCP to IP bindings can't change during the potentially long-lived session. Thus route proliferation is needed to maintain them.
Much better routing protocols are possible, but you first either have to break layer 3 in half (with a dynamic binding between the two halves that renders the lower half inaccessible to layer 4) or you have to redesign TCP with dynamic bindings to the layer 3 address. Ideas like LISP take the former approach. Ideas like SCTP and Multipath TCP take the latter. The deployment prospects are not promising.
Modest improvements like FIB compression are in the pipeline for DFZ routing, but don't expect any earth shattering improvements.
On the other hand, when we can deprecate global routing of IPv4, we will see an earth shattering improvement as the current 10:1 prefix to provider ratio (300,000 prefixes for ~30,000 active ASNs) drops to something more like 2:1 in IPv6 due to providers not having to constantly run back to the RIR for additional slow-start allocations. Owen
On Mon, Feb 7, 2011 at 12:04 PM, Owen DeLong <owen@delong.com> wrote:
...
On the other hand, when we can deprecate global routing of IPv4, we will see an earth shattering improvement as the current 10:1 prefix to provider ratio (300,000 prefixes for ~30,000 active ASNs) drops to something more like 2:1 in IPv6 due to providers not having to constantly run back to the RIR for additional slow-start allocations.
Owen
I suspect as we start seeing the CIDR report for IPv6, we'll see that ASNs are announcing considerably more prefixes than that, in order to localize traffic better. I don't think it'll be 300,000 prefixes, but I'd be willing to bet it'll be more than 100,000--not exactly "earth shattering improvement". Matt (hopeless deaggregator)
On Feb 7, 2011, at 12:19 PM, Matthew Petach wrote:
On Mon, Feb 7, 2011 at 12:04 PM, Owen DeLong <owen@delong.com> wrote:
...
On the other hand, when we can deprecate global routing of IPv4, we will see an earth shattering improvement as the current 10:1 prefix to provider ratio (300,000 prefixes for ~30,000 active ASNs) drops to something more like 2:1 in IPv6 due to providers not having to constantly run back to the RIR for additional slow-start allocations.
Owen
I suspect as we start seeing the CIDR report for IPv6, we'll see that ASNs are announcing considerably more prefixes than that, in order to localize traffic better. I don't think it'll be 300,000 prefixes, but I'd be willing to bet it'll be more than 100,000--not exactly "earth shattering improvement".
Matt (hopeless deaggregator)
Currently: 3,134 IPv6 ASNs active. Currently: 4,265 IPv6 prefixes. Looks like less than 2:1 to me. That's as close as I think I can get to an IPv6 CIDR report for the moment. Owen
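For what it's worth, the ratio is easy to reproduce from the figures quoted in this thread; a small sketch using those numbers (and the IPv4 figures cited earlier) as its only inputs:

# Figures quoted in this thread (early 2011), used purely as illustrative inputs.
v6_prefixes, v6_asns = 4265, 3134
v4_prefixes, v4_asns = 300_000, 30_000

print(f"IPv6: {v6_prefixes / v6_asns:.2f} prefixes per active ASN")  # ~1.36
print(f"IPv4: {v4_prefixes / v4_asns:.2f} prefixes per active ASN")  # ~10.00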
If you look at Gert Doering's slides that I presented at NANOG (in the IPv6 Deployment Experiences track) I believe it is 1.4 prefixes per ASN in IPv6 and something like 10.5 prefixes per ASN in IPv4. There are also descriptions of the reasons for some of these multiple advertisements in IPv6 as well as how many ASNs have just one and how many have 2 etc. The slides are here http://www.nanog.org/meetings/nanog51/presentations/Monday/NANOG51.Talk13.Ar... Enjoy! -----Cathy On Mon, Feb 7, 2011 at 2:10 PM, Owen DeLong <owen@delong.com> wrote:
On Feb 7, 2011, at 12:19 PM, Matthew Petach wrote:
On Mon, Feb 7, 2011 at 12:04 PM, Owen DeLong <owen@delong.com> wrote:
...
On the other hand, when we can deprecate global routing of IPv4, we will see an earth shattering improvement as the current 10:1 prefix to provider ratio (300,000 prefixes for ~30,000 active ASNs) drops to something more like 2:1 in IPv6 due to providers not having to constantly run back to the RIR for additional slow-start allocations.
Owen
I suspect as we start seeing the CIDR report for IPv6, we'll see that ASNs are announcing considerably more prefixes than that, in order to localize traffic better. I don't think it'll be 300,000 prefixes, but I'd be willing to bet it'll be more than 100,000--not exactly "earth shattering improvement".
Matt (hopeless deaggregator)
Currently: 3,134 IPv6 ASNs active. Currently: 4,265 IPv6 prefixes.
Looks like less than 2:1 to me.
That's as close as I think I can get to an IPv6 CIDR report for the moment.
Owen
On Mon, Feb 7, 2011 at 3:10 PM, Owen DeLong
That's as close as I think I can get to an IPv6 CIDR report for the moment.
Looks like Geoff has you already setup. http://www.cidr-report.org/v6/as2.0/ Andy Koch
On Mon, Feb 7, 2011 at 1:45 PM, Koch, Andrew <andrew.koch@tdstelecom.com> wrote:
On Mon, Feb 7, 2011 at 3:10 PM, Owen DeLong
That's as close as I think I can get to an IPv6 CIDR report for the moment.
Looks like Geoff has you already setup.
http://www.cidr-report.org/v6/as2.0/
Andy Koch
Excellent, thanks! Exactly what I was hoping to see get developed. ^_^

Wow. Did _not_ expect to see Google at the top of the evil list. ^_^;;

ASnum    NetsNow  NetsAggr  NetGain  % Gain  Description
AS15169       38         8       30   78.9%  GOOGLE - Google Inc.

Makes my deaggregation seem almost not worth worrying about. :D

Matt
On 07/02/11 14:25, Jamie Bowden wrote:
It would help if we weren't shipping the routing equivalent of the pre DNS /etc/hosts all over the network (it's automated, but it's still the equivalent). There has to be a better way to handle routing information than what's currently being done. The old voice telephony guys built a system that built SVCs on the fly from any phone in the world to any other phone in the world; it (normally) took less than a second for it to do it between any pair of phones under the NANPA, and only slightly longer for international outside the US and Canada. There have to be things to be learned from there.
Jamie
They did indeed, but they did it by centrally precomputing and then downloading centrally-built routing tables to each exchange, with added statically-configured routing between telco provider domains, and then doing step-by-step call setup, with added load balancing and crankback on the most-favoured links in the static routing table at each stage. All this works fine in a fairly static environment where there are only a few, well-known, and fairly trustworthy officially-endorsed entities involved within each country, and topology changes could be centrally planned. BGP is a hack, but it's a hack that works. I'm not sure how PSTN-style routing could have coped with the explosive growth of the Internet, with its very large number of routing participants with no central planning or central authority to establish trust, and an endlessly-churning routing topology. Still, every good old idea is eventually reinvented, so it may have its time again one day. -- Neil
On 02/08/2011 11:01 AM, Neil Harris wrote:
They did indeed, but they did it by centrally precomputing and then downloading centrally-built routing tables to each exchange, with added statically-configured routing between telco provider domains, and then doing step-by-step call setup, with added load balancing and crankback on the most-favoured links in the static routing table at each stage.
All this works fine in a fairly static environment where there are only a few, well-known, and fairly trustworthy officially-endorsed entities involved within each country, and topology changes could be centrally planned.
BGP is a hack, but it's a hack that works. I'm not sure how PSTN-style routing could have coped with the explosive growth of the Internet, with its very large number of routing participants with no central planning or central authority to establish trust, and an endlessly-churning routing topology.
Still, every good old idea is eventually reinvented, so it may have its time again one day.
The way LNP works is a good example of PSTN style routing scaling. Each carrier has to have at least one NPA/NXX pair per switch, of which they pick one number they will never port out, and never assign to an end user, and declare that number as their LRN. There's nothing super special about this LRN, except that it's part of that NPA/NXX that's directly allocated to the carrier's switch as the original assignee. When a phone call is made, a TCAP query is launched by the originating switch to a set of STPs that then route it to an LNP database, that has a full list of every ported number, and its LRN, and a few other tidbits of info. The switch then sees the LRN in the response, sets the called party number to the LRN, and then sets the "Generic Address Parameter" parameter in the ISUP message to the originally dialed number. This tunnels it through a network that is unaware of LNP, and when the terminating switch sees its own LRN as the destination, it moves the Generic Address Parameter back to the Called Party Number and continues processing. -Paul
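A toy sketch of the call flow described above; the dictionary standing in for the LNP database and the field names are invented for illustration and are not an SS7/ISUP implementation:

# Toy model only: a dict stands in for the LNP database (ported number -> LRN).
LNP_DB = {"9195551234": "9842000000"}  # example values, not real numbers

def originate(dialed):
    """Originating switch: dip the database, route on the LRN, tunnel the dialed digits."""
    lrn = LNP_DB.get(dialed)
    if lrn:
        return {"called_party": lrn, "generic_address": dialed}
    return {"called_party": dialed, "generic_address": None}

def terminate(msg, my_lrn):
    """Terminating switch: seeing its own LRN, restore the originally dialed number."""
    if msg["called_party"] == my_lrn and msg["generic_address"]:
        return msg["generic_address"]
    return msg["called_party"]

msg = originate("9195551234")
print(terminate(msg, "9842000000"))  # -> 9195551234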
The way LNP works is a good example of PSTN style routing scaling. ...
When a phone call is made, a TCAP query is launched by the originating switch to a set of STPs that then route it to an LNP database, that has a full list of every ported number, and its LRN, and a few other tidbits of info.
Right. That works great in an environment where the regulators require that every telco pay Neustar to maintain the LNP databases, and send all the updates promptly when a number is ported or disconnected. The telcos pay Neustar $300 million a year to run the database. I'm sure they'd be delighted to run a similar database for IP networks at a similar price. Of course, that just handles the networks in the U.S. Regards, John Levine, johnl@iecc.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. http://jl.ly
Right. That works great in an environment where the regulators require that every telco pay Neustar to maintain the LNP databases, and send all the updates promptly when a number is ported or disconnected.
The telcos pay Neustar $300 million a year to run the database. I'm sure they'd be delighted to run a similar database for IP networks at a similar price. Of course, that just handles the networks in the U.S.
One of the challenges in the VOIP world these days IS call routing based on destination DID. It's easy to ship calls out trunks; it's far more compelling to send them to the destination PBX directly over UDP/IP. Sadly, the best mechanism anyone has come up with is manual number publishing in an rDNS style database, and the results are less than stellar... Nathan
On Feb 5, 2011, at 5:20 PM, Jack Bates wrote:
On 2/5/2011 7:01 PM, Mark Andrews wrote:
And did you change the amount of growth space you allowed for each pop? Were you already constrained in your IPv4 growth space and just restored your desired growth margins?
Growth rate has nothing to do with it. ARIN doesn't allow for growth in initial assignments. No predictions, no HD-Ratio, and definitely no nibble alignments.
Yet.
Current policy proposal hopes to fix a lot of that.
Yes... 2011-3 for those who are interested in knowing more. Owen
On Feb 5, 2011, at 6:38 PM, Nathan Eisenberg wrote:
Still, that is a considerable number of bits we'll have left when the dust settles and the RIR allocation rate drastically slows.
Like it did for IPv4? ;)
-Nathan
It long since would have if ISPs didn't have to come back annually (or more frequently in many cases) to get additional addresses to support their growth. In IPv6, we should be looking to do 5 or 10 year allocations. We can afford to be fairly speculative in our allocations in order to preserve greater aggregation. In IPv4, the registries were constantly trying to balance shortage of addresses with shortage of routing table slots. In IPv6, we can focus on rational allocation for administrative purposes with some consideration given to routing table slots. It makes for a significantly different set of tradeoffs and optimizations that should be used in address policy. That is why I wrote 2011-3 and why we passed 2010-8. Owen
On 2/5/2011 9:44 PM, Owen DeLong wrote:
In IPv6, we should be looking to do 5 or 10 year allocations. We can afford to be fairly speculative in our allocations in order to preserve greater aggregation.
And even if networks were only getting an 8-bit slide, that's 256 trips back to the RIR to get to their current allocation sizes (over 1,000 years if they had to return once every 5 years). However, 12-16 bit slides seem more common (perhaps John knows the exact slide ratio, though I suspect many ISPs haven't really nailed down what they need in v6 yet) and that can exceed 10-year allocation rates for some ISPs. Jack
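A quick sketch of that arithmetic, reading an N-bit "slide" as needing 2^N base-sized blocks (one per trip to the RIR); the 5-year return interval is the assumption from the paragraph above:

def rir_trips(slide_bits, years_per_trip=5):
    """Trips to the RIR (one base-sized block per trip) to grow 2**slide_bits times,
    and the years that represents at the given return interval."""
    trips = 2 ** slide_bits
    return trips, trips * years_per_trip

for slide in (8, 12, 16):
    trips, years = rir_trips(slide)
    print(f"{slide}-bit slide: {trips} trips, ~{years} years at one trip per 5 years")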
On Sat, Feb 05, 2011 at 11:47:10PM +1100, Mark Andrews wrote:
In message <4D4CA1B1.5060002@brightok.net>, Jack Bates writes:
On 2/4/2011 6:45 PM, Mark Andrews wrote:
I used to work for CSIRO. Their /16's, which were obtained back in the late 80's, will now be /48's.
That's why I didn't try doing any adjustments of X is the new /32. The whole paradigm changes.
So why the ~!#! are you insisting on comparing IPv4 allocations with IPv6 allocations?
"..96 more bits - no majik.."
Mark
--bill
...
What did that just do to your per-site /64? That you have no hope of ever seeing a user use up? It just turned that /64 into a /112 (16 bits of port space, 32 bits of cloud identifier space.) What's the next killer app that'll chew up more of your IPv6 space?
Dude... You missed... It's not supposed to be a /64 per site. The plan is a /48 per site. Yes, you managed to use one of the subnets up pretty well... ON A SINGLE SUBNET. Now, what do you do for the other 65,535 of them at the one site?
I'm all for IPv6. And I'm all for avoiding conjecture and getting to the task at hand. But simply assuming that the IPv6 address space will forever remain that - only unique host identifiers - I think is disingenuous at best. :-)
Well.. There's assuming (like your assumption that a /64 per site was the original plan) and then there's doing the math. Even with the utilization you've mentioned above, my math still holds. Owen
Adrian
On Tue, Jan 25, 2011, Owen DeLong wrote:
I love this term... "repetitively sweeping a targets /64".
Seriously? Repetitively sweeping a /64? Let's do the math...
2^64 = 18,446,744,073,709,551,616 IP addresses.
Let's assume that few networks would not be DOS'd by a 1,000 PPS storm coming in so that's a reasonable cap on our scan rate.
That means sweeping a /64 takes 18,446,744,073,709,551 sec. (rounded down).
There are 86,400 seconds per day.
18,446,744,073,709,551 / 86,400 = 213,503,982,334 days.
Rounding a year down to 365 days, that's 584,942,417 years to sweep the /64 once.
If we increase our scan rate to 1,000,000 packets per second, it still takes us 584,942 years to sweep a /64.
I don't know about you, but I do not expect to live long enough to sweep a /64, let alone do so repetitively.
Owen
-- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support - - $24/pm+GST entry-level VPSes w/ capped bandwidth charges available in WA -
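The sweep arithmetic quoted above is easy to verify; a minimal sketch:

ADDRESSES_IN_A_SLASH_64 = 2 ** 64  # 18,446,744,073,709,551,616

def years_to_sweep(pps):
    """Years needed to probe every address in a /64 at a given packet rate."""
    return ADDRESSES_IN_A_SLASH_64 / pps / 86_400 / 365

print(f"{years_to_sweep(1_000):,.0f} years at 1,000 pps")          # ~584,942,417
print(f"{years_to_sweep(1_000_000):,.0f} years at 1,000,000 pps")  # ~584,942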
I think we're losing focus on the discussion here. The core issue here is that ND tables have a finite size, just like ARP tables. Making an unsolicited request to a subnet will cause ND on the router to try to find the host. This can be a problem with subnets as small as 1,024 addresses (I constantly find people using Linux-based routers, for example, running with the kernel default ARP table of 127 instead of bumping it up to a sane and network-appropriate level; a sketch of raising those limits on Linux appears after this message). I don't believe that using smaller IPv6 prefixes is an appropriate response to the problem. In time, we will likely see protection mechanisms come from vendors. Perhaps disabling the ability for routers to solicit ND and just depending on connected hosts to announce their presence would be sufficient. Perhaps not. It is something that needs to be looked into, just like DAD DoS attacks and rogue RA on the LAN. But it has little to do with prefix length. When it comes down to it, I find it hard to justify attempting to mitigate this DoS vector by using longer prefixes. There are many, many more useful and effective DoS vectors that are lower-hanging fruit. And the lowest hanging fruit always wins. On Tue, Jan 25, 2011 at 1:42 PM, Owen DeLong <owen@delong.com> wrote:
On Jan 25, 2011, at 8:58 AM, Patrick Sumby wrote:
On 24/01/2011 22:41, Michael Loftis wrote:
On Mon, Jan 24, 2011 at 1:53 PM, Ray Soucy<rps@maine.edu> wrote:
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem. The general feeling is that there is simply too much address space for it to be done in any reasonable amount of time, and there is almost nothing to be gained from it.
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a targets /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND/ND cache, depending on the specific ND cache implementation and how big it is/etc. Routers can also act as amplifiers too, DDoSing every host within a multicast ND directed solicitation group (and THAT is even assuming a correctly functioning switch thats limiting the multicast travel)
I love this term... "repetitively sweeping a targets /64".
Seriously? Repetitively sweeping a /64? Let's do the math...
2^64 = 18,446,744,073,709,551,616 IP addresses.
Let's assume that few networks would not be DOS'd by a 1,000 PPS storm coming in so that's a reasonable cap on our scan rate.
That means sweeping a /64 takes 18,446,744,073,709,551 sec. (rounded down).
There are 86,400 seconds per day.
18,446,744,073,709,551 / 86,400 = 213,503,982,334 days.
Rounding a year down to 365 days, that's 584,942,417 years to sweep the /64 once.
If we increase our scan rate to 1,000,000 packets per second, it still takes us 584,942 years to sweep a /64.
I don't know about you, but I do not expect to live long enough to sweep a /64, let alone do so repetitively.
Owen
-- Ray Soucy Epic Communications Specialist Phone: +1 (207) 561-3526 Networkmaine, a Unit of the University of Maine System http://www.networkmaine.net/
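Picking up the point above about Linux neighbour-table defaults: a minimal sketch of inspecting and raising the kernel's ND garbage-collection thresholds via /proc/sys. The exact defaults vary by kernel, writing requires root, and the values shown are illustrative only:

from pathlib import Path

# Kernel ND garbage-collection thresholds; gc_thresh3 is the hard cap on cache entries.
BASE = Path("/proc/sys/net/ipv6/neigh/default")

def show_thresholds():
    for name in ("gc_thresh1", "gc_thresh2", "gc_thresh3"):
        print(name, (BASE / name).read_text().strip())

def raise_thresholds(t1=4096, t2=8192, t3=16384):
    """Raise the ND cache limits (requires root); the values are illustrative only."""
    for name, value in (("gc_thresh1", t1), ("gc_thresh2", t2), ("gc_thresh3", t3)):
        (BASE / name).write_text(f"{value}\n")

show_thresholds()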
So I pretty strongly disagree with your statement. Repetitively sweeping an IPv6 network to DoS/DDoS the ND protocol, thereby flooding the ND cache/LRUs, could be extremely effective and, if not paid serious attention, will cause serious issues.
Yes.... This is an issue for point-to-point links but using a longer prefix (/126 or similar) has been suggested as a mitigation for this sort of attack.
I would assume that in the LAN scenario where you have a /64 for your internal network, you would have some sort of stateful firewall sitting in front of the network to stop any un-initiated sessions. This therefore stops any hammering of the ND cache, etc. The argument then is that the number of packets hitting your firewall / bandwidth starvation would be the alternative line of attack for a DoS/DDoS, but that is a completely different issue.
So for /64 subnets used for point-to-points you disable ND, configure static neighbors and that's the end of it. No ND DDoS.
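What "static neighbors" can look like on a Linux router, shelling out to iproute2; the address, MAC, and interface below are placeholders, and vendor CLIs have their own equivalents:

import subprocess

def add_static_neighbor(addr, lladdr, dev):
    """Install a permanent ND entry so the router never solicits for this address."""
    subprocess.run(
        ["ip", "-6", "neigh", "replace", addr,
         "lladdr", lladdr, "dev", dev, "nud", "permanent"],
        check=True,
    )

# Placeholder values for the far end of a point-to-point /64.
add_static_neighbor("2001:db8:0:1::2", "00:11:22:33:44:55", "eth0")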
On 24/01/2011 07:41 p.m., Michael Loftis wrote:
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem. The general feeling is that there is simply too much address space for it to be done in any reasonable amount of time, and there is almost nothing to be gained from it.
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a targets /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND/ND cache, depending on the specific ND cache implementation and how big it is/etc.
That depends on the ND implementation being broken enough by not limiting the number of neighbor cache entries that are in the INCOMPLETE state. (I'm not saying those broken implementations don't exist, though). Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
On Tue, Jan 25, 2011 at 10:26 PM, Fernando Gont <fernando@gont.com.ar> wrote:
On 24/01/2011 07:41 p.m., Michael Loftis wrote:
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem. The general feeling is that there is simply too much address space for it to be done in any reasonable amount of time, and there is almost nothing to be gained from it.
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a targets /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND/ND cache, depending on the specific ND cache implementation and how big it is/etc.
That depends on the ND implementation being broken enough by not limiting the number of neighbor cache entries that are in the INCOMPLETE state. (I'm not saying those broken implementations don't exist, though).
Even without completely overflowing the ND cache, informal lab testing shows that a single laptop on a well-connected network link can send sufficient packets at a very-large-scale backbone router's connected /64 subnet to keep the router CPU at 90%, sustained, for as long as you'd like. So, while it's not a direct denial of service (the network keeps functioning, albeit under considerable pain), it's enough to impact the ability of the network to react to other dynamic loads. :/ Matt
Hi, Matthew, On 30/01/2011 08:17 p.m., Matthew Petach wrote:
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a targets /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND/ND cache, depending on the specific ND cache implementation and how big it is/etc.
That depends on the ND implementation being broken enough by not limiting the number of neighbor cache entries that are in the INCOMPLETE state. (I'm not saying those broken implementations don't exist, though).
Even without completely overflowing the ND cache, informal lab testing shows that a single laptop on a well-connected network link can send sufficient packets at a very-large-scale backbone router's connected /64 subnet to keep the router CPU at 90%, sustained, for as long as you'd like. So, while it's not a direct denial of service (the network keeps functioning, albeit under considerable pain), it's enough to impact the ability of the network to react to other dynamic loads. :/
This is very interesting data. Are you talking about Ciscos? Any specific model? I guess that a possible mitigation technique (implementation-based) would be to limit the number of addresses concurrently undergoing address resolution (i.e., once you have X ongoing ND resolutions, the router should not engage in ND for other addresses) -- note that addresses the router had already resolved in the past would not suffer this penalty, as their corresponding entries would be in states other than INCOMPLETE. Thoughts? Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
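A toy model of that mitigation: cap the number of neighbour-cache entries allowed in the INCOMPLETE state so an address sweep cannot crowd out resolution for already-known hosts. The class, limits, and state names here are invented for illustration:

class NeighborCache:
    """Toy ND cache that refuses new resolutions once too many are INCOMPLETE."""

    def __init__(self, max_incomplete=256):
        self.entries = {}              # address -> state
        self.max_incomplete = max_incomplete

    def lookup(self, addr):
        state = self.entries.get(addr)
        if state is not None:
            return state               # already-known entries are unaffected
        pending = sum(1 for s in self.entries.values() if s == "INCOMPLETE")
        if pending >= self.max_incomplete:
            return "DROPPED"           # refuse to start yet another resolution
        self.entries[addr] = "INCOMPLETE"  # would trigger a neighbour solicitation
        return "INCOMPLETE"

    def resolved(self, addr):
        self.entries[addr] = "REACHABLE"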
On Sun, Jan 30, 2011 at 6:24 PM, Fernando Gont <fernando@gont.com.ar> wrote:
Hi, Matthew,
On 30/01/2011 08:17 p.m., Matthew Petach wrote:
The problem I see is the opening of a new, simple, DoS/DDoS scenario. By repetitively sweeping a targets /64 you can cause EVERYTHING in that /64 to stop working by overflowing the ND/ND cache, depending on the specific ND cache implementation and how big it is/etc.
That depends on the ND implementation being broken enough by not limiting the number of neighbor cache entries that are in the INCOMPLETE state. (I'm not saying those broken implementations don't exist, though).
Even without completely overflowing the ND cache, informal lab testing shows that a single laptop on a well-connected network link can send sufficient packets at a very-large-scale backbone router's connected /64 subnet to keep the router CPU at 90%, sustained, for as long as you'd like. So, while it's not a direct denial of service (the network keeps functioning, albeit under considerable pain), it's enough to impact the ability of the network to react to other dynamic loads. :/
This is very interesting data. Are you talking about Ciscos? Any specific model?
Uh, I've gotten into some trouble in the past for mentioning router vendors by name before in public forums, so I'm going to avoid public mention of names; but it seems that others in this thread are able to speak up with specific details, if that helps answer your question in a slightly more roundabout way. ^_^;
I guess that a possible mitigation technique (implementation-based) would be to limit the number of ongoing addresses in address resolution. (i.e., once you have X ongoing ND resolutions, the router should not be engaged in ND for other addresses) -- note that addresses that the router had already resolved in the past would not suffer from this penalty, as their corresponding entries would be in states other than INCOMPLETE.
Thoughts?
Thanks,
That's been one of the areas that's ripe for development, yes; have the control plane take some preferential actions to avoid harming established connectivity under stressful circumstances like that; potentially taking steps to avoid aging out older, potentially still valid entries if there may not be sufficient resources to safely re-learn them, for example. Matt
On Sun, 30 Jan 2011, Matthew Petach wrote:
Even without completely overflowing the ND cache, informal lab testing shows that a single laptop on a well-connected network link can send sufficient packets at a very-large-scale backbone router's connected /64 subnet to keep the router CPU at 90%, sustained, for as long as you'd like. So, while it's not a direct denial of service (the network keeps functioning, albeit under considerable pain), it's enough to impact the ability of the network to react to other dynamic loads. :/
At AMSIX, a Cisco 12000 running IOS will get into trouble with the 170pps of ND seen there. AMSIX doesn't do MLD snooping so everybody gets everything and on IOS 12000 ND is punted to RP and when it's busy with calculating BGP, it'll start dropping BGP sessions. An access-list filtering IPv6 multicast the router isn't subscribed to fixes the problem. -- Mikael Abrahamsson email: swmike@swm.pp.se
At AMSIX, a Cisco 12000 running IOS will get into trouble with the 170pps of ND seen there. AMSIX doesn't do MLD snooping so everybody gets everything and on IOS 12000 ND is punted to RP and when it's busy with calculating BGP, it'll start dropping BGP sessions.
Really? I've tried to duplicate the results in our lab, but I can't provoke any problems at those numbers. Is it the "other" multicast traffic that's interfering with ND?

When pounding the CPU with ~30 times more (5000 pps) Neighbour Solicitations and flapping 1000 BGP IPv4 prefixes (out of 51000) every 5 seconds, I get the following load (worst case):

12k#sh proc cpu | ex 0.00
CPU utilization for five seconds: 99%/13%; one minute: 83%; five minutes: 76%
 PID Runtime(ms)   Invoked  uSecs   5Sec   1Min   5Min TTY Process
  29       19472   7944653      2  0.31%  0.07%  0.05%   0 PowerMgr Main
 160        5120      3415   1499  0.47%  0.18%  0.06%   0 Exec
 181       17016  76522129      0  0.07%  0.14%  0.15%   0 CEF RP IPC Backg
 185     1992892  19727573    101 17.91% 19.36% 20.02%   0 IPv6 Input
 213      256008   9155905     27  3.03%  2.80%  2.83%   0 BGP Router
 216     3606044    677600   5321 64.31% 45.74% 37.41%   0 BGP Scanner
12k#

Even though the load is high, there are no flaps, neither in ISIS, LDP, BFD (3 sessions with 3 x 50 ms asynch mode) nor BGP. When BGP Scanner is not running, the numbers are much lower:

martin#sh proc cpu | ex 0.00
CPU utilization for five seconds: 45%/16%; one minute: 82%; five minutes: 76%
 PID Runtime(ms)   Invoked  uSecs   5Sec   1Min   5Min TTY Process
 160        5192      3454   1503  0.79%  0.20%  0.08%   0 Exec
 181       17068  76522593      0  0.15%  0.15%  0.15%   0 CEF RP IPC Backg
 185     2000528  19764701    101 24.79% 19.70% 20.01%   0 IPv6 Input
 213      256976   9156110     28  3.03%  2.82%  2.83%   0 BGP Router
martin#

The hardware in question is a PRP-1 running SY9b, and the same LC (SIP-601/SPA-5x1GE-v2) is used for both ND and BGP. Note: When doing 10000 pps ND, the LDP adjacency with a neighbour on the same LC did flap occasionally.

-- Pelle

RFC1925, truth 11: Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.
On Mon, 31 Jan 2011, Per Carlson wrote:
Really? I've tried to duplicate the results in our lab, but I can't provoke any problems at those numbers. Is it the "other" multicast traffic that's interfering with ND?
It's a hold-queue problem. Normally IPv6 input is around 0.5% CPU on the RP, but because IPv6 is not supported by SPD (which also seems to cause problems for IPv4 BGP traffic), the hold-queue (we raised it a lot) fills up, packets are tail-dropped from it, and keepalives are lost. This has been through a full analysis by TAC and their suggestion was to filter non-needed IPv6 multicast; that completely removed the symptom. We haven't had any major BGP session flaps since.
When pounding the CPU with ~30 times more (5000pps) Neighbour solicitations and flapping 1000 BGP IPv4 prefixes (out of 51000) every 5 seconds, I get the following load (worst case):
We're getting many tens of thousands of prefixes from AMSIX and this peering router is in our BGP full mesh, so when peers go down, there are a lot of paths to recalculate (most of our IPv4 IBGP full-mesh peers are in unique update groups for some reason on this router; that's also being analysed). Even though the IOS 12000 seems fairly complete as an IPv6 core router, we've been running into more and more problems like this. Cisco has implemented a lot of features but not all of them, and for IOS they probably never will, if I understand correctly. Guess XR is the way to go if one wants to keep it for a few more years... -- Mikael Abrahamsson email: swmike@swm.pp.se
On Mon, 24 Jan 2011 15:53:32 -0500, Ray Soucy <rps@maine.edu> wrote:
Every time I see this question it' usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.
Not exactly. If it's a point-to-point link, then there are *TWO* machines on it -- one at each end; there will *always* be two machines on it. There's no sane reason to assign 18 quintillion addresses to it. Yes, in this respect it's like not wasting an IPv4 /24, but it's also The Right Tool For The Job.(tm) If one were to assign a /64 to every p-t-p link in the world, IPv6 address space would be used at an insane rate. (and those assignments aren't free.) Do people not remember the early days of IPv4 where /8's were handed out like Pez?
That said. Any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though. And you're often crippling the advantages of IPv6.
And such applications are *BROKEN*. IPv6 is *classless* -- end of discussion. /64 (and /80 previous) is a lame optimization so really stupid devices can find an address in 4 bytes of machine code. This is quite lame given all the other complex baggage IPv6 requires. And then /64 is only required by SLAAC, which you are not required to use.
You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier.
No, you should not. That undermines the fundamental concept of IPv6 being *classless*. And it will lead to idiots writing broken applications and protocols assuming that to be true.
Another thing to consider is that most processors today lack operations for values that are larger than 64-bit. By separating the host and network segment at the 64-bit boundary you may be able to take advantage of performance optimizations that make the distinction between the two (and significantly reduce the cost of routing decisions, contributing to lower latency).
IPv6 is classless; routers cannot blindly make that assumption for "performance optimization".
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem.
Myopia doesn't make the problem go away. The point of such an attack is not to "find things", but to overload the router(s). (which can be done rather easily by a few dozen machines.) --Ricky
In message <op.vpt734cxtfhldh@rbeam.xactional.com>, "Ricky Beam" writes:
On Mon, 24 Jan 2011 15:53:32 -0500, Ray Soucy <rps@maine.edu> wrote:
Every time I see this question it' usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.
Not exactly. If it's a point-to-point link, then there are *TWO* machines on it -- one at each end; there will *always* be two machines on it. There's no sane reason to assign 18 quintillion addresses to it. Yes, in this respect it's like not wasting an IPv4 /24, but it's also The Right Tool For The Job.(tm) If one were to assign a /64 to every p-t-p link in the world, IPv6 address space would be used at an insane rate. (and those assignments aren't free.) Do people not remember the early days of IPv4 where /8's were handed out like Pez?
That said. Any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though. And you're often crippling the advantages of IPv6.
And such applications are *BROKEN*. IPv6 is *classless* -- end of discussion.
/64 (and /80 previous) is a lame optimization so really stupid devices can find an address in 4 bytes of machine code. This is quite lame given all the other complex baggage IPv6 requires.
And then /64 is only required by SLAAC, which you are not required to use.
You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier.
No, you should not. That undermines the fundamental concept of IPv6 being *classless*. And it will lead to idiots writing broken applications and protocols assuming that to be true.
Another thing to consider is that most processors today lack operations for values that are larger than 64-bit. By separating the host and network segment at the 64-bit boundary you may be able to take advantage of performance optimizations that make the distinction between the two (and significantly reduce the cost of routing decisions, contributing to lower latency).
IPv6 is classless; routers cannot blindly make that assumption for "performance optimization".
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem.
Myopia doesn't make the problem go away. The point of such an attack is not to "find things", but to overload the router(s). (which can be done rather easily by a few dozen machines.)
--Ricky
There will be two machines but not necessarily two addresses. For inter-router links there will usually only be the two addresses. For point-to-point links to general-purpose machines you should expect multiple addresses on at least one end. Even routers can use CGAs. Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Jan 24, 2011, at 4:10 PM, Ricky Beam wrote:
On Mon, 24 Jan 2011 15:53:32 -0500, Ray Soucy <rps@maine.edu> wrote:
Every time I see this question it' usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.
Not exactly. If it's a point-to-point link, then there are *TWO* machines on it -- one at each end; there will *always* be two machines on it. There's no sane reason to assign 18 quintillion addresses to it. Yes, in this respect it's like not wasting an IPv4 /24, but it's also The Right Tool For The Job.(tm) If one were to assign a /64 to every p-t-p link in the world, IPv6 address space would be used at an insane rate. (and those assignments aren't free.) Do people not remember the early days of IPv4 where /8's were handed out like Pez?
Dude... In IPv6, there are 18,446,744,073,709,551,616 /64s. To put this in perspective... There are roughly 30,000 ISPs on Earth. The largest ISPs have thousands (not tens of thousands) of point-to-point links. Even if we assume 30,000 point-to-point links per ISP and 60,000 ISPs, you still only used 1,800,000,000 /64s. The remaining 18,446,744,071,909,551,616 are still available for other uses. I think a use that does not affect the first 10 significant digits of the total address space is hard to call "used at an insane rate". Yes, I remember the early days. Now, let's compare the population of the internet in those days (60+ organizations vs 256 /8s) to the current ratio we're talking about (let's be generous and say there are two end sites for every person on the planet... 12,000,000,000 vs. 281,474,976,710,656 /48s). In the former case, we're talking about a known population representing 23% of all available address space. In the IPv6 case, we're talking about a generous potential estimate of the population being less than 0.0043% of the available address space.
That said. Any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though. And you're often crippling the advantages of IPv6.
And such applications are *BROKEN*. IPv6 is *classless* -- end of discussion.
/64 (and /80 previous) is a lame optimization so really stupid devices can find an address in 4 bytes of machine code. This is quite lame given all the other complex baggage IPv6 requires.
And then /64 is only required by SLAAC, which you are not required to use.
I disagree. There are many useful optimizations that come from /64 other than just SLAAC. However, SLAAC is also a very useful tool.
You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier.
No, you should not. That undermines the fundamental concept of IPv6 being *classless*. And it will lead to idiots writing broken applications and protocols assuming that to be true.
True, but, in terms of deploying networks, unless you have a really good reason not to, it is best to use /64 for all segments. It preserves what I like to call the principle of least surprise (the less you surprise the operators, the less likely they are to make mistakes) and it provides a very clean, easy to comprehend boundary that works in virtually any application. You never have to worry about outgrowing your subnet. SLAAC works wherever you want it to. You don't have to keep track of complex tables of allocations and optimizations of how you carved things up at the individual subnet level. You can focus on optimizing how you carve up aggregates of /64s into regions or sites or whatever. Personally I'm also a fan of using a /48 per site as well, allowing better concentration on optimizing the higher levels of the addressing hierarchy without getting buried in the minutiae of individual subnets.
Another thing to consider is that most processors today lack operations for values that are larger than 64-bit. By separating the host and network segment at the 64-bit boundary you may be able to take advantage of performance optimizations that make the distinction between the two (and significantly reduce the cost of routing decisions, contributing to lower latency).
IPv6 is classless; routers cannot blindly make that assumption for "performance optimization".
Blindly, no. However, it's not impractical to implement fast path switching that handles things on /64s and push anything that requires something else to the slow path.
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem.
Myopia doesn't make the problem go away. The point of such an attack is not to "find things", but to overload the router(s). (which can be done rather easily by a few dozen machines.)
Only if you don't deploy reasonable mitigation strategies. Owen
IPv6 is classless; routers cannot blindly make that assumption for "performance optimization".
Blindly, no. However, it's not impractical to implement fast path switching that handles things on /64s and push anything that requires something else to the slow path.
Any vendor who was stupid enough to do *hardware* switching for up to /64 and punted the rest to software would certainly not get any sales from us. 128 bits. No magic. Steinar Haug, Nethelp consulting, sthaug@nethelp.no
On Tue, 25 Jan 2011 07:02:30 +0100 (CET) sthaug@nethelp.no wrote:
IPv6 is classless; routers cannot blindly make that assumption for "performance optimization".
Blindly, no. However, it's not impractical to implement fast path switching that handles things on /64s and push anything that requires something else to the slow path.
Any vendor who was stupid enough to do *hardware* switching for up to /64 and punted the rest to software would certainly not get any sales from us.
Actually, they'd most likely punt the rest to exact-match rather than longest-match CAMs. Exact-match CAMs are cheaper because they're simpler, and have been made even more so because they've been used in layer 2 switches for more than a decade, and there are far more of them than there are routers.
128 bits. No magic.
"magic" is another way of describing progress. Electric start cars would have been "magic" to owners of Motorwagens.
On Mon, 24 Jan 2011 19:46:19 -0500, Owen DeLong <owen@delong.com> wrote:
Dude... In IPv6, there are 18,446,744,073,709,551,616 /64s.
Those who don't learn from history are doomed to repeat it. "Dude, there are 256 /8 in IPv4." "640k ought to be enough for anyone." People can mismanage anything into oblivion. IPv6 will end up the same mess IPv4 has become. (granted, it should take more than 30 years this time.)
The largest ISPs have thousands (not tens of thousands) of point-to-point links.
Having worked for small ISPs, I can count over 10k ptp links. That number goes way up when you count dialup and DSL.
You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier.
No, you should not. That undermines the fundamental concept of IPv6 being *classless*. And it will lead to idiots writing broken applications and protocols assuming that to be true.
True, but, in terms of deploying networks, unless you have a really good reason not to, it is best to use /64 for all segments.
Again, the only reason for this /64 class boundary is SLAAC. The network is still 128 bits; you still have to pay attention to ALL of those bits. (Remember, SLAAC started out as a /80.)
Blindly, no. However, it's not impractical to implement fast path switching that handles things on /64s and push anything that requires something else to the slow path.
Any router that does CPU switching is already trash. High-speed, low-latency routing and switching is done in silicon (FPGAs); it is not hoisted to a general-purpose CPU. For consumer devices, (almost) everything is done by the CPU to make it cheap. (some actually have tiny single-chip switches in there.) --Ricky
In message <op.vpvur91htfhldh@rbeam.xactional.com>, "Ricky Beam" writes:
On Mon, 24 Jan 2011 19:46:19 -0500, Owen DeLong <owen@delong.com> wrote:
Dude... In IPv6, there are 18,446,744,073,709,551,616 /64s.
Those who don't learn from history are doomed to repeat it.
"Dude, there are 256 /8 in IPv4."
"640k ought to be enough for anyone."
And there was a time when the IPv6 address might have been 64 bits in length; rather than having to manage 64 bits of addressing the way we were managing IPv4 (smallest-sized networks that would do the job), it was decided to double the address length and just manage networks and their aggregation rather than networks and their sizes. That was a deliberate decision. We are still packing networks reasonably densely, especially anything /48 or shorter.
People can mismanage anything into oblivion. IPv6 will end up the same mess IPv4 has become. (granted, it should take more than 30 years this time.)
The largest ISPs have thousands (not tens of thousands) of point-to-point links.
Having worked for small ISPs, I can count over 10k ptp links. That number goes way up when you count dialup and DSL.
You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier.
No, you should not. That undermines the fundamental concept of IPv6 being *classless*. And it will lead to idiots writing broken applications and protocols assuming that to be true.
True, but, in terms of deploying networks, unless you have a really good reason not to, it is best to use /64 for all segments.
Again, the only reason for this /64 class boundary is SLAAC. The network is still 128 bits; you still have to pay attention to ALL of those bits.
(Remember, SLAAC started out as a /80.)
Blindly, no. However, it's not impractical to implement fast path switching that handles things on /64s and push anything that requires something else to the slow path.
Any router that does CPU switching is already trash. High-speed, low-latency routing and switching is done in silicon (FPGAs); it is not hoisted to a general-purpose CPU.
For consumer devices, (almost) everything is done by the CPU to make it cheap. (some actually have tiny single chip switches in there.)
--Ricky
-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Jan 25, 2011, at 1:17 PM, Ricky Beam wrote:
On Mon, 24 Jan 2011 19:46:19 -0500, Owen DeLong <owen@delong.com> wrote:
Dude... In IPv6, there are 18,446,744,073,709,551,616 /64s.
Those who don't learn from history are doomed to repeat it.
Correct, but...
"Dude, there are 256 /8 in IPv4."
There still are. The difference is that at the time that was said, there were 60 odd organizations that were expecting to connect to the internet and growth wasn't expected to exceed ~100. Today, there are billions of organizations, but, growth isn't expected to hit a trillion. As such, I think 281,474,976,710,656 (more than 281 trillion)/48s will cover it for longer than IPv6 will remain a viable protocol.
"640k ought to be enough for anyone."
If IPv4 is like 640k, then, IPv6 is like having 47,223,664,828,696,452,136,959 terabytes of RAM. I'd argue that while 640k was short sighted, I think it is unlikely we will see machines with much more than a terabyte of RAM in the lifetime of IPv6.
People can mismanage anything into oblivion. IPv6 will end up the same mess IPv4 has become. (granted, it should take more than 30 years this time.)
IPv4 is not a mess. IPv4 was not mismanaged in my opinion. IPv4 was properly managed to the best use of the purposes intended at the time. The uses evolved. The management evolved. We have now reached the end of the ability to evolve the management enough to continue to meet the new uses. As a result, we're now deploying IPv6. IPv6 has learned from the history of IPv4 and instead of having enough addresses for all anticipated needs, it has enough addresses to build any conceivable form of network well beyond the size of what can be expected to function within the bounds of the protocol.
The largest ISPs have thousands (not tens of thousands) of point-to-point links.
Having worked for small ISPs, I can count over 10k ptp links. That number goes way up when you count dialup and DSL.
Most people don't implement DSL as point-to-point. Usually, instead, it is done as a point-to-multipoint addressing structure. Using a /64 per DSLAM is not an issue. Can you still count over 10k point-to-point links if you just look at the ones that need to be numbered? 8,192 /30s is a /17 of address space. That's not a small provider by almost anyone's definition of the term small.
You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier.
No, you should not. That undermines the fundamental concept of IPv6 being *classless*. And it will lead to idiots writing broken applications and protocols assuming that to be true.
True, but, in terms of deploying networks, unless you have a really good reason not to, it is best to use /64 for all segments.
Again, the only reason for this /64 class boundary is SLAAC. The network is still 128 bits; you still have to pay attention to ALL of those bits.
(Remember, SLAAC started out as a /80.)
So?
Blindly, no. However, it's not impractical to implement fast path switching that handles things on /64s and push anything that requires something else to the slow path.
Any router that does CPU switching is already trash. High-speed, low-latency routing and switching is done in silicon (FPGAs); it is not hoisted to a general-purpose CPU.
Depends on the application. If you're talking core backbone, sure. If you're talking workgroup switch/router, then, it's a different story. The number of running quagga boxes present in the internet and the number of deployed Juniper J-Series and SRX-series platforms would seem to prove your statement wrong. Owen
On 01/25/2011 11:06 PM, Owen DeLong wrote:
"640k ought to be enough for anyone."
If IPv4 is like 640k, then, IPv6 is like having 47,223,664,828,696,452,136,959 terabytes of RAM. I'd argue that while 640k was short sighted, I think it is unlikely we will see machines with much more than a terabyte of RAM in the lifetime of IPv6.
I would be very careful with such predictions. How about 2 TB of RAM ?: "...IBM can cram 1 TB of memory into a 4U chassis or 2 TB in an eight-socket box in two 4U chassis..." http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/page2.html http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/ I don't know who will use it or how much they will need to pay for it or even when they will be available, but they are talking about it (in this case at the last CEBIT in March). People are building some very big systems for example with lots and lots of virtual machines.
On Sun, 2011-01-30 at 17:39 +0100, Leen Besselink wrote:
On 01/25/2011 11:06 PM, Owen DeLong wrote:
If IPv4 is like 640k, then, IPv6 is like having 47,223,664,828,696,452,136,959 terabytes of RAM. I'd argue that while 640k was short sighted, I think it is unlikely we will see machines with much more than a terabyte of RAM in the lifetime of IPv6.
I would be very careful with such predictions. How about 2 TB of RAM ?:
"...IBM can cram 1 TB of memory into a 4U chassis or 2 TB in an eight-socket box in two 4U chassis..."
http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/page2.html http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/
I don't know who will use it or how much they will need to pay for it or even when they will be available, but they are talking about it (in this case at the last CEBIT in March).
People are building some very big systems for example with lots and lots of virtual machines.
On dell.com you can buy a PowerEdge R910 with 1TB RAM for around $80k. Laurent
On Jan 30, 2011, at 8:39 AM, Leen Besselink wrote:
On 01/25/2011 11:06 PM, Owen DeLong wrote:
"640k ought to be enough for anyone."
If IPv4 is like 640k, then, IPv6 is like having 47,223,664,828,696,452,136,959 terabytes of RAM. I'd argue that while 640k was short sighted, I think it is unlikely we will see machines with much more than a terabyte of RAM in the lifetime of IPv6.
I would be very careful with such predictions. How about 2 TB of RAM ?:
Yes... I left a word out of my sentence... I think it is unlikely we will see COMMON machines with much more than a terabyte of RAM in the lifetime of IPv6. Sure, there will be the rare monster super-special-purpose thing with more RAM capacity than there is storage in many large disk farms, but, for common general purpose machines, I think it's safe to say that 47,223,664,828,696,452,136,959 terabytes ought to be enough for anyone given that even at the best of Moore's law common desktops will take 9 or more years to get to 1 Terabyte of RAM.
"...IBM can cram 1 TB of memory into a 4U chassis or 2 TB in an eight-socket box in two 4U chassis..."
http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/page2.html http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/
I don't know who will use it or how much they will need to pay for it or even when they will be available, but they are talking about it (in this case at the last CEBIT in March).
People are building some very big systems for example with lots and lots of virtual machines.
Yes... My intent, like the 640k quote, was aimed at the common desktop machine and primarily to show that since 1 TB is an inconceivably large memory footprint for any normal user today, it's going to be a long long time before 47,223,664,828,696,452,136,959 TB comes up short for anyone's needs. Owen
On Sun, 30 Jan 2011 17:39:45 +0100, Leen Besselink said:
On 01/25/2011 11:06 PM, Owen DeLong wrote:
"640k ought to be enough for anyone."
Remember that when this apocryphal statement was allegedly made in 1981, IBM mainframes and Crays and the like were already well into the 64-256M of RAM area, and it was intended to apply to consumer-class machines (like what you'd run Windows on).
If IPv4 is like 640k, then, IPv6 is like having 47,223,664,828,696,452,136,959 terabytes of RAM. I'd argue that while 640k was short sighted, I think it is unlikely we will see machines with much more than a terabyte of RAM in the lifetime of IPv6.
I would be very careful with such predictions. How about 2 TB of RAM ?:
OK. A petabyte of ram instead. Better? Then IPv6 is like having 47,223,664,828,696,452,136 petabytes of RAM. Or go to an exabyte of ram. That's a million times the current highest density and still leaves you a bunch of commas in the number. We seem to be managing at best, at the bleeding edge, one comma per decade. And note that the change in units from kbytes to terabytes itself hides 3 more commas. In any case, the fact you can stick a terabyte of RAM into a 4U Dell rack mount that sucks a whole lot of power doesn't mean we're anywhere near being able to do it for consumer-class hardware. Remember, much of the growth is going to be in the embedded and special purpose systems - the smart phone/PDA/handheld game system arena, etc. How many fully loaded R910's will Dell sell, and how many iPhones will Apple sell? How long before a Blackberry or an iPhone has a terabyte of RAM? (For that matter, when will they get to a terabyte of SSD capacity?)
In any case, the fact you can stick a terabyte of RAM into a 4U Dell rack mount that sucks a whole lot of power doesn't mean we're anywhere near being able to do it for consumer-class hardware. Remember, much of the growth is going to be in the embedded and special purpose systems - the smart phone/PDA/handheld game system arena, etc. How many fully loaded R910's will Dell sell, and how many iPhones will Apple sell? How long before a Blackberry or an iPhone has a terabyte
of
RAM? (For that matter, when will they get to a terabyte of SSD capacity?)
There are other reasons to use a /64. Some vendors have the option of storing either the /64 prefix or the entire /128 in CAM. The option to select one mode or the other is often not specific to a route but is a global option. If you want hardware switching of nets smaller than /64, you need to enable the mode that places the entire address in CAM. Doing that results in an increase in resources required to hold routes resulting in fewer routes in hardware at any given time. So hardware that, by default, places the entire address in CAM is fine, but if you have the option of using only the /64 prefix then an IPv6 route becomes less expensive (only 2x the "cost" of a v4 route in resources rather than 4x the cost).
On Tue, 25 Jan 2011 16:17:59 EST, Ricky Beam said:
On Mon, 24 Jan 2011 19:46:19 -0500, Owen DeLong <owen@delong.com> wrote:
Dude... In IPv6, there are 18,446,744,073,709,551,616 /64s.
Those who don't learn from history are doomed to repeat it.
"Dude, there are 256 /8 in IPv4."
"640k ought to be enough for anyone."
People can mismanage anything into oblivion. IPv6 will end up the same mess IPv4 has become. (granted, it should take more than 30 years this time.)
To burn through all the /48s in 100 years, we'll have to use them up at the rate of 89,255 *per second*. That implies either *really* good aggregation, or your routers having enough CPU to handle the BGP churn caused by 90K new prefixes arriving on the Internet per second. Oh, and hot-pluggable memory, you'll need another terabyte of RAM every few hours. At that point, running out of prefixes is the *least* of your worries.
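That figure checks out; a one-screen sketch:

total_slash_48s = 2 ** 48                 # 281,474,976,710,656
seconds_in_a_century = 100 * 365 * 86_400
print(f"{total_slash_48s / seconds_in_a_century:,.0f} /48s per second")  # ~89,255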
In a message written on Tue, Jan 25, 2011 at 05:07:16PM -0500, Valdis.Kletnieks@vt.edu wrote:
To burn through all the /48s in 100 years, we'll have to use them up at the rate of 89,255 *per second*.
That implies either *really* good aggregation, or your routers having enough CPU to handle the BGP churn caused by 90K new prefixes arriving on the Internet per second. Oh, and hot-pluggable memory, you'll need another terabyte of RAM every few hours. At that point, running out of prefixes is the *least* of your worries.
If you were allocating individual /48's, perhaps. But see, I'm a cable company, and I want a /48 per customer, and I have a couple of hundred thousand per pop, so I need a /30 per pop. Oh, and I have a few hundred pops, and I need to be able to aggregate regionally, so I need a /24.

By my calculations I just used 16M /48's and I did it in about 60 seconds to write a paragraph. That's about 279,620 per second, so I'm well above your rate.

To be serious for a moment, the problem isn't that we don't have enough /48's, but that humans are really bad at thinking about these big numbers. We're going from a very constrained world with limited aggregation (IPv4) to a world that seems very unconstrained, and building in a lot of aggregation. Remember the very first IPv6 addressing proposals had a fully structured address space and only 4096 ISP's at the top of the chain!

If we aggregate poorly, we can absolutely blow through all the space, stranding it in all sorts of new and interesting ways.

-- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
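Leo's arithmetic checks out. A sketch of the same numbers in Python (the customer and pop counts are his hypothetical figures, not a real allocation):

  # Hypothetical cable-company plan: a /48 per customer, a /30 per pop,
  # a /24 per region, as described above.
  customers_per_pop = 2 ** (48 - 30)    # 262,144 /48s fit in a /30
  pops_per_region = 2 ** (30 - 24)      # 64 /30s fit in a /24
  total_48s = 2 ** (48 - 24)            # 16,777,216 /48s in a /24
  print(customers_per_pop, pops_per_region, total_48s)
  print(total_48s / 60)                 # ~279,620 /48s "used" per second of typing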
On Jan 25, 2011, at 2:21 PM, Leo Bicknell wrote:
In a message written on Tue, Jan 25, 2011 at 05:07:16PM -0500, Valdis.Kletnieks@vt.edu wrote:
To burn through all the /48s in 100 years, we'll have to use them up at the rate of 89,255 *per second*.
That implies either *really* good aggregation, or your routers having enough CPU to handle the BGP churn caused by 90K new prefixes arriving on the Internet per second. Oh, and hot-pluggable memory, you'll need another terabyte of RAM every few hours. At that point, running out of prefixes is the *least* of your worries.
If you were allocating individual /48's, perhaps. But see, I'm a cable company, and I want a /48 per customer, and I have a couple of hundred thousand per pop, so I need a /30 per pop. Oh, and I have a few hundred pops, and I need to be able to aggregate regionally, so I need a /24.
By my calculations I just used 16M /48's and I did it in about 60 seconds to write a paragraph. That's about 279,620 per second, so I'm well above your rate.
How soon do you expect your $CABLECO to need to come back to the RIR for their next /24? That is the meaningful number. The fact that it took you 60 seconds to use a /24 to retrofit a network that was built over decades really isn't a useful measure of utilization rate.
To be serious for a moment, the problem isn't that we don't have enough /48's, but that humans are really bad at thinking about these big numbers. We're going from a very constrained world with limited aggregation (IPv4) to a world that seems very unconstrained, and building in a lot of aggregation. Remember the very first IPv6 addressing proposals had a fully structured address space and only 4096 ISP's at the top of the chain!
Yep... Proposal 121 is intended to help address this problem (the humans are bad at math and big numbers problem).
If we aggregate poorly, we can absolutely blow through all the space, stranding it in all sorts of new and interesting ways.
We may or may not blow through the space, but we certainly can easily render the space we do blow through useless. Owen
On Tue, 25 Jan 2011 14:21:12 PST, Leo Bicknell said:
If you were allocating individual /48's, perhaps. But see, I'm a cable company, and I want a /48 per customer, and I have a couple of hundred thousand per pop, so I need a /30 per pop. Oh, and I have a few hundred pops, and I need to be able to aggregate regionally, so I need a /24.
By my calculations I just used 16M /48's and I did it in about 60 seconds to write a paragraph. That's about 279,620 per second, so I'm well above your rate.
Fine. You got ARIN or somebody to allocate your *first* /24 in under a minute. Now how long will it take you to actually *deploy* that many destinations? And where do you plan to get your customers for the next 4 or 5 /24's, and how long will *those* deploys take? Face it Leo, you can't *sustain* that growth rate.
building in a lot of aggregation. Remember the very first IPv6 addressing proposals had a fully structured address space and only 4096 ISP's at the top of the chain!
How many Tier-1's are there now, even if you include all the wannabes? And how long would it take at current growth rates of Tier-1 status to run out the *other* 4,087 entries?
On Jan 25, 2011, at 2:32 PM, Valdis.Kletnieks@vt.edu wrote:
On Tue, 25 Jan 2011 14:21:12 PST, Leo Bicknell said:
If you were allocating individual /48's, perhaps. But see, I'm a cable company, and I want a /48 per customer, and I have a couple of hundred thousand per pop, so I need a /30 per pop. Oh, and I have a few hundred pops, and I need to be able to aggregate regionally, so I need a /24.
By my calculations I just used 16M /48's and I did it in about 60 seconds to write a paragraph. That's about 279,620 per second, so I'm well above your rate.
Fine. You got ARIN or somebody to allocate your *first* /24 in under a minute. Now how long will it take you to actually *deploy* that many destinations? And where do you plan to get your customers for the next 4 or 5 /24's, and how long will *those* deploys take?
Face it Leo, you can't *sustain* that growth rate.
building in a lot of aggregation. Remember the very first IPv6 addressing proposals had a fully structured address space and only 4096 ISP's at the top of the chain!
How many Tier-1's are there now, even if you include all the wannabes? And how long would it take at current growth rates of Tier-1 status to run out the *other* 4,087 entries?
RIRs allocate to lots of non-Tier-1 organizations, so, that count is pretty meaningless no matter what definition of "tier 1" you decide to use. I suspect that there are probably somewhere between 30,000 and 120,000 ISPs world wide that are likely to end up with a /32 or shorter prefix. Owen
Owen DeLong wrote:
...... I suspect that there are probably somewhere between 30,000 and 120,000 ISPs world wide that are likely to end up with a /32 or shorter prefix.
A /32 is the value that a start-up ISP would have. Assuming that there is a constant average rate of startups/failures per year, the number of /32's in the system should remain fairly constant over time. Every organization with a *real* customer base should have significantly shorter than a /32. In particular every organization that says "I can't give my customers prefix length X because I only have a /32" needs to go back to ARIN today and trade that in for a *real block*. There should be at least 10 organizations in the ARIN region that qualify for a /20 or shorter, and most would likely be /24 or shorter. As Owen said earlier, proposal 121 is intended to help people through the math. Please read the proposal, and even if you don't want to comment on the PPML list about it, take that useless /32 back to ARIN and get a *real block* today. Tony
On Jan 25, 2011, at 4:20 PM, Tony Hain wrote:
Owen DeLong wrote:
...... I suspect that there are probably somewhere between 30,000 and 120,000 ISPs world wide that are likely to end up with a /32 or shorter prefix.
A /32 is the value that a start-up ISP would have. Assuming that there is a constant average rate of startups/failures per year, the number of /32's in the system should remain fairly constant over time.
Every organization with a *real* customer base should have significantly shorter than a /32. In particular every organization that says "I can't give my customers prefix length X because I only have a /32" needs to go back to ARIN today and trade that in for a *real block*. There should be at least 10 organizations in the ARIN region that qualify for a /20 or shorter, and most would likely be /24 or shorter.
As Owen said earlier, proposal 121 is intended to help people through the math. Please read the proposal, and even if you don't want to comment on the PPML list about it, take that useless /32 back to ARIN and get a *real block* today.
Tony
Unfortunately, it's hard for them to do that *today*. That's the other thing proposal 121 is intended to do: help ARIN make better allocations for ISPs. Indeed, a key part of my quoted paragraph above was the "or shorter" phrase. Even in that scenario, though, I expect a typical ISP will use a /28, a moderately large ISP will use a /24, a very large access provider might use a /20, and only a handful of extremely large providers are likely to get /16s even under the generous criteria of proposal 121.

Fully deployed, the current internet would probably consume less than a /12 per RIR if every RIR adopted proposal 121. The 50-year projections of internet growth would likely have each RIR invading, but not using more than half of, their second /12. Even if every RIR gets to 3 /12s in 50 years, that's still only 15/512ths of the initial /3 delegated to unicast space by the IETF. There are 6+ more /3s remaining in the IETF pool.

Owen
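The fractions quoted here work out as follows (a sketch; the three-/12s-per-RIR figure is Owen's projection, not an allocation):

  # Five RIRs each holding three /12s, measured against the initial /3.
  twelves_per_slash3 = 2 ** (12 - 3)                          # 512 /12s in a /3
  used = 5 * 3                                                # 15 /12s
  print(used, twelves_per_slash3, used / twelves_per_slash3)  # 15 512 ~0.03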
On Jan 25, 2011, at 5:33 PM, Nathan Eisenberg wrote:
Even if every RIR gets to 3 /12s in 50 years, that's still only 15/512ths of the initial /3 delegated to unicast space by IETF. There are 6+ more /3s remaining in the IETF pool.
That's good news - we need to make sure we have a /3 for both the Moon and Mars colonies. ;)
Nathan
I'll be surprised and happy to see it if we reach the point of colonizing either of those locations before we reach the point of IPv6 being an insufficient protocol for modern needs. Owen
On Wed, Jan 26, 2011 at 01:33:05AM +0000, Nathan Eisenberg wrote:
Even if every RIR gets to 3 /12s in 50 years, that's still only 15/512ths of the initial /3 delegated to unicast space by IETF. There are 6+ more /3s remaining in the IETF pool.
That's good news - we need to make sure we have a /3 for both the Moon and Mars colonies. ;)
A /64 is barely enough bits for ~2 m resolution over the Earth's surface, and barely up to the Karman line. In practice you'd aim for ~um resolution for all major gravity wells in this system (DTN is already flying, there's a Cisco box in Earth orbit, Moon and Mars are next). (And of course eventually you're going to multiply the above by 10^11 or so.)
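A rough check of that claim, sketched in Python; the surface area and the 100 km Karman-line altitude are round-number assumptions:

  # 2 m voxels over the Earth's surface, up to ~100 km, versus a single /64.
  surface_m2 = 5.1e14          # Earth's surface, ~5.1e8 km^2
  altitude_m = 1.0e5           # Karman line, ~100 km
  voxels = surface_m2 * altitude_m / 2.0 ** 3   # 2 m cubes
  print(voxels, 2.0 ** 64)     # ~6.4e18 vs ~1.8e19 -- it only just fits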
On Jan 26, 2011, at 6:29 PM, Eugen Leitl wrote:
In practice you'd aim for ~um resolution for all major gravity wells in this system (DTN is already flying, there's a Cisco box in Earth orbit, Moon and Mars are next).
Don't forget the asteroid belt, that's where the real money is. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Tue, 25 Jan 2011, Tony Hain wrote:
Every organization with a *real* customer base should have significantly shorter than a /32. In particular every organization that says "I can't give my customers prefix length X because I only have a /32" needs to go back to ARIN today and trade that in for a *real block*. There should be at least 10 organizations in the ARIN region that qualify for a /20 or shorter, and most would likely be /24 or shorter.
+1 on this. We returned the /32 that we received back in 2002 or so, and after proper application received a /25, with what I believe is up to a /22 reserved for us in case we need it. We hope we're not going to have to pollute the DFZ with more than a single entry in the foreseeable future.

To everybody who thinks we need to conserve addresses, please consider the current allocation policy (/48 and /56) as something we'll do for the first /3 in use; when we exhaust that, we really need to look at what we're doing and see if we need to change the policy for the other /3s. We have 7 more tries to go before we exhaust the whole IPv6 space.

-- Mikael Abrahamsson email: swmike@swm.pp.se
On Jan 25, 2011, at 2:07 PM, Valdis.Kletnieks@vt.edu wrote:
On Tue, 25 Jan 2011 16:17:59 EST, Ricky Beam said:
On Mon, 24 Jan 2011 19:46:19 -0500, Owen DeLong <owen@delong.com> wrote:
Dude... In IPv6, there are 18,446,744,073,709,551,616 /64s.
Those who don't learn from history are doomed to repeat it.
"Dude, there are 256 /8 in IPv4."
"640k ought to be enough for anyone."
People can mismanage anything into oblivion. IPv6 will end up the same mess IPv4 has become. (Granted, it should take more than 30 years this time.)
To burn through all the /48s in 100 years, we'll have to use them up at the rate of 89,255 *per second*.
That implies either *really* good aggregation, or your routers having enough CPU to handle the BGP churn caused by 90K new prefixes arriving on the Internet per second. Oh, and hot-pluggable memory, you'll need another terabyte of RAM every few hours. At that point, running out of prefixes is the *least* of your worries.
This presumes that we don't run out of /48s by installing them in routers a /20 at a time. Owen
On 24/01/2011 09:46 p.m., Owen DeLong wrote:
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem.
Myopia doesn't make the problem go away. The point of such an attack is not to "find things", but to overload the router(s). (which can be done rather easily by a few dozen machines.)
Only if you don't deploy reasonable mitigation strategies.
Just wondering: What would you deem as "reasonable mitigation strategies"? Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
On Mon, Jan 24, 2011 at 7:10 PM, Ricky Beam <jfbeam@gmail.com> wrote:
On Mon, 24 Jan 2011 15:53:32 -0500, Ray Soucy <rps@maine.edu> wrote:
Every time I see this question it' usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.
Not exactly. If it's a point-to-point link, then there are *TWO* machines on it -- one at each end; there will *always* be two machines on it. There's no sane reason to assign 18 quintillion addresses to it. Yes, in this respect it's like not wasting an IPv4 /24, but it's also The Right Tool For The Job(tm). If one were to assign a /64 to every p-t-p link in the world, IPv6 address space would be used at an insane rate (and those assignments aren't free). Do people not remember the early days of IPv4, where /8's were handed out like Pez?
Not sure we disagree? I happily use 126-bit prefixes on point-to-point networks. The discussion is on LAN networks, though.
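For what it's worth, a /126 leaves exactly four addresses on the link. A small sketch with Python's ipaddress module, using the 2001:db8::/32 documentation prefix as a stand-in:

  import ipaddress

  link = ipaddress.ip_network("2001:db8:0:1::/126")
  print(link.num_addresses)            # 4
  print([str(a) for a in link])        # the four addresses on the link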
That said. Any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though. And you're often crippling the advantages of IPv6.
And such applications are *BROKEN*. IPv6 is *classless* -- end of discussion.
Yes, they are broken, but if you think that 1% of users running prefixes longer than 64 bits will make those bad developers change their ways, I'm not sure what to tell you. The horse has already left the barn on that one. But I'm not going to stop you from using 120-bit prefixes on all your networks if that's what you want. Having to shuffle LAN networks around because they no longer have the address space to support the number of hosts on them goes away if you stick with the 64 rule. DoS problems exist regardless of the prefix length. I'd rather find a solution to the problem than simply try to mitigate it by turning IPv6 into IPv4.
/64 (and /80 previously) is a lame optimization so really stupid devices can find an address in 4 bytes of machine code. This is quite lame given all the other complex baggage IPv6 requires.
And then /64 is only required by SLAAC, which you are not required to use.
You should think of IPv6 as a 64-bit address that happens to include a 64-bit host identifier.
No, you should not. That undermines the fundamental concept of IPv6 being *classless*. And it will lead to idiots writing broken applications and protocols assuming that to be true.
IPv6 is classless. You can have any size prefix you want. Chopping off the last 64 bits and calling it a host segment essentially turns IPv6 into a 64-bit address that will never need PAT (NAT overload), which was the reasoning behind going with a 128-bit address space. You can use any size prefix you want for routing. Are you of the mindset that everyone should use a 120-bit prefix for each network, then advertise each of those into BGP without route aggregation? I'm not really sure where you're going here.

The argument can also be made that using smaller prefixes with sequential host numbering will make network sweeps and port scanning viable in IPv6 where they would otherwise be useless. At that point you just need evidence of one IPv6 address being in use, and you know that a few hundred next to it have the interesting hosts connected.
Another thing to consider is that most processors today lack operations for values that are larger than 64-bit. By separating the host and network segment at the 64-bit boundary you may be able to take advantage of performance optimizations that make the distinction between the two (and significantly reduce the cost of routing decisions, contributing to lower latency).
IPv6 is classless; routers cannot blindly make that assumption for "performance optimization".
I'm not saying it's make-or-break. It certainly does help, especially when we're trying to shave nanoseconds off. I don't foresee 128-bit CPUs appearing simply to accommodate IPv6 anytime soon. These kinds of optimizations already happen every day in other applications. IPv6 will likely be no different as vendors begin to compete on who has the best IPv6 hardware.
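The idea being described here can be illustrated in a few lines; this is a hypothetical sketch of keying a lookup on the upper 64 bits, not any vendor's actual forwarding pipeline:

  import ipaddress

  def upper64(addr):
      # With a fixed /64 boundary, the routing key is just the top 64 bits.
      return int(ipaddress.IPv6Address(addr)) >> 64

  route_key = upper64("2001:db8:cafe:1::")       # a /64 route
  packet = "2001:db8:cafe:1::dead:beef"
  print(upper64(packet) == route_key)            # True: one 64-bit comparison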
Many cite concerns of potential DoS attacks by doing sweeps of IPv6 networks. I don't think this will be a common or wide-spread problem.
Myopia doesn't make the problem go away. The point of such an attack is not to "find things", but to overload the router(s). (which can be done rather easily by a few dozen machines.)
--Ricky
Neither does avoiding it until it gets ignored? There are plenty of solutions and best practices that have nothing to do with the prefix length. I'm saying that you shouldn't let the possibility of a specific type of DoS attack (one that is less attractive to attackers than nearly every other type of DoS attack out there) be the determinant of what prefix length you use. If an attacker wants to DoS a router and has the capacity to use this attack vector, they also have the capacity to achieve it with a dozen other ones.

-- Ray Soucy Epic Communications Specialist Phone: +1 (207) 561-3526 Networkmaine, a Unit of the University of Maine System http://www.networkmaine.net/
On 25/01/2011 11:44 a.m., Ray Soucy wrote:
The argument can also be made that using smaller prefixes with sequential host numbering will lead to making network sweeps and port scanning viable in IPv6 where it would otherwise be useless. At that point you just need evidence of one IPv6 address being in use and you know that a few hundred next to it have the interesting hosts connected.
Sequential host numbering is already being used, regardless of the prefix lengths in use. Also, the claim that "IPv6 address scanning is impossible" is generally based on the (incorrect) assumption that host addresses are spread (randomly) over the 64-bit IID -- but they usually aren't. Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
On Jan 26, 2011, at 8:12 AM, Fernando Gont wrote:
Also, the claim that "IPv6 address scanning is impossible" is generally based on the (incorrect) assumption that host addresses are spread (randomly) over the 64-bit IID. -- But they usually aren't.
It also doesn't take into account hinted scanning via routing table lookups, whois lookups, and walking reverse DNS, not to mention making use of ND mechanisms once a single box on a given subnet has been successfully botted. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Tue, Jan 25, 2011 at 8:29 PM, Roland Dobbins <rdobbins@arbor.net> wrote:
On Jan 26, 2011, at 8:12 AM, Fernando Gont wrote:
Also, the claim that "IPv6 address scanning is impossible" is generally based on the (incorrect) assumption that host addresses are spread (randomly) over the 64-bit IID. -- But they usually aren't.
It also doesn't take into account hinted scanning via routing table lookups, whois lookups, and walking reverse DNS, not to mention making use of ND mechanisms once a single box on a given subnet has been successfully botted.
It's not that discovering IPv6 hosts is impossible -- it's just that there's a very large mathematical obstacle between any brute-force attempt and the hosts being discovered, one that didn't exist with IPv4. It is fair to say in the aggregate that 'scanning is impossible' with IPv6, but host discovery is not impossible. Exhaustive scanning is what is basically impossible.

Hinted partial scanning might yield a useful number of guessable host addresses to be attempted; that is, if most networks wind up using some guessable addresses for possibly vulnerable hosts, then someone, somewhere will find it worthwhile to attempt partial scanning of random announced prefixes: attempting to guess network IDs, then attempting to guess LAN host IDs. The bots attempting partial scanning will have to have a lot of ideas about what addresses are most likely to be assigned, and some mechanism for making a "tradeoff" decision about when to give up on a certain network and move on to attempt partial scanning against the next prefix.

DNS walking and use of ND mechanisms are something different from scanning. They are also less effective -- a would-be intruder has to compromise a host on the LAN before ND can be of any use, and it doesn't help much in discovering LAN hosts on other subnets (if, say, the compromised host is in a very small IPv6 DMZ isolated from potentially vulnerable hosts in separate secure networks); DNS walking is no good against hosts not listed in DNS. There are other methods of discovery as well, but they are not close in scale or 'ease of use' to what brute-force address space scanning could easily accomplish with IPv4.

-- -JH
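Some rough numbers behind "exhaustive scanning is basically impossible" (a sketch; the million-probes-per-second rate is an arbitrary assumption):

  probes_per_second = 1_000_000
  seconds_per_year = 365 * 24 * 3600

  print(2 ** 64 / probes_per_second / seconds_per_year)  # ~585,000 years to sweep one /64
  print(2 ** 8 / probes_per_second)                      # a /120 falls in well under a second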
On Jan 26, 2011, at 11:17 AM, Jimmy Hess wrote:
There are other methods of discovery as well, but they are not close in scale or 'ease of use' to what brute-force address space scanning could easily accomplish with IPv4.
Most botted hosts today are compromised in the first place via layer-7 exploits, not via scanning and network-based exploits. Pushing the miscreants in the direction of hinted scanning will further strain already overloaded whois and DNS servers. And even though iterative scanning is a crapshoot in IPv6, it costs attackers nothing to do it anyway, and so they will. So the fact that IPv6 access networks can contain huge numbers of possible endpoint addresses as compared to IPv4 is largely irrelevant, and in fact will have negative consequences with regard to the second-order effects of hinted scanning. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On 25/01/2011 11:29 p.m., Roland Dobbins wrote:
On Jan 26, 2011, at 8:12 AM, Fernando Gont wrote:
Also, the claim that "IPv6 address scanning is impossible" is generally based on the (incorrect) assumption that host addresses are spread (randomly) over the 64-bit IID. -- But they usually aren't.
It also doesn't take into account hinted scanning via routing table lookups, whois lookups, and walking reverse DNS, not to mention making use of ND mechanisms once a single box on a given subnet has been successfully botted.
+1 Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
On 24/01/2011 05:53 p.m., Ray Soucy wrote:
Every time I see this question it' usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.
That said. Any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though. And you're often crippling the advantages of IPv6.
Just curious: What are the advantages you're referring to? Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
On Jan 25, 2011, at 10:30 PM, Fernando Gont wrote:
On 24/01/2011 05:53 p.m., Ray Soucy wrote:
Every time I see this question it' usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.
That said. Any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though. And you're often crippling the advantages of IPv6.
Just curious: What are the advantages you're referring to?
1. Sparse addressing
2. SLAAC
3. RFC 4941 Privacy Addressing
4. Never have to worry about "growing" a subnet to hold new machines.
5. Universal subnet size, no surprises, no operator confusion, no bitmath.

There are probably others.

Owen
On 26/01/2011 06:14 a.m., Owen DeLong wrote:
That said. Any size prefix will likely work and is even permitted by the RFC. You do run the risk of encountering applications that assume a 64-bit prefix length, though. And you're often crippling the advantages of IPv6.
Just curious: What are the advantages you're referring to?
1. Sparse addressing
This comes at a cost, though.
2. SLAAC 3. RFC 4941 Privacy Addressing
Privacy Extensions "solve" (*) a privacy issue *introduced* by SLAAC embedding the MAC address in the IID. So, if anything, I deem this a patch rather than a feature. (*) There is some literature on the effectiveness of privacy addresses. Some have even argued that they are harmful.
4. Never have to worry about "growing" a subnet to hold new machines.
As in #1, this comes at a price.
5. Universal subnet size, no surprises, no operator confusion, no bitmath.
With quite a bit of experience with subnetting (from IPv4), I doubt this can be flagged as a benefit. Thanks, -- Fernando Gont e-mail: fernando@gont.com.ar || fgont@acm.org PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1
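For readers following the privacy-extensions point: the interface ID that classic SLAAC derives from the MAC is the modified EUI-64, which is exactly what embeds the hardware address in the IID. A minimal sketch of that derivation (the MAC and prefix are made-up examples):

  def eui64_iid(mac):
      b = bytearray(int(x, 16) for x in mac.split(":"))
      b[0] ^= 0x02                       # flip the universal/local bit
      iid = b[:3] + b"\xff\xfe" + b[3:]  # insert ff:fe between the OUI and NIC bytes
      return ":".join(format(iid[i] << 8 | iid[i + 1], "x") for i in range(0, 8, 2))

  print("2001:db8:0:1:" + eui64_iid("00:1b:21:3c:4d:5e"))
  # -> 2001:db8:0:1:21b:21ff:fe3c:4d5e -- the MAC is plainly visible in the IID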
* Ray Soucy:
Every time I see this question it' usually related to a fundamental misunderstanding of IPv6 and the attempt to apply v4 logic to v6.
True, you have to ignore more than a decade of IPv4 protocol development and resort to things like pre-VLSM networking.
That said. Any size prefix will likely work and is even permitted by the RFC.
Could you quote chapter and verse, please? RFC 4291 section 2.5.4 says this:

   All Global Unicast addresses other than those that start with binary
   000 have a 64-bit interface ID field (i.e., n + m = 64), formatted as
   described in Section 2.5.1.

-- Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
* Carlos Martinez-Cagnazzo:
The subject says it all... anyone with experience with a setup like this ?
Unicast addresses must be located in at least a /64 subnet. No doubt there are vendors which enforce this (perhaps even in the ASICs), so deviating from this rule will result in some lock-in. -- Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
The subject says it all... anyone with experience with a setup like this ?
Unicast addresses must be located in at least a /64 subnet. No doubt there are vendors which enforce this (perhaps even in the ASICs), so deviating from this rule will result in some lock-in.
The Juniper and Cisco equipment I have worked with can handle static LAN addresses with a mask different from /64 just fine. Same with OSes like FreeBSD, for instance. SLAAC obviously won't work. Steinar Haug, Nethelp consulting, sthaug@nethelp.no
participants (51):
- Adrian Chadd
- Bill Stewart
- bmanning@vacation.karoshi.com
- Cameron Byrne
- Carlos Martinez-Cagnazzo
- Carlos Martinez-Cagnazzo
- Chuck Anderson
- cja@daydream.com
- Dorn Hetzel
- Eliot Lear
- eric clark
- Eugen Leitl
- Fernando Gont
- Florian Weimer
- George Bonser
- George Herbert
- Jack Bates
- Jamie Bowden
- Jimmy Hess
- Joel Jaeggli
- John Curran
- John Levine
- Karl Auer
- Koch, Andrew
- Lamar Owen
- Laurent GUERBY
- Leen Besselink
- Leo Bicknell
- Mark Andrews
- Mark Smith
- Matthew Petach
- Michael Dillon
- Michael Loftis
- Mikael Abrahamsson
- Nathan Eisenberg
- Neil Harris
- Owen DeLong
- Patrick Sumby
- Paul Timmins
- Per Carlson
- Phil Regnauld
- Randy Carpenter
- Ray Soucy
- Ricky Beam
- Rob Evans
- Roland Dobbins
- sthaug@nethelp.no
- TJ
- Tony Hain
- Valdis.Kletnieks@vt.edu
- William Herrin