I don't respond to many of these threads, but I have to say I've contested this one too, only to have it beaten into my head that a /64 is "appropriate". It still hasn't stuck, but unfortunately RFCs for other protocols now depend on the blocks being a /64. It's a waste; even if we're "planning for the future", no single house needs a /64 sitting on its LAN, or at least none I can sensibly think of o_O.

Ryan

On Fri, Sep 27, 2013 at 12:57 AM, <bmanning@vacation.karoshi.com> wrote:
Yup. Seen/Heard all that. Even tooted that horn for a while. /64 is an artificial boundary - many/most IANA/RIR delegations are in the top /32, which is functionally the same as handing out traditional /16s. Some RIR clients are "bigger" and demand more, so they get the v6 equivalent of /14s or smaller. It's the _exact_ same model as v4 in the previous decade, with the entire waste in the bottom /64.
It's tilting at windmills, but most of the community has "drunk the Kool-Aid" on wasteful v6 assignment. What a horrific legacy to hand to our children (and yes, it will hit that soon).
/bill
On Thu, Sep 26, 2013 at 01:18:50PM -0700, Darren Pilgrim wrote:
On 9/26/2013 1:07 PM, joel jaeggli wrote:
On Sep 26, 2013, at 12:29 PM, Darren Pilgrim <nanog@bitfreak.org> wrote:
On 9/26/2013 1:52 AM, bmanning@vacation.karoshi.com wrote:
sounds just like folks in 1985, talking about IPv4...
The foundation of that, though, was ignorance of address space exhaustion. IPv4's address space was too small for such large thinking.
The first discussion I could find about IPv4 runout in email archives is circa 1983.
IPv6 is far more than large enough to support such allocation policies.
There are certain tendencies towards profligacy that might prematurely influence the question of IPv6 exhaustion, and we should be on guard against them; allocating enough /48s as part of direct assignments is probably not one of them.
That's just it, I really don't think we actually have an exhaustion risk with IPv6. IPv6 is massive beyond massive. Let me explain.
We have this idea of the "/64 boundary". All those nifty automatic addressing things rely on it. We now have two generations of hardware and software that would more or less break if we did away with it. In essence, we've translated an IPv4 /32 into an IPv6 /64. Not great, but still quite large.
Current science says Earth can support ten billion humans. If we let the humans proliferate to three times the theoretical upper limit for Earth's population, a /64 for each human would come to about a /35's worth of /64's. If we're generous with Earth's carrying capacity, a /36's worth.
If we handed out /48's instead, so each human could give a /64 to each of their devices, it would all fit in a single /52. Those /48's would number existence at a rate of one /64 per human, one /64 per device, and a 65535:1 device:human ratio. That means we could allocate 4000::/3 just for Earth humans and devices and never need another block for that purpose.
That's assuming a very high utilisation ratio, of course, but really no worse than IPv4 is currently. The problem isn't allocation density, but router hardware. We need room for route aggregation and other means of compartmentalisation. Is a 10% utilisation rate sparse enough? At 10% utilisation, keeping the allocations to just 4000::/3, we'd need less than a single /60 for all those /48's. If 10% isn't enough, we can go quite a bit farther:
- 1% utilisation would fit all those /48's into a /62.
- A full /64 of those /48's would be 0.2% utilisation.
- 0.1%? We'd have to steal a bit and hand out /47's instead.
- /47 is ugly. At /52, we'd get .024% (one per 4096).
That's while maintaining a practice of one /64 per human or device with 65535 devices per human. Introduce one /64 per subnet and sub-ppm utilisation is possible. That would be like giving a site a /44 and the site only ever using the ::/64 of it.
Even with sloppy, sparse allocation policies and allowing limitless human and device population growth, we very likely cannot exhaust IPv6.
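A rough sanity check of the core count (a minimal Python sketch; the 30-billion population figure and the one-/48-per-human scheme are the assumptions from the paragraphs above, everything else is plain arithmetic):

    # One /48 per human, drawn only from 2000::/3 (or any other single /3).
    humans = 30_000_000_000                       # ~3x the 10-billion estimate above
    slash48s_per_slash3 = 2 ** (48 - 3)           # 35,184,372,088,832 (~35 trillion)
    print(slash48s_per_slash3 // humans)          # 1172 -- spare /48s per human
    print(f"{humans / slash48s_per_slash3:.4%}")  # ~0.0853% of the /3 consumed

Even before counting the other /3 blocks, one /48 per human uses well under a tenth of one percent of a single /3.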
On Fri, 27 Sep 2013, Ryan McIntosh wrote:
It's a waste; even if we're "planning for the future", no single house needs a /64 sitting on its LAN, or at least none I can sensibly think of o_O.
Okay, I'm just curious: what size do you (and others of similar opinion) think the IPv6 space _should_ have been in order to allow us to not have to jump through conservation hoops ever again? 128 bits isn't enough, clearly, so 256? 1k? 10k?

--
Brandon Ross
Yahoo & AIM: BrandonNRoss
+1-404-635-6667
ICQ: 2269442
Schedule a meeting: https://doodle.com/bross
Skype: brandonross
On 2013-09-27, at 10:40, Brandon Ross <bross@pobox.com> wrote:
On Fri, 27 Sep 2013, Ryan McIntosh wrote:
It's a waste; even if we're "planning for the future", no single house needs a /64 sitting on its LAN, or at least none I can sensibly think of o_O.
Okay, I'm just curious: what size do you (and others of similar opinion) think the IPv6 space _should_ have been in order to allow us to not have to jump through conservation hoops ever again? 128 bits isn't enough, clearly, so 256? 1k? 10k?
Given the design decision to use the bottom 64 bits to identify an individual host on a broadcast domain, the increase in address size isn't really 32 bits to 128 bits -- if your average v4 subnet size for a vlan is a /27, say, then it's more like an increase of 27 bits to 64 bits from the point of view of global assignment.

Alternatively, considering that it's normal to give a service provider at least a /32, whereas the equivalent assignment in v4 might have been something like a /19 (handwave, handwave), it's more like an increase of 13 bits to 32 bits.

Alternatively, considering that it's considered reasonable in some quarters to give an end-user a /48 so that they can break out different subnets inside their network, whereas with IPv4 you'd give a customer a single address and expect them to use NAT, then it's more like an increase of 31 bits to 48 bits.

That's still a lower bound of 2^17 times as many available addresses, and having enough addresses to satisfy a network 131,072 times as big as the current v4 Internet does not seem like a horrible thing. But the oft-repeated mantra that "there are enough addresses to individually number every grain of sand on the world's beaches" doesn't describe reality very well.

The IPv6 addressing plan didn't wind up meeting our requirements very well. Film at 11.

Joe
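For concreteness, here is the arithmetic behind the three comparisons above as a small Python sketch (the IPv4 baselines -- a /27 per vlan, a /19 per service provider, and the per-customer 31-bit figure -- are the handwaved assumptions from the message, not measurements):

    # Growth in assignable network bits under each comparison.
    for label, v4_bits, v6_bits in [
        ("per-subnet (v4 /27 -> v6 /64)", 27, 64),
        ("per-SP     (v4 /19 -> v6 /32)", 19, 32),
        ("per-user   (31 bits -> v6 /48, as above)", 31, 48),
    ]:
        extra = v6_bits - v4_bits
        print(f"{label}: {extra} extra bits = {2 ** extra:,}x as many prefixes")

The last line reproduces the 2^17 = 131,072 figure quoted above.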
On Fri, Sep 27, 2013 at 10:40 AM, Brandon Ross <bross@pobox.com> wrote:
Okay, I'm just curious, what size do you (and other's of similar opinion) think the IPv6 space _should_ have been in order to allow us to not have to jump through conservation hoops ever again? 128 bits isn't enough, clearly, 256? 1k? 10k?
Hi Brandon,

There is no bit length which allocations of /20's and larger won't quickly exhaust. It's not about the number of bits, it's about how we choose to use them.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
There is no bit length which allocations of /20's and larger won't quickly exhaust. It's not about the number of bits, it's about how we choose to use them.
Regards, Bill Herrin
True, but how many orgs do we expect to fall into that category? If the majority are getting /32, and only a handful are getting /24 or larger, can we assume that the average is going to be ~/28? If that is so, then out of the current /3, we can support over 30,000,000 entities. Actually, I would think the average is much closer to /32, since there are several orders of magnitude more orgs with /32 than /20 or smaller. Assuming /32 would be 500 million out of the /3. So somewhere between 30 and 500 million orgs.

How many ISPs do we expect to be able to support? Also, consider that there are 7 more /3s that could be allocated in the future.

As has been said, routing slots in the DFZ get to be problematic much sooner than address runout. Most current routers support ~1 million IPv6 routes. I think it would be reasonable to assume that that number could grow by an order of magnitude or 2, but I don't think we'll see a billion or more routes in the lifetime of IPv6. Therefore, I don't see any reason to artificially inflate the routing table by conserving, and then making orgs come back for additional allocations.

-Randy
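The counts above follow directly from the prefix lengths (a quick Python sketch; the "average ~/28" and "average /32" figures are the assumptions in the preceding paragraph):

    # How many allocations of a given average size fit into one /3?
    def allocations_per_slash3(avg_prefix_len: int) -> int:
        return 2 ** (avg_prefix_len - 3)

    print(allocations_per_slash3(28))  # 33,554,432  -- "over 30,000,000 entities"
    print(allocations_per_slash3(32))  # 536,870,912 -- the ~500 million figure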
On Sep 27, 2013, at 10:04 AM, Randy Carpenter <rcarpen@network1.net> wrote:
There is no bit length which allocations of /20's and larger won't quickly exhaust. It's not about the number of bits, it's about how we choose to use them.
Regards, Bill Herrin
True, but how many orgs do we expect to fall into that category? If the majority are getting /32, and only a handful are getting /24 or larger, can we assume that the average is going to be ~/28 ? If that is so, then out of the current /3, we can support over 30,000,000 entities. Actually, I would think the average is much closer to /32, since there are several orders of magnitude more orgs with /32 than /20 or smaller. Assuming /32 would be 500 million out of the /3. So somewhere between 30 and 500 million orgs.
How many ISPs do we expect to be able to support? Also, consider that there are 7 more /3s that could be allocated in the future.
As has been said, routing slots in the DFZ get to be problematic much sooner than address runout. Most current routers support ~1 million IPv6 routes. I think it would be reasonable to assume that that number could grow by an order of magnitude or 2, but I don't think we'll see a billion or more routes in the lifetime of IPv6. Therefore, I don't see any reason to artificially inflate the routing table by conserving, and then making orgs come back for additional allocations.
In IPv4 there are 482319 routes and 45235 ASNs in the DFZ this week; of those, 18619 (~40%) announce only one prefix. Given the distribution of prefix counts across ASNs, it's quite reasonable to conclude that the consumption of routing table slots is not primarily a property of the number of participants, but rather in the hands of a smaller number of large participants, many of whom are in this room.
-Randy
In IPv4 there are 482319 routes and 45235 ASNs in the DFZ this week; of those, 18619 (~40%) announce only one prefix. Given the distribution of prefix counts across ASNs, it's quite reasonable to conclude that the consumption of routing table slots is not primarily a property of the number of participants, but rather in the hands of a smaller number of large participants, many of whom are in this room.
Which reinforces the idea that routing slots are going to be more of an issue than allocation size.

-Randy
On Fri, Sep 27, 2013 at 1:04 PM, Randy Carpenter <rcarpen@network1.net> wrote:
There is no bit length which allocations of /20's and larger won't quickly exhaust. It's not about the number of bits, it's about how we choose to use them.
True, but how many orgs do we expect to fall into that category? If the majority are getting /32, and only a handful are getting /24 or larger, can we assume that the average is going to be ~/28 ? If that is so, then out of the current /3, we can support over 30,000,000 entities. Actually, I would think the average is much closer to /32, since there are several orders of magnitude more orgs with /32 than /20 or smaller. Assuming /32 would be 500 million out of the /3. So somewhere between 30 and 500 million orgs.
How many ISPs do we expect to be able to support? Also, consider that there are 7 more /3s that could be allocated in the future.
Hi Randy,

If that's how we choose to use IPv6 then runout should be a long way away. That's a big "if". And choosing to stay that course is a form of conservation.
Therefore, I don't see any reason to artificially inflate the routing table by conserving, and then making orgs come back for additional allocations.
I'm not convinced of that. Suppose the plan was: you start with a /56. When you need more you get a /48. Next is a /40. Next a /32. Next a /28. You can hold exactly one of each size, never more. And the RIRs tell us all which address banks each size comes from.

In such a scenario, the RIR doesn't have to reserve a /28 for expansion every time they allocate a /32. 'Cause, you know, that's what they've been doing. And you can easily program your router to discard the TE routes you don't wish to carry since you know what the allocation size was. That means you only have to carry at most 5 routes for any given organization. You'd want to allow some TE for the sake of efficient routing, but you get to choose how much.

As things stand now, you're going to allow those guys with the /19s and /22s to do traffic engineering all the way down to /48. You don't have a practical way to say "no."

Food for thought.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
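To make the filtering side of that idea concrete, here is a minimal Python sketch. The address banks below are invented purely for illustration (no RIR publishes such a plan today); the point is only that when each allocation size comes from a known bank, anyone can mechanically tell an allocation-sized announcement from a TE more-specific and decide whether to carry it:

    import ipaddress

    # Hypothetical banks: each one only ever issues a single allocation size.
    BANKS = {
        ipaddress.ip_network("2001:db8::/34"):      48,  # bank issuing /48s
        ipaddress.ip_network("2001:db8:4000::/34"): 40,  # bank issuing /40s
        ipaddress.ip_network("2001:db8:8000::/34"): 36,  # bank issuing /36s
    }

    def is_te_more_specific(route: str) -> bool:
        """True if the route is longer than the size its bank hands out."""
        net = ipaddress.ip_network(route)
        for bank, alloc_len in BANKS.items():
            if net.subnet_of(bank):
                return net.prefixlen > alloc_len
        return False  # unknown bank: don't assume anything

    print(is_te_more_specific("2001:db8:1234::/48"))       # False -- allocation-sized
    print(is_te_more_specific("2001:db8:1234:ff00::/56"))  # True  -- TE deaggregate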
On Fri, Sep 27, 2013 at 2:11 PM, William Herrin <bill@herrin.us> wrote:
On Fri, Sep 27, 2013 at 1:04 PM, Randy Carpenter <rcarpen@network1.net> wrote:
Therefore, I don't see any reason to artificially inflate the routing table by conserving, and then making orgs come back for additional allocations.
I'm not convinced of that. Suppose the plan was: you start with a /56. When you need more you get a /48. Next is a /40. Next a /32. Next a /28. You can hold exactly one of each size, never more. And the RIRs tell us all which address banks each size comes from.
In such a scenario, the RIR doesn't have to reserve a /28 for expansion every time they allocate a /32. 'Cause, you know, that's what they've been doing. And you can easily program your router to discard the TE routes you don't wish to carry since you know what the allocation size was. That means you only have to carry at most 5 routes for any given organization. You'd want to allow some TE for the sake of efficient routing, but you get to choose how much.
As things stand now, you're going to allow those guys with the /19s and /22s to do traffic engineering all the way down to /48. You don't have a practical way to say "no."
Point is: there are a number of address management practices which significantly impact the routing table size. The ones that jump to mind are:

1. Receiving discontiguous blocks from the registry on subsequent requests. If the blocks can't aggregate then they consume two routes even if they don't need to. Registry-level mitigations: allocate in excess of immediate need. Reserve additional space to allow subsequent allocations by changing the netmask on the same contiguous block.

2. Registry assignments to single-homed users. Registry assignments can't aggregate, even if their use shares fate with another AS's routes. Registry-level mitigations: minimize allocations to organizations which are not multihomed.

3. Traffic engineering. Fine tuning how data flows by cutting up an address block into smaller announced routes. Registry-level mitigations: standardize allocation sizes and allocate from blocks reserved for that particular allocation size. Do not change a netmask in order to reduce or enlarge an allocation. This allows the recipients of TE advertisements to identify them and, if desired, filter them.

4. ISP assignments to multihomed users. In other networks, assignments to end users from your space are likely to be indistinguishable from traffic engineering routes. TE filtering is impossible if some of the announcements are multihomed customers whose fate is not shared with the ISP to whom the space was allocated. Registry-level mitigations: direct assignment to all multihomed networks. Discourage ISPs from assigning subnets to multihomed customers.

Note the contradictory mitigations. Standardized block sizes increase the number of discontiguous blocks. Netmask changes defeat standardized block sizes. So, it's a balancing act. Does more route bloat come from filterable TE? Or from discontiguous allocations? The customer lock-in from being the organization who assigns the customer's IP addresses turns around and bites you in the form of unfilterable traffic engineering routes.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
On Fri, Sep 27, 2013 at 02:10:47AM -0400, Ryan McIntosh wrote:
I don't respond to many of these threads, but I have to say I've contested this one too, only to have it beaten into my head that a /64 is "appropriate". It still hasn't stuck, but unfortunately RFCs for other protocols now depend on the blocks being a /64.
I became a "convert" to the school of thought that hands out a /48 to every end user when I realised that the current, *most* profligate addressing scheme anyone's recommending involves essentially giving out an IPv6 /48 to anyone who's currently getting an IPv4 /32 (eyeball SP end-users, and dedicated server / VPS customers). Even with this scheme, we have an address space over eight *thousand* times greater than what we have now[1]. Treating the current IPv4 Internet as one unit, we can have more IPv6 Internets than there are people in the town I live in.

Once that sunk in, I realised that, practically speaking, we're solid. Yes, there have been a few "big" blocks like /20s handed out, but they're few and far between. I work for a comparatively *tiny* hosting company, and we've got 3 IPv4 /20s, and yet the single IPv6 /32 we've got should more than do us for a *very* long time to come[2].

I'm now firmly in the camp that the resource to be worrying about is routing table slots, not address space exhaustion.
It's a waste; even if we're "planning for the future", no single house needs a /64 sitting on its LAN, or at least none I can sensibly think of o_O.
I prefer to think of it as simply "enough address space I don't have to worry about manual assignment", rather than "I'm 'wasting' 18446744073709551612 addresses". Thinking of IPv6 as being a 48-bit or 64-bit address space that also has the added bonus of never having to worry about host addressing makes things a lot more palatable.

- Matt

[1] And that's assuming that we only use 2000::/3 for this go around, which is one of six /3 blocks that we have to play with. If we completely fuck this up, we've effectively got IPvs 7 through 11 to try different ideas without having to change addressing formats.

[2] To be fair, we're using an IPv6 addressing scheme that involves a lot more compaction than "/48s for everyone!", but even if we were handing out /48s for every machine in our facilities (which we wouldn't need to do, because plenty of customers have multiple machines, and thus would get a single /48 for all their machines), we'd still not be running out any time soon -- we've got ~65k IPv6 /48s, compared to ~12k IPv4 /32s, so yeah...

--
Generally the folk who love the environment in vague, frilly ways are at
odds with folk who love the environment next to the mashed potatoes.
  -- Anthony de Boer, in a place that does not exist
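The two headline figures in this message are easy to check (a short Python sketch; the only assumptions are the ones already stated above: one /48 per current IPv4 /32, drawn solely from 2000::/3, and a single /32 allocation for the hosting company):

    # "over eight *thousand* times greater": /48s in 2000::/3 per IPv4 address.
    print((2 ** (48 - 3)) // (2 ** 32))   # 8192

    # Footnote [2]: /48s available inside a single /32 allocation.
    print(2 ** (48 - 32))                 # 65536 -- the "~65k IPv6 /48s"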
participants (7)
- Brandon Ross
- Joe Abley
- joel jaeggli
- Matt Palmer
- Randy Carpenter
- Ryan McIntosh
- William Herrin