On Dec 4, 2013, at 10:21, Brian Dickson <brian.peter.dickson@gmail.com> wrote:
> Rob Seastrom wrote:
"Ricky Beam" <jfbeam at gmail.com<http://mailman.nanog.org/mailman/listinfo/nanog>> writes:
>>> On Fri, 29 Nov 2013 08:39:59 -0500, Rob Seastrom <rs at seastrom.com> wrote:
>>>> So there really is no excuse on AT&T's part for the /60s on uverse 6rd...
>>> ...
>>> Handing out /56's like Pez is just wasting address space -- someone *is* paying for that space. Yes, it's waste; giving everyone 256 networks when they're only ever likely to use one or two (or maybe four), is intentionally wasting space you could've assigned to someone else. (or *sold* to someone else :-)) IPv6 may be huge to the power of huge, but it's still finite. People like you are repeating the same mistakes from the early days of IPv4...
>>
>> There's finite, and then there's finite. Please complete the following math assignment so as to calibrate your perceptions before leveling further allegations of profligate waste.
>>
>> Suppose that every mobile phone on the face of the planet was an "end site" in the classic sense and got a /48 (because miraculously, the mobile providers aren't being stingy). Now give such a phone to every human on the face of the earth. Unfortunately for our conservation efforts, every person with a cell phone is actually the cousin of either Avi Freedman or Vijay Gill, and consequently actually has FIVE cell phones on active plans at any given time. Assume 2:1 overprovisioning of address space because, per Cameron Byrne's comments on ARIN 2013-2, the cellular equipment providers can't seem to figure out how to have N+1 or N+2 redundancy rather than 2N redundancy on Home Agent hardware.
>>
>> What percentage of the total available IPv6 space have we burned through in this scenario? Show your work.
>>
>> -r
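Since the assignment says "show your work", here is one way to check it in Python. The ~7 billion world-population figure is an assumption (roughly right for 2013); everything else comes from the scenario above.

```python
# Back-of-the-envelope for the assignment above.
import math

population = 7_000_000_000     # assumed 2013 world population
phones_per_person = 5          # everyone is Avi's or Vijay's cousin
overprovision = 2              # 2N Home Agent redundancy

sites = population * phones_per_person * overprovision  # /48s consumed
total_48s = 2 ** 48            # /48s in all of IPv6 (2^128 / 2^80)

pct = 100 * sites / total_48s
bits = math.ceil(math.log2(sites))  # prefix length needed to hold them all
print(f"{sites:.3e} /48s = {pct:.4f}% of IPv6; fits in a /{48 - bits}")
# -> 7.000e+10 /48s = 0.0249% of IPv6; fits in a /11
```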
> Here's the problem with the math, presuming everyone gets roughly the same answer: the efficiency (number of prefixes vs. total space) is only achieved if there is a "flat" network which carries every IPv6 prefix, i.e., if no aggregation is being done.
Yes, but since our most exaggerated estimates only got to a /11, you can do up to 256x in waste in order to support aggregation and still fit within 2000::/3 (1/8th of the total address space).
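To make the 256x explicit (just the bit arithmetic restated):

```python
# A /11 of demand inside 2000::/3 leaves 11 - 3 = 8 bits of headroom.
print(2 ** (11 - 3))  # 256 /11-sized blocks fit in the /3
```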
> This means 1:1 router slots (for routes) vs. prefixes, globally, or even internally on ISP networks.
> If any ISP has > 1M customers, oops. So, we need to aggregate.
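To put rough numbers on that; the FIB size here is an assumed, illustrative figure, not a quote from any vendor:

```python
# Why flat routing breaks: every customer prefix takes a FIB slot on
# every router that carries the full internal table.
customers = 3_000_000        # the example ISP later in the thread
fib_slots = 1_000_000        # assumed, illustrative carrier-router FIB

print(customers > fib_slots)  # True -> a flat table won't fit
print(customers // 5000)      # 600 routes if aggregated per access
                              # device (5000 customers each)
```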
> Basically, the problem space (waste) boils down to the question: "How many levels of aggregation are needed?"
I argue that much of the waste needed for aggregation is already accounted for by the extent to which the model exaggerates the number of required addresses in the first place. However, there's still a 256x factor within 2000::/3 that can also absorb aggregation costs.
> If you have variable POP sizes, region sizes, and assign/aggregate towards customers topologically, the result is:
> - the need to maintain power-of-2 address block sizes (for aggregation), plus
> - the need to aggregate at each level (to keep #prefixes sane), plus
> - asymmetric sizes, which rarely land just short of the next power of 2,
> - which necessarily equals low utilization rates, i.e., much larger prefixes than would be suggested by "flat" allocation from a single pool.
> Here's a worked example, for a hypothetical big consumer ISP:
> - 22 POPs with "core" devices
> - each POP has anywhere from 2 to 20 "border" devices (feeding access devices)
> - each "border" has 5 to 100 "access" devices
> - each access device has up to 5000 customers
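For a sense of how power-of-2 rounding eats utilization at each of those levels (using the maximum count per level from the list above):

```python
# Utilization lost to rounding each level up to a power of two.
import math

def next_pow2(n: int) -> int:
    return 1 << math.ceil(math.log2(n))

for actual in (5000, 100, 20, 22):       # customers, access, border, POPs
    alloc = next_pow2(actual)
    print(f"{actual:>5} -> {alloc:>5} ({100 * actual / alloc:.0f}% used)")
# 5000 -> 8192 (61% used); 100 -> 128 (78%); 20 -> 32 (62%); 22 -> 32 (69%)
```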
But you don't have to (or even usually want to) aggregate at all of those levels.
> Rounding up each, using max(count-per-level) as the basis, we get:
> - 5000 -> 8192 (2^13)
> - 100 -> 128 (2^7)
> - 20 -> 32 (2^5)
> - 22 -> 32 (2^5)
> 5 + 5 + 7 + 13 = 30 bits of aggregation; 2^30 of /48 = /18.
>
> This leaves room for 2^10 such ISPs (a mere 1024) from the current /8. A thousand ISPs seems like a lot, but consider this: the ISP we did this for might only have 3M customers. Scale this up (horizontally or vertically or both), and it is dangerously close to capacity already.
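The arithmetic itself checks out; a quick verification of the numbers as stated:

```python
# Verify: per-level bits, total aggregation bits, ISP prefix, ISP count.
import math

levels = (5000, 100, 20, 22)             # customers, access, border, POPs
bits = sum(math.ceil(math.log2(n)) for n in levels)
print(bits)                              # 13 + 7 + 5 + 5 = 30

isp_prefix = 48 - bits
print(isp_prefix)                        # 18 -> a /18 per ISP
print(2 ** (isp_prefix - 8))             # 1024 such ISPs in a /8
```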
First of all, you are mistaken: there is no current /8, it's a /3, so there's room for 32 times that (32,768 such ISPs).

Second of all, what would make much more sense in your scenario is to aggregate at one or two of those levels. I'd expect probably the POP and border-device levels, so what you're really looking at is 5000 * 100 = 500,000 /48s per border. To make this even, we'll round that up to 524,288 (2^19), and to make life easy, let's take that to a nibble boundary (2^20 = 1,048,576), which gives us a /28 per border device. Now, aggregating POPs, we have 22 * 20 = 440 border devices, which fits in 9 bits, so we can build this ISP in a /19 quite easily.

That leaves us 16 bits in the current /3 for assigning ISPs of this size (note: this is quite a sizeable ISP, since your counts support up to 220,000,000 customers). Even so, since they only need a /19, I think that's OK. We've got room for enough of those ISPs that even if each has only 3 million customers, we can fit 65,536 such ISPs in the current /3, which gives us the ability to address a total of 196,608,000,000 customers. There are approximately 6,800,000,000 people on the planet, so that's nearly 30 times as many customers as there are people.
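The same style of check for the two-level aggregation above:

```python
# Verify the two-level (border + POP) aggregation.
import math

per_border = 5000 * 100                         # 500,000 /48s per border
border_bits = math.ceil(math.log2(per_border))  # 19
border_bits = 4 * math.ceil(border_bits / 4)    # nibble-align -> 20
print(48 - border_bits)                         # 28 -> a /28 per border

borders = 22 * 20                               # 440 border devices
isp_prefix = (48 - border_bits) - math.ceil(math.log2(borders))
print(isp_prefix)                               # 19 -> a /19 per ISP

isps = 2 ** (isp_prefix - 3)                    # how many fit in 2000::/3
print(isps)                                     # 65,536
print(isps * 3_000_000 / 6_800_000_000)         # ~28.9x world population
```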
> The answer above (worked math) will be unique per ISP. It will also drive consumption at the apex, i.e., the size of allocations to ISPs.
Sure, but even doing aggregation in the least efficient possible way against inflated numbers, you were only able to make it look like a problem by dividing the currently available address space by 32 (working from a /8 instead of the /3) and ignoring the fact that the currently available address space is only 1/8th of the total address space.
> And the root of the problem was brought into existence by the insistence that every network (LAN) must be a /64.
Not really. The original plan was for everything to be 64 bits, so by adding another 64 bits and making every network a /64, we're actually better off than we would have been if we'd just gone to 64-bit addresses in toto.

Thanks for playing.

Owen