Re: number of unaggregated class C's in swamp?
From: Michael Dillon <michael@junction.net>

On Fri, 29 Sep 1995, Mark Kent wrote:
Announce now that by Oct 1, 1996 no individual /24 will be routed.
This is a reasonable timetable.
Filter 204/24 and 205/24 on Oct 31, 1995
Filter 202/24 and 203/24 on Nov 30, 1995
This is not!
The great bulk of the world, especially management types, gets their technical news with a 4 to 6 month time delay from glossy technical magazines. This delay is due to the time required for writers to
I want to thank Mark Kent, and disagree with Michael Dillon.

Last April, folks at IETF stood in the front of the room and said they were going to start reducing the prefixes. I heard within a _month_!

Now, it has been 6 months. We've already had plenty of time for the news magazines to print and management types to absorb the news.

Some competent providers have already informed their customers that they need to renumber, have already started planning, and are actually renumbering! Others are not so competent. Time to announce to them that we're sorry, but unless they pay out lots of money to upgrade their competitors and fund the research, their competitors aren't going to accept their /24 routes anymore. We are all in this together.

Now, the pier wg is trying to help put together a list of OS's and pin-pointing "how to" work through renumbering. That should help. But there's no time like the present to get on with doing it!

The only difference I'd make is to _START_ with 192, not end there. We get a lot more bang for the buck there, as renumbering will help aggregate both continentally and regionally, and also free up badly managed space for the future.

That nice table from Dennis was an eye-opener (just what I was looking for, thanks)!

Bill.Simpson@um.cc.umich.edu
Key fingerprint = 2E 07 23 03 C5 62 70 D3 59 B1 4F 5E 1D C2 C1 A2
The only difference I'd make is to _START_ with 192, not end there. We get a lot more bang for the buck there, as renumbering will help aggregate both continentally and regionally, and also free up badly managed space for the future.
I think this is actually a pretty good idea. I think not dealing with pre-existing allocations is going to mean putting an ever-tighter squeeze on future allocations in a way that is counter-productive, since if you squeeze the number of IPv4 routes used to route a chunk of address space too hard you'll end up applying pressure which encourages people to use the address space less efficiently.

I'm not particularly fond of prefix length filtering, since I think there are times when it is appropriate and useful to route longer-prefix routes (though I guess I recognize the necessity at this point); a better aim would be to get the average number of routes it takes to route a chunk of address space down without resorting to implementing prefix-length filters.

The one thing that hasn't been done, however, is to set an actual goal for routing efficiency. I don't think it is rational to expect we'll be able to maintain a 30,000 route forwarding table until the end of time, since I think doing so while simultaneously allocating addresses in a way which ensures both good continued growth and efficient use of the address space is not possible. I also wouldn't bet the farm on some amazing new routing architecture saving us, so what I think we should be doing is trying to pick a routing efficiency which gives us a number of routes at the IPv4 end state (i.e. the number of routes needed to route the full address space) which seems tractable given the current routing architecture and plausible not-too-distant future hardware.

I'd like to state my guess that a good number to aim at might be about 1200 routes per fully-utilized class-A-sized chunk of address space.
This would put the IPv4 end state at a maximum of about 250,000 routes, a number which I think is not an unreasonable target for new high-end router designs, since I think it represents both a tractable routing problem with the current architecture given modern enough hardware, and it is not too out-of-line with what I am guessing might be accommodated in a fast forwarding path.

I think if both router vendors and users aimed at this (or some other mutually satisfactory) target, without exceptions, the outcome would be happier than trying to muddle along designing both hardware and address allocation plans without either a quantitative measure of what's good and what's not, or a well-defined goal of where we'd like to end up. And I'd like to aim for a place we can get to without any magic bullets; I think it would be better to save those for IPv6, where we have a cleaner slate to deploy onto.

I like the idea of measuring each and every class-A-sized block against some standard separately, since a lot of the class-C space has been allocated to regional registries this way and it inconveniences those places which have done the best the least. I'm less attached to the number 1200 in particular, but I do think an explicit target should be chosen which represents both a tractable limit to design big routers for and which allows the implementation of efficient address allocation strategies which won't have to be tightened over time.

I do note that 1200 is close to the threatened /18 address filter, but this is mostly accidental. I'd much rather see each space filled with /14's and /20's, and even an occasional /23 or /25, as appropriate and as long as the filled block was only 1200 (or N, for some well-defined N) routes, rather than picking an arbitrary, one-size-fits-all filter limit. The latter is a sign of failure.
I think cidrd, which after all is a deployment group, would be much more productive if it concentrated on deciding on an appropriate N, and then figuring out how to fix each class-A-sized block where the number of routes exceeds N, without exception. It's fine to work on magic bullets too, but I really think it would save a lot of noise if we could also get back to doing some basic engineering of what we have and know already.

I've included a modified table of the current situation, this one done using data from a more neutrally-located router (one belonging to Haavard Eidnes, the same one that the top 20 list comes from) to lose the bias that fetching this from one of MCI's routers causes.

Dennis Ferguson

        A     B   192   193   194   195   196   197   198   199   200   201   202   203   204   205   206
     ---- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
 8     23     1     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
 9      0     2     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
10      0     3     0     0     0     0     0     0     0     0     0     0     0     1     0     0     0
11      0     3     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
12      0     8     0     0     0     0     0     0     0     0     1     0     0     0     0     0     0
13      0     13    0     2     1     0     0     0     0     2     0     0     0     0     0     1     4
14      0     40    0     4     1     0     0     0     6     2     0     0     0     0     7     3     6
15      0     71    0     21    5     0     0     0     6     8     0     0     1     0     14    12    3
16      1     4675  28    43    45    0     6     0     53    47    6     0     13    5     71    35    21
17      0     0     5     4     8     0     0     0     4     14    2     0     5     9     17    2     11
18      1     1     10    9     9     0     1     0     17    29    2     0     27    15    38    17    24
19      0     3     29    36    31    0     0     0     29    22    6     0     43    15    51    61    145
20      0     1     55    45    32    0     6     0     33    35    9     0     49    16    95    32    5
21      1     1     74    61    40    0     9     0     61    51    5     0     92    12    95    44    6
22      1     0     162   89    53    0     17    0     79    59    7     0     125   32    74    43    5
23      3     2     337   98    97    0     21    0     116   91    22    0     136   20    115   61    2
24      21    18    6272  1462  1230  1     225   1     2970  2431  278   1     892   357   2539  1119  103
26      0     5     1     0     0     0     0     0     0     0     0     0     0     0     0     0     0
27      0     0     0     0     0     0     0     0     0     0     1     0     0     0     0     0     0
28      0     0     0     0     0     0     0     0     0     0     1     0     0     0     0     0     0
29      0     0     0     0     1     0     0     0     0     0     0     0     0     0     0     0     0
30      0     0     0     0     2     0     0     0     0     0     0     0     0     0     0     0     0
31      0     2     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
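Dennis's per-/8 tally above can be approximated from any routing-table dump. Here is a minimal sketch (not his actual tool; the sample table and the helper name `routes_per_slash8` are hypothetical) of counting routes per class-A-sized block and flagging blocks that exceed a target N:

```python
# Sketch: count routes per class-A-sized (/8) chunk of address space
# and flag blocks exceeding a target N. Sample prefixes are made up.
import ipaddress
from collections import Counter

N = 1200  # example target: routes per fully-utilized /8 block

def routes_per_slash8(prefixes):
    """Map the first octet of each route's network to a count of
    routes falling in that /8 block."""
    counts = Counter()
    for p in prefixes:
        net = ipaddress.ip_network(p)
        counts[int(net.network_address) >> 24] += 1
    return counts

# Hypothetical routing-table excerpt:
table = ["192.9.200.0/24", "192.9.201.0/24", "198.32.0.0/16", "204.70.0.0/15"]
counts = routes_per_slash8(table)
over_target = [block for block, c in counts.items() if c > N]
```

Running something like this over a full table gives the per-column totals in Dennis's table directly, with `over_target` naming the blocks that would need fixing.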
On Fri, 29 Sep 1995, Dennis Ferguson wrote:
I like the idea of measuring each and every class-A-sized block against some standard separately, since a lot of the class-C space has been allocated to regional registries this way and it inconveniences those places which have done the best the least. I'm less attached to the number 1200 in particular, but I do think an explicit target should be chosen which represents both a tractable limit to design big routers for and which allows the implementation of efficient address allocation strategies which won't have to be tightened over time. I do note that 1200 is close to the threatened /18 address filter, but this is mostly accidental. I'd much rather see each space filled with /14's and /20's, and even an occasional /23 or /25, as appropriate and as long as the filled block was only 1200 (or N, for some well-defined N) routes, rather than picking an arbitrary, one-size-fits-all filter limit. The latter is a sign of failure.
Your N-routes-per-/8 block is less of a big-stick approach than the /18 filtering and meshes well with what seems to be a movement to force people to renumber, in that it provides an out for people who feel they must have an independently routed /24. If their provider can fit them in and still meet the goal of N in their block, then it's OK. If they can't be fitted in, then they are not S.O.L.; by renumbering to a different /8 block they could still be allowed to keep their independently routed /24. Given that some people might feel threatened by the big-stick approach, it would be good to have an out for them so they don't feel cornered and run to their lawyers.

If there is a forced renumbering, then would there also be some reallocation of these /8 blocks based on topology? Would a proportional fraction of N also be allocated to the providers who do the topological aggregation in each /8 block? So if N = 1200 and 4 providers are given one quarter of a /8 like 192, would they also be given one quarter of the routes, i.e. 300, and then negotiate from there? Or is this whole idea of reassigning a block like 192/8 to NSP's based on topology completely unthinkable?

Michael Dillon                    Voice: +1-604-546-8022
Memra Software Inc.               Fax: +1-604-542-4130
http://www.memra.com              E-mail: michael@memra.com
Dennis,
I like the idea of measuring each and every class-A-sized block against some standard separately, since a lot of the class-C space has been allocated to regional registries this way and it inconveniences those places which have done the best the least. I'm less attached to the number 1200 in particular, but I do think an explicit target should be chosen which represents both a tractable limit to design big routers for and which allows the implementation of efficient address allocation strategies which won't have to be tightened over time. I do note that 1200 is close to the threatened /18 address filter, but this is mostly accidental. I'd much rather see each space filled with /14's and /20's, and even an occasional /23 or /25, as appropriate and as long as the filled block was only 1200 (or N, for some well-defined N) routes, rather than picking an arbitrary, one-size-fits-all filter limit. The latter is a sign of failure.
So, perhaps we should just look at the total amount of IP address space advertised by a provider in its routing advertisements, then divide this amount by the number of routes the provider advertises, and see whether the resulting number meets the goal. Yakov.
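Yakov's space-per-route measure is straightforward to compute from a provider's announcements. A minimal sketch, using Python's `ipaddress` module and hypothetical prefixes (this is an illustration, not a tool anyone in this thread actually wrote):

```python
# Yakov's proposed measure: total advertised address space divided by
# the number of routes used to advertise it. All prefixes are made up.
import ipaddress

def space_per_route(prefixes):
    """Average amount of address space carried per advertised route."""
    total = sum(ipaddress.ip_network(p).num_addresses for p in prefixes)
    return total / len(prefixes)

# One aggregate covering a /22 scores four times better than the same
# space announced as four separate /24s:
aggregated = space_per_route(["10.0.0.0/22"])
deaggregated = space_per_route(["10.0.0.0/24", "10.0.1.0/24",
                                "10.0.2.0/24", "10.0.3.0/24"])
```

The metric rewards aggregation directly: the same address space announced with fewer routes yields a higher number.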
......... Yakov Rekhter is rumored to have said:
]
] So, perhaps we should just look at the total amount of IP address space
] advertised by a provider in its routing advertisements, then divide
] this amount by the number of routes the provider advertises, and
] see whether the resulting number meets the goal.
]

But what is the goal?

I thought about your jottings all weekend, Yakov. I'm sure it will comfort you to know I thought of you over the weekend :)

My opinion is that the Internet became a classic chaos model directly after the transition from a single NSFNET backbone to the NAP model we now live with. At this point in time we are debating and proposing the methods by which we will impose laws on this system. As one studies chaotic models, one often finds that natural laws tend to impose possible goals called 'attractor points'.

I believe our goal for 'attractor points' would be that people have a respectable ratio of networks announced to host space claimed. However, I haven't the foggiest how to define this goal. Do we factor in the amount of address space a person has? Should this be a factor? But how do we factor in percentage usage, and future growth? These are the questions that have been kicked around always.

My capitalist nature says that the amount of address space one has should not be an issue. I'm not terribly sure on how that enters into the metric. I'd be in favor of something that directly associated 'goodness' or 'cost' with the amount of IP nodes one could route, or the ratio of routes to nodes.

I attempted to do some work on this, and basically came to no conclusion. Perhaps my futility will spawn some other, more intelligent thought.

First, I took the radb from ftp.ra.net:/routing.arbiter/radb/dbase/radb.db.gz and parsed it into a database that looks like this:

AS Origin : Number of Routes : Address Space (i.e. # of IP nodes) : nodes/routes

I then started using 'gnuplot' to compare fields to fields, looking for some order in the chaos.
Surprisingly, I found none. If you're interested in the database, it's available from ftp://ftp.mid.net/pub/noc/wacky.routing.database. It may be that I hosed my math up in some of the places, because some of my numbers look kind of wacky. So, here are a few of the graphs, in hope that someone might comment on them, or gain some sort of insight.....

Here I plotted the number of routes vs the metric Yakov mentioned, hoping to see some sort of a 'sampling' of where we are. However, we don't really... Or do we?

[ASCII plot: routes vs nodes/route; x axis = routes (0 to 2500), y axis = nodes/route (0 to 100000)]

Well, we seem to see some sort of correlation where the more routes a person has, the lower their metric becomes, where metric = hostspace/routes.

Interestingly, I found that the older an AS, the more routes they had. Big surprise there.... :-)

[ASCII plot: AS# vs # of Routes; x axis = AS # (0 to 6000), y axis = Number of Routes (0 to 1000)]

And finally, I found that the older an AS number, in general, the more IP space they had, at least to some degree:

[ASCII plot: AS# vs Address Space; x axis = AS # (0 to 7000), y axis = Address Space (0 to 1e+07)]

Well, perhaps these were interesting, perhaps not.
Perhaps they'll help people think of a brilliant whiz-bang solution to this whole huge routing table mess. Most likely not. Oh well.

-- Alan Hannan                        Email: alan@mid.net
Network Systems/Security              Voice: (402) 472-0239
MIDnet, Lincoln NOC                   Office Fax: (402) 472-0240

" The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. " - George Bernard Shaw
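Alan's AS : routes : address-space : nodes/routes database can be recomputed from any listing of (origin AS, prefix) pairs. A minimal sketch under that assumption (the helper `per_as_stats` and the sample routes are hypothetical; this is not his actual radb parser):

```python
# Sketch: per-AS route count, total address space, and nodes/routes
# ratio, from (origin_as, prefix) pairs. Sample data is made up,
# using private-use AS numbers.
import ipaddress
from collections import defaultdict

def per_as_stats(routes):
    """routes: iterable of (origin_as, prefix) pairs.
    Returns {asn: (num_routes, address_space, nodes_per_route)}."""
    nroutes = defaultdict(int)
    space = defaultdict(int)
    for asn, prefix in routes:
        nroutes[asn] += 1
        space[asn] += ipaddress.ip_network(prefix).num_addresses
    return {a: (nroutes[a], space[a], space[a] / nroutes[a])
            for a in nroutes}

sample = [(64512, "10.1.0.0/16"),
          (64512, "10.2.0.0/16"),
          (64513, "172.16.0.0/12")]
stats = per_as_stats(sample)
```

Feeding the resulting triples into gnuplot (routes vs nodes/route, AS# vs routes, and so on) reproduces the kind of scatter plots Alan describes.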
Alan
......... Yakov Rekhter is rumored to have said:
]
] So, perhaps we should just look at the total amount of IP address space
] advertised by a provider in its routing advertisements, then divide
] this amount by the number of routes the provider advertises, and
] see whether the resulting number meets the goal.
]
But what is the goal?
One goal is to come up with a metric to measure how efficiently the routing system works (especially wrt aggregation), and how well individual providers manage to aggregate. This would allow us to look at parts of the system (e.g. individual providers), and see which of these parts would need improvements. Also, if we have such a metric, we could look at how various mechanisms/incentives could influence the metric.
My capitalist nature says that the amount of address space one has should not be an issue. I'm not terribly sure on how that enters into the metric. I'd be in favor of something that directly associated 'goodness' or 'cost' with the amount of ip nodes one could route, or the ratio of routes to nodes.
Ideally, as you suggested, it would be really nice to have a metric that would tell us how efficiently the routing *and* addressing system works wrt providing routes to actual hosts, rather than to blocks of addresses (after all, the purpose of the routing system is to provide connectivity to hosts, not just to host-less addresses). Moreover, if we had such a metric, we might be able to come up with some mechanisms/incentives that would truly promote a scalable Internet, so that such mechanisms/incentives would both (a) drive towards more efficient address space utilization (thus imposing back pressure on consumption of one finite resource - IP address space), and (b) drive towards more routing aggregation (thus imposing back pressure on consumption of another finite resource - forwarding tables).

In practice, getting this metric requires some way of knowing the number of hosts per prefix. We don't have any technology to do this, so we rely on a "simplifying assumption" - we assume that the amount of address space that an Internet Registry allocates to a site on average reflects the number of hosts within the site. I am well aware of all the traps associated with this "simplifying assumption", but at the moment that is all we have.

So, to sum things up, if we measure how efficiently we route wrt address space (which we can do today), and if we augment this data with the "simplifying assumption", we can at least get some *rough* approximation of the "ideal" metric.

Yakov.
On Fri, 29 Sep 1995, William Allen Simpson wrote:
The great bulk of the world, especially management types, gets their technical news with a 4 to 6 month time delay from glossy technical magazines. This delay is due to the time required for writers to
I want to thank Mark Kent, and disagree with Michael Dillon.
I notice you cut out my comments about press releases.
Last April, folks at IETF stood in the front of the room and said they were going to start reducing the prefixes. I heard within a _month_!
I heard about it too but not until the past few weeks has it dawned upon me that there is the intention to FORCE people to renumber networks even if they don't change providers.
Now, it has been 6 months. We've already had plenty of time for the news magazines to print and management types to absorb the news.
But have the magazines been writing about it? Have there been PRESS RELEASES?
Others are not so competent. Time to announce to them that we're sorry, but unless they pay out lots of money to upgrade their competitors and fund the research, their competitors aren't going to accept their /24 routes anymore. We are all in this together.
If this is indeed the type of forceful action being contemplated, then this certainly deserves some PR chest-thumping, i.e. press releases to the mainstream press and maybe a joint press conference with somebody who has introduced renumbering tools, like maybe ftp.com.
Now, the pier wg is trying to help put together a list of OS's and pin-pointing "how to" work through renumbering. That should help.
But there's no time like the present to get on with doing it!
It wasn't so many months ago that magazines were advising corporations to apply for their own "portable" IP addresses so as to avoid renumbering. If the tables have so completely turned, then it must be a crisis situation, and that means somebody has to stand up in public and say "We're sorry, but the unexpected growth in the Internet has FORCED us to take this action, and even people who don't switch providers will have to renumber in order to maintain access to the full global Internet".

Obviously no operational people are panicking, because this seems like a less serious crisis than routes flapping or a hung router, but if there is an urgency for the public to take action then that urgency has to be communicated to them.

Michael Dillon                    Voice: +1-604-546-8022
Memra Software Inc.               Fax: +1-604-542-4130
http://www.memra.com              E-mail: michael@memra.com
participants (5)
- Alan Hannan
- Dennis Ferguson
- Michael Dillon
- William Allen Simpson
- Yakov Rekhter