None of these are big enough to justify their own backbone operations or to buy a backbone from someone else, or there wouldn't be a problem. Paying scads of extortion money is also problematic (cheaper to simply burn the IP addresses).
I am NOT advocating tossing all of that out. I am simply bringing up a problem condition. Please, don't shoot the messenger, or otherwise get defensive (return fire is a bitch).
Nope, all of these are reasonable; the ones that aren't are, for example, where folks have a single connection, or multi-home only to a single provider.
What I am bringing up here is that new, information-age companies, as predicted in Megatrends over 10 years ago, are now starting to appear. They are very diffused (sparse population, over very large areas of the globe) and have connectivity needs which are both critical, yet very different from the click-n-mortar customers that the Big8 was built up to handle (either classful or classless). The current architecture is not handling them very well.
The problem is currently in its infancy; it will get much worse.
I'm not disagreeing with any of this. Actually, I see reliability and availability feeding into all these other issues as well. It's just that some of the folks advocating portability and deaggregation are using "route table size doesn't matter anymore" as an argument, when it absolutely does matter, especially if we plan to make the Internet more reliable and less vulnerable. -danny
Danny McPherson: Saturday, May 13, 2000 1:47 PM
None of these are big enough to justify their own backbone operations or to buy a backbone from someone else, or there wouldn't be a problem. Paying scads of extortion money is also problematic (cheaper to simply burn the IP addresses).
I am NOT advocating tossing all of that out. I am simply bringing up a problem condition. Please, don't shoot the messenger, or otherwise get defensive (return fire is a bitch).
Nope, all of these are reasonable; the ones that aren't are, for example, where folks have a single connection, or multi-home only to a single provider.
Agreed, peering on a single connection is a canard. However, there is a cause/effect relationship with the latter: they can't multi-home to multiple providers because they aren't big enough (can't justify the cost), which is precisely part of the problem that I am presenting here.
What I am bringing up here is that new, information-age companies, as predicted in Megatrends over 10 years ago, are now starting to appear. They are very diffused (sparse population, over very large areas of the globe) and have connectivity needs which are both critical, yet very different from the click-n-mortar customers that the Big8 was built up to handle (either classful or classless). The current architecture is not handling them very well.
The problem is currently in its infancy; it will get much worse.
I'm not disagreeing with any of this. Actually, I see reliability and availability feeding into all these other issues as well.
The reason this is an issue is exactly because they want reliability and availability -- HA requirements.
It's just that some of the folks advocating portability and deaggregation are using "route table size doesn't matter anymore" as an argument, when it absolutely does matter, especially if we plan to make the Internet more reliable and less vulnerable.
I actually agree with you here as well. Relying on infinite route table growth is not a scalable strategy. We need something else.
There is a pretty simple solution to this problem. It should be implemented at the MAN level, as it seems difficult to get large providers to cooperate well enough for this to work, but it is probably feasible for smaller organisations.

A small group of service providers in the same geographical area jointly apply for a good-sized block of address space, say a /18. They also jointly apply for an ASN from which this address space will be advertised (they could just as well each advertise it from within their own AS, but some people don't like seeing inconsistent ASs in the global BGP tables). When they get a customer that may only need a /24 but wants to be multi-homed, they suggest that they get a feed from one of the other providers in the group.

The group of providers can transfer routing information between themselves using the routing protocol of their choice. This would mean a small increase in the size of local (i.e. within the ASs of the group) routing tables, but a negligible increase in the size of the global BGP tables.

-w
--
Will Waites
ww@shadowfax.styx.org
Idiosyntactix Ministry of Research and Development
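To make the aggregation arithmetic in this proposal concrete, here is a minimal Python sketch; the 10.10.0.0/18 block and the dozen customer /24s are made-up illustration values, not any real allocation:

    import ipaddress

    # Hypothetical shared /18 held jointly by the provider group
    # (10.10.0.0/18 is made-up illustration space, not a real allocation).
    shared_block = ipaddress.ip_network("10.10.0.0/18")

    # Hypothetical multi-homed customers, each assigned a /24 out of the block.
    customer_routes = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(12)]

    # Inside the group, every /24 is carried so traffic can enter via any member.
    print(f"routes carried inside the group: {len(customer_routes)}")

    # Outside the group, only the covering aggregate needs to be announced.
    globally_visible = list(ipaddress.collapse_addresses([shared_block] + customer_routes))
    print(f"routes visible globally: {len(globally_visible)} ({globally_visible[0]})")

    # Sanity check: every customer prefix is covered by the shared aggregate.
    assert all(net.subnet_of(shared_block) for net in customer_routes)

The property the scheme relies on shows up in the output: the group's internal tables grow by one route per customer, while the global table only ever sees the single covering /18.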
On Sat, May 13, 2000 at 11:28:42PM -0400, ww@shadowfax.styx.org wrote: [snip]
A small group of service providers in the same geographical area jointly apply for a good-sized block of address space, say a /18. They also jointly apply for an ASN from which this address space will be advertised (they could just as well each advertise it from within their own AS, but some people don't like seeing inconsistent ASs in the global BGP tables). When they get a customer that may only need a /24 but wants to be multi-homed, they suggest that they get a feed from one of the other providers in the group. [snip]
Something about those who do not study history applies here... Tony Li had a draft in 1996 for an ISPAC (ISP Address Coalition). Seek ye the CIDRd archives; I'm sure there's still a copy of draft-li-ispac-??.txt out there somewhere. My memory's faulty, but I don't recall it going past 00 into the light of day.

Cheers,

Joe
--
Joe Provo                                Voice 508.486.7471
Director, Internet Planning & Design     Fax   508.229.2375
Network Deployment & Management, RCN     <joe.provo@rcn.com>
The group of providers can transfer routing information between themselves using the routing protocol of their choice. This would mean a small increase in the size of local (i.e. within the ASs of the group) routing tables, but a negligible increase in the size of the global BGP tables.
The problem with this strategy is that it does not eliminate the single point of failure of an incompetent routing engineer at one of the providers in the group -- the chance of a single mistake by one provider in the group bringing down the whole group would be MUCH higher than the chance of a single human error bringing down two unrelated providers.

Speaking as a former employee of a small, multi-homed company with several diverse /24s, I can say that there are definitely valid reasons for this configuration. We were not large enough to request our own address space, and our upstream would not give us all the address space we eventually needed at the outset, so we ended up with a bunch of random class Cs. In order to achieve the level of reliability we needed, we had to be multihomed to separate providers; during the time I worked there, more than 70% of outages were due to one of our upstreams goofing, not due to a single circuit being down, so any solution which results in multiple paths to non-independent networks would not have met our requirements.

It seems to me it is kind of approaching the problem backward to say "Well, these are the limits of our routers, so this is what services we can offer moving forward." Wouldn't it make more sense to identify what the actual needs of end users are (and I think portable /24s is a legitimate need!), and then plan the future of the backbone based upon those needs? If we need to go to Cisco and say, "Hey, make a GSR that does BGP updates faster", then that's what we need to do! Imposing limitations on end users which make the internet less useful is not a solution to the problem; at best it's a kludge which reduces the headache for backbone providers but doesn't actually solve any long-term problems.

Also, I don't really buy the "how do we manage 250K routes?" argument. Any well-designed system which can effectively manage 10K of something, in general, ought to be able to handle 250K of it; it's just a question of scaling it up, and there's no question that processors are getting faster and memory cheaper every day. If there's some magic number of routes that suddenly becomes unmanageable, I'd love to hear why.
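As a rough sketch of the memory half of that scaling argument, here is a back-of-the-envelope estimate; the 250 bytes per path and 3 paths per prefix below are assumed round numbers for illustration, not measurements of any particular router:

    # Back-of-the-envelope RIB memory estimate. Both per-entry figures are
    # assumed round numbers for illustration only, not measurements of any
    # particular router platform.
    BYTES_PER_PATH_ENTRY = 250   # assumed: prefix, next hop, attributes, bookkeeping
    AVG_PATHS_PER_PREFIX = 3     # assumed: average BGP paths learned per prefix

    def rib_memory_mb(prefixes: int) -> float:
        """Estimate RIB memory in megabytes for a given number of prefixes."""
        return prefixes * AVG_PATHS_PER_PREFIX * BYTES_PER_PATH_ENTRY / 1_000_000

    for table_size in (10_000, 75_000, 250_000):
        print(f"{table_size:>7} prefixes -> ~{rib_memory_mb(table_size):5.1f} MB of RIB state")

Raw memory does grow roughly linearly with the number of routes, which is the "just scale it up" half of the argument; the convergence behaviour discussed in the following messages is where the linear picture stops holding.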
On Mon, May 15, 2000, Chris Williams wrote: [snip]
Also, I don't really buy the "how do we manage 250K routes?" argument. Any well-designed system which can effectively manage 10K of something, in general, ought to be able to handle 250K of it; it's just a question of scaling it up, and there's no question that processors are getting faster and memory cheaper every day. If there's some magic number of routes that suddenly becomes unmanageable, I'd love to hear why.
I agree with everything you said about /24 multihoming except this. If it were just a question of scaling it up to be faster, then it would have been solved by now and we wouldn't be discussing it. "Making BGP go faster" isn't a "throw more RAM and CPU at it", it's an "actually research the problem with the data we have today and develop new solutions which solve these problems."

People in general have this strange concept of well designed; there is no absolute concept of well designed, only "well designed with a given set of data and a given level of knowledge". BGPv4 was designed with different goals, different data and different ways of thinking, so how do you expect it to scale with _today's_ demands?

"Faster BGP" and "handling 250k routes" are not just a function of CPU speed and memory capacity. You have to consider network topology, latency/packet loss, router software (as well as hardware, so you can throw in your CPU and hardware here), peering patterns, route filtering, IGP/iBGP behaviour and some liberal application of fairy dust.

Read Craig and Abha's presentations on BGP convergence at the last two NANOG meetings. They might shed some light on the issues and behaviour involved.

Adrian
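As a deliberately simplified sketch of why convergence is not just a hardware problem: after a withdrawal, a path-vector protocol can end up exploring alternate AS paths one at a time, and the number of candidate paths grows combinatorially with how densely the ASes are meshed. The model and mesh sizes below are made up for illustration and are not a reproduction of the NANOG convergence results mentioned above:

    from itertools import permutations

    def simple_paths_in_full_mesh(n_ases: int) -> int:
        """Count simple AS paths between two ASes in a full mesh of n_ases.

        In the worst case a path-vector protocol can announce and withdraw
        alternate paths one at a time while converging after a failure, so
        this count is a rough bound on the churn a single withdrawal can
        cause. Deliberately simplified, for illustration only.
        """
        intermediates = list(range(n_ases - 2))  # ASes other than source and destination
        total = 0
        for k in range(len(intermediates) + 1):
            # Each ordered choice of k intermediate ASes is a distinct simple path.
            total += sum(1 for _ in permutations(intermediates, k))
        return total

    for n in (4, 6, 8, 10):
        print(f"{n:2d} ASes, full mesh -> {simple_paths_in_full_mesh(n):7d} candidate paths")

Going from 4 to 10 ASes in the mesh takes the candidate-path count from 5 to over 100,000, which is one reason topology and peering patterns matter at least as much as raw CPU.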
"Faster BGP" and "Handling 250k routes" is not just a function of CPU speed and memory capacity. You have to consider network topology, latency/packetloss, router software (as well as hardware, so you can throw in your CPU and hardware here), peering patterns, route filtering, IGP/iBGP behaviour and some liberal application of fairy dust.
Just to clarify, I wasn't speaking so much in terms of actually making BGP faster as in terms of route policy management, which was an issue brought up earlier in this thread. So I wasn't suggesting that "doubling the number of routes in a network might not bring up new issues"; my argument was more along the lines of "if you can keep track of policies for 10K routes, you can probably keep track of policies for 20K or 200K routes". Which of course is not to say that your router can necessarily handle 200K pieces of policy data, although I would think that _is_ more of a hardware/software issue than a protocol/network-behavior issue.

If the emerging network topologies and route-information volumes are bringing out non-performance limitations in our routing protocols, then obviously we need new protocols, and certainly there have been a number of links to papers on different routing schemes posted in the last week.

Really, the core of what I was saying was, "instead of complaining about the headaches rapid growth causes us and trying to change user behavior, let's really focus our efforts on solving the underlying problems". I am not trying to start a fight here -- I believe the majority of posts so far on this topic have been in exactly the spirit of working together for a better solution. I just hope that the final solution takes into account the actual needs of different types of companies and individuals on the internet, rather than being purely based upon what is easiest for backbone providers.
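As a minimal sketch of the "keeping track of policies" half of that claim, assuming a longest-prefix-match lookup: the prefixes, policy strings, and dict-based table below are made-up illustration choices (a real router would use a radix trie), but the scaling property is the same -- lookup cost is bounded by the address length, not by the number of entries:

    import ipaddress

    def longest_prefix_match(table, address):
        """Return the most specific (prefix, policy) entry covering `address`.

        The loop probes at most 33 candidate prefixes (/32 down to /0), so
        lookup cost does not depend on how many entries the table holds.
        """
        addr = ipaddress.ip_address(address)
        for prefix_len in range(32, -1, -1):
            candidate = ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
            if candidate in table:
                return candidate, table[candidate]
        return None

    # Made-up prefixes and policy strings, purely for illustration.
    policy_table = {
        ipaddress.ip_network("10.10.0.0/18"): "accept, prepend once",
        ipaddress.ip_network("10.10.3.0/24"): "accept, local-pref 120",
        ipaddress.ip_network("0.0.0.0/0"): "default: reject",
    }

    print(longest_prefix_match(policy_table, "10.10.3.99"))  # matches the /24
    print(longest_prefix_match(policy_table, "10.10.40.1"))  # falls back to the /18
    print(longest_prefix_match(policy_table, "192.0.2.1"))   # only the default matches

Whether a router's control plane can hold and churn through 200K such entries quickly enough is the separate hardware/software question raised above.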
I just hope that the final solution takes into account the actual needs of different types of companies and individuals on the internet, rather than being purely based upon what is easiest for backbone providers.
I agree with the thought, but I think the reality will be that the solution will favor those that have a substantial financial interest.

-brad (Rural CNE)
bwalters@inet-direct.com
I agree with the thought, but I think the reality will be that the solution will favor those that have a substantial financial interest.
Well, as another poster pointed out, ignoring the needs of a large enough group of users is likely to result in that group pressuring legislators to intervene, a situation which I have trouble imagining as beneficial for *anyone*.
participants (7)
- Adrian Chadd
- Bradly Walters
- Chris Williams
- Danny McPherson
- Joe Provo - Network Architect
- Roeland M.J. Meyer
- ww@shadowfax.styx.org