Re: Faster 'Net growth rate raises fears about routers
On Tue, 3 Apr 2001, RJ Atkinson wrote:
At 08:37 03/04/01, Greg Maxwell wrote:
Replace the Internet with a highly aggregated IPv6 network which uses transport-level multihoming and you gain a factor of 1000 improvement at core routers (and 100,000x further from the core, where you no longer need to be default-free), and still have the opportunity for a further 5x by going to a state-of-the-art CPU (provided that your CPU speed reasoning is valid).
Precisely which "highly aggregated IPv6 network which uses transport-level multihoming" is one talking about? What's the RFC on this?
It's possible to 'solve' these problems in the future: forbid IP-level multihoming for IPv6 which crosses aggregation boundaries. I.e. absolutely no multihoming that inflates more than your provider's routing table; connect to whoever you want, but no AS should emit a route for any other AS without aggregating it into its own space, except under a special agreement of limited scope (i.e. not globally!)
AFAIK, IPv6 multihoming is identical to IPv4 multihoming, with all the same adverse implications on the default-free routing table -- hence the creation of an IETF MULTI6 WG to try to change this. If I've missed some recent advance in the IETF specifications, please share the details (preferably citing RFC and page number :-) with the rest of us.
IPv6 multihoming *is* the same. What I argue is this: routers are the wrong place to do multihoming for anything but Tier-1 connectivity. Multihoming belongs in the end node. SCTP (RFC 2960) is a transport-level protocol which offers what amounts to a superset of TCP. One of its features is multihoming (RFC 2960, section 6.4). With such a protocol it is possible to accomplish all of the reliability benefits of IP multihoming while achieving much greater scalability, flexibility, and performance. We need to stop looking at IP addresses as host identifiers (that's what DNS is for) and look at them as path identifiers. I'm not sure of an RFC detailing the specifics of an Internet architecture based on these concepts; that's one of the reasons I asked here. I'll troll around more and see if I can find anyone working on such a document, and if no one is, I'll create one.
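For concreteness, here is a minimal sketch of what SCTP multihoming (RFC 2960, section 6.4) looks like from the application side, assuming the sctp_bindx()/sctp_connectx() socket calls provided by lksctp (the exact connectx signature has varied across versions); the addresses, port, and provider names are invented for illustration and error handling is reduced to perror():

/*
 * Minimal sketch of transport-level multihoming with SCTP.
 * Illustrative only: addresses/ports are made up.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

int main(void)
{
    int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);  /* one-to-one style socket */
    if (sd < 0) { perror("socket"); exit(1); }

    /* Bind the association to two local addresses, one per upstream provider. */
    struct sockaddr_in local[2];
    memset(local, 0, sizeof(local));
    local[0].sin_family = AF_INET;
    inet_pton(AF_INET, "192.0.2.10", &local[0].sin_addr);    /* path via provider A */
    local[1].sin_family = AF_INET;
    inet_pton(AF_INET, "198.51.100.10", &local[1].sin_addr); /* path via provider B */
    if (sctp_bindx(sd, (struct sockaddr *)local, 2, SCTP_BINDX_ADD_ADDR) < 0) {
        perror("sctp_bindx"); exit(1);
    }

    /* Connect to a peer that is itself reachable at two addresses; if the
     * primary path fails, the association fails over to the other address
     * without the application seeing a new "connection". */
    struct sockaddr_in peer[2];
    memset(peer, 0, sizeof(peer));
    peer[0].sin_family = AF_INET;
    peer[0].sin_port   = htons(5001);
    inet_pton(AF_INET, "203.0.113.1", &peer[0].sin_addr);
    peer[1].sin_family = AF_INET;
    peer[1].sin_port   = htons(5001);
    inet_pton(AF_INET, "203.0.113.129", &peer[1].sin_addr);
    if (sctp_connectx(sd, (struct sockaddr *)peer, 2, NULL) < 0) {
        perror("sctp_connectx"); exit(1);
    }

    /* From here on, reads and writes look like TCP; path selection and
     * failover are handled by the transport, not by injecting more-specific
     * routes into the global table. */
    const char msg[] = "hello over whichever path is up\n";
    send(sd, msg, sizeof(msg) - 1, 0);
    close(sd);
    return 0;
}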
Greg,
On Tue, 3 Apr 2001, RJ Atkinson wrote:
At 08:37 03/04/01, Greg Maxwell wrote:
Replace the Internet with a highly aggregated IPv6 network which uses transport-level multihoming and you gain a factor of 1000 improvement at core routers (and 100,000x further from the core, where you no longer need to be default-free), and still have the opportunity for a further 5x by going to a state-of-the-art CPU (provided that your CPU speed reasoning is valid).
Precisely which "highly aggregated IPv6 network which uses transport-level multihoming" is one talking about? What's the RFC on this?
It's possible to 'solve' these problems in the future: forbid IP-level multihoming for IPv6 which crosses aggregation boundaries. I.e. absolutely no multihoming that inflates more than your provider's routing table; connect to whoever you want, but no AS should emit a route for any other AS without aggregating it into its own space, except under a special agreement of limited scope (i.e. not globally!)
Who is going to "forbid" this ? And who is going to enforce this ?
AFAIK, IPv6 multihoming is identical to IPv4 multihoming, with all the same adverse implications on the default-free routing table -- hence the creation of an IETF MULTI6 WG to try to change this. If I've missed some recent advance in the IETF specifications, please share the details (preferably citing RFC and page number :-) with the rest of us.
IPv6 multihoming *is* the same.
What I argue is this: routers are the wrong place to do multihoming for anything but Tier-1 connectivity. Multihoming belongs in the end node.
SCTP (RFC 2960) is a transport-level protocol which offers what amounts to a superset of TCP. One of its features is multihoming (RFC 2960, section 6.4).
With such a protocol it is possible to accomplish all of the reliability benefits of IP multihoming while achieving much greater scalability, flexibility, and performance.
We need to stop looking at IP addresses as host identifiers (that's what DNS is for) and look at them as path identifiers.
Perhaps. But (stating the facts) for now, both in IPv4 *and* in IPv6, IP addresses carry dual semantics - host identifiers (aka end-point identifiers) *and* path identifiers (aka locators). Yakov.
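To make the distinction being argued here concrete, a purely illustrative sketch (not any standardized API or protocol) of what separating the two semantics might look like on a host: a stable end-point identifier plus a list of locators, with path selection walking the locator list. All names and addresses are invented:

/*
 * Conceptual sketch only: a stable end-point identifier (what DNS would hand
 * back) kept separate from a list of locators (addresses that name paths).
 */
#include <stdio.h>
#include <string.h>

#define MAX_LOCATORS 4

struct endpoint {
    char name[64];                    /* stable identifier, e.g. "www.example.net" */
    char locators[MAX_LOCATORS][16];  /* path identifiers, one per provider path   */
    int  nlocators;
};

/* Pretend path probe: a real stack would use a transport-level reachability
 * check (e.g. an SCTP heartbeat), not a string compare. */
static int path_is_up(const char *locator)
{
    return strcmp(locator, "192.0.2.1") != 0;   /* pretend provider A is down */
}

/* Pick the first working path; the caller never needs to know which locator
 * was used, only that the named end-point is reachable. */
static const char *select_path(const struct endpoint *ep)
{
    for (int i = 0; i < ep->nlocators; i++)
        if (path_is_up(ep->locators[i]))
            return ep->locators[i];
    return NULL;
}

int main(void)
{
    struct endpoint ep = {
        .name = "www.example.net",
        .locators = { "192.0.2.1", "198.51.100.1" },  /* one per upstream */
        .nlocators = 2,
    };
    const char *path = select_path(&ep);
    printf("%s reachable via locator %s\n", ep.name, path ? path : "(none)");
    return 0;
}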
On Tue, 3 Apr 2001, Yakov Rekhter wrote:
It's possible to 'solve' these problems in the future: forbid IP-level multihoming for IPv6 which crosses aggregation boundaries. I.e. absolutely no multihoming that inflates more than your provider's routing table; connect to whoever you want, but no AS should emit a route for any other AS without aggregating it into its own space, except under a special agreement of limited scope (i.e. not globally!)
Who is going to "forbid" this ? And who is going to enforce this ?
Ahem. The same people who prevent the current global routing table from being flooded by /25 - /30s.
We need to stop looking at IP addresses as host identifiers (that's what DNS is for) and look at them as path identifiers.
Perhaps. But (stating the facts) for now, both in IPv4 *and* in IPv6, IP addresses carry dual semantics - host identifiers (aka end-point identifiers) *and* path identifiers (aka locators).
I thought it was explicit with IPv6 that end-node addresses are not host identifiers. In the real world today, IPv6 addresses are certainly not host identifiers: many hosts (including the one I'm typing on) have multiple IP addresses, and sites have a farm of web servers behind a single IP address. We may pretend that an IP address means a host, but it doesn't.
First off, let me just say that I'm not speaking for my employer on this, okay? Thanks.

The people who prevent the current global routing table from being flooded by /25-/30 announcements are also the people who punch holes in their address space for /24s. Abha's numbers at the ptomaine BOF clearly show the effect of RIR policies (spikes around /20 and /19), but the bigger effect from my perspective was the spike around /24, created (I presume) by the punches in CIDR blocks that providers make to allow multi-homing. I haven't seen good numbers for the distribution of punches in a long time, but my limited experience indicates that those punches are being made fairly randomly within the provider's allocated address space. This means that the bit boundaries don't align and you increasingly have mini-swamps inside providers' /19s and /20s.

Why are providers doing this? Someone is paying them to do it. Why are customers spending money on this? My belief is that they want more say in their own fate. That may express itself as a desire for redundancy in the case of catastrophic business failures, better ability to express their own routing policies, or a simple worry that they won't get the best price if they have only one supplier. At the core of this, though, is a desire for more control over something that they see as increasingly important to their own fate.

I think there are various short term work-arounds to the current explosion of paths in the routing tables, and I encourage folks to join the ptomaine mailing list (ptomaine-request@shrubbery.net) if they want to contribute to the solution. But don't try to accomplish it by reducing the ability of the customer to control their own fate. There are real economic pressures out there which will prevent that class of solution from success.

regards, Ted Hardie
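A rough sketch of how one might count those punched holes from a routing-table listing: flag announced prefixes that are more-specifics of a provider aggregate and tally announcements by prefix length (the /19, /20 and /24 spikes mentioned above). The aggregate and prefixes below are invented; a real survey would read an actual BGP table dump:

/*
 * Back-of-the-envelope hole-punching count; all prefixes are illustrative.
 */
#include <stdio.h>
#include <arpa/inet.h>

struct route { const char *prefix; int len; };

/* Does pfx/len fall inside agg/agglen? (IPv4, host-order math) */
static int inside(const char *pfx, int len, const char *agg, int agglen)
{
    struct in_addr a, b;
    if (len < agglen) return 0;
    inet_pton(AF_INET, pfx, &a);
    inet_pton(AF_INET, agg, &b);
    unsigned int mask = agglen ? 0xffffffffu << (32 - agglen) : 0;
    return (ntohl(a.s_addr) & mask) == (ntohl(b.s_addr) & mask);
}

int main(void)
{
    /* Pretend table excerpt: the provider holds 198.51.100.0/22 and customers
     * have punched /24 holes in it for multihoming. */
    const char *agg = "198.51.100.0"; int agglen = 22;
    struct route table[] = {
        { "198.51.100.0", 22 },   /* the aggregate itself   */
        { "198.51.101.0", 24 },   /* punched hole           */
        { "198.51.103.0", 24 },   /* punched hole           */
        { "192.0.2.0",    24 },   /* unrelated announcement */
    };
    int histogram[33] = { 0 }, holes = 0;

    for (unsigned i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
        histogram[table[i].len]++;
        if (table[i].len > agglen && inside(table[i].prefix, table[i].len, agg, agglen))
            holes++;
    }
    printf("more-specifics inside %s/%d: %d\n", agg, agglen, holes);
    for (int l = 0; l <= 32; l++)
        if (histogram[l])
            printf("/%d announcements: %d\n", l, histogram[l]);
    return 0;
}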
On Tue, 3 Apr 2001 hardie@equinix.com wrote:
Why are customers spending money on this? My belief is that they want more say in their own fate. That may express itself as a desire for redundancy in the case of catastrophic business failures, better ability to express their own routing policies, or a simple worry that they won't get the best price if they have only one supplier. At the core of this, though, is a desire for more control over something that they see as increasingly important to their own fate.
The why is pretty simple, in my (admittedly limited) experience with customers. Count the single points of failure on the way from a customer T1 to the ISP POP. Customer premises equipment, especially if the customer doesn't buy something w/dual power supplies, redundant control processors, etc. Copper haul within the building. T1 local loop. Telco network. T1 or hubbed-DS3 card in the provider's customer edge router. Provider's customer edge router (assuming it doesn't have fully redundant components). Add to that telco techs stealing pairs, and all the other fun events I'm sure we've all seen, and life at the end of a single circuit can get pretty sketchy. Top it off with an MTTR well over 4 hours, especially when the blame game starts, and it gets nasty. Note that many of these problems aren't fixed until you have APS SONET, assuming someone engineered the protect path diversely.

Now add a business that can't afford downtime, and multihoming becomes simple. How many SPs out there are offering customer circuits into multiple edge boxes for fault tolerance? Is this adequate, or does the availability requirement call for multiple POPs? Is this adequate, or is it necessary to go for multiple service providers?

I think the first problem is that conventional wisdom tells the customer that they have to buy a circuit to two different SPs in order to get real fault tolerance. I haven't seen a whole lot of aggressive marketing about pulling two circuits into two edge boxes, using two different pieces of CPE or one fault-tolerant one. The industry isn't pushing the idea that you can have redundant service from a single provider. (grain of salt: one of our providers sold us a backup transit DS3 for the cost of the local loop)

I'm at a multi-POP network in Boston. We've had great luck selling customers a Verizon circuit into one of our POPs and a Worldcom circuit into a different one. It costs more, but they don't have nearly the exposure of a single circuit customer. However, if you're not set up to do this, the appropriate level of paranoia calls for circuits to two different providers. Maybe if SPs really addressed availability requirements of their customers, it wouldn't be such an issue.

-travis
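Travis's list of single points of failure can be put into rough numbers: with components in series, end-to-end availability is the product of the per-component availabilities, and the downtime adds up quickly. The component list and figures below are assumptions for illustration only, not measurements:

/*
 * Purely illustrative series-availability arithmetic for a single circuit.
 */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double availability; } path[] = {
        { "CPE router (single PSU)",    0.999  },
        { "building copper",            0.9995 },
        { "T1 local loop",              0.999  },
        { "telco transport network",    0.9995 },
        { "provider edge T1/DS3 card",  0.9995 },
        { "provider edge router",       0.999  },
    };
    double a = 1.0;
    for (unsigned i = 0; i < sizeof(path) / sizeof(path[0]); i++)
        a *= path[i].availability;   /* series components multiply */

    printf("end-to-end availability: %.4f%%\n", a * 100.0);
    printf("expected downtime: ~%.0f hours/year\n", (1.0 - a) * 365.0 * 24.0);
    /* With two fully independent circuits the unavailability would be roughly
     * (1 - a)^2 -- minutes per year instead of tens of hours -- which is the
     * whole argument for multihoming in some form. */
    return 0;
}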
Travis writes: <Snipped description of problems related to single-circuit customer prems>
I think the first problem is that conventional wisdom tells the customer that they have to buy a circuit to two different SPs in order to get real fault tolerance. I haven't seen a whole lot of aggressive marketing about pulling two circuits into two edge boxes, using two different pieces of CPE or one fault-tolerant one. The industry isn't pushing the idea that you can have redundant service from a single provider. (grain of salt: one of our providers sold us a backup transit DS3 for the cost of the local loop)
I'm at a multi-POP network in Boston. We've had great luck selling customers a Verizon circuit into one of our POPs and a Worldcom circuit into a different one. It costs more, but they don't have nearly the exposure of a single circuit customer. However, if you're not set up to do this, the appropriate level of paranoia calls for circuits to two different providers. Maybe if SPs really addressed availability requirements of their customers, it wouldn't be such an issue.
-travis
Thanks for your message. I agree that many redundancy issues can be solved by taking circuits from different L2 providers into two different POPs of the same IP provider. Some problems (like catastrophic business failure of the IP provider) are not solved by this approach, though, and customers see those points of failure as well or better than the technical failures that you describe.

Part of the reason for the customer's perception relates to your point about the industry not pushing the idea that you can get redundant service from a single provider. In fact, there are large sales forces out there pushing the opposite idea: that you must get redundancy and you must do it by having a second provider. My experience is that once a customer has anything like reasonable connectivity, it is difficult to replace the existing provider because the hassle factor and downtime are just too great a cost for an incremental improvement or savings. For the salesperson, the right way to make the sale then is not to try to sell someone a replacement; instead, she or he sells them the service as an enhancement. For many companies out there, in other words, offering to enable a customer to multi-home may be their only shot at getting any business from that customer.

We all remember I'm not speaking for my employer, right? thanks, Ted Hardie
On Tue, Apr 03, 2001 at 11:25:15AM -0700, hardie@equinix.com wrote:
Part of the reason for the customer's perception relates to your point about the industry not pushing the idea that you can get redundant service from a single provider. In fact, there are large sales forces
Again, let's not forget the real-world example that started the thread: The single provider went out of business.
On Tue, Apr 03, 2001 at 01:33:36PM -0400, Travis Pugh wrote:
I'm at a multi-POP network in Boston. We've had great luck selling customers a Verizon circuit into one of our POPs and a Worldcom circuit into a different one.
Don't know what the world looks like in the US, but here an SDH/SONET provider will never guarantee diversity of his/her circuit to that of a different provider; often the end user can be almost sure that at least the last few km will be in the same duct, as the local communities demand that the providers cooperate when digging fiber into the ground...
It costs more, but they don't have nearly the exposure of a single circuit customer. However, if you're not set up to do this, the appropriate level of paranoia calls for circuits to two different providers. Maybe if SPs really addressed availability requirements of their customers, it wouldn't be such an issue.
/Jesper -- Jesper Skriver, jesper(at)skriver(dot)dk - CCIE #5456 Work: Network manager @ AS3292 (Tele Danmark DataNetworks) Private: FreeBSD committer @ AS2109 (A much smaller network ;-) One Unix to rule them all, One Resolver to find them, One IP to bring them all and in the zone to bind them.
At 18:35 03/04/01, Jesper Skriver wrote:
Don't know what the world looks like in the US, but here an SDH/SONET provider will never guarantee diversity of his/her circuit to that of a different provider,
Interesting, though they do guarantee that to NATO circuits. smd, are you able to get diverse local paths over there?
often the end user can be almost sure that at least the last few km will be in the same duct, as the local communities demand that the providers cooperate when digging fiber into the ground...
Obviously there is a concern if everyone is in the same duct, but if one builds with rings like sensible engineers, then a single cut just means traffic goes the long way round. Mind, if a backhoe disconnects one's building entirely from the ring or there are byzantine failures, no form of multi-homing will really save one. At a previous job, we ensured that local transport came down one road into the front side of the building and a different local transport came down the back road into the back side of the building next door, then connected the buildings via fibre of our own. Made for quite a nice setup actually. Ran
Anyone care to share their list of contacts at the large ISPs (Earthlink, AOL, etc.) to whom I can make a request to get on their SMTP whitelist? TIA. --- Quantum Mechanics: the dreams stuff is made of
On Tue, Apr 03, 2001 at 07:51:41PM -0400, RJ Atkinson wrote:
At 18:35 03/04/01, Jesper Skriver wrote:
Don't know what the world looks like in the US, but here an SDH/SONET provider will never guarantee diversity of his/her circuit to that of a different provider,
Interesting, though they do guarantee that to NATO circuits.
The company I work for can and does provide diverse circuits, but it won't guarantee diversity between those and a circuit from a different provider. The reasoning behind this is that one cannot know if/when the other provider will reroute their circuit, so that there is no diversity any more.
smd, are you able to get diverse local paths over there?
often the end user can be almost sure that at least the last few km will be in the same duct, as the local communities demand that the providers cooperate when digging fiber into the ground...
Obviously there is a concern if everyone is in the same duct, but if one builds with rings like sensible engineers,
SDH/SONET protection removes quite a bit of the problem, yes, but often it's useful to get two circuits with diverse routing (and without protection) instead of a single one with protection, and the price is usually of the same order for both. We always get multiple circuits with diverse routing instead of a single circuit with protection if we can. /Jesper -- Jesper Skriver, jesper(at)skriver(dot)dk - CCIE #5456 Work: Network manager @ AS3292 (Tele Danmark DataNetworks) Private: FreeBSD committer @ AS2109 (A much smaller network ;-) One Unix to rule them all, One Resolver to find them, One IP to bring them all and in the zone to bind them.
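Jesper's point about the shared last few km can also be put in rough numbers: two circuits in parallel only deliver their full benefit if they share no failure domain, and a common duct caps what diversity can buy. The availability figures below are assumptions for illustration only:

/*
 * Rough numbers for the shared-duct case; all figures are assumed.
 */
#include <stdio.h>

int main(void)
{
    double circuit = 0.998;   /* availability of one unprotected circuit   */
    double duct    = 0.9995;  /* availability of the shared last-mile duct */

    /* Two truly diverse circuits: both must fail at once. */
    double diverse = 1.0 - (1.0 - circuit) * (1.0 - circuit);

    /* Two circuits sharing the last few km of duct: the parallel part still
     * helps, but the common duct remains a single point of failure. */
    double shared = duct * (1.0 - (1.0 - circuit) * (1.0 - circuit));

    printf("truly diverse pair:    %.5f%% (~%.1f h/yr down)\n",
           diverse * 100.0, (1.0 - diverse) * 8760.0);
    printf("shared-duct 'diverse': %.5f%% (~%.1f h/yr down)\n",
           shared * 100.0, (1.0 - shared) * 8760.0);
    return 0;
}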
On Wed, 4 Apr 2001, Jesper Skriver wrote:
Don't know what the world looks like in the US, but here an SDH/SONET provider will never guarantee diversity of his/her circuit to that of a different provider; often the end user can be almost sure that at least the last few km will be in the same duct, as the local communities demand that the providers cooperate when digging fiber into the ground...
Hi Jesper. Delivery to a different provider is a little less clear, at least in Boston, but if you buy a "type 1" circuit directly from the provider, and your building is set up with two fiber entrances, you can actually get a real diverse circuit. Verizon even has a diversity option on their tariff, although there's some doubt as to whether or not they bother to make sure it is really diverse. I'd have to go ask the provisioning department about inter-provider diversity, but I would imagine tortured screams would be the standard reply. Of course, the caveat is that you never believe a provider when they tell you that you have working / protect on a diverse path, since the first fiber cut invariably points out some problem with the circuit engineering. -travis
On Tue, 3 Apr 2001 hardie@equinix.com wrote:
The people who prevent the current global routing table from being flooded by /25-/30 announcements are also the people who punch holes in their address space for /24s. Abha's numbers at the ptomaine BOF clearly show the effect of RIR policies (spikes around /20 and /19), but the bigger effect from my perspective was the spike around /24, created (I presume) by the punches in CIDR blocks that providers make to allow multi-homing. I haven't seen good numbers for the distribution of punches in a long time, but my limited experience indicates that those punches are being made fairly randomly within the provider's allocated address space. This means that the bit boundaries don't align and you increasingly have mini-swamps inside providers' /19s and /20s.
Why are providers doing this? Someone is paying them to do it.
I don't argue that multihoming is bad. I argue that we're doing it in the wrong place, with negative consequences.
Why are customers spending money on this? My belief is that they want more say in their own fate. That may express itself as a desire for redundancy in the case of catastrophic business failures, better ability to express their own routing policies, or a simple worry that they won't get the best price if they have only one supplier. At the core of this, though, is a desire for more control over something that they see as increasingly important to their own fate.
I agree. However, you can offer even greater control and all the other benefits of multihoming without doing it at the IP layer. Multihoming at the IP layer, thus breaking aggregation, is like dumping toxic waste: its cost is largely borne by those not in receipt of its benefits or any form of payment. If we can avoid it while still providing the necessary level of service, then we should seriously investigate such opportunities.
I think there are various short term work-arounds to the current explosion of paths in the routing tables, and I encourage folks to join the ptomaine mailing list (ptomaine-request@shrubbery.net) if they want to contribute to the solution.
Short term is nice, but it doesn't matter in the long run. :)
But don't try to accomplish it by reducing the ability of the customer to control their own fate. There are real economic pressures out there which will prevent that class of solution from success.
I never suggested that. I suggested investigating alternatives which increase customer choice, performance, reliability, and Internet scalability, and potential measures to make the minor initial cost of implementation more acceptable. The obvious interest here is that most network operators would rather not have their customers multihoming at the IP level, thus preventing aggregation and polluting the global routing table, if there were another way to achieve the same benefits.
Greg, Sorry if it seemed that I was misrepresenting your position. I wanted to take up the specific point made in the previous post about how flooding small announcements is prevented now, in order to assert that there are customer desires fueling this and that those must be handled in proposed solutions. I did not mean to imply that this was the sole, or even main, point of your post.

I fully agree with what I do see as your main point: that if there were other ways to get the customers the benefits they desire, they would not insist on using IP multi-homing. In addition to transport layer multi-homing, Bob Moskowitz's HIP proposal (draft-moskowitz-hip-arch-02.txt) and other architectural proposals offer longer term potential solutions which deserve serious consideration.

On a second point, though, I have to disagree with you that short term solutions don't matter in the long term. There is a strong urge in the Internet standards community to maintain backwards compatibility, and that tends to mean that short term solutions circumscribe the potential long term solutions. The current work to get H.323 or SIP across addressing realms is a classic example: NATs provide a solution to one problem (address space exhaustion), but break signalling protocols which have to cross the address realm. The efforts to fix that problem are circumscribed by how NATs work, no matter how tortuous the efforts may seem.

best regards, Ted Hardie
participants (8)
- Greg Maxwell
- hardie@equinix.com
- Jesper Skriver
- Mike Batchelor
- RJ Atkinson
- Shawn McMahon
- Travis Pugh
- Yakov Rekhter