Re: design of a real routing v. endpoint id separation
(apologies to Owen for CC'ing the list, his points are valid concerns that I hadn't addressed or considered properly) Owen DeLong wrote:
c) Carry a much larger table on a vastly more expensive set of routers in order to play.
ISPs who don't wish to connect these customers should feel free not to, and that will have no bearing on the rest of those who do.
Somehow, given c) above, I am betting that most providers will be in this latter category.
Considering that most people who are in favor of multihoming for ipv6 believe that there is customer demand for it, the market forces would decide this one. Additionally, until there are a few hundred thousand routes in the multihoming table, I don't see any more expense than today, merely an extra box in the PoP. The doomsday table growth the anti-multihoming crowd predicts could be years away; only at that point would expensive separate routers be needed. In fact, with separate routers the multihoming table starts out very small. It would be an implementation detail: an ISP could easily start off by simply not announcing the more-specifics in the prefix space, without the new router systems.

The point is that the scaling problems multihoming brings would be limited to a) ISPs who want to offer service to customers who want to multihome, and b) the system that the ISP runs to provide this service. This is in contrast to today's mechanism, where customers who want to multihome affect everyone who accepts a full BGP feed. The point at which worldwide customer demand called for separate routing tables would be the point at which ISPs could decide whether the ROI was sufficient for them to keep their investment.

Such a scheme would be a "money where your mouth is" test. You say there is customer demand for multihoming? Well, here it is. Let's see which ISPs want to implement it and which customers want to pay extra (FSVO extra) for it. In fact, customers who multihome in this way need not use the same ASN space as the rest of the world, just numbers unique to the multihoming table (that might not work well if ISPs "faked" it by simply not advertising the more-specifics they carried internally). This concept brings true hierarchy, and thus scalability, to the routing table.
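To make the two-table idea concrete, here is a minimal Python sketch; the prefixes, ISP names, and lookup behaviour are illustrative assumptions, not details taken from the post above.

# Minimal sketch of the "separate multihoming table" idea: the DFZ carries
# only aggregates, while an optional table, carried only by participating
# ISPs, holds the customer more-specifics. Everything here is illustrative.
import ipaddress

# The DFZ table every transit router carries: aggregates only.
dfz_table = {
    ipaddress.ip_network("2001:db8::/32"): "ISP-A",
    ipaddress.ip_network("2001:db9::/32"): "ISP-B",
}

# The optional multihoming table: more-specifics punched out of the
# aggregates for multihomed customers.
multihoming_table = {
    ipaddress.ip_network("2001:db8:1234::/48"): ["ISP-A", "ISP-B"],
}

def lookup(dst, carries_mh_table):
    """Longest-match lookup; only participating ISPs consult the MH table."""
    addr = ipaddress.ip_address(dst)
    tables = [multihoming_table, dfz_table] if carries_mh_table else [dfz_table]
    for table in tables:
        best = max((p for p in table if addr in p),
                   key=lambda p: p.prefixlen, default=None)
        if best is not None:
            return table[best]
    return None

# A non-participating ISP sees only the aggregate; a participating one
# sees both exit paths for the multihomed customer.
print(lookup("2001:db8:1234::1", carries_mh_table=False))  # 'ISP-A'
print(lookup("2001:db8:1234::1", carries_mh_table=True))   # ['ISP-A', 'ISP-B']

The only point of the sketch is that the more-specifics live in a table that non-participating ISPs never have to carry.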
If you are referring to the effect that this will attract "unwanted" traffic, that would be considered a COB (cost of doing business).
That too, but, primarily, c).
There are simple ways to minimize this:

1) Standard BGP tricks (anti-social, to be sure), such as prepending, MEDs, etc.

2) "Transit" multihoming peering, where you depend on external parties who peer with you on the multihoming plane having a more "popular" advertisement, to bring you a higher ratio of traffic you are interested in.

A small multihoming-table-carrying ISP would want to arrange things so that he pays a bit more per (Mb|Gb) for traffic from his multihoming-table peer, but does not have to attract large quantities of unwanted traffic from his non-multihoming-table peer.
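As a rough illustration of why prepending (trick 1 above) shifts traffic, here is a toy Python comparison of two announcements for the same prefix; it ignores local-pref, MED and everything else in real best-path selection, and the ASNs are placeholders from the private/documentation ranges.

# With other attributes equal, BGP prefers the shortest AS_PATH, so
# prepending your own ASN on one announcement steers remote networks
# toward the other. Toy model only; ASNs are illustrative.

announcements = {
    # path received via the peer we *want* to carry the traffic
    "via-mh-peer":    {"as_path": [64500, 65001]},
    # same prefix via the other peer, with our ASN prepended twice more
    "via-other-peer": {"as_path": [64510, 65001, 65001, 65001]},
}

def best_path(candidates):
    """Toy best-path selection: shortest AS_PATH wins."""
    return min(candidates, key=lambda name: len(candidates[name]["as_path"]))

print(best_path(announcements))  # 'via-mh-peer'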
In essence, the previous discussion about LNP suggested that telcos must do the same thing: attract unwanted traffic, traffic they must switch right back out of their network.
Except they don't. My formerly-AT&T number does not go through AT&T's network to reach me just because it was ported. Read up on how SS7 actually works before you make statements like this that simply aren't true.
So I have been told... apparently I mistook the "conclusions" of the relevant threads. Apologies.
Owen
Considering that most people who are in favor of multihoming for ipv6 believe that there is customer demand for it, the market forces would decide this one.
We have nobody but ourselves to blame for this. If we all ran networks that worked as well as our customers demand and didn't have our petty peering squabbles every full moon, the market wouldn't feel the need to have to dual home.

We have nobody but ourselves to blame for this. If we all ran networks that worked as well as our customers demand and didn't have our petty peering squabbles every full moon, the market wouldn't feel the need to have to dual home.
that's the telco brittle network model, make it so it fails infrequently. this has met with varied success. the internet model is to expect and route around failure. randy
that's the telco brittle network model, make it so it fails infrequently. this has met with varied success.
One way to look at it:
the internet model is to expect and route around failure.
this has also met with varied success. :-)
On Fri, 21 Oct 2005, Randy Bush wrote:

the internet model is to expect and route around failure. randy

That presupposes agreement on a definition of "failure". In recent weeks we have once again learned that a large fuzzy fringe around any sort of 100% consensus makes life interesting. For instance: was the withdrawal of certain routes from your BGP sessions a "failure" for you? Was it for superwebhostingforfree.com, who relies on a single provider for transit?

matto

--matt@snark.net------------------------------------------<darwin><
The only thing necessary for the triumph of evil is for good men to do nothing. - Edmund Burke
the internet model is to expect and route around failure. You cannot stop the last mile backhoes.
no, but if your facility is critical, you have redundant physical and layer one exits from it. and you have parallel sites. randy
the market wouldn't feel the need to have to dual home.
the internet model is to expect and route around failure.
Seems to me that there is some confusion over the meaning of "multihoming". We seem to assume that it means BGP multihoming wherein a network is connected to multiple ASes and uses BGP to manage traffic flows.

Other people use this term in very different ways. To some people it means having multiple IP addresses bound to a single network interface. To others it means multiple websites on one server. And to many consumers of network access it is a synonym for redundancy or resiliency or something like that.

BGP multihoming is not the only way to satisfy the consumers of network access and design a solution in which failure is expected and it is possible for the customer to route around failure. A single tier-2 ISP who uses BGP multihoming with several tier 1 ISPs can provide "multihoming" to its customers without BGP. For instance, if this tier-2 has two PoPs in a city and peering links exist at both PoPs and they sell a resilient access service where the customer has two links, one to each PoP, then it is possible to route around many failures. This is probably sufficient for most people and if the tier-2 provider takes this service seriously they can engineer things to make total network collapse extremely unlikely.

Another way in which consumers could be "multihomed" would be to have their single access link going to an Internet exchange where there is a choice of providers. If one provider's network fails, they could phone up another provider at the exchange and have a cross-connect moved to restore connectivity in an hour or so. This will satisfy many people.

Of course there are many variations on the above theme. This is an issue with multiple solutions, some of which will be superior to BGP multihoming. It's not a simple black or white scenario. And being a tier-1 transit-free provider is not all good. It may give some people psychological comfort to think that they are in the number 1 tier, but customers have good reason to see tier-1 transit-free status as a negative.

--Michael Dillon
--On October 24, 2005 10:01:21 AM +0100 Michael.Dillon@btradianz.com wrote:
the market wouldn't feel the need to have to dual home.
the internet model is to expect and route around failure.
Seems to me that there is some confusion over the meaning of "multihoming". We seem to assume that it means BGP multihoming wherein a network is connected to multiple ASes and uses BGP to manage traffic flows.
As I understand it, the term multihoming in a network operations context is defined as:

(A multihomed network is) A network which is connected via multiple distinct paths so as to eliminate or reduce the likelihood that a single failure will significantly reduce reachability.

Note, this is independent of the protocols used, or, even of whether or not what is being connected to is the internet. So, it does not assume BGP. It does not assume an AS. Now, in the context of an ARIN or NANOG discussion, I would expect to be able to add the following assertions to the term:

1. The connections are to the internet. A connection which is not to the internet is of little operational significance to NANOG, and, ARIN has very little to do with multihoming in general, and, even less if it is not related to the internet.

2. The connections are likely to be to distinct ISPs, although, in some cases, not necessarily so. Certainly, if one is to say one is addressing the issues of multihoming, then, one must address both values for this variable.

3. Most multihoming today is done using BGP, but, many other solutions exist with various tradeoffs. In V6, there is currently only one known (BGP) and one proposed, but, unimplemented (Shim6) solution under active consideration by IETF. (this may be untrue, but, it seems to be the common perception even if not reality).
Other people use this term in very different ways. To some people it means having multiple IP addresses bound to a single network interface. To others it means multiple websites on one server.
That is not multihoming. That may be an implementation artifact of some forms of multihoming (using the addresses assigned by multiple providers, a la the Shim6 proposal), but, multiple addresses on an interface do not necessarily imply multihoming. In fact, more commonly, that is virtual hosting.
And to many consumers of network access it is a synonym for redundancy or resiliency or something like that. BGP multihoming is not the only way to satisfy the consumers of network access and design a solution in which failure is expected and it is possible for the customer to route around failure.
It certainly is one component of a redundancy/resiliency solution.
A single tier-2 ISP who uses BGP multihoming with several tier 1 ISPs can provide "multihoming" to its customers without BGP. For instance, if this tier-2 has two PoPs in a city and peering links exist at both PoPs and they sell a resilient access service where the customer has two links, one to each PoP, then it is possible to route around many failures. This is probably sufficient for most people and if the tier-2 provider takes this service seriously they can engineer things to make total network collapse extremely unlikely.
As long as you are willing to accept that a policy failure in said tier-2 ISP could impact both PoPs simultaneously, and, accept that single point of failure as a risk, then, yes, it might meet some customers' needs. It will not meet all customers' needs.
Another way in which consumers could be "multihomed" would be to have their single access link going to an Internet exchange where there is a choice of providers. If one provider's network fails, they could phone up another provider at the exchange and have a cross-connect moved to restore connectivity in an hour or so. This will satisfy many people.
Again, there are tradeoffs and risks to be balanced here, as there are multiple single points of failure inherent in such a scenario. However, at the IP level, such a network would, indeed, be multihomed, the layer 1 and 2 issues notwithstanding.
Of course there are many variations on the above theme. This is an issue with multiple solutions, some of which will be superior to BGP multihoming. It's not a simple black or white scenario. And being a tier-1 transit-free provider is not all good. It may give some people psychological comfort to think that they are in the number 1 tier, but customers have good reason to see tier-1 transit-free status as a negative.
I'm not sure why you say some are superior to BGP multihoming. I can see why some are more cost effective, easier, simpler in some cases, or, possibly more hassle-free, but, the term superior is simply impossible to define in this situation, so, I'm unsure how you can categorize something as superior when the term can't be defined sufficiently. Owen -- If this message was not signed with gpg key 0FE2AA3D, it's probably a forgery.
On Mon, 2005-10-24 at 02:24 -0700, Owen DeLong wrote: <SNIP>
3. Most multihoming today is done using BGP, but, many other solutions exist with various tradeoffs. In V6, there is currently only one known (BGP) and one proposed, but, unimplemented (Shim6) solution under active consideration by IETF. (this may be untrue, but, it seems to be the common perception even if not reality).
As for "multihoming" in the sense that one wants redundancy, getting two uplinks to the same ISP, or what I have done a couple of times already, multiple tunnels between 2 sites (eg 2 local + 2 remote) and running BGP/OSPF/RIP/VRRP/whatever using (private) ASN's and just providing a default to the upstream network and them announcing their /48 works perfectly fine. The multihoming that people here seem to want though is the Provider Independent one, and that sort of automatically implies some routing method: read BGP. Greets, Jeroen
On Mon, 2005-10-24 at 02:24 -0700, Owen DeLong wrote:
As I understand it, the term multihoming in a network operations context is defined as:
(A multihomed network is) A network which is connected via multiple distinct paths so as to eliminate or reduce the likelihood that a single failure will significantly reduce reachability.
Given that definition of multihoming.....
3. Most multihoming today is done using BGP, but, many other solutions exist with various tradeoffs. In V6, there is currently only one known (BGP) and one proposed, but, unimplemented (Shim6) solution under active consideration by IETF. (this may be untrue, but, it seems to be the common perception even if not reality).
... shim6 doesn't fit into the definition, does it? It seems to be a question of multihomed networks vs. multihomed hosts (although the effect may be the same at the end of the day).

... shim6 doesn't fit into the definition, does it? It seems to be a question of multihomed networks vs. multihomed hosts (although the effect may be the same at the end of the day).
Yes... The network is still multihomed, but, instead of using routing to handle the source/dest addr. selection, it is managed at each end host independent of the routers. The routers function sort of like the network is single homed. It's very convoluted. Owen -- If it wasn't crypto-signed, it probably didn't come from me.
On Mon, 24 Oct 2005, Owen DeLong wrote:
Yes... The network is still multihomed, but, instead of using routing to handle the source/dest addr. selection, it is managed at each end host independent of the routers. The routers function sort of like the network is single homed. It's very convoluted.
That is to say the least. Offices that want to be multihomed would want to do it once for all the computers there, using one device, like they can now do with a router. Web farms would similarly want to do it for all their servers, again as they now do with one router or load balancer, etc. Managing it if multihoming is entirely host-based would be hard. (I note that for office multihoming you could potentially create one router that would do shim6 on its outside interfaces and do NAT between that and its inside network - but we don't want NAT for IPv6, if I understand the IETF and IAB direction.)

So while I really do think that we need some kind of multi6 design which works for small multihoming networks without the need for them to use an ASN and have their routes in the global BGP table (leaving all that primarily to NSPs with /32 and larger, as the IETF envisioned), the current shim6 design does not seem properly done to be usable for that audience, and as somebody noticed yesterday it would instead be great for multi-DSL users, especially gamers and P2P.

Now if we resurrected A6 with its ability to separately enter an IP address with host and network parts at the DNS level - then we're at least part of the way done as far as multi6 multihoming setup in DNS for an entire network at once. But I still don't see an easy way to do it for device management, and yet another new protocol would probably be needed for automatic assignment of locators and secondary IPv6 addresses. (BTW - did I hear right that there is going to be a new WG related to MIP6 to work out issues of assignment and use of multiple IPv6 addresses and interfaces? IETF seems to be doing lots of things in parallel at this potential L3.5 layer that could be done a lot better together as part of a proper TCP/IP redesign.)

--- William Leibzon Elan Networks william@elan.net
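For readers who have not followed the shim6 drafts, a rough Python caricature of the per-host locator handling being criticized above follows; it is not the actual protocol, and the addresses are documentation-prefix placeholders.

# Each end host holds the full set of provider-assigned locators and
# switches locators itself when a path fails, with no help from routers.
# This is a conceptual sketch, not shim6 itself.

class Shim6LikeHost:
    def __init__(self, locators):
        self.locators = list(locators)   # one address per upstream provider
        self.current = self.locators[0]

    def path_failed(self):
        """Rotate to the next locator when the current provider path dies."""
        idx = (self.locators.index(self.current) + 1) % len(self.locators)
        self.current = self.locators[idx]
        return self.current

host = Shim6LikeHost(["2001:db8:a::10", "2001:db8:b::10"])
print(host.current)        # 2001:db8:a::10 (via provider A)
print(host.path_failed())  # 2001:db8:b::10 (fail over to provider B)

# The management burden described above: every host in the site needs
# this state and this logic, instead of one router or load balancer.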
Thus spake <Michael.Dillon@btradianz.com>
the market wouldn't feel the need to have to dual home.
the internet model is to expect and route around failure.
Seems to me that there is some confusion over the meaning of "multihoming". We seem to assume that it means BGP multihoming wherein a network is connected to multiple ASes and uses BGP to manage traffic flows.
AFAICT, that is the accepted definition in this forum. Anything less is best called by a different, more precise term to avoid confusion.
Other people use this term in very different ways. To some people it means having multiple IP addresses bound to a single network interface. To others it means multiple websites on one server.
That is virtual hosting in a NANOG context. Some undereducated MCSEs might call it multihoming, but let's not endorse that here.
A single tier-2 ISP who uses BGP multihoming with several tier 1 ISPs can provide "multihoming" to its customers without BGP. For instance, if this tier-2 has two PoPs in a city and peering links exist at both PoPs and they sell a resilient access service where the customer has two links, one to each PoP, then it is possible to route around many failures. This is probably sufficient for most people and if the tier-2 provider takes this service seriously they can engineer things to make total network collapse extremely unlikely.
I bet customers who bought two links to Cogent no longer believe they're "multihomed"; policy failures are disturbingly frequent in Tier 2s, particularly those wanting to join the Tier 1 club. Total network failures are rarer, but even folks like UUNET, WorldCom, AT&T, MCI, etc. have them from time to time. With restoral times measured in days on both types of occasions, you can't discount them as "extremely unlikely" if your business can't function without a network. Ask the folks at Starbucks how many millions of dollars of coffee they gave away when their cash registers didn't work for a couple days... and how many customers (i.e. future revenue) they would have lost if they hadn't.

Two links to the same provider is merely "redundancy" or "link/POP diversity", not multihoming. Don't let your marketing department override your common sense or engineering clue.

S

Stephen Sprunk, CCIE #3723, K5SSS
"Stupid people surround themselves with smart people. Smart people surround themselves with smart people who disagree with them." --Aaron Sorkin
Stephen Sprunk wrote: [snip]
Other people use this term in very different ways. To some people it means having multiple IP addresses bound to a single network interface. To others it means multiple websites on one server.
That is virtual hosting in a NANOG context. Some undereducated MCSEs might call it multihoming, but let's not endorse that here.
Unfortunately, this is a common and "standards blessed" way to refer to any host with multiple interfaces/addresses (real or virtual). For example, the "Terminology" section, 1.1.3, of RFC1122, "Requirements for Internet Hosts -- Communication Layers," says:

Multihomed
A host is said to be multihomed if it has multiple IP addresses. For a discussion of multihoming, see Section 3.3.4 below.

-- Crist J. Clark crist.clark@globalstar.com Globalstar Communications (408) 933-4387
On Mon, 24 Oct 2005 13:31:17 PDT, Crist Clark said:
Stephen Sprunk wrote: [snip]
Other people use this term in very different ways. To some people it means having multiple IP addresses bound to a single network interface. To others it means multiple websites on one server.
That is virtual hosting in a NANOG context. Some undereducated MCSEs might call it multihoming, but let's not endorse that here.
Unfortunately, this is a common and "standards blessed" way to refer to any host with multiple interfaces/addresses (real or virtual).
I think Stephen meant "Some undereducated McSE (you want fries with that?) call "multiple websites on one server" "multihoming".
I believe RFC1122 was written in the days when there was a one-to-one correlation between IP addresses and interfaces, and, you couldn't have one machine with multiple addresses on the same network. Obviously, also, we are talking about network multihoming, not host multihoming in a NANOG context. It is hard to perceive a situation where Host Multihoming would require coordination. Owen --On October 24, 2005 1:31:17 PM -0700 Crist Clark <crist.clark@globalstar.com> wrote:
Stephen Sprunk wrote: [snip]
Other people use this term in very different ways. To some people it means having multiple IP addresses bound to a single network interface. To others it means multiple websites on one server.
That is virtual hosting in a NANOG context. Some undereducated MCSEs might call it multihoming, but let's not endorse that here.
Unfortunately, this is a common and "standards blessed" way to refer to any host with multiple interfaces/addresses (real or virtual). For example, the "Terminology" section, 1.1.3, of RFC1122, "Requirements for Internet Hosts -- Communication Layers," says:
Multihomed
A host is said to be multihomed if it has multiple IP addresses. For a discussion of multihoming, see Section 3.3.4 below.
-- Crist J. Clark crist.clark@globalstar.com Globalstar Communications (408) 933-4387
-- If it wasn't crypto-signed, it probably didn't come from me.
On Mon, 2005-10-24 at 10:01 +0100, Michael.Dillon@btradianz.com wrote:
Other people use this term in very different ways. To some people it means having multiple IP addresses bound to a single network interface. To others it means multiple websites on one server.
Do you not mean a single host with multiple interfaces? I didn't think anyone with multiple IPs on a single interface would consider it to be multihomed.
On Mon, 24 Oct 2005 Michael.Dillon@btradianz.com wrote:
A single tier-2 ISP who uses BGP multihoming with several tier 1 ISPs can provide "multihoming" to its customers without BGP. For instance, if this tier-2 has two PoPs in a city and peering links exist at both PoPs and they sell a resilient access service where the customer has two links, one to each PoP, then it is possible to route around many failures. This is probably sufficient for most people and if the tier-2 provider takes this service seriously they can engineer things to make total network collapse extremely unlikely.
From RFC 3582, this is not multihoming (see the defs below). The above is referred to as "multi-connecting" or multi-attaching (also see RFC 4116).
I agree, this is sufficient for many sites. Especially in the academic world, many universities are just multi-connected, trusting the stability of their NREN's backbone and transit providers. Lots of commercial sites do it too, but some are wary due to events like L3/Cogent, L3 backbone downtime, etc.

.....

A "multihomed" site is one with more than one transit provider. "Site-multihoming" is the practice of arranging a site to be multihomed.

and:

A "transit provider" operates a site that directly provides connectivity to the Internet to one or more external sites. The connectivity provided extends beyond the transit provider's own site. A transit provider's site is directly connected to the sites for which it provides transit.

-- Pekka Savola, Netcore Oy
Systems. Networks. Security.
"You each name yourselves king, yet the kingdom bleeds." -- George R.R. Martin: A Clash of Kings
Neil J. McRae wrote:
Considering that most people who are in favor of multihoming for ipv6 believe that there is customer demand for it, the market forces would decide this one.
We have nobody but ourselves to blame for this. If we all ran networks that worked as well as our customers demand and didn't have our petty peering squabbles every full moon, the market wouldn't feel the need to have to dual home.
There is not only the multihoming issue but also the PI address issue. Even if every ISP ran his network very competently and there were no outages, we would still face the ISP-switching issue. Again we would end up with either PI addresses announced by the ISP or BGP by the customer. With either, the DFZ continues to grow. There is just no way around it.

-- Andre
There is not only the multihoming issue but also the PI address issue. Even if every ISP ran his network very competently and there were no outages, we would still face the ISP-switching issue. Again we would end up with either PI addresses announced by the ISP or BGP by the customer. With either, the DFZ continues to grow. There is just no way around it.
The way around it is to stop growing the DFZ routing table with the number of prefixes. If customers could have PI addresses and the DFZ routing table was based, instead, on ASNs, in such a way that customers could use their upstream's ASNs and not need their own, then a provider switch would be a change to the PI->ASN mapping and not affect the DFZ table at all. Owen -- If it wasn't crypto-signed, it probably didn't come from me.
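A minimal sketch of how such a PI-to-ASN mapping might behave, with made-up prefixes and ASNs; this is an illustration of the idea as stated, not a worked-out proposal.

# Global routing is done on ASNs, so the DFZ table scales with ASN count,
# and a separate mapping ties each PI prefix to the ASN(s) currently
# providing transit for it. Switching providers only edits the mapping.

dfz_by_asn = {
    64500: "path-to-ISP-A",
    64510: "path-to-ISP-B",
}

# PI prefix -> serving ASN(s): the only thing that changes on a provider switch.
pi_to_asn = {
    "2001:db8:1234::/48": [64500],
}

def route(prefix):
    return [dfz_by_asn[asn] for asn in pi_to_asn[prefix]]

print(route("2001:db8:1234::/48"))       # ['path-to-ISP-A']

# Customer moves from ISP A to ISP B: one mapping update, zero DFZ churn.
pi_to_asn["2001:db8:1234::/48"] = [64510]
print(route("2001:db8:1234::/48"))       # ['path-to-ISP-B']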
The way around it is to stop growing the DFZ routing table with the number of prefixes. If customers could have PI addresses and the DFZ routing table was based, instead, on ASNs, in such a way that customers could use their upstream's ASNs and not need their own, then a provider switch would be a change to the PI->ASN mapping and not affect the DFZ table at all.
One way to do this is for two ISPs to band together in order that each ISP can sell half of a joint multihoming service. Each ISP would set aside a subset of their IP address space to be used by many such multihomed customers. Each ISP would announce the subset from their neighbor's space which means that there would be two new DFZ prefixes to cover many multihomed customers. Each multihomed customer would run BGP using a private AS number selected from a joint numbering plan. This facilitates failover if one circuit goes down but doesn't consume unnecessary public resources per customer.

This does require the two ISPs to maintain a strict SLA on their interconnects in order to match the SLAs on their customer contracts. The interconnect then becomes more than "just" a peering connection; it also becomes a mission-critical service component.

Of course, the whole multihoming thing could be outsourced to a 3rd-party Internet exchange operator with some creativity at both the technical level and the business level. The IP address aggregate would then belong to the exchange. More than 2 ISPs could participate. Customers could move from one ISP to another without changing addresses. The SLA on interconnects could be managed by the exchange. Etc.

--Michael Dillon
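Some back-of-the-envelope arithmetic for the scheme described above, with a made-up customer count; the private-ASN figure assumes the 16-bit private range 64512-65534.

# Each ISP announces its own set-aside block plus the partner's, so the DFZ
# grows by two prefixes regardless of how many customers share them.

customers = 5000                           # multihomed customers on the joint service

dfz_prefixes_joint = 2                     # one set-aside block per ISP
dfz_prefixes_per_customer_pi = customers   # the per-customer-route alternative

# One practical limit on the "private AS from a joint numbering plan" part:
# the 16-bit private range 64512-65534 holds 1023 ASNs, so numbers would
# have to be reused per interconnect (or per region) beyond that.
private_asns_available = 65534 - 64512 + 1

print(dfz_prefixes_joint, dfz_prefixes_per_customer_pi, private_asns_available)
# 2 5000 1023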
--On October 24, 2005 10:44:31 AM +0100 Michael.Dillon@btradianz.com wrote:
One way to do this is for two ISPs to band together in order that each ISP can sell half of a joint multihoming service. Each ISP would set aside a subset of their IP address space to be used by many such multihomed customers. Each ISP would announce the subset from their neighbor's space which means that there would be two new DFZ prefixes to cover many multihomed customers.
[snip...] Except this completely disregards some customers' concerns about having provider independence and being able to change providers without having a major financial disincentive to do so. That _IS_ a real business concern, no matter how much the IETF would like to pretend it does not matter. Owen -- If this message was not signed with gpg key 0FE2AA3D, it's probably a forgery.
Yo Neil! On Fri, 21 Oct 2005, Neil J. McRae wrote:
If we all ran networks that worked as well as our customers demand...
Some demand low price and some demand high availability. No way to please everyone.

RGDS
GARY
---------------------------------------------------------------------------
Gary E. Miller     Rellim     20340 Empire Blvd, Suite E-3, Bend, OR 97701
gem@rellim.com     Tel: +1(541)382-8588     Fax: +1(541)382-8676
participants (16)
- Andre Oppermann
- Crist Clark
- Gary E. Miller
- Jeroen Massar
- Joe Maimon
- John Reilly
- Matt Ghali
- Michael.Dillon@btradianz.com
- Neil J. McRae
- Owen DeLong
- Pekka Savola
- Randy Bush
- Stephen Sprunk
- Tony Li
- Valdis.Kletnieks@vt.edu
- william(at)elan.net