On Wed, Mar 15, 2000, Peter A. van Oene wrote:
a DNS oriented approach can be feasible in many situations.
yup. unfortunately the hack is not a good net citizen (some folk don't appreciate packets thrown at their servers), and some versions are not very accurate (as the server for foo.bar may be quite net.far from the host foo.bar).
but then most bgp hacks, though better net citizens, are not brilliantly accurate either. the anycast hack is really the only one that scales and performs at all well.
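for concreteness, the flavour of the probing hack is roughly this: the dns box (or something next to it) throws packets at each candidate server, times them, and answers with whichever looks closest. a toy python sketch only -- the addresses are made up, real products do rather more, and note that the measurement is taken from the dns box, not from the client asking the question, which is exactly the accuracy problem:

# Sketch of the "throw packets at the servers" style of DNS load
# distribution: probe each candidate, answer with the lowest-RTT one.
# Addresses and port are placeholders.
import socket
import time

CANDIDATES = ["192.0.2.10", "198.51.100.10"]   # example addresses only
PROBE_PORT = 80

def probe(addr, timeout=2.0):
    """Return TCP connect time to addr, or None if unreachable."""
    start = time.time()
    try:
        s = socket.create_connection((addr, PROBE_PORT), timeout)
        s.close()
        return time.time() - start
    except OSError:
        return None

def pick_answer():
    """Pick the candidate with the lowest connect time, as seen from
    *this* box -- which may be nowhere near the actual client."""
    alive = [(rtt, a) for rtt, a in ((probe(a), a) for a in CANDIDATES)
             if rtt is not None]
    if not alive:
        return CANDIDATES[0]          # nothing answered; punt
    return min(alive)[1]

if __name__ == "__main__":
    print(pick_answer())

the probes themselves are the packets folk don't appreciate.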
Help me out here. Can we not agree that exponentially more websites will require the ability to multihome to different AS's to achieve proper redundancy / disaster recovery?
Yes.
Instead of simply saying "if it ain't BGP it's crap" like the above, or E. Gavron telling me that these customers should simply advertise their little netblocks out of two or more AS's, can someone suggest some viable solutions?
Viable? Yes. Scales well? Yes, if you're willing to put the work into it.
The hard reality is that there isn't enough AS space. This is unbelievably obvious. With that in mind, how do I multihome?
Multihoming, by its very definition, refers to *network* multihoming. You are after service multihoming. With the current protocol set as it stands? You can do some neat tricks, but there is no sure-fire way to achieve the level of redundancy that you're after without a *lot* of work. I'm not talking initial setup work, I'm talking maintenance.
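To give you an idea of the sort of neat trick I mean (and the maintenance it drags in with it): hand out addresses from two different providers' blocks for the same service name, health check them, and stop publishing whichever one dies. A rough sketch only - the addresses are made up, and in real life you also get to own the TTL tuning, the resolvers that ignore your TTLs, and the monitoring of the monitor:

# Sketch of health-checked "service multihoming": one service, addressed
# out of two providers' blocks, with dead addresses dropped from the
# published answer set.  Addresses are documentation prefixes, not real.
import socket

SERVICE_ADDRS = ["192.0.2.80", "203.0.113.80"]   # provider A, provider B
CHECK_PORT = 80

def healthy(addr, timeout=2.0):
    """Crude liveness test: can we open a TCP connection to the service?"""
    try:
        socket.create_connection((addr, CHECK_PORT), timeout).close()
        return True
    except OSError:
        return False

def answer_set():
    """Addresses to publish (with a short TTL) for the service name.
    If everything looks dead, publish them all and hope."""
    alive = [a for a in SERVICE_ADDRS if healthy(a)]
    return alive or SERVICE_ADDRS

if __name__ == "__main__":
    print(answer_set())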
I am currently engaged in a great number of projects that face this exact challenge. In lieu of more strategic solutions or co-location into proper facilities, I personally don't see a better mechanism than load distribution techniques similar to the 3DNS.
Basically you're after a new and neat way of multihoming services without requiring it to be done at the network level. Now, you have a couple of options as far as I can see:

* You can pick one of the hacks and work with it. It'll work now, but how well it'll work, how it will scale, and how it will handle the changing internet is, well, anyone's guess.

* You can do some research into finding an elegant solution to the problem. We will probably all love you. The trouble with this is that it won't be instantaneous, and you'll have to pull some magic to get people to adopt it.

* You can colocate with a large backbone which already multihomes, and work with them to develop redundancy for your services. This means you're at their mercy for service guarantees, and getting most existing network providers to do the tricks needed to present redundant services isn't going to be easy.

At the moment, I would try the second and third, but that's because I'm not in an "it has to be working now, or we don't get paid" boat. Unfortunately, most of us are too busy working on other pressing issues (read: paid employment) to push research into this sort of stuff (please, if you are, stick your hand up now!).

What you and a whole heap of other people need to realise is that the way the internet is built *now* prohibits people from doing redundant services without multihoming. You can do it, but you can't do it properly. I'm all up for dreaming up some DNS-URI-object cache type hack, but unless someone is going to pay a bunch of people to do the research, you might be stuck.

It's a standard software development thing - you build something with some basic assumptions. You are then totally free to do what the hell you want, as long as you don't want to change the base assumptions. Changing the base assumptions isn't trivial. :-)

Until the right company releases something, of course. But then, you're at their mercy.

Adrian
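PS: to put some shape on the "you can do it, but you can't do it properly" point, the sort of thing available today is pushing the redundancy up to the application: a client (or redirector) that just walks a list of mirrors until one answers. Purely a sketch - the hostnames are invented, and it does nothing for the sessions already in flight when a site dies:

# Sketch of "redundancy pushed up to the application": walk a list of
# mirrors until one of them answers.  Hostnames are invented for
# illustration; nothing here fixes the underlying routing problem.
import urllib.request
import urllib.error

MIRRORS = [
    "http://www1.example.com/object",
    "http://www2.example.net/object",
]

def fetch(urls, timeout=5.0):
    """Try each mirror in turn; return the first body we get."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            continue                      # dead mirror, try the next one
    raise RuntimeError("no mirror reachable")

if __name__ == "__main__":
    print(len(fetch(MIRRORS)), "bytes fetched")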