In message <199607220013.TAA27391@academ.com>, Stan Barber writes:
> > From stuff I've seen here and elsewhere, I think the most important reason for this is congestion at the NAPs, which makes it impossible to suck (or shove) lots of bandwidth at anything but your provider's backbone.
In using "NAPs" above, are you just talking about the NSF NAPs or all interconnections?
I'm not clear on the distinction -- but since the first location we want to do this at would be based in San Francisco, I'm referring mostly to MAE-West, the PacBell NAP, and the CIX. It should be relatively inexpensive to long-haul a few T1s farther away from the California NAPs. (And it would be relatively expensive to move the machines, because of the people involved in maintaining them. Which is a pain, 'cause doing high-availability stuff in an earthquake zone seems silly.)
> Generally for each connection to each provider, you would have to set up BGP.
Yeah, definitely. But most backbones seem to offer "customer routes" as an option, and if I trust them to get those routes correct then I hopefully won't have to bother with extreme amounts of filtering. It's pretty easy to enforce "no transit" at the packet-filtering level -- only packets destined for my nets will be allowed in. Is there some other aspect of filtering I'm forgetting about? We have a dedicated network engineer plus a backup at any rate. The border router would be a cisco 7200- or 7500-series with 128MB.
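For what it's worth, here's a rough sketch of what I'm picturing on the cisco, per provider -- all the AS numbers, addresses, and net blocks below are made up for illustration, and the "customer routes only" part is something they'd have to configure on their end:

    ! BGP session to one upstream; we announce only our own nets
    ! (numbers are placeholders, not real assignments)
    router bgp 65001
     network 10.1.0.0 mask 255.255.0.0
     neighbor 192.0.2.1 remote-as 65000
     neighbor 192.0.2.1 distribute-list 10 out
    !
    ! distribute-list 10: the only prefix we ever announce
    access-list 10 permit 10.1.0.0 0.0.255.255
    !
    ! "no transit" at the packet level: inbound packets must be
    ! destined for one of our nets; everything else is dropped
    ! by the implicit deny at the end of the list
    access-list 110 permit ip any 10.1.0.0 0.0.255.255
    !
    interface Serial0
     ip access-group 110 in

The distribute-list keeps us from announcing somebody else's routes even if something leaks into our table, and the interface access-list means nobody can push transit traffic through us regardless of what the routing tables say.

Dean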