On 21 Jul 1996, Dean Gaudet wrote:
The application involves at least thirty machines, so colocation is likely to be cost-prohibitive. A single T3, or frac T3, isn't an option because there isn't a single provider that I can trust for the availability we want. Even ignoring availability, I seriously doubt that any provider can consistently fill a single customer's T3 pipe these days.
Taking all of this into account, I'm really leaning towards a solution that involves lots of small pipes to lots of providers, essentially eliminating the need for 90% of our packets to traverse NAPs by using each backbone mostly for its own customers.
I haven't yet considered the maintenance/logistical cost of managing 15 T1s to 6 or 7 providers vs. the "ease" of two frac-T3s to two providers.
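As a rough back-of-envelope illustration of that 90% figure, the fraction of traffic that never has to cross a NAP is just the sum of the shares of the backbones you buy a direct pipe to. The per-backbone shares below are invented placeholders, not measurements:

# Back-of-envelope: how much of our traffic never has to cross a NAP
# if we buy a direct pipe to each of the big backbones?  The shares
# below are invented placeholders, not measurements.
share = {"mci": 0.27, "sprint": 0.20, "uunet": 0.20,
         "ans": 0.13, "agis": 0.07, "others": 0.13}

# Backbones we would connect to directly; traffic to everyone else
# still has to go through a NAP somewhere.
direct = ["mci", "sprint", "uunet", "ans", "agis"]

on_net = sum(share[k] for k in direct)
print("stays on a directly connected backbone: %.0f%%" % (on_net * 100))
print("still crosses a NAP:                    %.0f%%" % ((1 - on_net) * 100))

With shares in that ballpark, a bit under 90% of the traffic stays on a backbone you connect to directly.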
Given that it sounds like you are budgeting for slightly more than a DS3's worth of bandwidth in connectivity cost, the way to do multiple T1s is to pick a set of ISPs that together comprise a good percentage of the net, and get N x T1 to each based on your best-guess breakdown of traffic. If it's the whole Internet you are aiming for, then something like 4 T1s to MCI, 3 T1s to Sprint, 3 T1s to UUNET, 2 to ANS, 1 to AGIS and 2 to others is an example. Note that this is a wild guesstimate of the percentage of Internet traffic sinks.

What you need to think about with this scenario are the following:

1) cost of procuring 15 T1s vs. a DS3/fractional DS3
2) logistics of support
3) infrastructure issues

1) is pretty cut and dried.

2) is the largest "hidden" cost. With this setup you need a competent net engineer or two to babysit it so that the packets are flowing in the right direction. You also ideally need significant automation of your router configurations, so that you can pull correct info and generate configs that match the very fluid reality of today's routing (a rough sketch of what that might look like follows below). You'll also need a decent NOC to deal with the 6-7 ISP NOCs, and possibly different carriers' NOCs, when trouble hits. I find that much of troubleshooting involves a live body at the end of a phone much more than a brain.

3) You need different equipment to do this. It costs more to provision hardware for 15 T1s than it does for 1-2 DS3s/fractional DS3s, and that doesn't even get into redundancy and sparing issues. Something like a Cisco 7200 might handle this better, but I'm not sure.

The other scenario, two fractional DS3s, alleviates problems #2 and #3, but still doesn't make them go away altogether. You also need to pick providers with interesting enough traffic sinks that you can load-balance effectively (as effectively as you can get in that situation, anyway) in a somewhat straightforward fashion (like taking internal customer routes from ISP A and the rest via ISP B).
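Below is a minimal sketch of what that sort of config generation could look like for the "customer routes from the providers carrying most of your traffic, full routes from one provider as the path of last resort" policy. Everything in it (provider names, AS numbers in the private range, peer addresses, route-map names, local-preference values) is invented for illustration, and the emitted text is only roughly Cisco-style, not exact IOS syntax.

#!/usr/bin/env python
# Sketch of generating per-provider BGP stanzas.  Customer-routes feeds
# get a higher local-preference, so each backbone carries traffic for
# its own customers; one full-routes feed sits at a lower preference
# and catches everything else.  All names, AS numbers (private-range
# placeholders), addresses and preference values are invented, and the
# emitted text is only roughly Cisco-style.

LOCAL_AS = 64512   # placeholder for the multi-homed site's own AS

# (name, remote AS, peer address, feed type)
PROVIDERS = [
    ("ispa", 64513, "10.0.1.1", "customer-routes"),
    ("ispb", 64514, "10.0.2.1", "customer-routes"),
    ("ispc", 64515, "10.0.3.1", "full-routes"),
]

def neighbor_block(name, remote_as, peer, feed):
    """Return (neighbor lines, route-map lines) for one upstream."""
    pref = 200 if feed == "customer-routes" else 100
    rm = name.upper() + "-IN"
    neighbor = [
        " neighbor %s remote-as %d" % (peer, remote_as),
        " neighbor %s route-map %s in" % (peer, rm),
        " neighbor %s route-map ANNOUNCE-OWN out" % peer,
    ]
    route_map = [
        "route-map %s permit 10" % rm,
        " set local-preference %d" % pref,
        "!",
    ]
    return neighbor, route_map

def generate():
    neighbors, route_maps = [], []
    for entry in PROVIDERS:
        n, r = neighbor_block(*entry)
        neighbors += n
        route_maps += r
    lines = ["router bgp %d" % LOCAL_AS] + neighbors + ["!"] + route_maps
    # ANNOUNCE-OWN matches only the site's own prefixes (list not shown),
    # so the site never offers transit between its upstreams.
    lines += [
        "route-map ANNOUNCE-OWN permit 10",
        " match ip address OWN-NETS",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate())

The point isn't the exact syntax; it's that with the providers kept in one table you regenerate and diff the configs as the routing picture shifts, rather than hand-editing 6 or 7 routers.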
From a provider's point of view, if a site wanted to connect and was willing to sign a use policy saying they wouldn't use the connection for transit to other providers (i.e. would only ask for customer routes via BGP and would only send traffic to the nets you provide in those BGP updates), would that site have lower costs associated with it? (costs that you could pass on?)
It would seem to me that it could, as long as the site is an interesting enough traffic source and the ISP can recoup whatever it costs to offer that connection, plus margin (or not). Speaking only for CICNet: given that the site is an interesting traffic source, we'd gladly offer a connection for what it costs to provide that connection, if such a request came to us.

Hope this helps,

-dorian