Brett writes:
MAE-Houston is a small NAP in the scheme of things, but it makes for a good example, so we'll use it. It's $2000/month to get your foot in the door, then another large chunk of cash to connect to the Gigaswitch, which, all things considered, isn't really needed. Rather than waste their money on equipment that all in all just doesn't need to be there, why not make it more economical for local players to get involved and cross-connect to each other? In the end you not only save money by not bringing in useless hardware, but you garner more customers by lowering the price of the private interconnect.
Hmmm. According to what I learnt in school, the cost of a connected network like a GIGAswitch or Catalyst or DELNI with N participants is:

    (N x interface_cost) + (N x port_cost)

...while the cost of a connected network made up of wire peers is:

    (2 x sum(N-1) x interface_cost)

"sum(N-1)" is an interesting function. Here are some examples:

    % calc
    > define sum(n) = n > 0 ? n + sum(n-1) : 0;
    "sum" defined
    > for (n = 2; n < 20; n++) print n,2*sum(n-1);
    2 2
    3 6
    4 12
    5 20
    6 30
    7 42
    8 56
    9 72
    10 90
    11 110
    12 132
    13 156
    14 182
    15 210
    16 240
    17 272
    18 306
    19 342

That means with 19 ISP's in a GIGAswitch-free room, there are 342 FIP's at a cost of, what, US$12000 each after discount? I'll betcha I can buy quite a few GIGAswitches for US$4.1M. Oops, that's not a fair comparison, since with a GIGAswitch I also need 19 FIP's. Figure that a fully configured GIGAswitch retails without discount for US$80K and that 19 FIPs are going to run another US$228K. That's still a *lot* less than the cost of 2*sum(n-1) FIP's. This also assumes that we all have VIP2 cards and want to burn 9 7513 slots just on local peering, and it further assumes that a 7513 won't just simply melt if all the interfaces ever get hot at the same time.

The breakeven is between N=3 and N=4. On the Internet, N never stays small. (And that breakeven assumes that the 4 people have to buy the whole GIGAswitch with no one like MFS to underwrite the costs of the unused ports; that means four people in a room together could SAVE MONEY buying the GIGAswitch.)

Gah.
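[Editor's note: to make the break-even arithmetic above concrete, here is a minimal sketch, not from the thread itself, that recomputes the two cost models in Python. The US$12K FIP, US$80K "fully configured" GIGAswitch, and US$228K figures are the ones quoted in the message; the function names and the folding of per-port cost into the chassis price are my own illustrative assumptions.]

    # Sketch of the cost comparison in the message above (dollar figures are
    # quoted there; the structure and names here are illustrative only).

    FIP_COST = 12_000         # one FDDI interface, after discount (US$)
    GIGASWITCH_COST = 80_000  # fully configured GIGAswitch chassis, list (US$)

    def mesh_cost(n):
        """Full mesh of n wire peers: n*(n-1) interfaces, i.e. 2*sum(n-1)."""
        return n * (n - 1) * FIP_COST

    def switch_cost(n):
        """Shared switch: one chassis plus one FIP per participant.
        (Per-port cost is assumed folded into the US$80K figure.)"""
        return GIGASWITCH_COST + n * FIP_COST

    if __name__ == "__main__":
        for n in range(2, 20):
            print(f"{n:2d}  mesh ${mesh_cost(n):>9,}  switch ${switch_cost(n):>9,}")
        # The mesh is cheaper at n=2 and n=3; the switch wins from n=4 on,
        # matching the "breakeven between N=3 and N=4" claim. At n=19 the
        # mesh costs $4,104,000 versus $308,000 for chassis plus 19 FIPs.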
On Wed, 15 Jan 1997, Paul A Vixie wrote:
Brett writes:
considered, isn't really needed. Rather than waste their money on equipment that all in all just doesn't need to be there, why not make it more economic for local players to get involved and cross connect to eachother. In the end
That means with 19 ISP's in a GIGAswitch-free room, there are 342 FIP's at a cost of, what, US$12000 each after discount? I'll betcha I can buy quite a few GIGAswitches for US$4.1M. Oops, that's not a fair comparison, since with a GIGAswitch I also need 19 FIP's. Figure that a fully configured GIGAswitch retails without discount for US$80K and that 19 FIPs are going to run another US$228K. That's still a *lot* less than 2*sum(n-1).
This is probably a worst-case scenario. What about Ethernet cross-connects using the 6 port cards? Or zero-mile T1's using the 8 port serial cards? And you are assuming a full mesh, which isn't necessarily what people need. I don't think you can generalize about what a provider wants from an Exchange Point, especially not in a world in which exchange points are breeding like rabbits.

Michael Dillon - Internet & ISP Consulting
Memra Software Inc. - Fax: +1-250-546-3049
http://www.memra.com - E-mail: michael@memra.com
On Thu, 16 Jan 1997, Michael Dillon wrote:
This is probably a worst-case scenario. What about Ethernet cross-connects using the 6 port cards? Or zero-mile T1's using the 8 port serial cards? And you are assuming a full mesh which isn't necessarily what people need. I don't think you can generalize about what a provider wants from an Exchange Point especially not in a world in which exchange points are breeding like rabbits.
Maybe, maybe not. I've done a fair bit of watching to see where packets are flying in my short time. Now, I don't pretend to be an expert, not by a long shot, and someone spank me if I'm way off, but... from what I've seen, in any given city (assume a reasonable size of 200,000+) 50% or more of the traffic is local. By providing reasonable rates for private interconnects at a local peering point, one can not only speed up the response for customers but cut down on a great deal of traffic that needn't circle the globe to get to its destination. If we cut down on the number of hops packets have to take, we cut down on congestion, etc. Or am I just being idealistic? :)

[-] Brett L. Hawn (blh @ nol dot net) [-]
[-] Networks On-Line - Houston, Texas [-]
[-] 713-467-7100 [-]
On Thu, 16 Jan 1997, Brett L. Hawn wrote:
On Thu, 16 Jan 1997, Michael Dillon wrote:

shot, and someone spank me if I'm way off but... From what I've seen, in any given city (assume a reasonable size of 200,000+) 50% or more of the traffic is local. By providing reasonable rates for private interconnects at a local
Nope. In a given reasonably sized (i.e. a city or so) geographical area, you'd be lucky to get better than 20% locality in your traffic. There are some exceptions where there are major traffic sources in the area, but those tend to be pretty concentrated.

The percentage decreases further when you take into account traffic to/from NSPs' customers in the locality, as the NSPs are not likely to private peer with local providers.

This is in no way a case against local peering (every bit less traffic dumped into the core from every locality adds up), but one needs to be aware of what is gained from an "exchange in every town" scenario.

-dorian
At 8:25 AM -0500 1/16/97, Dorian R. Kim wrote:
On Thu, 16 Jan 1997, Brett L. Hawn wrote:
On Thu, 16 Jan 1997, Michael Dillon wrote:

shot, and someone spank me if I'm way off but... From what I've seen, in any given city (assume a reasonable size of 200,000+) 50% or more of the traffic is local. By providing reasonable rates for private interconnects at a local
Nope. In a given reasonably sized (.i.e a city or so) geographical area, you'd be lucky to get better than 20% locality of your traffic. There are some exceptions where there are major traffic sources in the area, but those tend to be pretty concentrated.
The percentage decreases further when you take into account traffic to/from NSPs' customers in the locality as the NSPs are not likely to private peer with local providers.
This is in no way a case against local peering, (every bit less traffic dumped into the core from every locality adds up) but one needs to be aware of what is gained from "exchange in every town" scenario.
-dorian
Interesting. I wonder if this will continue to be a long-term trend.

I can't claim to have recent numbers that suggest otherwise, but some historical information might at least be interesting. In the early 80s, I did a good deal of X.25 capacity planning. At what was then GTE Telenet, we found that up to 50% of our traffic stayed local in large cities. The larger the city, the more that seemed to stay local... this was especially obvious in New York, where a great deal of financial data flowed.

Now, these old statistics reflect mainframe-centric traffic, and more private-to-private than arbitrary public access. The latter is much more characteristic of Internet traffic. SNA and X.25 tended to emphasize the ability to fine-tune access to a limited number of well-known resources, with relatively well-understood traffic patterns. The Internet, however, has emphasized arbitrary and flexible connectivity, possibly to the detriment of performance tuning and reliability.

While I recognize that putting mission-critical applications onto the general Internet (as opposed to VPNs) is, in many cases, a clear indication that someone needs prompt psychiatric help, I wonder whether the increasing commercialization of Internet information resources might tend to result in greater volumes of traffic that stay within the service area of an exchange point. Web caching would seem to encourage traffic to stay local.

Howard Berkowitz
PSC International
At 7:00 -0800 1/16/97, Howard C. Berkowitz wrote:
I can't claim to have recent numbers that suggest otherwise, but, some historical information might at least be interesting. In the early 80s, I did a good deal of X.25 capacity planning. At what was then GTE Telenet, we found that up to 50% of our traffic stayed local in large cities. The larger the city, the more that seemed to stay local...this was especially obvious in New York, where a great deal of financial data flowed.
Remember that in the early 80's you basically couldn't lease a T1 from AT&T (I think it was '82 or so when they were first tariffed?) (watch out for that DC voltage... ouch! :-). Also, DDS services were scarce, etc., so (expensive) low-speed analog was the option for leased lines, and private networks were rare. Since then, of course, the fallout from Judge Greene has changed some things, and it is cheap and easy to put up a DS0 across town; the cost justification vs. per-packet charges is a lot different.
Now, these old statistics reflect mainframe-centric traffic, and more private-to-private than arbitrary public access. The latter is much more characteristic of Internet traffic.
SNA and X.25 tended to emphasize the ability to fine tune access to a limited number of well-known resources, with relatively well-understood traffic patterns. The Internet, however, has emphasized arbitrary and flexible connectivity, possibly to the detriment of performance tuning and reliability.
well the strategies for performance tuning are certainly different. [stuff cut]
Web cacheing would seem to encourage traffic to stay local.
ahhh....yup.

dave
At 2:17 PM -0800 1/17/97, dave o'leary wrote:
At 7:00 -0800 1/16/97, Howard C. Berkowitz wrote:
I can't claim to have recent numbers that suggest otherwise, but, some historical information might at least be interesting. In the early 80s, I did a good deal of X.25 capacity planning. At what was then GTE Telenet, we found that up to 50% of our traffic stayed local in large cities. The larger the city, the more that seemed to stay local...this was especially obvious in New York, where a great deal of financial data flowed.
remember that in the early 80's you basically couldn't lease a T1 from AT&T (I think it was 82 or so when they were first tariffed?)
Dave, reality was funnier than that. It was 1980 or so when we actually did get a T1 between Washington and New York, but eventually released it because all of the DC-NY public network traffic wasn't enough to justify that HUGE amount of bandwidth. I did get the first nonmilitary T1 in the DC area in '77 or '78 at the Library of Congress. The then C&P Telephone couldn't really figure out how to charge for it, so we got it dirt cheap -- and it worked very well.
(watch out for that DC voltage...ouch! :-).
I have a very painful memory of running my finger over a punchdown where some stranded wire had come slightly loose and broke the skin. Knocked me flat and sprained my shoulder.
also DDS services were scarce, etc. So (expensive) low speed analog was the option for leased lines - and private networks were rare. Since then of course the fallout from Judge Greene has changed some things, and it is cheap and easy to put up a DS0 across town - the cost justification vs. per packet charges is a lot different.
Now, these old statistics reflect mainframe-centric traffic, and more private-to-private than arbitrary public access. The latter is much more characteristic of Internet traffic.
SNA and X.25 tended to emphasize the ability to fine tune access to a limited number of well-known resources, with relatively well-understood traffic patterns. The Internet, however, has emphasized arbitrary and flexible connectivity, possibly to the detriment of performance tuning and reliability.
well the strategies for performance tuning are certainly different.
[stuff cut]
Web cacheing would seem to encourage traffic to stay local.
ahhh....yup.
dave
From what I've seen, in any given city (assume a reasonable size of 200,000+) 50% or more of the traffic is local.
For those who are new here, this one has been around a decade or two.

    A host is a host from coast to coast.
    no one will talk to a host that's close.
    Unless the host (that isn't close)
    is busy, hung or dead.
        -- David Lesher  wb8foz@nrk.com

randy
This is probably a worst-case scenario. What about Ethernet cross-connects using the 6 port cards? Or zero-mile T1's using the 8 port serial cards?
I keep thinking that I live in a world where providers have MUCH faster trunking pipes than the pipes they sell to their average customers. If you can get away with a T1's worth of bandwidth, then the only reason you and your "peer" would be in the same room is to share access to longhaul, and failing that cause, the situation you describe would not occur and you would just run the T1 through the TelCo from your closest hub to theirs.

Ethernet is a slightly different case, but only slightly. An ethernet switch costs a lot less than a GIGAswitch, and for that matter there are probably occasions (as occurred at the Phoenix IXP) where unswitched 10Mb/s Ethernet is a fine way to start out -- which means you can grow to a switch and even to 100Mb/s if you plan it right, but you cannot easily grow to FDDI (which some customers think you should have, to get them 4K PMTU to their semi-local destinations).
And you are assuming a full mesh which isn't necessarily what people need.
I thought I'd covered this. I'm assuming full mesh because if you are paying to colocate equipment and tie your colo back to the rest of your net in some way, you will pretty naturally want to get as much bang for your buck as can be had. Each person you don't peer with represents additional load on the people you do peer with, or on your upstream transit if you're buying any, and on the upstream transit link of your unchosen-peer if he's got transit. Once you're in the same room, the cost of not peering is a LOT higher than the cost of peering, unless and only unless you already have a private interconnect to the unchosen peer.
I don't think you can generalize about what a provider wants from an Exchange Point especially not in a world in which exchange points are breeding like rabbits.
I agree that we're going to see a lot of IXP's of all sizes and shapes soon.
On Thu, 16 Jan 1997, Paul A Vixie wrote:
This is probably a worst-case scenario. What about Ethernet cross-connects using the 6 port cards? Or zero-mile T1's using the 8 port serial cards?
Ethernet is a slightly different case but only slightly. An ethernet switch costs a lot less than a GIGAswitch,
the cost of peering, unless and only unless you already have a private interconnect to the unchosen peer.
I guess I was visualizing something quite different from current exchanges. Rather than have an Ethernet switch, I was thinking of using Ethernet point-to-point. And the exchange point was more like a big colo center in which you could set up as many private interconnects as you want at the lowest possible cost (interface ports plus installing a cable, versus running T1's or DS3's across town). The colo nature of such a beast would lead ISP's to install terminal servers, web farms, etc., which would have an effect on the topology. Squid cache hierarchies would be nice here as well.

I'm not sure if this is a viable exchange point architecture yet, but I think it will be viable and useful as the Internet scales through the next order of magnitude. My gut feel is that breaking out the traffic into lots of smaller non-shared circuits will be easier to manage and less susceptible to being swamped during overload conditions. Not that overloading cannot occur, but the effects would be more isolated than if the overload occurs on shared media. I think that people would be interested in traffic flow data that would either prove or disprove my theories.

Michael Dillon - Internet & ISP Consulting
Memra Software Inc. - Fax: +1-250-546-3049
http://www.memra.com - E-mail: michael@memra.com
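[Editor's note: on the "traffic flow data that would either prove or disprove my theories" point above, here is a minimal sketch of the kind of measurement being discussed, assuming flow records have already been reduced to (source IP, destination IP, byte count) tuples and that you maintain a list of prefixes you consider local to the exchange's metro area. The record format, prefix list, and function names are hypothetical, not anything described in the thread.]

    # Illustrative sketch: estimate what fraction of traffic would stay local
    # if everyone in a metro area peered at one exchange. LOCAL_PREFIXES and
    # the sample flows below are made-up placeholders.

    import ipaddress

    LOCAL_PREFIXES = [ipaddress.ip_network(p)
                      for p in ("192.0.2.0/24", "198.51.100.0/24")]

    def is_local(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in LOCAL_PREFIXES)

    def local_fraction(flows):
        """flows: iterable of (src_ip, dst_ip, byte_count). Returns the share
        of bytes whose source AND destination are both in LOCAL_PREFIXES."""
        local = total = 0
        for src, dst, nbytes in flows:
            total += nbytes
            if is_local(src) and is_local(dst):
                local += nbytes
        return local / total if total else 0.0

    if __name__ == "__main__":
        sample = [
            ("192.0.2.10", "198.51.100.20", 5000),   # local <-> local
            ("192.0.2.10", "203.0.113.7", 15000),    # local <-> distant
        ]
        print(f"local share: {local_fraction(sample):.0%}")  # 25% here

Run against real flow exports from a few providers in one city, a number like this is what would settle the 50%-versus-20% locality question argued earlier in the thread.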
I could equally well see a colo center where the plan is to run a DS3 to the colo center, put a router there, and buy transit from as many providers as you wanted by connecting to each provider's switch. For example, a room where Sprint, MCI, BBNPlanet, PSI, Netcom, and whoever else wanted to come would each have their own Ethernet switch or Gigaswitch. ISPs could then colo a router at the center and, with no telco loop cost, obtain transit connections from whatever combination of providers they wished. If the operators of the colo center had their own regional OC48 sonet ring, the cost to bring a DS3 to the center could be quite low for both ISPs and the big boys.

DS

On Thu, 16 Jan 1997, Michael Dillon wrote:
I guess I was visualizing something quite different from current exchanges. Rather than have an Ethernet switch I was thinking of using Ethernet point-to-point. And the exchange point was more like a big colo center in which you could set up as many private interconnects as you want at the lowest possible cost (interface ports plus installing a cable versus running T1's or DS3's across town).
On Fri, 17 Jan 1997, David Schwartz wrote:
I could equally well see a colo center where the plan is to run a DS3 to the colo center, put a router there, and buy transit from as many providers as you wanted by connecting to each provider's switch. For example, a room where Sprint, MCI, BBNPlanet, PSI, Netcom, and whoever else wanted to come would each have their own Ethernet switch or Gigaswitch.
ISPs could then colo a router at the center and with no telco loop cost obtain transit connections from whatever combination of providers they wished. If the operators of the colo center had their own regional OC48 sonet ring, the cost to bring a DS3 to the center could be quite low for both ISPs and the big boys.
This is a great idea, but it runs counter to the interests of the telcos, all of whom run the current major NAP's. The other element needed would be multiple fiber drops from multiple carriers.

.stb
On Thu, 16 Jan 1997, Michael Dillon wrote:
I guess I was visualizing something quite different from current exchanges. Rather than have an Ethernet switch I was thinking of using Ethernet point-to-point. And the exchange point was more like a big colo center in which you could set up as many private interconnects as you want at the lowest possible cost (interface ports plus installing a cable versus running T1's or DS3's across town).
That's pretty much the nightmare scenario for the long-haul networks: frictionless capitalism, with buying decisions being made by machines, i.e. routers, based on the current state of the network. The product (long-haul packet transport) becomes a total commodity with non-existent customer loyalty. Kewl.

Dirk

On Fri, 17 Jan 1997, David Schwartz wrote:
I could equally well see a colo center where the plan is to run a DS3 to the colo center, put a router there, and buy transit from as many providers as you wanted by connecting to each provider's switch. For example, a room where Sprint, MCI, BBNPlanet, PSI, Netcom, and whoever else wanted to come would each have their own Ethernet switch or Gigaswitch.
ISPs could then colo a router at the center and with no telco loop cost obtain transit connections from whatever combination of providers they wished. If the operators of the colo center had their own regional OC48 sonet ring, the cost to bring a DS3 to the center could be quite low for both ISPs and the big boys.
DS
On Thu, 16 Jan 1997, Michael Dillon wrote:
I guess I was visualizing something quite different from current exchanges. Rather than have an Ethernet switch I was thinking of using Ethernet point-to-point. And the exchange point was more like a big colo center in which you could set up as many private interconnects as you want at the lowest possible cost (interface ports plus installing a cable versus running T1's or DS3's across town).
participants (10)
- Brett L. Hawn
- dave o'leary
- David Schwartz
- Dirk Harms-Merbitz
- Dorian R. Kim
- Howard C. Berkowitz
- Michael Dillon
- Paul A Vixie
- randy@psg.com
- Stephen Balbach