My objectives are far more pedestrian than any effort to confuse or divide the Internet community. I'm working at getting a handle on what about the NAPs does and does not work.

I've seen some discussion on frames vs cells in the NAPs, and that's an interesting albeit religiously charged subject. I've heard speakers soap-boxing about the need for a decent policy-free L2 interconnect. Are we there yet? Who's heading in the right direction? Does the current NAP model meet your needs as ISPs? Do we just light more fibre or do we need a better solution?

Reply to the list or to me directly. I'm all ears.

David R. Pickett
Northchurch Communications Inc
5 Corporate Drive
Andover, MA 01080
978-691-4649
On Wed, May 20, 1998 at 01:56:19PM -0400, Pickett, David wrote:
My objectives are far more pedestrian than any effort to confuse or divide the Internet community. I'm working at getting a handle on what about the NAPs does and does not work. I've seen some discussion on frames vs cells in the NAPs, and that's an interesting albeit religiously charged subject. I've heard speakers soap-boxing about the need for a decent policy-free L2 interconnect. Are we there yet? Who's heading in the right direction? Does the current NAP model meet your needs as ISPs? Do we just light more fibre or do we need a better solution? Reply to the list or to me directly. I'm all ears.
This, unfortunately, is a highly charged and highly emotional topic.

Part of the problem with "public exchanges" is that they get congested. But the REASONS for that congestion are not, I believe, that well understood.

Can we just build faster exchanges? Sure. Will it solve the problem? Not if carriers don't provision fast enough circuits into them!

If you're seeing poor performance between <X> and <Y> at an exchange, is it due to the exchange fabric's poor performance, or is one of <X> or <Y> under-provisioned into that fabric? Has either of those carriers DELIBERATELY (or through negligence) failed to provide adequate connectivity to the exchange? How do you determine which is the case in these situations?

I'd love to see an exchange which PUBLISHED the "saturation rates" at each *PORT* to the world, identifying the carrier and speed of connection at each PORT. Also, in the same breath, they would need to publish the total FABRIC capacity as well as its saturation.

That would mean that those who say "oh, don't route through <X>, come to this nice private interconnect (perhaps at some cost)" would get called on the carpet if the reason for their statement was that they had impaired performance through the exchange due to a lack of appropriate commitment. Likewise, if the *EXCHANGE* operator was negligent (or just unable to keep up with demand), we could hold THEIR feet to the fire as operators.

Sadly, I know of NO exchange currently in operation that subscribes to these operating rules and policies.

--
Karl Denninger (karl@MCS.Net)| MCSNet - Serving Chicagoland and Wisconsin
http://www.mcs.net/          | T1's from $600 monthly / All Lines K56Flex/DOV
                             | NEW! Corporate ISDN Prices dropped by up to 50%!
Voice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS
Fax:   [+1 312 803-4929]     | *SPAMBLOCK* Technology now included at no cost
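To make "saturation rates" concrete: the sketch below shows one way an operator could derive per-port utilization from standard SNMP interface counters (IF-MIB ifHCInOctets and ifSpeed). It is a minimal illustration only; the port names, counter values, sampling interval, and 90% threshold are all invented, and counter-wrap handling is omitted.

```python
# Hypothetical sketch only: derive per-port utilization ("saturation") from
# two samples of SNMP octet counters, the way an exchange might publish it.
# Port names, speeds, and counter values are invented; 64-bit counter wrap
# and separate in/out directions are ignored for brevity.

def utilization(octets_t0: int, octets_t1: int, interval_s: float,
                port_speed_bps: int) -> float:
    """Fraction of port capacity used over the sampling interval."""
    bits = (octets_t1 - octets_t0) * 8           # octets -> bits transferred
    return (bits / interval_s) / port_speed_bps  # offered rate / capacity

INTERVAL = 300.0  # seconds between the two counter samples

# port name: (ifHCInOctets at t0, ifHCInOctets at t1, ifSpeed in bps)
ports = {
    "carrier-A/ds3": (1_200_000_000, 2_800_000_000, 45_000_000),
    "carrier-B/oc3": (9_000_000_000, 9_900_000_000, 155_000_000),
}

for name, (t0, t1, speed) in ports.items():
    u = utilization(t0, t1, INTERVAL, speed)
    flag = "  <-- saturated" if u > 0.9 else ""
    print(f"{name}: {u:6.1%} of {speed / 1e6:.0f} Mbps{flag}")

# The same calculation applied to the switch trunk/backplane counters
# would give the total FABRIC saturation figure.
```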
Karl Denninger wrote:

I'd love to see an exchange which PUBLISHED the "saturation rates" at each *PORT* to the world, identifying the carrier and speed of connection at each PORT. Also, in the same breath, they would need to publish the total FABRIC capacity as well as its saturation.
Only one problem: *many* customers (ISPs) *demand* NDAs. :(
Sadly, I know of NO exchange currently in operation that subscribes to these operating rules and policies.
See above. We used to.... :\
On Wed, May 20, 1998 at 02:20:07PM -0500, Richard Irving wrote:
I'd love to see an exchange which PUBLISHED the "saturation rates" at each *PORT* to the world, identifying the carrier and speed of connection at each PORT. Also, in the same breath, they would need to publish the total FABRIC capacity as well as its saturation.
Only one problem: *many* customers (ISPs) *demand* NDAs. :(
That's why rules for an exchange are important :-)
Sadly, I know of NO exchange currently in operation that subscribes to these operating rules and policies.
See above. We used to.... :\
Some folks need to start publicizing the names of carriers which demand this.

--
Karl Denninger (karl@MCS.Net)| MCSNet - Serving Chicagoland and Wisconsin
http://www.mcs.net/          | T1's from $600 monthly / All Lines K56Flex/DOV
                             | NEW! Corporate ISDN Prices dropped by up to 50%!
Voice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS
Fax:   [+1 312 803-4929]     | *SPAMBLOCK* Technology now included at no cost
On Wed, 20 May 1998, Karl Denninger wrote:
Part of the problem with "public exchanges" is that they get congested. But the REASONS for that congestion are not, I believe, that well understood.
Some of the reasons are well understood. One is that when providers do not upgrade the bandwidth of their pipes into the NAP, packets coming towards them through the NAP get dropped. Thus they create congestion on all these flows.

Another reason is that NAP architectures do not scale well past a certain point. A NAP attempts to flatten the interconnect hierarchy into a fully meshed fabric, but when the NAP traffic grows beyond the ability of a specific full-mesh technology or device, congestion occurs. For instance, 10 Mbps and 100 Mbps Ethernet and 100 Mbps FDDI are all examples of full-mesh technologies, but when the traffic exceeds their capacity, congestion occurs. The Digital Gigaswitch is a device that can handle a much higher traffic flow, but when all the interface slots are full on a single box, congestion occurs. We can interconnect multiple devices and/or multiple technologies, but then there is no longer a full mesh, and much manual labor is involved in adjusting things like trunk capacity. At this point we appear to be attempting to collapse the entire Internet into an exchange point fabric, which is doomed to failure.

In some ways, private interconnects appear to be a superior technique as long as both providers keep them upgraded to handle the traffic flows. The ideal exchange point for private interconnects does not have a shared fabric, merely shared power and HVAC in a building with lots of fiber ingress and no restrictions on cross-connecting.

But even private interconnects can run into a scaling issue: if we attempt to interconnect every pair of providers through a private interconnect, we are attempting to create a full mesh, which is not technically feasible. Thus we are led to a solution in which the largest number of the smallest providers use fully meshed fabrics to interconnect, and the larger providers manually build a full mesh between themselves. This reduces the problem to one of interconnecting the many fully meshed exchange point fabrics with the larger full-mesh fabric created by the large providers and their private interconnects.
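The full-mesh arithmetic behind this argument is worth spelling out: n providers pairwise interconnected need n(n-1)/2 links, so the link count grows quadratically, while a shared fabric needs only n ports. A minimal sketch (the provider counts are arbitrary examples):

```python
# Why pairwise private interconnects cannot scale: a full mesh of n
# providers needs n*(n-1)/2 links, while a shared exchange fabric
# needs only n ports (until the fabric itself saturates).

def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 50, 200):  # arbitrary provider counts for illustration
    print(f"{n:4d} providers: {full_mesh_links(n):6d} private links "
          f"vs {n:4d} exchange ports")
```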
Can we just build faster exchanges? Sure. Will it solve the problem? Not if carriers don't provision fast enough circuits into them!
Karl is right. Faster exchanges may move various limits and bottlenecks around but cannot solve the underlying problem. For instance, a faster exchange merely raises the traffic level at which a mid-size provider MUST go to private interconnects.
If you're seeing poor performance between <X> and <Y> at an exchange, is it due to the exchange fabric's poor performance, or is one of <X> or <Y> under-provisioned into that fabric? Has either of those carriers DELIBERATELY (or through negligence) failed to provide adequate connectivity to the exchange?
This is the same problem of bandwidth upgrades that I started my message with. But it is also the heart of the issue of interconnecting the single national mesh created by private interconnects with the many smaller meshes created by the exchange points. There is a conflict of interest when a major Internet backbone owns the exchange points because they can attempt to deflect criticism of their connections to the exchange points by pointing out that they are attempting to upgrade the bandwidth capability of the exchange point. They fail to mention that growing the exchange point is like the labors of Sisyphus and cannot succeed.
Likewise, if the *EXCHANGE* operator was negligent (or just unable to keep up with demand) we could hold THEIR feet to the fire as operators.
Sadly, I know of NO exchange currently in operation that subscribes to these operating rules and policies.
Not enough people really understand how the network mesh works to hold anyone's feet to the fire. It doesn't matter whether you collapse portions of the mesh into an exchange point or into a Gigaswitch; it's still the same mesh. And you can even take the opposite tactic and expand an exchange point mesh nationally, but the mesh still has to handle the same traffic levels. It is essentially a juggling game where you interconnect various technologies that might be able to handle the traffic flows in a given region of the mesh and hope that you don't drop too many balls.

--
Michael Dillon - Internet & ISP Consulting
Memra Communications Inc. - E-mail: michael@memra.com
http://www.memra.com - *check out the new name & new website*
On Wed, May 20, 1998 at 12:57:18PM -0700, Michael Dillon wrote:
adjusting things like trunk capacity. At this point we appear to be attempting to collapse the entire Internet into an exchange point fabric which is doomed to failure. In some ways, private interconnects appear to
Am I _still_ the only one banging on the "Geographical Locality Of Reference" drum?

OF _COURSE_ you're going to have trouble if you try to stuff half the Internet through a router in the basement of a parking garage in Pennsauken.

The problem is that the majors want to sell all the hoses at (ISP) retail, and they want to backhaul all their traffic to some big exchange somewhere. Economies of scale don't scale.

If they could be happy selling OC-3s and OC-12s to local exchanges, and let _them_ sell the damned connectivity at "ISP retail", all the local traffic would stay _local_, and I strongly suspect that the rest of the big exchanges might start _working_.

Cheers,
-- jra
--
Jay R. Ashworth                                        jra@baylink.com
Member of the Technical Staff      Unsolicited Commercial Emailers Sued
The Suncoast Freenet               "Two words: Darth Doogie." -- Jason Colby,
Tampa Bay, Florida                   on alt.fan.heinlein
+1 813 790 7592                    Managing Editor, Top Of The Key sports e-zine
                                   http://www.totk.com
If they could be happy selling OC-3s and OC-12s to local exchanges, and let _them_ sell the damned connectivity at "ISP retail", all the local traffic would stay _local_, and I strongly suspect that the rest of the big exchanges might start _working_.
That means I'm buying second-tier bandwidth from a local exchange... or in essence just another ISP who sells only to other ISPs. The upstream portion doesn't seem to matter unless all the local ISPs use that exchange exclusively.

Brian
On Wed, May 20, 1998 at 05:17:17PM -0400, Brian Horvitz wrote:
If they could be happy selling OC-3s and OC-12s to local exchanges, and let _them_ sell the damned connectivity at "ISP retail", all the local traffic would stay _local_, and I strongly suspect that the rest of the big exchanges might start _working_.
That means I'm buying second-tier bandwidth from a local exchange... or in essence just another ISP who sells only to other ISPs. The upstream portion doesn't seem to matter unless all the local ISPs use that exchange exclusively.
I do so wish we could get over the "tier" fixation.

If I start the Tampa Bay Internet Exchange, let's say, and I haul in OC-3 links from the 5 top backbones, and DS-3s to the 4 NAPs, I can then (very likely) a) resell bandwidth to local ISPs for quite a bit less than the backbones could sell them a local drop, which would b) be quintuply redundant in case of feed failure, and c) unload all the cross-provider traffic from the NAPs, and indeed, the backbone itself.

This worked perfectly well with Usenet topology, until the commercial wonks started screwing it up.

In fact, I could operate the exchange as a co-op, _owned_ by all the local providers.

Except for the backbone operators, whose best interests is such a scheme _not_ in?

(And please note here: just because I _could_ oversubscribe the uplinks doesn't mean I _have_ to.)

Cheers,
-- jra
--
Jay R. Ashworth                                        jra@baylink.com
Member of the Technical Staff      Unsolicited Commercial Emailers Sued
The Suncoast Freenet               "Two words: Darth Doogie." -- Jason Colby,
Tampa Bay, Florida                   on alt.fan.heinlein
+1 813 790 7592                    Managing Editor, Top Of The Key sports e-zine
                                   http://www.totk.com
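Jay's oversubscription aside can be quantified as the ratio of aggregate customer-facing capacity to uplink capacity. A minimal sketch using his stated uplinks (5 OC-3s and 4 DS-3s); the customer port mix is purely hypothetical:

```python
# Oversubscription ratio for the hypothetical Tampa Bay exchange:
# aggregate customer-facing capacity / aggregate uplink capacity.
# Uplinks follow Jay's example (5 x OC-3, 4 x DS-3); the customer
# mix is invented. Rates are rounded approximations.

OC3 = 155_000_000  # bps (nominal)
DS3 = 45_000_000
T1  = 1_544_000

uplinks   = 5 * OC3 + 4 * DS3    # ~955 Mbps upstream
customers = 30 * DS3 + 200 * T1  # hypothetical local ISP ports

print(f"uplink capacity:   {uplinks / 1e6:7.1f} Mbps")
print(f"customer capacity: {customers / 1e6:7.1f} Mbps")
print(f"oversubscription:  {customers / uplinks:.2f}:1")  # <= 1.00 means none
```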
At 17:17 -0400 5/20/98, Jay R. Ashworth wrote:
If I start the Tampa Bay Internet Exchange, let's say, and I haul in OC-3 links from the 5 top backbones, and DS-3s to the 4 NAPs, I can then (very likely) a) resell bandwidth to local ISPs for quite a bit less than the backbones could sell them a local drop, which would b) be quintuply redundant in case of feed failure, and c) unload all the cross-provider traffic from the NAPs, and indeed, the backbone itself.
No one is stopping you. No one is stopping SAVVIS, either, which is doing similar work in the major metro areas. Differences do exist, of course, primarily the fact that you wouldn't be interconnecting such arrangements located in different areas.
In fact, I could operate the exchange as a co-op, _owned_ by all the local providers.
That has already happened in a number of rural areas and some metro areas (Colorado Co-op?). The problem traditionally has been getting small competing providers to cooperate.
Except for the backbone operators, whose best interests is such a scheme _not_ in?
If you are telling me that MCI et al. won't sell you DS-3 connections, you might have a point. I doubt that is the case. As for redundancy, five connections through a single physical location do not constitute a redundant network.

Jim Browne                                       jbrowne@jbrowne.com
"Lesson: PC's have a `keyboard lock' switch, and it works."
  - Kevin Brick, after RMA'ing a motherboard with a "bad keyboard connector"
On Wed, 20 May 1998, Jim Browne wrote:
In fact, I could operate the exchange as a co-op, _owned_ by all the local providers.
That has already happened in a number of rural areas and some metro areas (Colorado Co-op?). The problem traditionally has been getting small competing providers to cooperate.
It has also happened in the UK, where both of the two exchanges are co-ops owned by their members. Both the LINX (http://www.linx.org) in London and MaNAP (http://www.manap.org) in Manchester are run along the lines of clubs, with operational guidelines laid down by their members. Both are growing rapidly, act to keep local traffic local, and have ample capacity. Costs to members are low, and the benefit returned, especially at the LINX (which is considerably older), far exceeds the cost to each member.

But if the UK experience is anything to go by, if the exchange is to succeed it must be neutral, and in particular it must not sell bandwidth. At both UK exchanges, the members have consistently rejected any sale of bandwidth by the exchange or any use of the exchange for sale of transit.

In order for this model to work, someone else must provide the colocation facilities. The LINX is based at Telehouse Europe, a large purpose-built building which is the hub of the UK telecommunications system. The LINX itself is a non-profit co-op; Telehouse is run for a profit (and is very profitable indeed). MaNAP is based at the University of Manchester, in a facility that used to house the university's supercomputer. Venture capital has funded a new colocation facility several hundred yards away (Telecity), and MaNAP will shortly expand into the new facility, linking its two halves by fiber optics.

In both cases the exchanges apply pressure on the colocation facility operators to improve the quality of their offerings. Splitting MaNAP between Telecity and the University puts pressure on both suppliers to reduce costs and improve service.
From our experience, what is needed for a successful exchange is:

* a local market of decent size (there are about 16 million people in the Liverpool-Manchester-Leeds corridor)
* an exchange organized as a co-op, one that guarantees that it will not compete with its members
* openness; it must not just be an "us against them" grouping of little guys - the larger ISPs must feel comfortable in joining
* technical competence
* good colocation facilities with round-the-clock security, 24x7 access, and preferably 24x7 remote hands

Counter-intuitively, it appears that the more independent the exchange is of the colocation facility, the more successful the colocation facility is likely to be as an investment.

--
Jim Dixon  VBCnet GB Ltd       http://www.vbc.net
tel +44 117 929 1316           fax +44 117 927 2015
I do so wish we could get over the "tier" fixation.
If I start the Tampa Bay Internet Exchange, let's say, and I haul in OC-3 links from the 5 top backbones, and DS-3s to the 4 NAPs, I can then (very likely) a) resell bandwidth to local ISPs for quite a bit less than the backbones could sell them a local drop, which would b) be quintuply redundant in case of feed failure, and c) unload all the cross-provider traffic from the NAPs, and indeed, the backbone itself.
I'm not disagreeing with anything here, but the "tier" thing is a real concern, especially for the marketing weasels at the smaller companies. The network construction is quite sound.
This worked perfectly well with Usenet topology, until the commercial wonks started screwing it up.
In fact, I could operate the exchange as a co-op, _owned_ by all the local providers.
This is the best I've heard yet. A non-profit co-op run by any interested local providers would be just a fantastic idea. The reason I brought up the whole tier issue is that if this becomes a commercial entity then it loses its effectiveness.
Except for the backbone operators, whose best interests is such a scheme _not_ in?
(And please note here: just because I _could_ oversubscribe the uplinks doesn't mean I _have_ to.)
Right... see above.

Brian
participants (8)

- Brian Horvitz
- Jay R. Ashworth
- Jim Browne
- Jim Dixon
- Karl Denninger
- Michael Dillon
- Pickett, David
- Richard Irving