Wayne,

You'd probably want at least a panel, if not a wrestling pit. You'd certainly want a set of translators available, as a lot of folks looking at the problem seem to use the same terms to mean slightly different things. As an example, "traffic exchange" can be used as a synonym for "peering," or it can be a broader term that includes what happens when one party hands another packets and pays it for transit.

As an example, Equinix IBX centers are put together essentially as dark fiber exchanges--most of the traffic is carried by fiber (or copper) cross-connects between one customer's cage and another's. You could say that those cross-connects make up the bulk of the exchange fabric. There is an ATM switch available for those who need to do a lot of aggregation or want a device in the middle to do some policing, but even that is seen as a way of doing entity-to-entity cross-connects. Some would say that our model doesn't include a "public exchange architecture" at all, but is a way of doing private traffic exchange in shared space. Others would say that we are a public exchange, but one which lacks some of the characteristics of a shared-medium exchange.

In either case, Mark's point about scaling interconnect bandwidth is a key question. On one hand you have Tier-2 ISPs who want Fast Ethernet-based systems, because they don't really have an immediate need for anything faster; on the other, you have backbone-to-backbone traffic that is rapidly moving to the point where the only thing that will make sense is to trade a lambda. It's hard to build a public exchange that is a good entry point for a Tier-2 and also meets the needs of the backbones. Using multiple different exchange methods to handle the different needs is one way around the problem, but it comes at a cost in gear, support, and network engineering. The inertia in existing traffic exchange mechanisms is also high enough that what tends to happen is that new connections take new forms while the old mechanisms aren't taken out of service at any speed--which again has a cost in gear, support, and network engineering.

Anyone else want to jump into this particular wrestling pit? I'd be very happy to hear what other folks are thinking along these lines.

regards,
Ted Hardie
Equinix
(Not speaking for the company)
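[Editor's aside: the Fast Ethernet vs. lambda mismatch Ted describes can be made concrete with some back-of-the-envelope arithmetic. The Python sketch below uses hypothetical traffic figures (a Tier-2 pushing roughly 150 Mbps, a backbone pair trading roughly 20 Gbps), nominal port rates, and an assumed 70% usable load per port; none of these numbers come from the thread.]

  # Rough sketch (hypothetical traffic levels, nominal port rates) of why one
  # exchange port type rarely fits both a Tier-2 and a large backbone pair.
  import math

  PORT_MBPS = {
      "FastE": 100,          # Fast Ethernet port on a shared fabric
      "GigE": 1000,          # Gigabit Ethernet port
      "OC-48 lambda": 2488,  # one OC-48 wavelength on a private interconnect
  }

  def ports_needed(traffic_mbps, port_mbps, usable_fraction=0.7):
      # Assume only ~70% of a port is usable before congestion gets painful.
      return math.ceil(traffic_mbps / (port_mbps * usable_fraction))

  for label, traffic in [("Tier-2 ISP, ~150 Mbps", 150),
                         ("backbone pair, ~20 Gbps", 20000)]:
      counts = {name: ports_needed(traffic, rate)
                for name, rate in PORT_MBPS.items()}
      print(label, "->", counts)

The Tier-2 fits comfortably on a few Fast Ethernet ports, while the backbone pair would need hundreds of them but only a handful of lambdas--hence the pressure to run multiple exchange methods in parallel.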
This kind of topic has the makings of a good presentation for the WDC nanog...
Any takers?
No public exchange architecture can hope to cope with the massive amounts of traffic being exchanged between the larger backbone networks. Public exchanges are good entry points for new networks while they build their customer base and traffic levels. At some point private interconnects must take over in order for a company to continue to provide the level of connectivity and service that their customers expect.
The next operational issue that I foresee is the effective scaling of private interconnect bandwidth (especially with the lack of real port density on a certain router vendor's product).
Mark
dhudes@hudes.org wrote:
Isn't the push to MAE-ATM? For better or worse -- the FDDI-switch MAEs can't hope to keep up when people are building OC-48 cross-country backbones. All that traffic goes someplace, and it becomes expensive to buy multiple ports on the FDDI switch and pay for each of them.
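[Editor's aside: the arithmetic behind that complaint is simple enough to sketch; the figures below are nominal line rates, not measured throughput.]

  # One OC-48 backbone pipe vs. one FDDI port, nominal line rates only.
  OC48_MBPS = 2488.32
  FDDI_MBPS = 100.0

  # Draining an OC-48's worth of traffic through a FDDI-based MAE would take
  # on the order of 25 ports, each carrying its own recurring port fee.
  print(round(OC48_MBPS / FDDI_MBPS))  # ~25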
----------------------------------------------------------------------
Wayne Bouchard                            [Imagine Your      ]
web@typo.org                              [Company Name Here ]
Network Engineer
http://www.typo.org/~web/resume.html
----------------------------------------------------------------------