Re: The future of NAPs & IXPs
All is well but you missed one of the most critical issues with the current IXPs: lack of scalability. The private point-to-point interconnects are at least as fast as backbones. Fixing IXP scalability issues requires a somewhat radical departure from the current router architecture, such as what is being done by terabit router vendors. In other words, even if multi-party IXPs are more cost-effective, they are currently (and for the near-term future) unable to handle the load. Also, the number of interconnects (IXPs or direct) cannot be large because of the flap-amplification properties of inter-backbone connections.
(BTW, O(5) can be an arbitrarily large fixed number, simply speaking :)
--vadim
William B. Norton <wbn@equinix.com> wrote:
For what it's worth... I just finished a paper that highlights the trade-offs between the direct circuit interconnect model and the exchange point interconnection model for ISPs. The paper discusses the operational and financial models (taking into account circuit costs, cost of exchange participation, cost of dark fiber, etc.) and the implications of these strategies across the # of interconnection participants and the bandwidth utilization between the participants.
To cut to the chase, the major points from the paper:
1) For ISP interconnection, direct circuit interconnection is financially attractive for low #s of connections (O(5)) of relatively low bandwidth (DS-3/OC-3).
2) As the bandwidth and # of interconnections grow, the exchange point interconnection model proves much more scalable for two reasons. First, as bandwidth grows between participants, ISPs are able to aggregate interconnection traffic over an increasingly large pipe back to their cloud, yielding potentially significant economies of scale. The direct circuit interconnection model does not provide for this aggregation, since the pipes are destined to different places.
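[Editor's note: the crossover Norton describes in points 1) and 2) can be sketched with a toy cost model. All prices below are hypothetical placeholders chosen only to illustrate the shape of the trade-off; they are not figures from the paper or real 1999 circuit pricing.]

```python
# Toy sketch (hypothetical numbers): N direct peering circuits versus a
# single exchange-point connection with per-peer cross-connects.

def direct_cost(n_peers, circuit_monthly=8000):
    """Direct model: each peer needs its own dedicated circuit."""
    return n_peers * circuit_monthly

def exchange_cost(n_peers, port_monthly=35000, xconn_monthly=500):
    """Exchange model: one large aggregated port back to the cloud,
    plus a cheap cross-connect per peer inside the facility."""
    return port_monthly + n_peers * xconn_monthly

for n in (2, 4, 5, 10, 20):
    d, e = direct_cost(n), exchange_cost(n)
    winner = "direct" if d < e else "exchange"
    print(f"{n:2d} peers: direct ${d:>7,}  exchange ${e:>7,}  -> {winner}")
```

With these made-up inputs the direct model wins for a handful of peers and the exchange model wins beyond that, which is the qualitative O(5) crossover claimed above; the actual breakeven point depends entirely on real circuit, port, and cross-connect pricing.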
At 11:18 AM 4/19/99 -0700, Vadim Antonov wrote:
All is well but you missed one of the most critical issues with the current IXPs: lack of scalability. The private point-to-point interconnects are at least as fast as backbones. Fixing IXP scalability issues requires a somewhat radical departure from the current router architecture, such as what is being done by terabit router vendors.
In both the direct circuit interconnection model and the exchange-based interconnection model, point-to-point interconnection can be accomplished with at least equal scalability. A private cross-connect (a piece of fiber) within an exchange can be driven at the same speed as a piece of fiber that travels many miles under the ground. (I think you inferred that there was a switch involved in the model. If so, I agree: there are alternative ways to interconnect within an exchange (switch vs. terabit routing technology, etc.), each with different characteristics and scalability issues. I'm comparing interconnection environments apples to apples.)
----- snip -----
(BTW, O(5) can be an arbitrarily large fixed number, simply speaking :)
OK - I'll restate; about, ~, roughly, and in the neighborhood of 5 ;-)
--vadim
William B. Norton <wbn@equinix.com> wrote:
For what it's worth... I just finished a paper that highlights the trade-offs between the direct circuit interconnect model and the exchange point interconnection model for ISPs. The paper discusses the operational and financial models (taking into account circuit costs, cost of exchange participation, cost of dark fiber, etc.) and the implications of these strategies across the # of interconnection participants and the bandwidth utilization between the participants.
To cut to the chase, the major points from the paper:
1) For ISP interconnection, direct circuit interconnection is financially attractive for low #s of connections (O(5)) of relatively low bandwidth (DS-3/OC-3).
2) As the bandwidth and # of interconnections grow, the exchange point interconnection model proves much more scalable for two reasons. First, as bandwidth grows between participants, ISPs are able to aggregate interconnection traffic over an increasingly large pipe back to their cloud, yielding potentially significant economies of scale. The direct circuit interconnection model does not provide for this aggregation, since the pipes are destined to different places.
----------------------------------------------------------------
William B. Norton <wbn@equinix.com>          +1 650.298.0400 x2225
Equinix                          Director of Business Development
From a traffic engineering point of view, I suspect the direct circuit interconnection (private peering) model to be considerably simpler to deal with than the exchange-based model.
In the direct circuit case, if the interconnect pipe does not have the oomph to satisfy the performance characteristics of the peering traffic, the problem can be relatively easily detected (percentage of packet loss) and resolved between the two parties (buy more, or fatter, pipe).

In the exchange case, assume the exchange box connects an OC-12 link from each of 4 providers. If provider A is experiencing a problem with provider D, it could be due to a problem with the pipe and other gear associated with provider D, but just as likely it may be due to too much concurrent traffic from providers B and C toward D. Such overloading due to [temporal] traffic aggregation can be pretty tricky to identify [particularly since provider A is unlikely to have access to the traffic profiles/logs of providers B and C] and even trickier to figure out what to do about.

Regards,
John Leong
---------------------------------------------------------
Bell Labs Research          johnleong@research.bell-labs.com
4995 Patrick Henry Dr.      Tel: 408-567-4459
Santa Clara, CA 95054       Fax: 408-567-4448
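[Editor's note: Leong's aggregation point can be illustrated with a toy tail-drop model. The traffic figures are hypothetical; only the OC-12 line rate (~622 Mb/s) is real.]

```python
# Toy illustration of the exchange-case diagnosis problem: provider D's
# single OC-12 exchange port carries the *sum* of A, B, and C's traffic
# toward D, so A can see packet loss toward D even though A's own
# offered load never changed. All offered-load numbers are made up.

OC12 = 622  # Mb/s, nominal OC-12 line rate

def loss_toward_d(offered):
    """offered: dict mapping provider name -> Mb/s sent toward D's port.
    Returns the fraction of traffic dropped at the port (tail-drop model)."""
    total = sum(offered.values())
    if total <= OC12:
        return 0.0
    return (total - OC12) / total

# A alone is nowhere near D's port capacity: no loss.
print(loss_toward_d({"A": 150}))                      # 0.0
# Add B's and C's concurrent traffic: the port overflows and everyone
# toward D sees loss, including A, whose own load is unchanged.
print(loss_toward_d({"A": 150, "B": 300, "C": 400}))  # > 0
```

From A's vantage point the two situations look identical (loss toward D), which is exactly why A would need B's and C's traffic profiles, which it is unlikely to have, to tell them apart.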
participants (3)
- John Leong
- Vadim Antonov
- William B. Norton