MAE-EAST Moving? From Tysons Corner to Reston, VA.
Isn't the push to MAE-ATM? For better or worse -- the FDDI switch MAE can't hope to keep up when people are building OC-48 cross-country backbones. All that traffic goes someplace. It becomes expensive to have multiple ports on the FDDI switch, paying for each one. On Wed, 14 Jun 2000, Christian Nielsen wrote:
On Wed, 14 Jun 2000, Vinny India wrote:
Has anyone else heard this?
I think we have all heard it is going away. Maybe this is the start.
No public exchange architecture can hope to cope with the massive amounts of traffic being exchanged between the larger backbone networks. Public exchanges are good entry points for new networks while they build their customer base and traffic levels. At some point private interconnects must take over in order for a company to continue to provide the level of connectivity and service that their customers expect. The next operational issue that I foresee is the effective scaling of private interconnect bandwidth (especially with the lack of real port density on a certain router vendor's product). Mark dhudes@hudes.org wrote:
Isn't the push to MAE-ATM? For better or worse -- the FDDI switch MAE can't hope to keep up when people are building OC-48 cross-country backbones. All that traffic goes someplace. It becomes expensive to have multiple ports on the FDDI switch, paying for each one.
This kind of topic has the makings of a good presentation for the WDC nanog... Any takers?
No public exchange architecture can hope to cope with the massive amounts of traffic being exchanged between the larger backbone networks. Public exchanges are good entry points for new networks while they build their customer base and traffic levels. At some point private interconnects must take over in order for a company to continue to provide the level of connectivity and service that their customers expect.
The next operational issue that I foresee is the effective scaling of private interconnect bandwidth (especially with the lack of real port density on a certain router vendor's product).
Mark
dhudes@hudes.org wrote:
Isn't the push to MAE-ATM? For better or worse -- the FDDI switch MAE can't hope to keep up when people are building OC-48 cross-country backbones. All that traffic goes someplace. It becomes expensive to have multiple ports on the FDDI switch, paying for each one.
---------------------------------------------------------------------- Wayne Bouchard [Imagine Your ] web@typo.org [Company Name Here] Network Engineer http://www.typo.org/~web/resume.html ----------------------------------------------------------------------
I'd happily be part of a panel discussion. I doubt you will get very many takers, though, since it is the perfect opportunity for overzealous and frustrated engineers to take pot shots at a few well-intentioned people :( Mark
----- Original Message ----- From: "Wayne Bouchard" <web@typo.org> To: "Mark Tripod" <mark@exodus.net> Cc: <dhudes@hudes.org>; <nanog@merit.edu> Sent: Wednesday, June 14, 2000 3:28 PM Subject: Re: MAE-EAST Moving? From Tysons Corner to Reston, VA.
| This kind of topic has the makings of a good presentation for the WDC nanog...
| Any takers?
| > No public exchange architecture can hope to cope with the massive amounts of traffic being exchanged between the larger backbone networks. Public exchanges are good entry points for new networks while they build their customer base and traffic levels. At some point private interconnects must take over in order for a company to continue to provide the level of connectivity and service that their customers expect.
| > The next operational issue that I foresee is the effective scaling of private interconnect bandwidth (especially with the lack of real port density on a certain router vendor's product).
| > Mark
Wayne, You'd probably want at least a panel, if not a wrestling pit. You'd certainly want a set of translators available, as a lot of folks looking at the problem seem to use the same terms to mean slightly different things. As an example, "traffic exchange" can be used as synonymous with "peering", or it can be a broader term that includes what happens when one party hands another packets and pays it for transit.
Equinix IBX centers, for example, are put together essentially as dark fiber exchanges--most of the traffic is carried by fiber (or copper) cross-connects between one customer's cage and another's. You could say that those cross-connects make up the bulk of the exchange fabric. There is an ATM switch available for those who need to do a lot of aggregation or want a device in the middle to do some policing, but even that is seen as a way of doing entity-to-entity cross-connects. Some would say that our model doesn't include a "public exchange architecture" at all, but is a way of doing private traffic exchange in shared space. Others would say that we are a public exchange, but one which lacks some of the characteristics of a shared-medium exchange.
In either case, Mark's point about scaling interconnect bandwidth is a key question. On one hand you have Tier-2 ISPs who want Fast Ethernet-based systems, because they don't really have an immediate need to go for anything faster; on the other, you have backbone-to-backbone traffic that is rapidly moving to the point where the only thing that will make sense is to trade a lambda. It's hard to have a public exchange that is a good entry point for a Tier-2 and that also meets the needs of the backbones. Using multiple different exchange methods to handle the different needs is one way around the problem, but it comes at a cost in gear, support, and network engineering. The inertia in existing traffic exchange mechanisms is also high enough that what tends to happen is that new connections take new forms but the old mechanisms aren't taken out of service at any speed--which again has a cost in gear, support, and network engineering.
Anyone else want to jump in this particular wrestling pit? I'd be very happy to hear what other folks are thinking along these lines. regards, Ted Hardie Equinix (Not speaking for the company)
This kind of topic has the makings of a good presentation for the WDC nanog...
Any takers?
No public exchange architecture can hope to cope with the massive amounts of traffic being exchanged between the larger backbone networks. Public exchanges are good entry points for new networks while they build their customer base and traffic levels. At some point private interconnects must take over in order for a company to continue to provide the level of connectivity and service that their customers expect.
The next operational issue that I foresee is the effective scaling of private interconnect bandwidth (especially with the lack of real port density on a certain router vendor's product).
Mark
dhudes@hudes.org wrote:
Isn't the push to MAE-ATM? For better or worse -- the FDDI switch MAE can't hope to keep up when people are building OC-48 cross-country backbones. All that traffic goes someplace. It becomes expensive to have multiple ports on the FDDI switch, paying for each one.
---------------------------------------------------------------------- Wayne Bouchard [Imagine Your ] web@typo.org [Company Name Here] Network Engineer http://www.typo.org/~web/resume.html ----------------------------------------------------------------------
On Wed, 14 Jun 2000 dhudes@hudes.org wrote:
Isn't the push to MAE-ATM? For better or worse -- the FDDI switch MAE can't hope to keep up when people are building OC-48 cross-country backbones. All that traffic goes someplace. It becomes expensive to have multiple ports on the FDDI switch, paying for each one.
Are there any other suggestions, other than ATM? Here in Sweden the exchange point is being replaced/extended using DPT/SRP (currently OC-12, to become OC-48 when Cisco can deliver hardware). I have serious doubts that this technology is the answer, though; the rings become fairly large and I doubt it'll be efficient in the long run (even though it is better than FDDI at current traffic levels). Gigabit Ethernet sounds nice, but the MTU of 1500 is really restricting that technology. I have heard rumours that 10-gigabit Ethernet will have at least a 4000+ byte MTU, probably 9000+, which might make it viable for a shared-medium exchange point, as the hardware will probably be fairly cheap and widely supported by many vendors. A shared-medium exchange point that works well ought to be the most economical technology, since you can talk to several peers using one interface instead of having to set up a separate interface for each peering you want to establish. Isn't the lack of good technology here perceived as a threat to Internet growth? -- Mikael Abrahamsson email: swmike@swm.pp.se
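A rough way to see the economics of the shared-medium argument above is to count router ports. The peer counts in this Python sketch are arbitrary examples, not figures from the thread; it only illustrates that a full mesh of private interconnects grows quadratically while a shared fabric grows linearly.

    # Sketch: ports consumed when n participants all exchange traffic with each other.
    # The values of n are arbitrary illustrations.

    def ports_private_mesh(n):
        # Every bilateral private interconnect uses one port on each end,
        # so each of the n participants burns n - 1 ports.
        return n * (n - 1)

    def ports_shared_medium(n):
        # One port per participant into the shared fabric (FDDI, GigE, a DPT ring, ...).
        return n

    for n in (4, 8, 16, 32, 64):
        print(f"{n:3d} peers: {ports_private_mesh(n):5d} ports as a full mesh, "
              f"{ports_shared_medium(n):3d} ports on a shared medium")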
At 10:58 PM 6/14/00 +0200, Mikael Abrahamsson wrote:
On Wed, 14 Jun 2000 dhudes@hudes.org wrote:
Isn't the push to MAE-ATM ?
snip
Are there any other suggestions, other than ATM?
Cross-connects a la the PAIX & Equinix models. Fewer opportunities for provisioning delays and/or backhoe trauma. A cross-connect inventory is much easier to manage, troubleshoot during an outage, and grow into larger amounts of capacity than a mix of circuits ordered from ever-changing / merging carriers. Just think about how many people-hours are wasted in the direct circuit process if a merger gets in the way and one side of the peering suddenly has to order capacity over a carrier not in your facility because their MSA is in the way... hmm. Better yet, how many folks get pulled into conference calls trying to troubleshoot circuits when they have gone down... This expense is rarely calculated in cost model projections.... Cheers, -Ren
There are still hidden costs associated with PNI at PAIX-type facilities. Have you ever tried to disconnect a PNI or CNI after you've upgraded to a higher port size? It is not as easy as it might seem. The upside to telco-based private interconnects is that they allow for greater network diversity in metropolitan areas where more than one interconnect location exists. The real problem that I see is: how do you realistically expect to dump an OC-48's worth of IP traffic onto another backbone in one place without effectively destroying their network? It continues to amaze me how everyone wants to sell settlement-based peering or restricted transit when their networks are nowhere near the size, capacity-wise, to offer that service. Mark Lauren F. Nowlin wrote:
Are there any other suggestions, other than ATM?
Cross-connects a la the PAIX & Equinix models.
Fewer opportunities for provisioning delays and/or backhoe trauma.
A cross-connect inventory is much easier to manage, troubleshoot during an outage, and grow into larger amounts of capacity than a mix of circuits ordered from ever-changing / merging carriers. Just think about how many people-hours are wasted in the direct circuit process if a merger gets in the way and one side of the peering suddenly has to order capacity over a carrier not in your facility because their MSA is in the way... hmm.
Better yet, how many folks get pulled into conference calls trying to troubleshoot circuits when they have gone down... This expense is rarely calculated in cost model projections....
Cheers, -Ren
Hrmm... Seems to me that a shared medium (a la GigE or whatever), or a VC-based thing (a la ATM, but at OC-12 or more), would be the best interconnection method for networks, considering the bursty flows that are inherent in the Internet. For example, Randy, would you prefer a channelized OC-48 with an OC-3 to each of 16 providers, or an ATM OC-48 interface (leaving overhead out of this argument for the moment) with VCs to 16 providers? I believe the ATM would make better use of the available bandwidth (a toy model of this follows after the quoted message below)... Some people may deem the FDDIs a failure, but they've served the net in a reasonably good way for a long time... They are just too small now. On Wed, 14 Jun 2000, Randy Bush wrote:
Are there any other suggestions, other than ATM? Cross-connects a la the PAIX & Equinix models.
i have been suggesting a DACS for many years. then all one needs is one channelized oc48 and it's fat city.
randy
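To put some numbers behind the channelized-vs-VC question above, here is a toy Python model. The per-peer loads are invented bursty samples, purely to show that fixed OC-3 channels clip a burst at ~155 Mb/s while a statistically multiplexed OC-48 lets a burst borrow idle capacity.

    # Toy comparison of fixed channelization vs statistical multiplexing.
    # Capacities are in Mb/s; the offered loads are made-up bursty samples.
    import random

    OC48 = 2488   # total interface capacity
    OC3 = 155     # per-channel capacity when the OC-48 is channelized
    PEERS = 16

    random.seed(1)
    # Each peer mostly idles around 40 Mb/s but occasionally bursts to 400 Mb/s.
    offered = [400 if random.random() < 0.2 else 40 for _ in range(PEERS)]

    # Channelized: each peer is hard-limited to its own OC-3, idle channels go to waste.
    carried_channelized = sum(min(load, OC3) for load in offered)

    # ATM VCs (or any stat-mux fabric): all peers share the whole OC-48.
    carried_statmux = min(sum(offered), OC48)

    print("offered load:        ", sum(offered), "Mb/s")
    print("carried, 16 x OC-3:  ", carried_channelized, "Mb/s")
    print("carried, shared VCs: ", carried_statmux, "Mb/s")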
Do you have a favorite DACS vendor? Sean probably has started compiling stats...if I remember right, Alcatel is one to avoid... Adi
i have been suggesting a DACS for many years. then all one needs is one channelized oc48 and it's fat city.
randy
i have been suggesting a DACS for many years. then all one needs is one channelized oc48 and it's fat city.
As passive DWDM becomes dirt cheap, running a loop to a redundant optical (wavelength) cross-connect has advantages: you can bilaterally upgrade your peering line cards without having to inform / be bothered by the exchange point (remember MAE-ATM). I can't see why people would want to put in less than an OC-3. n x OC-3 + passive DWDM is (I believe) cheaper than channelized OC-48 (per bit per second) for many values of n (i.e. spreading the DWDM cost across many OC-x's) - and this will last until n x parallel OC-192 is insufficient for a bilateral cross-connect. By the time anyone listens to your (good) DACS idea, 2.4 Gb/s (total) won't be enough. -- Alex Bligh VP Core Network, Concentric Network Corporation (formerly GX Networks, Xara Networks)
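Alex's per-bit comparison can be written out explicitly. Every price in this Python sketch is an invented placeholder (the DWDM system, per-wave card, and DACS port costs are all made up); only the shape of the calculation, amortizing one passive DWDM system over n waves, comes from the message above.

    # Per-Mb/s cost of n x OC-3 over passive DWDM vs one channelized OC-48.
    # All prices below are made-up placeholders for illustration only.

    OC3_MBPS = 155
    OC48_MBPS = 2488

    def cost_per_mbps_dwdm(n_waves, dwdm_system_cost, per_wave_card_cost):
        # One DWDM system amortized across however many OC-3 waves you light.
        total = dwdm_system_cost + n_waves * per_wave_card_cost
        return total / (n_waves * OC3_MBPS)

    def cost_per_mbps_channelized_oc48(dacs_port_cost):
        return dacs_port_cost / OC48_MBPS

    for n in (2, 4, 8, 16):
        c = cost_per_mbps_dwdm(n, dwdm_system_cost=30000, per_wave_card_cost=3000)
        print(f"{n:2d} x OC-3 over passive DWDM: {c:6.1f} cost units per Mb/s")

    print(f"one channelized OC-48:       "
          f"{cost_per_mbps_channelized_oc48(150000):6.1f} cost units per Mb/s")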
i have been suggesting a DACS for many years. then all one needs is one channelized oc48 and it's fat city. As passive DWDM becomes dirt cheap, running a loop to a redundant optical (wavelength) cross-connect has advantages: you can bilaterally upgrade your peering line cards without having to inform / be bothered by the exchange point (remember MAE-ATM).
is adding new 'terminations' not a bit more of a bother?
I can't see why people would want to put in less than an OC-3.
when you want a lot of peers. but the self-provisioning would be way cool. lunch or dinner wednesday? randy
Randy,
is adding new 'terminations' not a bit more of a bother?
Less so if, when you put the DWDM unit in, you pre-install OC-X cards and pre-fiber them to your DWDM unit (maybe some on OC-3, some on OC-12). Yours needs fewer visits until you fill your OC-48, then a big wait :-) Either strategy (DACS / optical cross-connect) is superior to private interconnect. And either strategy allows for economical distribution of the exchange point across the metro area (i.e. you take the loop to your own prem or favourite colo house). And either can take a connection made resilient beneath layer 2. And either is compatible with a 'multi-IXP-operator' model. All major wins if you are worried about running out of space or your IXP operator's quality going down the toilet. -- Alex Bligh VP Core Network, Concentric Network Corporation (formerly GX Networks, Xara Networks)
On Wed, 14 Jun 2000, Mikael Abrahamsson wrote:
On Wed, 14 Jun 2000 dhudes@hudes.org wrote:
Isn't the push to MAE-ATM ?
Are there any other suggestions, other than ATM?
[snip]
Gigabit ethernet sounds nice but the MTU of 1500 is really restricting that technology.
While there are certainly shortcomings to using GbE as a public exchange infrastructure, I fail to see how a 1500-byte MTU has anything to do with it. In every network I have ever seen, there have very, very rarely been any packets larger than 1500 bytes. The only case that I can think of where that becomes important is in an MPLS exchange model, where adding the label to large packets would cause fragmentation or broken TCP sessions for the PMTUD-challenged. Brandon Ross 404-522-5400 VP Engineering, NetRail http://www.netrail.net AIM: BrandonNR ICQ: 2269442 Read RFC 2644! Stop Smurf attacks! Configure your router interfaces to block directed broadcasts. See http://www.quadrunner.com/~chuegen/smurf.cgi for details.
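Brandon's MPLS case is easy to quantify. The 4-byte figure is the size of a single MPLS label entry; the rest is just arithmetic in a small Python sketch with illustrative values.

    # A full-size packet from a 1500-MTU host no longer fits once a label is pushed.

    EXCHANGE_MTU = 1500   # e.g. a plain GigE exchange fabric
    MPLS_SHIM = 4         # one MPLS label entry, in bytes

    packet = 1500         # what a host on a 1500-MTU LAN will happily emit
    labeled = packet + MPLS_SHIM

    if labeled > EXCHANGE_MTU:
        print(f"{packet} + {MPLS_SHIM} = {labeled} bytes > {EXCHANGE_MTU}-byte MTU")
        print("-> fragment it, or drop it silently if DF is set and the ICMP")
        print("   'fragmentation needed' never reaches the PMTUD-challenged sender.")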
Has anyone seen a statistically significant percentage of traffic with MTUs higher than 1500? Stats that I see pretty much show 99% of traffic below 1500 bytes. Bora
----- Original Message ----- From: "Brandon Ross" <bross@netrail.net> To: <nanog@merit.edu> Sent: Wednesday, June 14, 2000 7:35 PM Subject: Re: exchange point media (was: Re: MAE-EAST Moving? ...)
On Wed, 14 Jun 2000, Mikael Abrahamsson wrote:
On Wed, 14 Jun 2000 dhudes@hudes.org wrote:
Isn't the push to MAE-ATM ?
Are there any other suggestions, other than ATM?
[snip]
Gigabit ethernet sounds nice but the MTU of 1500 is really restricting that technology.
While there are certainly shortcomings to using GbE as a public exchange infrastructure, I fail to see how a 1500 byte MTU has anything to do with it. In every network I have ever seen, there have very, very rarely been any packets larger than 1500 bytes.
The only case that I can think of where that becomes important is in an MPLS exchange model, where adding the label to large packets would cause fragmentation or broken TCP sessions for the PMTUD-challenged.
Brandon Ross 404-522-5400 VP Engineering, NetRail http://www.netrail.net AIM: BrandonNR ICQ: 2269442 Read RFC 2644! Stop Smurf attacks! Configure your router interfaces to block directed broadcasts. See http://www.quadrunner.com/~chuegen/smurf.cgi for details.
On Wed, Jun 14, 2000 at 08:46:09PM -0700, Bora Akyol wrote:
Has anyone seen a statistically significant percentage of traffic with MTUs higher than 1500? Stats that I see pretty much show 99% of traffic below 1500 bytes.
Please be aware that any traffic passing through FDDI XPs using DEC Gigaswitches is biased: these switches break path MTU discovery. It wouldn't surprise me if this issue is just another item keeping the Internet MTU low.
Bora
-- Jeffrey Haas - Merit RSng project - jeffhaas@merit.edu
On Wed, 14 Jun 2000, Brandon Ross wrote:
While there are certainly shortcomings to using GbE as a public exchange infrastructure, I fail to see how a 1500 byte MTU has anything to do with it. In every network I have ever seen, there have very, very rarely been any packets larger than 1500 bytes.
One of the major arguments against GigE I've heard is the lower MTU. Yes, currently there are not many packets larger than 1500 bytes, due to end systems, but if we limit the infrastructure then there will never be a larger MTU here. ATM/FDDI/POS and even Token Ring (to mix apples with oranges) all support larger MTUs, and future standards probably will as well. Therefore I believe that any shared medium used for exchanging traffic should have the same capabilities. Larger packets mean fewer forwarding/routing decisions per second for the same megabit rate of traffic. Think ahead 3-5 years: do we really want to stick with the 1500 MTU just because this limitation is built into the backbone infrastructure? I have received a few emails stating that several hardware vendors support jumbo frames on GigE. The question whether Cisco does as well immediately pops up. So far nobody has mentioned their name as one that supports jumbo frames. -- Mikael Abrahamsson email: swmike@swm.pp.se
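The forwarding-rate point is just division; the 1 Gb/s figure in this Python sketch is an arbitrary example rate, and 1500/4470/9000 are common Ethernet, FDDI, and jumbo-frame MTU values.

    # Packets per second needed to fill a link with full-size packets, per MTU.

    LINK_BPS = 1_000_000_000   # example: 1 Gb/s

    for mtu in (1500, 4470, 9000):
        pps = LINK_BPS / (mtu * 8)
        print(f"MTU {mtu:5d}: ~{pps:10,.0f} packets/sec to fill the link")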
On Wed, 14 Jun 2000, Brandon Ross wrote:
While there are certainly shortcomings to using GbE as a public exchange infrastructure, I fail to see how a 1500 byte MTU has anything to do with it. In every network I have ever seen, there have very, very rarely been any packets larger than 1500 bytes.
One of the major arguments against GigE I've heard is the lower MTU. Yes, currently there are not many packets larger than 1500 bytes, due to end systems, but if we limit the infrastructure then there will never be a larger MTU here. ATM/FDDI/POS and even Token Ring (to mix apples with oranges) all support larger MTUs, and future standards probably will as well. Therefore I believe that any shared medium used for exchanging traffic should have the same capabilities. Larger packets mean fewer forwarding/routing decisions per second for the same megabit rate of traffic.
This was one of my concerns when GE was released. It is likely that many desktop users will be connected by Fast Ethernet now and probably GE later on. Many web servers are already connected by GE. So there are instances where there COULD have been a continuous path at 4k but instead, there is 1.5.
My motivation in wanting a higher MTU is efficiency. One of the concerns that customers were bringing to me on occasion was not necessarily bandwidth OR latency, but rather the composite: data per unit of latency. They wanted to be able to send larger amounts of data without having it fragmented (particularly important in the multiplayer real-time gaming bit). As a side effect (albeit negligible), if there were retransmissions of a packet, there might be 1 instead of 3 with the current 1500 MTU. Some gain there as well. (Of course, given that the average packet size on the net today is less than 75K, the relative benefit of increasing the size breaks off quickly.)
The utility of a higher MTU is still dubious in the practical realm, but I agree that I would like to see the infrastructure move to 4 or 9k.
---------------------------------------------------------------------- Wayne Bouchard [Imagine Your ] web@typo.org [Company Name Here] Network Engineer http://www.typo.org/~web/resume.html ----------------------------------------------------------------------
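Wayne's "one retransmission instead of three" works out as follows; the 4096-byte application write is an example value, and the 40-byte figure assumes plain IPv4 + TCP headers with no options.

    # Segments needed for one application write at different MTUs, and the cost
    # of losing one of them. 4096 bytes is an example message size.
    import math

    MESSAGE = 4096
    IP_TCP_HEADERS = 40   # IPv4 + TCP, no options

    for mtu in (1500, 4470, 9000):
        mss = mtu - IP_TCP_HEADERS
        segments = math.ceil(MESSAGE / mss)
        resend = min(mss, MESSAGE)
        print(f"MTU {mtu:5d}: {segments} segment(s); one loss resends ~{resend} of {MESSAGE} bytes")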
On Thu, 15 Jun 2000, Mikael Abrahamsson wrote:
I have received a few emails stating that several hardware vendors support jumbo frames on gigE. The question whether cisco does as well immediately pops up. So far nobody has mentioned their name as one that does support jumbo frames.
-- Mikael Abrahamsson email: swmike@swm.pp.se
The other question is: what is the actual capability of the GigE cards in a Cisco box? I have been told (I've not played with GigE on a Cisco yet and, for that matter, neither has the person who told me) that it is limited to around 400 Mb/s. --- John Fraizer EnterZone, Inc
The other question is: what is the actual capability of the GigE cards in a Cisco box? I have been told (I've not played with GigE on a Cisco yet and, for that matter, neither has the person who told me) that it is limited to around 400 Mb/s.
On a 7xxx Cisco, yes, it's limited to 400Mb/s, due to limitations on the interface to the backplane (or VIP2 in the 75xx). On the Cisco 12000, the backplane interface is 2.5Gb/s, so the GigE cards can run at full speed. Interestingly, no-one seems to have mentioned LINX yet in this discussion. LINX have been offering GigE connections to their switch for over a year now as a trial, and in the last few months they are now "live". There are apparently some LINX members pushing a few 100Mb/s through their GigE connection. E.g. see http://stats.sjc.above.net/traffic/lhr/linx-1.html Simon -- Simon Lockhart | Tel: +44 (0)1737 839676 Internet Engineering Manager | Fax: +44 (0)1737 839516 BBC Internet Services | Email: Simon.Lockhart@bbc.co.uk Kingswood Warren,Tadworth,Surrey,UK | URL: http://support.bbc.co.uk/
CMH-IX has been offering GigE connections since its inception on 23JUL99 as well. We (AS13944) are the only peer using GigE, but it's still available. More information about CMH-IX is available at http://www.cmh-ix.net/ --- John Fraizer EnterZone, Inc On Thu, 15 Jun 2000, Simon Lockhart wrote:
Interestingly, no-one seems to have mentioned LINX yet in this discussion. LINX have been offering GigE connections to their switch for over a year now as a trial, and in the last few months they are now "live". There are apparently some LINX members pushing a few 100Mb/s through their GigE connection.
E.g. see http://stats.sjc.above.net/traffic/lhr/linx-1.html
Simon -- Simon Lockhart | Tel: +44 (0)1737 839676 Internet Engineering Manager | Fax: +44 (0)1737 839516 BBC Internet Services | Email: Simon.Lockhart@bbc.co.uk Kingswood Warren,Tadworth,Surrey,UK | URL: http://support.bbc.co.uk/
On a 7xxx Cisco, yes, it's limited to 400Mb/s, due to limitations on the interface to the backplane (or VIP2 in the 75xx).
On the Cisco 12000, the backplane interface is 2.5Gb/s, so the GigE cards can run at full speed.
Both have other limitations: packets per second, etc.
On Thu, 15 Jun 2000, Simon Lockhart wrote:
Interestingly, no-one seems to have mentioned LINX yet in this discussion. LINX have been offering GigE connections to their switch for over a year now as a trial, and in the last few months they are now "live". There are apparently some LINX members pushing a few 100Mb/s through their GigE connection.
I am aware of LINX and I have no doubt that it'll work to exchange traffic over GigE. I do believe that GigE and 10GigE with jumbo frames could be accepted as an upgrade from the existing FDDI shared-medium exchanges. -- Mikael Abrahamsson email: swmike@swm.pp.se
For all the money that's already been spent getting stuff into the garage, I'd have to wonder if just buying more space there would not be less disruptive and more efficient. -- A host is a host from coast to coast.................wb8foz@nrk.com & no one will talk to a host that's close........[v].(301) 56-LINUX Unless the host (that isn't close).........................pob 1433 is busy, hung or dead....................................20915-1433
participants (20)
- Alex
- Alex Bligh
- Bora Akyol
- Brandon Ross
- Christian Nielsen
- David Lesher
- dhudes@hudes.org
- hardie@equinix.com
- James Howard
- Jeff Haas
- John Fraizer
- Lauren F. Nowlin
- mark@exodus.net
- Mikael Abrahamsson
- Neil J. McRae
- R.P. Aditya
- Randy Bush
- Simon Lockhart
- Vinny India
- Wayne Bouchard