Folks,

I know that many of the inter-exchange points are using FDDI and switched FDDI for inter-exchange traffic, in lieu (or anticipation) of ATM. I am wondering if anyone has considered using switched 100baseFx or similar technology for an inter-exchange. Given the relative newness of these interfaces on high-end routers, I would not be surprised to hear that no one is using them right now. But has anyone looked into using them? Is there a good reason to prefer switched FDDI over 100baseFx (which can do full-duplex, switched 100 Mb/s)?

<tex@isc.upenn.edu>
I know of plans for at least two exchanges that will use 100baseTx as a connection option. The only real problem is the backplane limit of the boxes that switch 100baseTx (e.g., the Cisco Catalyst 5000 at 1.2 Gb/s). This is only slightly better than the DEC GIGAswitch limit of 800 Mb/s.

Certainly the hardware to do switched ethernet is much cheaper than FDDI, and with full-duplex 100 Mb/s there is very little advantage to FDDI. (FDDI does do full duplex now, but I don't know who offers interfaces that support this.) FDDI does offer larger frame sizes, assuming you have FDDI at each end with no MTU-1500 links in between.

To me it seems like a big cost win to drop in a Catalyst 5000 with 4 slots at 24 switched ethernet ports/slot, or 12 switched fast ethernet ports/slot. Particularly since the router interfaces cost about 1/2 to 1/4 as much.

The exchanges with the most traffic are the ones based on the oldest technology, because there are always bugs.
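For what it's worth, a back-of-the-envelope sketch of why the backplane is the binding limit. This is a hypothetical worst case built only from the figures quoted here; real exchange traffic is nowhere near all-ports-at-line-rate:

    # A rough sketch of the oversubscription arithmetic. The port and
    # backplane figures are the ones quoted in this thread, not vendor specs.
    PORT_MBPS = 100        # switched fast ethernet, one direction
    SLOTS = 4
    PORTS_PER_SLOT = 12    # "12 switched fast ethernet/slot"
    BACKPLANE_MBPS = 1200  # Catalyst 5000 backplane as quoted above

    ports = SLOTS * PORTS_PER_SLOT
    worst_case = ports * PORT_MBPS   # every port sending at line rate at once
    print(f"{ports} ports offer up to {worst_case} Mb/s against a "
          f"{BACKPLANE_MBPS} Mb/s backplane: "
          f"{worst_case / BACKPLANE_MBPS:.0f}x oversubscribed")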
--
Jeremy Porter, Freeside Communications, Inc.   jerry@fc.net
PO BOX 80315, Austin, TX 78708 | 1-800-968-8750 | 512-339-6094
http://www.fc.net
I know of plans for at least two exchanges that will use 100baseTx as a connection option. The only real problem is the backplane limit of the boxes that switch 100baseTx (e.g., the Cisco Catalyst 5000 at 1.2 Gb/s). This is only slightly better than the DEC GIGAswitch limit of 800 Mb/s.
Certainly the hardware to do switched ethernet is much cheaper than FDDI, and with full-duplex 100 Mb/s there is very little advantage to FDDI. (FDDI does do full duplex now, but I don't know who offers interfaces that support this.) FDDI does offer larger frame sizes, assuming you have FDDI at each end with no MTU-1500 links in between.
Don't gloss over the MTU issue. Fragmenting packets is a serious performance hit. Also, DS3 customers want an HSSI MTU end-to-end.
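To make the hit concrete, here is a small sketch, assuming plain IPv4 fragmentation with 20-byte headers and no options, of one HSSI-MTU datagram forced through an MTU-1500 hop:

    # Rough cost of fragmenting a HSSI-MTU datagram down to Ethernet MTU.
    IP_HEADER = 20
    HSSI_MTU = 4470    # the MTU DS3/HSSI customers expect end-to-end
    ETHER_MTU = 1500

    payload = HSSI_MTU - IP_HEADER
    per_frag = (ETHER_MTU - IP_HEADER) // 8 * 8   # fragment data is 8-byte aligned
    frags = -(-payload // per_frag)               # ceiling division
    extra = frags * IP_HEADER + payload - HSSI_MTU

    print(f"1 datagram becomes {frags} fragments: {extra} extra header bytes "
          f"and {frags}x the per-packet forwarding work")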
To me it seems like a big cost win to drop in a Catalyst 5000 with 4 slots at 24 switched ethernet ports/slot, or 12 switched fast ethernet ports/slot. Particularly since the router interfaces cost about 1/2 to 1/4 as much.
Compared to the other costs of running an ISP, it is just noise...
The exchanges with the most traffic are the ones based on the oldest technology, because there are always bugs.
??!! Exchanges with the most traffic are the ones with the new technology because they have no choice. Erik
In message <199604250621.BAA09394@freeside.fc.net>, Jeremy Porter writes:
I know of plans for at least two exchanges that will use 100baseTx as a connection option. The only real problem is the backplane limit of the boxes that switch 100baseTx (e.g., the Cisco Catalyst 5000 at 1.2 Gb/s). This is only slightly better than the DEC GIGAswitch limit of 800 Mb/s.
Certainly the hardware to do switched ethernet is much cheaper than FDDI, and with full-duplex 100 Mb/s there is very little advantage to FDDI. (FDDI does do full duplex now, but I don't know who offers interfaces that support this.) FDDI does offer larger frame sizes, assuming you have FDDI at each end with no MTU-1500 links in between.
To me it seems like a big cost win to drop in a Catalyst 5000 with 4 slots at 24 switched ethernet ports/slot, or 12 switched fast ethernet ports/slot. Particularly since the router interfaces cost about 1/2 to 1/4 as much.
The exchanges with the most traffic are the ones based on the oldest technology, because there are always bugs.
For many products, there is a big difference between how many interfaces physically fit in the card cage and how many actually work under a fairly heavy load. Have you tested a 12-port switched 100 Mb/s ethernet card under load? The DEC GIGAswitch has been tested under load and has held up so far. The Cisco 5000 may bridge better than it routes, since there are no route changes to deal with, but I'd be a bit worried about deploying it without stress testing.

The problem for the major exchanges may soon be what to do when the GIGAswitch runs out of bandwidth.

Curtis
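For a sense of what a stress test would need to offer, a quick sketch of the worst-case frame rate on such a 12-port card, assuming standard Ethernet framing overheads:

    # Worst-case frame rate for a 12-port fast ethernet card. Standard
    # Ethernet overheads assumed: 64-byte minimum frame, 8-byte preamble,
    # 12-byte inter-frame gap.
    LINK_BPS = 100_000_000
    WIRE_BYTES = 64 + 8 + 12          # minimum frame plus per-frame overhead

    pps_per_port = LINK_BPS / (WIRE_BYTES * 8)
    print(f"{pps_per_port:,.0f} frames/s per port; "
          f"{12 * pps_per_port:,.0f} frames/s across a 12-port card")
    # ~148,810 per port, ~1.8 million per card -- the kind of load worth
    # offering the box before trusting it at an exchange point.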
The problem for the major exchanges may soon be what to do when the gigaswitch runs out of bandwidth.
Curtis
There are rumors/rumblings afoot that we have an operational 600 Mb/s thingie which now has PCI interfaces in addition to the existing SBus, NuBus, and EISA bus cards. Now if we can just get infrastructure vendors to provide something that we can plug into, this would potentially make a reasonable ExchangeNG (tm). Imagine a LAN technology that would keep up with multiple incoming OC3s! (where is that gigabit LAN technology when you need it... :)

--bill
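A rough, aggregate-only check of how many OC3s that keeps up with, assuming ~155 Mb/s per OC3 and ignoring framing overhead and burstiness:

    # How far the rumored 600 Mb/s goes against incoming OC3 trunks.
    OC3_MBPS = 155
    FABRIC_MBPS = 600
    print(f"{FABRIC_MBPS // OC3_MBPS} OC3s fit, "
          f"{FABRIC_MBPS % OC3_MBPS} Mb/s to spare")   # 3 OC3s, 135 Mb/s spare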
On Thu, 25 Apr 1996 bmanning@isi.edu wrote:
There are rumors/rumblings afoot that we have an operational 600 Mb/s thingie which now has PCI interfaces in addition to the existing SBus, NuBus, and EISA bus cards. Now if we can just get infrastructure vendors to provide something that we can plug into, this would potentially make a reasonable ExchangeNG (tm). Imagine a LAN technology that would keep up with multiple incoming OC3s! (where is that gigabit LAN technology when you need it... :)
Yell at the router/switch/xxx vendor of your choice. :) -dorian
uh, not use them?? why we insist on having a contest to see how much technology we can melt down into slag is beyond me. if people are exchanging that much traffic, there are better, more straightforward ways to do it. -mo
The problem for the major exchanges may soon be what to do when the GIGAswitch runs out of bandwidth.
Right. I don't believe the 800 Mb/s claim for the GIGAswitch's backplane, since I thought it was 3.5 Gb/s or something. But either way, the bottleneck is going to be the 100 Mb/s port speed, and the ISP backhaul. Even if we had a 200-port GIGAswitch with a 200*100 Mb/s full crossbar backplane, we would soon reach the point where the individual 100 Mb/s ports were just too full. And if we had 1000 Mb/s Ethernet with a full crossbar switch in the middle, we'd discover that OC12 intercarrier backhaul is very difficult to get.

As I've said before, I believe that this is going to push us in the direction of more and more NAPs, so that everyone can do the hot potato routing thing as early and as often as possible. Many medium pipes will add up to the necessary aggregate bit rate, with some great cost in complexity compared to a few fat pipes.

Ceding the "core" to the people who own their own trenches, and who can therefore build out reasonable worldwide OC12 or OC48 nets, is another approach, but I'm not entirely comfortable with the transit rates they'd probably charge if their corporate officers knew they had a monopoly.

I've chosen to leave my usual dire predictions about the inevitability of ATM out of this particular message; you've all heard it before, anyway. There ought to be a business opportunity in here somewhere.
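A small sketch of that bottleneck using the usual link rates: even with 20 Gb/s of fabric, any backhaul past DS3 has already outgrown its 100 Mb/s attachment port:

    # Port-speed bottleneck: a non-blocking fabric doesn't help once a
    # single attachment port is the choke point. Usual SONET/DS3 rates.
    PORT_MBPS = 100                   # one port on the hypothetical crossbar
    FABRIC_MBPS = 200 * PORT_MBPS     # the 200 x 100 Mb/s full crossbar
    backhaul = {"DS3": 45, "OC3": 155, "OC12": 622}

    print(f"fabric capacity: {FABRIC_MBPS} Mb/s")
    for name, mbps in backhaul.items():
        verdict = "fits in" if mbps <= PORT_MBPS else "overruns"
        print(f"{name} ({mbps} Mb/s) {verdict} a single {PORT_MBPS} Mb/s port")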
we would soon reach the point where the individual 100 Mb/s ports were just too full.
Some of the larger providers (like MCI, Sprint, & AlterNet) are there (or nearly there) already. My understanding is that the providers that are pushing lots & lots of bits between them are going off & implementing a number of point-to-point links to offload traffic from the interconnects. I've heard that MCI & Sprint are putting up something like 6 T3s between themselves; that MCI & AlterNet are doing 4 T3s; and that Sprint & AlterNet are doing 2 T3s. There may be other folks doing similar things. [AlterNet has had a number of private peerings in place for years - with folks like NEARnet, BARRNet, Sesquinet, & THEnet.] Once these are in place, a fair amount of traffic should be removed from the major interconnects.

--asp@partan.com (Andrew Partan)
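Taking those rumored counts at face value, a quick sum of the capacity being moved off the exchanges (a T3 is ~45 Mb/s):

    # Summing the rumored private-peering links; counts are hearsay from
    # this thread, not confirmed figures.
    T3_MBPS = 45
    links = {"MCI-Sprint": 6, "MCI-AlterNet": 4, "Sprint-AlterNet": 2}

    for pair, count in links.items():
        print(f"{pair}: {count} x T3 = {count * T3_MBPS} Mb/s")
    print(f"total offloaded from the exchanges: "
          f"{sum(links.values()) * T3_MBPS} Mb/s")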
On Thu, 25 Apr 1996, Paul A Vixie wrote:
As I've said before, I believe that this is going to push us in the direction of more and more NAPs, so that everyone can do the hot potato routing thing as early and as often as possible. Many medium pipes will add up to the necessary aggregate bit rate, with some great cost in complexity compared to a few fat pipes.
Bingo! And this is precisely what is happening now, as regional exchanges are being set up all over the place in Utah, Pennsylvania, Texas, the Philippines, and so on. Not to mention the number of large ISPs that are moving towards things like national frame-relay networks of their own.

If you want to understand how this is all going to play out, study the shape of soap bubbles in a foam, the way to connect power systems in four cities arranged at the corners of a square using the shortest wires, and the distribution of market towns in any of the ancient civilizations.

Michael Dillon                    Voice: +1-604-546-8022
Memra Software Inc.               Fax: +1-604-546-3049
http://www.memra.com              E-mail: michael@memra.com
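The four-cities puzzle Michael mentions is the classic Steiner tree problem; a sketch of the comparison on a unit square, where two extra junction points beat both obvious layouts:

    # Shortest network connecting the four corners of a unit square.
    import math

    three_sides = 3.0              # run wire along three sides of the square
    diagonals = 2 * math.sqrt(2)   # an X through the center
    steiner = 1 + math.sqrt(3)     # two 120-degree junctions -- the minimum

    for name, length in [("three sides", three_sides),
                         ("X through center", diagonals),
                         ("Steiner tree", steiner)]:
        print(f"{name:16s} {length:.3f}")   # 3.000, 2.828, 2.732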
On Thu, 25 Apr 1996, Jeremy Porter wrote:
I know of plans for at least two exchanges that will use 100baseTx as a connection option. The only real problem is the backplane limit of the boxes that switch 100baseTx (e.g., the Cisco Catalyst 5000 at 1.2 Gb/s). This is only slightly better than the DEC GIGAswitch limit of 800 Mb/s.
The DEC GIGAswitch is actually a 3.6 Gb/s switch. --Ismat
participants (11)

- Andrew Partan
- bmanning@isi.edu
- Curtis Villamizar
- Dorian Kim
- Erik Sherk
- ipasha@sprintlink.net
- Jeremy Porter
- Jon 'tex' Boone
- Michael Dillon
- Mike O'Dell
- Paul A Vixie