RE: High Density Multimode Runs BCP?
Just in case some folks are wondering what we are talking about, here's a decent URL covering it: http://images.google.com/imgres?imgurl=http://www.tpub.com/neets/tm/30NVM053.GIF&imgrefurl=http://www.tpub.com/neets/tm/107-8.htm&h=387&w=397&sz=13&tbnid=gGUI7fKu6OwJ:&tbnh=116&tbnw=119&start=16&prev=/images%3Fq%3Dfiber%2Bribbon%2Bcable%26hl%3Den%26lr%3D%26c2coff%3D1

--
Martin Hannigan                 (c) 617-388-2663
VeriSign, Inc.                  (w) 703-948-7018
Network Engineer IV             Operations & Infrastructure
hannigan@verisign.com
-----Original Message-----
From: Scott McGrath [mailto:mcgrath@fas.harvard.edu]
Sent: Wednesday, January 26, 2005 10:44 PM
To: Hannigan, Martin
Cc: nanog@merit.edu
Subject: RE: High Density Multimode Runs BCP?
Hi, Martin
Yes, indeed, the ribbon cable. Though, due to the damage factor, I probably would not specify it again unless I could use innerduct to protect it: we had some machine room renovations done, and the construction workers managed to kink the underfloor runs as well as set off the Halon system several times...
The ribbon cables work well if they are adequately protected, and if the people in the machine room environment are skilled at handling fiber there should be no problems. If, however, J. Random Laborer has access, I would go with conventional armored runs.
Scott C. McGrath
On Wed, 26 Jan 2005, Hannigan, Martin wrote:
The ribbon cable?
--
Martin Hannigan                 (c) 617-388-2663
VeriSign, Inc.                  (w) 703-948-7018
Network Engineer IV             Operations & Infrastructure
hannigan@verisign.com
-----Original Message-----
From: Scott McGrath [mailto:mcgrath@fas.harvard.edu]
Sent: Wednesday, January 26, 2005 6:44 PM
To: Hannigan, Martin
Cc: Thor Lancelot Simon; nanog@merit.edu
Subject: RE: High Density Multimode Runs BCP?
Hi, Thor
We used it to create zone distribution points throughout our datacenters, which ran back to a central distribution point. This solution has been in place for almost 4 years. We have 10Gb SM Ethernet links traversing the datacenter which link to the campus distribution center.
The only downsides we have experienced are:

1 - Lead time in getting the component parts
2 - Easily damaged by careless contractors
3 - Somewhat higher than normal back reflection on poor terminations (see the note below)
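For context on item 3: back reflection is usually quoted as optical return loss. A quick sketch of the conversion is below; the power levels are purely illustrative assumptions, not measurements from this plant.

    import math

    def return_loss_db(p_incident_mw, p_reflected_mw):
        # Optical return loss in dB; a larger positive number means
        # less light is reflected back toward the transmitter.
        return -10 * math.log10(p_reflected_mw / p_incident_mw)

    # Illustrative values only: a clean connector might reflect ~0.01% of
    # the incident power, a poor termination a few tenths of a percent.
    print(f"{return_loss_db(1.0, 0.0001):.1f} dB")  # 40.0 dB
    print(f"{return_loss_db(1.0, 0.003):.1f} dB")   # ~25.2 dB

Poor terminations push the reflected fraction up, which is the "higher than normal back reflection" Scott describes.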
Scott C. McGrath
On Wed, 26 Jan 2005, Hannigan, Martin wrote:
-----Original Message-----
From: Thor Lancelot Simon [mailto:tls@netbsd.org]
Sent: Wednesday, January 26, 2005 3:17 PM
To: Hannigan, Martin; nanog@merit.edu
Subject: Re: High Density Multimode Runs BCP?
On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:

When running say 24 pairs of multi-mode across a datacenter, I have considered a few solutions, but am not sure what is common/best practice.

I assume multiplexing up to 10Gb (possibly two links thereof) and then back down is cost-prohibitive? That's probably the "best" practice.
I think he's talking physical plant. 200m should be fine. Consult your equipment for power levels and support distance.
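A rough way to sanity-check "200m should be fine" is a loss budget against the optics' power budget. A minimal sketch follows; the attenuation, connector, splice, and power-budget figures are generic assumptions for illustration, not values from this thread or any particular transceiver.

    # Minimal multimode loss-budget sketch -- all figures are illustrative
    # assumptions; check the transceiver datasheet and cable specs.
    length_km = 0.2             # 200 m run
    fiber_loss_db_per_km = 3.5  # typical 850 nm multimode attenuation
    connectors = 2              # one mated pair at each patch panel
    connector_loss_db = 0.75    # per mated pair (conservative)
    splices = 2
    splice_loss_db = 0.3        # per fusion splice

    link_loss = (length_km * fiber_loss_db_per_km
                 + connectors * connector_loss_db
                 + splices * splice_loss_db)

    power_budget_db = 7.5       # example only: Tx power minus Rx sensitivity

    print(f"estimated link loss: {link_loss:.2f} dB")          # 2.80 dB
    print(f"margin: {power_budget_db - link_loss:.2f} dB")     # 4.70 dB

With typical 850 nm multimode figures a 200 m run leaves several dB of margin, which is consistent with "should be fine" -- but the real check is against the specific equipment, as Martin says.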
Sure -- but given the cost of the new physical plant installation he's talking about, the fact that he seems to know the present maximum data rate for each physical link, and so forth, I think it does make sense to ask the question "is the right solution to simply be more economical with physical plant by multiplexing to a higher data rate"?
I've never used fibre ribbon, as advocated by someone else in this thread, and that does sound like a very clever space- and possibly cost-saving solution to the puzzle. But even so, spending tens of thousands of dollars to carry 24 discrete physical links hundreds of meters across a datacenter, each at what is, these days, not a particularly high data rate, may not be the best choice. There may well be some question about at which layer it makes sense to aggregate the links -- but to me, the question "is it really the best choice of design constraints to take aggregation/multiplexing off the table?" is a very substantial one here and not profitably avoided.

Tens of thousands? 24 strands x 100' @ $5 = $500. Fusion splicing is $25 per splice per strand, including termination. The 100m patch cords are $100.00. It's cheaper to bundle and splice.

How much does the mux cost?
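To lay the arithmetic out explicitly: only the per-unit prices above come from the thread; the splice and patch-cord counts in the sketch below are assumptions, and no mux price is assumed at all.

    # Back-of-the-envelope comparison using the figures quoted above.
    # The splice and patch-cord counts are assumptions for illustration;
    # the thread gives only per-unit prices.
    strands = 24

    # Option A: one 24-strand bundle, fusion-spliced at each end.
    bundle_cable = 500.00              # "24 strand x 100' @ $5 = $500"
    splice_total = strands * 2 * 25.00 # $25/splice/strand "including
                                       # termination", assuming both ends
    option_a = bundle_cable + splice_total

    # Option B: individual pre-terminated 100m patch cords at $100 each,
    # assuming one cord per link.
    option_b = strands * 100.00

    print(f"bundle and splice: ${option_a:,.2f}")   # $1,700.00
    print(f"individual cords:  ${option_b:,.2f}")   # $2,400.00

Either way the physical plant comes in well under "tens of thousands"; whether it beats multiplexing still depends on the price of the mux and optics at each end, which the thread doesn't give.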
Fiber ribbon doesn't "fit" in any long distance (7'+) distribution system, rich or poor, that I'm aware of. Racks, cabinets, et al. are not very conducive to it. The only application I've seen was IBM Fibre Channel.
Datacenters are sometimes permanent facilities, and it's better, IMHO, to make things more permanent with cross-connects than with aggregation. It enables you to make your cabinet cabling and your termination-area cabling almost permanent and maintenance-free, as well as giving you test, add, move, and drop. It's more cable, but less equipment to maintain and support, and it reduces failure points. It enhances security as well: you can't open the cabinet and just jack something in; you have to provision behind the locked term area.
I'd love to hear about a positive experience using ribbon cable inside a datacenter.
Thor