High Density Multimode Runs BCP?
I have a situation where I want to run Nx24 pairs of GE across a datacenter to several different customers. Runs are about 200 meters max.

When running, say, 24 pairs of multimode across a datacenter, I have considered a few solutions, but am not sure what is common/best practice.

a) Find/adapt a 24/48-strand inside-plant cable (either multimode or conditioned single mode) and connectorize the ends. Adv: clean, single, high-density cable runs. Dis: not sure whether such a beast exists in multimode; the whole cable has to be replaced/made redundant if one fiber dies and you need a critical restore; may need a breakout shelf.

b) Run 24 duplex MM cables of the proper lengths. Adv: easy to trace, color code, and understand; easy to replace/repair one cable should something untoward occur; can buy/stock pre-terminated cables of the proper length for easy restore. Dis: lots of cables, more riser space.

c) ??

----

So... is there an option C? Does a multimode beastie like (a) exist commonly? Is it generally more cost effective to terminate your own MM cables or to buy them pre-terminated?

Assume that each of these pairs is going to be used for something like 1000B-SX full duplex, and that these are all aggregated trunk links, so you can't take a single pair of 1000B-SX and break it out to 24xGE at the endpoints with a switch.

I priced up one of these runs at 100m and was seeing a list price in the ballpark of $2500-$3000 plenum, so I figured it was worth asking whether there is a better way when we're talking about N times that number. :)

Thanks in advance; I'm sure I just haven't had enough caffeine today.

DJ
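To make the trade-off a bit more concrete, here is a rough back-of-the-envelope sketch of options (a) and (b). The dollar figures are placeholders rather than quotes, and the 1000BASE-SX reach numbers are the commonly cited 802.3z values; check the optics datasheets against the fiber actually in the riser.

```python
# Back-of-the-envelope sketch for options (a) and (b).
# All dollar figures are made-up placeholders, NOT vendor quotes.

GE_LINKS = 24                  # aggregated trunk links in one customer bundle
STRANDS = GE_LINKS * 2         # 1000BASE-SX burns a fiber pair per link
RUN_LENGTH_M = 200             # worst-case run length from the post

# Commonly cited 1000BASE-SX reach limits (IEEE 802.3z) by multimode type.
sx_reach_m = {
    "62.5um, 160 MHz*km": 220,
    "62.5um, 200 MHz*km": 275,
    "50um,   500 MHz*km": 550,
}
for fiber, reach in sx_reach_m.items():
    verdict = "OK" if RUN_LENGTH_M <= reach else "too long"
    print(f"{fiber}: reach {reach} m -> {verdict} at {RUN_LENGTH_M} m")

# Option (b): 24 individual pre-terminated duplex runs.
DUPLEX_RUN_COST = 2500         # placeholder, lower end of the quoted ballpark
print("option (b) cable cost:", GE_LINKS * DUPLEX_RUN_COST)

# Option (a): one 48-strand inside-plant cable plus termination/breakout.
TRUNK_CABLE_COST = 6000        # placeholder for a 48-strand plenum trunk
BREAKOUT_SHELVES = 2 * 1500    # placeholder, one breakout shelf per end
print("option (a) cable cost:", TRUNK_CABLE_COST + BREAKOUT_SHELVES)
```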
Look into MPO cabling. MPO uses fiber ribbon cables, the most common of which is 6x2: six strands by two layers. Panduit has several solutions which use cartridges, so you get a cartridge with your desired termination type and run the MPO cable between the cartridges. This cabling, under another name, is also used for IBM mainframe channel connections.

Scott C. McGrath

On Tue, 25 Jan 2005, Deepak Jain wrote:
I have a situation where I want to run Nx24 pairs of GE across a datacenter to several different customers. Runs are about 200meters max.
When running say 24-pairs of multi-mode across a datacenter, I have considered a few solutions, but am not sure what is common/best practice.
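For a rough sense of what the MPO/cartridge approach works out to in part counts for a bundle like this, here is a small sketch. The 12-fiber trunk and 6-duplex-port cartridge figures are illustrative assumptions, not a specific Panduit part list.

```python
import math

# Sketch: MPO hardware counts for a 24-link GE bundle.
# Assumes 12-fiber MPO trunks and cartridges that break one 12-fiber MPO
# out to 6 duplex ports -- illustrative numbers, not a real part list.

GE_LINKS = 24
STRANDS = GE_LINKS * 2                      # 48 strands total
FIBERS_PER_MPO = 12
DUPLEX_PORTS_PER_CARTRIDGE = FIBERS_PER_MPO // 2

trunks = math.ceil(STRANDS / FIBERS_PER_MPO)                            # 4 trunks
cartridges_per_end = math.ceil(GE_LINKS / DUPLEX_PORTS_PER_CARTRIDGE)   # 4 per end

print(f"{STRANDS} strands -> {trunks} x {FIBERS_PER_MPO}-fiber MPO trunks")
print(f"{cartridges_per_end} breakout cartridges at each end")
```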
Speaking on Deep Background, the Press Secretary whispered:
Look into MPO cabling
MPO uses fiber ribbon cables, the most common of which is 6x2: six strands by two layers.
I've helped deploy/retrieve MPO at the recent IETF at the Hinckley Hilton here in DC. It's not mil-spec sturdy, but with reasonable care it does the job.

--
A host is a host from coast to coast.................wb8foz@nrk.com
& no one will talk to a host that's close........[v].(301) 56-LINUX
Unless the host (that isn't close).........................pob 1433
is busy, hung or dead....................................20915-1433
On Tue, Jan 25, 2005 at 07:23:17PM -0500, Deepak Jain wrote:
I have a situation where I want to run Nx24 pairs of GE across a datacenter to several different customers. Runs are about 200meters max.
When running say 24-pairs of multi-mode across a datacenter, I have considered a few solutions, but am not sure what is common/best practice.
I assume multiplexing up to 10Gb (possibly two links thereof) and then back down is cost-prohibitive? That's probably the "best" practice.

Thor
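As a quick sanity check on that suggestion, here is a sketch of how many 10G links it takes to carry the 24 GE trunks; the oversubscription ratio is an assumption, since it depends entirely on the traffic.

```python
import math

# Sketch: 10G uplinks needed to carry 24 aggregated GE trunks.
# The 1.5:1 oversubscription case is an assumption about the traffic mix.

GE_LINKS = 24
GE_RATE_GBPS = 1
UPLINK_RATE_GBPS = 10

offered_gbps = GE_LINKS * GE_RATE_GBPS
links_line_rate = math.ceil(offered_gbps / UPLINK_RATE_GBPS)          # 3
links_oversub = math.ceil(offered_gbps / (UPLINK_RATE_GBPS * 1.5))    # 2

print(f"{offered_gbps} Gbps offered -> {links_line_rate} x 10G at 1:1, "
      f"or {links_oversub} x 10G at 1.5:1 oversubscription")
```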
Thor Lancelot Simon wrote:
I assume multiplexing up to 10Gb (possibly two links thereof) and then back down is cost-prohibitive? That's probably the "best" practice.
Thor
It's best practice to put two new points of failure (mux + demux) in a 200m fiber run?

John
On Wed, Jan 26, 2005 at 09:17:44PM -0500, John Fraizer wrote:
I assume multiplexing up to 10Gb (possibly two links thereof) and then back down is cost-prohibitive? That's probably the "best" practice.
It's best practice to put two new points of failure (mux + demux) in a 200m fiber run?
Well, that depends. To begin with, it's not one run, it's 24 runs. Deepak described the cost of each of those runs as:
I priced up one of these runs at 100m, and I was seeing a list price in the ballpark of $2500-$3000 plenum. So I figured it was worth asking if there is a better way when we're talking about N times that number. :)
So, to take his lower estimate, 24 x $2500, we're talking about $60,000 worth of cable -- and all the bulk and management hassle of 48 strands of fibre for what is, in one sense, logically a single run. That still probably doesn't cover the cost of muxing it up and back down, but particularly when you consider that space for 48 strands isn't free either, it is certainly worth thinking about. I was a little surprised by the $2500/pair figure, but that's what he said.

Thor
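Spelling that arithmetic out, with the cable figure taken from Deepak's quoted list price and everything on the aggregation side left as an obvious placeholder to be replaced with real quotes:

```python
# Cable cost comes from Deepak's quoted list price; the aggregation-side
# figures are placeholders only -- plug in real quotes before deciding.

GE_LINKS = 24
COST_PER_DUPLEX_RUN = 2500           # lower end of the $2500-$3000 ballpark
cable_total = GE_LINKS * COST_PER_DUPLEX_RUN
print("24 individual runs:", cable_total)            # 60000

# Hypothetical alternative: a GE switch with 10G uplinks at each end,
# plus a few fiber pairs between them (3 pairs at 1:1, per the earlier sketch).
SWITCH_PER_END = 35000               # placeholder: chassis/line card + optics
PAIRS_NEEDED = 3
agg_total = 2 * SWITCH_PER_END + PAIRS_NEEDED * COST_PER_DUPLEX_RUN
print("aggregated alternative:", agg_total)          # 77500 with these placeholders
```

With these placeholder numbers the aggregated option comes out more expensive, consistent with the "probably doesn't cover" hedge above; the point is only that the comparison is worth running with real prices.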
a) Find/adapt a 24/48 thread inside-plant cable (either multimode, or condition single mode) and connectorize the ends. Adv: Clean, Single, high density cable runs, Dis: Not sure if such a beast exists in multimode, and the whole cable has to be replaced/made redundant if one fiber dies and you need a critical restore, may need a break out shelf.
We use multicore fibre cables in our datacentre and nodes. In Europe we can obtain these to order in specific sizes. If we need to bring these back to a central area, we use an ODF and then patch accordingly with fibre boxes in cabinets.

The multicores are in a rugged PVC-type plastic sheath [the same type of plastic/PVC that is used for gas piping in the streets here in Europe]. You have to do some serious hacking to damage this type of cable, and if you've ever been to Telehouse in London you'll know the type of situation that I mean.

This is the company we use in the UK to do a lot of this work: http://www.mainframecomms.co.uk/products_cables.html -- I can't recommend them highly enough.

You may also want to look at passive optical stuff that you can use to do CWDM cheaply.

Regards,
Neil.
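As a rough sketch of what passive CWDM buys in strand count: the wavelengths below are the common 8-channel subset of the ITU-T G.694.2 grid, and note that CWDM optics are generally single-mode parts, so this fits the conditioned/single-mode flavour of option (a) rather than 1000B-SX over multimode.

```python
import math

# Sketch: fiber pairs needed to carry 24 GE links over passive CWDM.
# Wavelengths are the common 8-channel subset of the ITU-T G.694.2 grid.
# Caveat: CWDM GBICs/SFPs are generally single-mode, so this assumes the
# single-mode variant of option (a), not 1000BASE-SX multimode.

CWDM_CHANNELS_NM = [1470, 1490, 1510, 1530, 1550, 1570, 1590, 1610]

GE_LINKS = 24
pairs_needed = math.ceil(GE_LINKS / len(CWDM_CHANNELS_NM))   # 3 pairs

print(f"{GE_LINKS} GE links over {len(CWDM_CHANNELS_NM)}-channel CWDM "
      f"-> {pairs_needed} fiber pairs")
```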
participants (6)
- David Lesher
- Deepak Jain
- John Fraizer
- Neil J. McRae
- Scott McGrath
- Thor Lancelot Simon