Data Center Wiring Standards
Heya folks,

I hope this is on-topic. I read the charter, and it falls somewhere along the fuzzy border, I think... Can anyone tell me the standard way to deal with patch panels, racks, and switches in a data center used for colocation? I have a sneaking suspicion that we're doing it in a fairly non-scalable way. (I am not responsible for the current method, and I think I'm glad to say that.) Strangely enough, I can find almost NO resources on this. I've spent the better part of two hours looking.

Right now, we have a rack filled with nothing but patch panels. We have some switches in another rack, and colocation customers scattered around other racks. When a new customer comes in, we run a long wire from their computer(s) and/or other device(s) to the patch panel. Then, from the appropriate block connectors on the back of the panel, we run another wire that terminates in an RJ-45 to plug into the switch.

Sounds bonkers, doesn't it?

My thoughts go like this: we put a patch panel in each rack. Each of these patch panels is permanently (more or less) wired to a patch panel in our main patch cabinet. So, essentially, what you've got is a main patch cabinet with a patch panel that corresponds to a patch panel in each other cabinet. Making connections is cinchy and only requires 3-6 foot off-the-shelf cables.

Does that sound more correct?

I talked to someone else in the office here, and they believe that they've seen it done with a switch in each cabinet, although they couldn't remember if there was a patch panel as well. If you're running 802.1Q trunks between a bunch of switches (no patch panels needed), I can see that working too, I suppose.

Any standards? Best practices? Suggestions? Resources, in the form of books, web pages, RFCs, or white papers?

Thanks!

Rick Kunkel
Hello Rick,
Does that sound more correct?
I talked to someone else in the office here, and they believe that they've seen it done with a switch in each cabinet, although they couldn't remember if there was a patch panel as well. If you're running 802.1Q trunks between a bunch of switches (no patch panels needed), I can see that working too, I suppose.
That's the best solution, I think. Fewer cables, less work for pre-cabling. Build 1 GigE or 2x1 GigE fiber uplinks. Perhaps wheels on the racks (then you can play Google). You have to check what switches you use in the racks and in the core; the number of VLANs supported is the main concern. Perhaps some HP ProCurve in the racks and some real core switches in the backbone. When you reach the 4096-VLAN limit, you could use VLAN-in-VLAN (Q-in-Q) or MPLS to grow further.

Kind regards,

Ingo Flaschberger
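To make that VLAN limit concrete: 802.1Q leaves only IDs 1-4094 usable, so if you dedicate one VLAN per customer that is the hard ceiling before Q-in-Q or MPLS is needed. Below is a minimal Python sketch of a per-customer VLAN allocator; this is an illustration only, not from any tool mentioned in the thread, and the class and method names are invented.

    class VlanAllocator:
        """Hand out 802.1Q VLAN IDs (1-4094) to customers and track exhaustion."""

        # IDs 0 and 4095 are reserved by 802.1Q, so only 4094 are usable.
        USABLE = range(1, 4095)

        def __init__(self):
            self.assigned = {}  # vlan_id -> customer name

        def allocate(self, customer):
            for vid in self.USABLE:
                if vid not in self.assigned:
                    self.assigned[vid] = customer
                    return vid
            # A flat 802.1Q design is out of IDs at this point;
            # Q-in-Q (802.1ad) or MPLS would be needed to keep growing.
            raise RuntimeError("802.1Q VLAN space exhausted")

        def release(self, vlan_id):
            self.assigned.pop(vlan_id, None)

    if __name__ == "__main__":
        pool = VlanAllocator()
        print(pool.allocate("customer-a"))  # -> 1
        print(pool.allocate("customer-b"))  # -> 2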
[ Disclaimer - my experience is as someone who has set up lots of racks and dealt with a number of colocation facilities and cabling contractors. However, I haven't ever run a colo. ]

On Fri, Sep 08, 2006 at 05:36:09PM -0700, Rick Kunkel wrote:
Can anyone tell me the standard way to deal with patch panels, racks, and switches in a data center used for colocation?
Right now, we have a rack filled with nothing but patch panels. We have some switches in another rack, and colocation customers scattered around other racks. When a new customer comes in, we run a long wire from their computer(s) and/or other device(s) to the patch panel. Then, from the appropriate block connectors on the back of the panel, we run another wire that terminates in a RJ-45 to plug into the switch.
This way of doing things *can* be done neatly in some cases - it really depends on how you have things set up, your size, and what your customers' needs are. For large carrier-neutral places like Equinix, Switch and Data, etc., where each customer usually has a small number of links coming into their cage, and things are pretty non-standard (i.e., customers have stuff other than a few ethernet cables going to their equipment), that's pretty much what they do - run a long cable through overhead cable trough or fiber tray, and terminate it in a patch panel in the customer's rack.
My thoughts go like this: We put a patch panel in each rack. Each of these patch panels is permanently (more or less) wired to a patch panel in our main patch cabinet. So, essentially what you've got is a main patch cabinet with a patch panel that corresponds to a patch panel in each other cabinet. Making connection is cinchy and only requires 3-6 foot off-the-shelf cables.
This is a better way to do it IF your customers have pretty standard needs. One facility I've worked at has 6 cables bundled together (not 25-pair cable, but similar - 6 Cat5 or Cat6 cables bundled within some sort of jacket), going into a patch panel. 25-pair or bundled cabling will make things neater, but usually costs more.

Obviously, be SUPER anal retentive about labelling, testing, running cables, etc., or it's not worth doing at all. Come up with a scheme for labelling (in our office, it's "a.b.c", where a is the rack number, b is the rack position, and c is the port number) and stick to it. Get a labeller designed for cables if you don't already have one (a Brady, industrial P-Touch, Panduit, or something similar). Make sure there is a standard way for everything, and document / enforce the standard. Someone has to be the cable n**i (does that count as a Godwin?) or things will get messy fast.

If you're doing a standard setup to each rack, hire someone to do it for you if you can afford it. It will be expensive, but probably worth it unless you're really good (and fast) at terminating cable. Either way, use (in the customer's rack) one of the patch panels that's modular, so you can put a different kind of connector in each slot. That gives you more flexibility later.

In terms of whether patch panels / switches should be mixed in the same rack, opinions differ. It's of course difficult to deal with terminating patch panels when there are also big fat switches in the same rack. I've usually done a mix anyway, but for your application, it might be better to alternate, running the connections sideways.

Invest in lots of cable management, the bigger, the better. I assume you already have cable management on these racks? I like the Panduit horizontal ones, and either the Panduit vertical ones or the CPI "MCS" ones. If you're doing a new buildout, or can start a new set of racks, put extra space between them and do 10" wide cable management sections (or bigger).

I can give you some suggestions in terms of vendors and cabling outfits, though most of the people I know of are in the Southern California area.
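As a small illustration of the "a.b.c" labelling scheme described above (rack, rack position, port), here is a Python sketch for generating and parsing such labels; the function names and zero-padded format are my own invention, not from the thread.

    def make_label(rack, position, port):
        """Build a cable label in the a.b.c form: rack, rack position, port."""
        return f"{rack:02d}.{position:02d}.{port:02d}"

    def parse_label(label):
        """Split a label back into its rack / position / port components."""
        rack, position, port = (int(part) for part in label.split("."))
        return {"rack": rack, "position": position, "port": port}

    if __name__ == "__main__":
        label = make_label(rack=3, position=42, port=7)
        print(label)               # 03.42.07
        print(parse_label(label))  # {'rack': 3, 'position': 42, 'port': 7}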
I talked to someone else in the office here, and they believe that they've seen it done with a switch in each cabinet, although they couldn't remember if there was a patch panel as well.
OK, so if most of your customers have a full rack or half rack, I would suggest not putting a switch in each rack. In that case, you should charge them a port fee for each uplink, which should encourage them to use their own networking equipment. Now if most of your customers are using less than half a rack, and aren't setting up their own network equipment, and you're managing everything for them, then you might want to put one 48-port switch (or two 24-port switches) in each individual rack, with two uplinks from some central aggregation switches to each. I really don't think you want more than 4-6 cables going to any one rack. Maybe you can clarify your typical customer setup?
Any standards? Best practices? Suggestions? Resources, in the form of books, web pages, RFCs, or white papers?
I think the best thing is just to look around as much as possible, and then see what works (and doesn't work) for you. I think some of the manufacturers of cable, cable management equipment and stuff may publish some standards / guidelines as well.

w
My thoughts go like this: We put a patch panel in each rack. Each of these patch panels is permanently (more or less) wired to a patch panel in our main patch cabinet. So, essentially what you've got is a main patch cabinet with a patch panel that corresponds to a patch panel in each other cabinet. Making connection is cinchy and only requires 3-6 foot off-the-shelf cables.
Does that sound more correct?
I talked to someone else in the office here, and they believe that they've seen it done with a switch in each cabinet, although they couldn't remember if there was a patch panel as well. If you're running 802.1Q trunks between a bunch of switches (no patch panels needed), I can see that working too, I suppose.
Any standards? Best practices? Suggestions? Resources, in the form of books, web pages, RFCs, or white papers?
There's a series of ISO standards for data cabling, but nothing is yet set in stone around datacentres. I think the issue of standards in datacentres was touched on here some time back? OK, a quick Google later: TIA-942, Telecommunications Infrastructure Standard for Data Centers, covers a lot of the details. It's pretty new and I don't know if it's fully ratified yet. I quote...

--8<--
Based on existing cabling standards, TIA-942 covers cabling distances, pathways and labeling requirements, but also touches upon site selection, demarcation points, building security and electrical considerations. As the first standard to specifically address data centres, TIA-942 is a valuable tool for the proper design, installation and management of data centre cabling. The standard provides specifications for pathways, spaces and cabling media, recognizing copper cabling, multi-mode and single-mode fiber, and 75-ohm coaxial cable.

However, much of TIA-942 deals with facility specifications. For each space within a data centre, the standard defines equipment planning and placement based on a hierarchical star topology for backbone and horizontal cabling. The standard also includes specifications for arranging equipment and racks in an alternating pattern to create "hot" and "cold" aisles, which helps airflow and cooling efficiency.

To assist in the design of a new data centre and to evaluate the reliability of an existing data centre, TIA-942 incorporates a tier classification, with each tier outlining guidelines for equipment, power, cooling and redundant components. These guidelines are then tied to expectations for the data centre to maintain service without interruption.
--8<--

The source URL for the above was http://www.networkcablingmag.ca/index.php?option=com_content&task=view&id=432&Itemid=2. You may like to see if you can track down a copy of the referenced standard.

From my personal POV, you have a couple of options depending on your switching infrastructure, required cabling density, and bandwidth requirements.

One way would be to have a decent switch at the top of each cabinet along with a fibre tie to your core patch / switching cabinet. All devices in that rack feed into the local switch, which could be VLANed as required to cater for iLO or any other IP management requirements. The uplink would be a trunk of 1000SX, 1000LX, multi-link trunk combinations of the same, or perhaps even 10 Gig fibre.

The other option would be to preconfigure each rack with a couple of rack units of fixed copper or fibre ties to a core cabinet and just patch things around as you need to. This is useful if you are in a situation where bringing as much as possible directly into your core switch is appropriate, and it's cheaper from a network hardware POV, if not from a structured cabling POV.

Good luck. I know what a prick it is to inherit someone else's shoddy cable work - I find myself accumulating lots of after-hours overtime, essentially ripping out everything and putting it all back _tidily_, and hoping that I don't overlook some undocumented 'feature'...

Mark.
Rick,

The organization and standards you are looking for are BICSI (http://www.bicsi.org/) and TIA/EIA-568 et al. for structured cabling design for low-voltage distribution.

The BICSI organization has training and certification for the RCDD (Registered Communications Distribution Designer). A BICSI article about data center design on their web site is http://www.bicsi.org/Content/Files/PDF/link2006/Kacperski.pdf.

TIA/EIA-568 (A, B, or however many revisions they are up to) discusses structured cabling design for UTP/STP/fiber/coax, including patch cables, single- and multi-pair UTP/STP/fiber patch panels, HVAC control, fire system control, and security systems.

John (ISDN) Lee
Ideally, each core router would go to two distribution-A switches (Cat 4900 or something similar). From both dist-A switches, you then go to two bigger distribution (dist-B) switches (Cat 6500, etc.), and from each 6500 to its own patch panel. Then from the two patch panels, run cables to access-level switches (2900s, etc.) in each rack / shelf. This way you have full redundancy in each shelf for your co-located / dedicated customers.

My .02 cents,

-Bill Sehmel

--
Bill Sehmel - bsehmel@HopOne.net -- 1-703-288-3081
Systems Administrator, HopOne Internet Corp. DCA2 NOC
Bandwidth & full range of carrier/web host colo + networking services:
http://www.hopone.net ASN 14361
Rick Kunkel <kunkel@w-link.net> writes:
Can anyone tell me the standard way to deal with patch panels, racks, and switches in a data center used for colocation?
Network Cabling Handbook by Chris Clark is a bit dated (5 years old), but probably should be on your bookshelf anyway, particularly since it is ridiculously cheap used or new on Amazon (I got my copy a couple of years ago after a friend tipped me off that they were on sale for $5.99 on clearance at Micro Center). It's mostly geared to the enterprise, but it does have a chapter on doing communication rooms, which is probably a good starting point. ISBN 0-07-213233-7.

Also, no substitute for visiting your competition and taking a survey of how others, particularly larger datacenters, are doing it. :)

---Rob
Rick Kunkel <kunkel@w-link.net> writes:
Can anyone tell me the standard way to deal with patch panels, racks, and switches in a data center used for colocation?
Network Cabling Handbook by Chris Clark is a bit dated (5 years old) but probably should be on your bookshelf anyway, particularly since it is ridiculously cheap used/new on Amazon (I got my copy a couple of years ago after a friend tipped me off that they were on sale for $5.99 on clearance at Micro Center). It's mostly geared to the enterprise but it does have a chapter on doing communication rooms which is probably a good starting point. ISBN 0-07-213233-7
Also, no substitute for visiting your competition and taking a survey of how others, particularly larger datacenters, are doing it. :)
Having seen so many different things over the years, I don't actually think there's any one particular right way to do it.

Is the data center carrier neutral? If so, that tends to lead to solutions where circuits need to be run point-to-point (whether physically or virtually).

Are customers expected to be requiring large amounts of bandwidth? If not, aggregation-based solutions may make more sense (such as putting a switch in each rack).

What's the smallest and largest customer footprint? If you're going to sell 5 racks to a customer, in a shared cage with doors and side panels, and the customer needs multiple GigE connections internally, do you want to try to solve that problem as part of your site strategy, or do you figure it out on a case-by-case basis?

Possible solutions are varied.

For a colo where they'll be buying your bandwidth, and nobody's using gigabits of it, for example, there's an excellent manageability argument to be made for running a (single, or pair of) gig uplink(s) to each cabinet and having a 24- or 48-port 1U switch in the cabinet. You will have a minimal amount of wiring, which makes problem resolution easier, and you can even do VLAN stuff to allow customers with equipment in different cabinets to have virtual private segments.

I've seen providers that put a 24-port patch panel in each cab and then ran it back to a central switching point, which is arguably more useful but eats up a lot of wiring, and you have a fundamental problem in that some cabs may be populated with colo'ed 1Us (so you hit the wall or have to add another panel) and others have a single customer with a bunch of goofy equipment, and they just want a link to their own router/firewall, so you only use 1/24th of the cable.

Facilities like Equinix probably don't have a lot of realistic options other than what they already do, given the sheer complexity of it all.

... JG

--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
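To put rough numbers on the trade-off described above, here is a small Python sketch comparing home-run cabling (a fully wired patch panel per cabinet) against a per-cabinet switch with a couple of uplinks. This is my own illustration, not from the thread; the cabinet count and port figures are hypothetical.

    def home_run_cables(cabinets, panel_ports=24):
        """Cable runs back to a central switching point if every cabinet
        gets a fully wired patch panel (whether or not ports are used)."""
        return cabinets * panel_ports

    def top_of_rack_cables(cabinets, uplinks_per_cabinet=2):
        """Runs (copper or fiber pairs) back to the aggregation layer if each
        cabinet instead gets a 1U switch with a small number of uplinks."""
        return cabinets * uplinks_per_cabinet

    if __name__ == "__main__":
        cabs = 40  # hypothetical room size
        print("home-run patch panels:", home_run_cables(cabs))    # 960 runs
        print("per-cabinet switches: ", top_of_rack_cables(cabs)) # 80 runs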
Thanks much for all the info, folks. I'm sure I can amalgamate this info into a good plan, or at least a pie-in-the-sky place to reach for.

On a related but dissimilar topic: what are people using for storing customer assignment info and stuff? Right now, we've got an Excel spreadsheet covering patch panels, another covering colo customers and the types of usage plans that they're on, and our general customer database, which hasn't been updated since the colo biz picked up and is thus currently poorly equipped to deal with it. Additionally, we use RTG for usage stuff, and a combination of well-commented DNS zone files and customized Excel spreadsheets for managing IP space.

Needless to say, the integration of these things is pretty non-existent.

Are people using off-the-shelf products (freeware or otherwise) for these types of things, or are they custom-designing their own? I've recently started to create a "proper" database that stores patch panel, switchport, customer, VLAN, and usage information, but the queries I'm dealing with in an attempt to extract information from it are so complex that I just can't seem to justify spending the time on this when -- regardless of the low-techiness of them -- the current method of spreadsheets and such gets by. Eventually, though, I'm sure it's the scalability that will be the killer.

I've messed briefly with IPTrack (or was that the old name for it?) for IP address management, but nothing else too much.

Any suggestions?

Thanks in advance,

Rick Kunkel
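For a sense of what such a database could look like, here is a minimal sketch using Python's built-in sqlite3 module, tying customers to switch ports, patch panel ports, and VLANs so one join answers "what is plugged in where, for whom". This is an illustration only, not from the thread or any named product; all table, column, and sample names are invented.

    import sqlite3

    SCHEMA = """
    CREATE TABLE customer   (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE vlan       (id INTEGER PRIMARY KEY, customer_id INTEGER REFERENCES customer(id));
    CREATE TABLE panel_port (id INTEGER PRIMARY KEY, rack TEXT, panel TEXT, port INTEGER);
    CREATE TABLE switch_port(id INTEGER PRIMARY KEY, switch TEXT, port INTEGER,
                             vlan_id INTEGER REFERENCES vlan(id),
                             panel_port_id INTEGER REFERENCES panel_port(id),
                             customer_id INTEGER REFERENCES customer(id));
    """

    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)

    db.execute("INSERT INTO customer (id, name) VALUES (1, 'Example Corp')")
    db.execute("INSERT INTO vlan (id, customer_id) VALUES (101, 1)")
    db.execute("INSERT INTO panel_port (id, rack, panel, port) VALUES (1, 'R03', 'PP-A', 7)")
    db.execute("INSERT INTO switch_port (id, switch, port, vlan_id, panel_port_id, customer_id) "
               "VALUES (1, 'sw1', 12, 101, 1, 1)")

    # "Where is this customer plugged in, and on what VLAN?"
    row = db.execute("""
        SELECT c.name, s.switch, s.port, p.rack, p.panel, p.port, s.vlan_id
        FROM switch_port s
        JOIN customer c   ON c.id = s.customer_id
        JOIN panel_port p ON p.id = s.panel_port_id
        WHERE c.name = 'Example Corp'
    """).fetchone()
    print(row)  # ('Example Corp', 'sw1', 12, 'R03', 'PP-A', 7, 101)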
In my experience, most folks roll their own IP management software. Most, coming from spreadsheets as you are, end up with some sort of custom-written provisioning software that integrates with their existing applications. I've seen very few commercial products in use, though I can't say whether or not they were any better or worse than the home-grown solutions. IIRC, there were two major open source projects, FreeIPDB and Northstar; both had some promise as a decent open source IP management suite.

As far as switch ports and patch panels go, I've not seen anyone keep really good track of usage other than via switch port descriptions.

--- Andy
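For the roll-your-own route, Python's standard ipaddress module already handles most of the subnet arithmetic. A minimal sketch of carving customer subnets out of a parent allocation follows; it is my own example, not any of the tools named above, and the pool size, prefix length, and customer names are invented.

    import ipaddress

    class SubnetPool:
        """Carve customer subnets out of a parent block and remember who got what."""

        def __init__(self, parent, prefixlen):
            self.free = list(ipaddress.ip_network(parent).subnets(new_prefix=prefixlen))
            self.assigned = {}  # network -> customer name

        def allocate(self, customer):
            net = self.free.pop(0)  # raises IndexError when the pool is empty
            self.assigned[net] = customer
            return net

        def whois(self, address):
            ip = ipaddress.ip_address(address)
            for net, customer in self.assigned.items():
                if ip in net:
                    return customer
            return None

    if __name__ == "__main__":
        pool = SubnetPool("192.0.2.0/24", prefixlen=29)  # RFC 5737 documentation range
        print(pool.allocate("customer-a"))               # 192.0.2.0/29
        print(pool.allocate("customer-b"))               # 192.0.2.8/29
        print(pool.whois("192.0.2.9"))                   # customer-b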
On Friday 08 September 2006 19:36, Rick Kunkel wrote:
Heya folks,
I hope this is on-topic. I read the charter, and it falls somewhere along the fuzzy border I think...
Can anyone tell me the standard way to deal with patch panels, racks...
As many have mentioned here, TIA-942 is a good starting point. There are a couple of good data center books out there, too (a visit to your local Borders or B&N could allow for an interesting afternoon of browsing). I have personally had positive experiences with some docs and advice from folks with expertise in cable management and data center infrastructure: http://www.panduit.com/enabling_technologies/091903.asp

HTH,

Stefan
participants (10)

- Andy Johnson
- Bill Sehmel
- Ingo Flaschberger
- Joe Greco
- John L Lee
- Mark Foster
- Netfortius
- Rick Kunkel
- Robert E. Seastrom
- William Yardley