Care to share some pics?

David

On 14 November 2017 at 02:04, Ken Chase <math@sizone.org> wrote:
Some tricks I've learned managing multi-customer/shared cabinets over the last 20+ years... sorry it's long, but I think there's some good info on keeping things clean and maintaining sanity. Please send your protips.
Most of this applies to lower-end mixes of 1-4U gear, and to cabinets that see 2-6U+ of churn per quarter with some rushed installs. Huge one-time 12U blade installs of $1M appliances usually lend themselves to gorgeous cable management schemes (and proper budgets) being included. No such luxury here!
TL;DR:
- thin premade ethernet in exact lengths and multiple random colours (never black!)
- power cables of the minimum required gauge and exact length
- face A/B PDUs backwards on one side of the rack, cable management on the other
- never get less than a 30" wide x 36" deep cabinet (if you must compromise, wider beats deeper)
- premeasure vertical mounting-rail positions for compatibility with rail lengths and front-of-server clearance
- prewire front-of-rack power/ether if needed (leave string in too)
- practice tool-less rail removal while you can still get in from above/below
- rack similar-depth gear together
- switches face backwards (with the front-to-back airflow config option, of course) on rail shelves, not ears (which bend over time anyway), so they can be extracted out the front and easily replaced in an emergency
Details:
Installing in 30" wide x 36" long cabinets makes all the diff over 24" x 30". A/B 0U PDUs on one side, cable wrangling ladders on other. More room = more flexbility. (If have no side panels and no neighbours, 24x30" is ok). 36" deep allows facing the PDUs backwards not sideways - cableheads extend backwards, not into the rail-tail path/airflow/etc. Worth getting the 90degree-bent-head cables too if you need the spare inches. (I ofset my PDUs vertically by 1/2 a plug-spacing distance so cable from left one fits between cableheads of right one.)
Avoid racks that don't use cage nuts. Prethreaded holes get abused and stripped. Try to get the right size of cage nut - there are a few standards out there, and some will fit, but poorly. (Either they fall out under weight or you end up trying to force them in with a thin screwdriver - I've seen people stab themselves in the hand.) Ask for a cage-nut tool (a J-hook-shaped piece of metal that looks like a bent desktop-case PCI slot cover).
Having many power cables of varying lengths is key (though why doesn't anyone make 15" and 21" power cables?). Not having ziptied loops of 12ga wire hanging around made things much nicer (and better airflow). More $, but worth the sanity. Especially with varied-colour heads - great for tracing (see ethernet below). Wire only the gauge required - I often find 10ga (and 6' long!) wire delivered with 100VA-max server configs. Too thick to manage properly and usually unnecessary. But check your warranties and theoretical max power envelopes.
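As a rough sanity check on gauge, here's a minimal sketch (Python; the load figure is hypothetical and the ampacities are rule-of-thumb numbers for flexible cordage - always defer to the cord's printed rating, local code, and the vendor's stated power envelope):

    # Rough cord-gauge sanity check. Ampacities are approximate rule-of-thumb
    # values for common flexible cordage -- confirm against the cord's printed
    # rating, local code, and the vendor's max power envelope.
    AMPACITY = {18: 10, 16: 13, 14: 15, 12: 20, 10: 30}  # AWG -> amps (approx.)

    def min_gauge(load_va, volts=120, derate=0.8):
        """Thinnest AWG that carries the load with ~20% headroom."""
        amps = load_va / volts
        for awg in sorted(AMPACITY, reverse=True):   # 18 (thinnest) first
            if amps <= AMPACITY[awg] * derate:
                return awg, amps
        raise ValueError("load exceeds table")

    # A hypothetical 300 VA 1U box on 120 V draws ~2.5 A -- an 18ga C13 cord
    # is plenty; the 10ga monster the vendor shipped is overkill.
    print(min_gauge(300))   # -> (18, 2.5)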
Yes, full-rack solutions with extendable cable arms exist, but they generally require vendor compatibility. Expensive too. Great for one-time well-funded installs; not practical for the varied species installed over longer periods.
Prewire any front-of-rack-powered gear when you first get the rack. I have 5 pairs (A/B) going to the front, permanently ziptied and labelled - 3x2 in use for my back-facing switches, 1 for a small piece of gear (a low-watt MikroTik), the others spare. Also prewire some proper-length (multiple colours of) ether. Fishing ether through the side can be impossible in a full cabinet in a dense row (we're in APC pods). I leave string in there too (I'll probably use it for a twinax pull to the MikroTik soon, and pull more string through with it).
Curse vendors for not picking a standard side (left vs right) for power ingress! (IBM and Dell vs Supermicro, Sun and HP, IIRC?)
Beware Dell's long fins/tails on their rails - they won't fit in a 30"-deep cabinet if your vertical mounting rails are too far back, or they block the power cord head on the PDU if it faces sideways/etc. And beware max/min rail extension - Dell seems 'longest', with many minimum rail lengths of 25.5". I think I saw a 26.5" minimum once.
Also had a customer jam a long Dell rail's tail into his fully assigned cabinet - right in between a power cable head and the PDU body it was plugged into. BZZT! Took 20 min to get a monkey to reset it at the central panel. Thank goodness for proper cabinet grounding cables, right?
If you have an entirely empty cabinet to start with, grab a few different rails and ensure your cab's vertical mounting rails are all within spacing spec and give door-closing clearance to server noses. (See reference tables.) Moving them later can be impossible (though with SunFire rails that slid to varying lengths, it luckily worked out!)
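Before bolting anything in, it's worth comparing that front-to-rear rail spacing against the min/max extension of every rail kit you expect to host. A throwaway sketch with made-up numbers (pull the real figures from each vendor's rail spec sheet):

    # Check whether each rail kit's min/max extension brackets the cabinet's
    # front-to-rear mounting-rail spacing. All numbers are illustrative only.
    RAIL_SPACING_IN = 28.0   # hypothetical front-to-rear vertical-rail distance

    rail_kits = {            # kit -> (min extension, max extension), inches
        "dell_1u_toolless": (25.5, 36.0),
        "supermicro_std":   (26.5, 33.0),
        "generic_shelf":    (20.0, 27.0),
    }

    for kit, (lo, hi) in rail_kits.items():
        fits = lo <= RAIL_SPACING_IN <= hi
        print(f'{kit:18s} {"OK" if fits else "NO FIT"} ({lo}"-{hi}")')

If anything prints NO FIT, move the vertical rails now, while the cabinet is still empty.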
Must admit Dell's tool-less rail installs are awesome now. Better than Supermicro's and Sun's (SunFires). Learn how to derack them before you install, and practice a few times while you can still get your fingers/tools in from above/below. Make notes on how it works. Trying to guess how to derack a single-U rail sandwiched in with no other access can be nearly impossible, especially an old EOL one with no identifying marks to google.
Try to rack similar-depth gear together. Nothing like having 27"-long 1U Intels sandwiching a 3/4-depth 1U Supermicro between them - you can't get your hands in to do anything. 2U together minimum, but 3+ is better. Stash a headlamp in the rack too, to see into these wells.
Having more switchports than necessary (3 redundant 48-port switches, avg 2 connections/box, avg 1.5U per box, 42U cabinet) lets a 4' ethernet cable land in a switchport at exactly 3.75' of extension - minimal extra slack, just enough to manage the cable and keep it unstressed. Folding spare cable into management ladders makes manual tracing hard, and is bad form. Crimping your own RJ45 ends for exact lengths is risky. I've done it many times (and once saw a tech re-crimp a damaged live cable before the BGP session timed out! :), but the results are poorer than factory crimps and don't resist stress well. ('Sides, do you have 8 different colour spools? See below.)
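The port math for that cabinet works out roughly as below (a throwaway sketch using the averages above; plug in your own densities):

    # Rough port budget for the cabinet described above.
    CABINET_U     = 42
    AVG_U_PER_BOX = 1.5
    CONNS_PER_BOX = 2          # avg connections per box
    SWITCHES      = 3
    PORTS_EACH    = 48

    max_boxes   = CABINET_U // AVG_U_PER_BOX    # ~28 boxes (ignoring switch/PDU U)
    ports_need  = max_boxes * CONNS_PER_BOX     # ~56 cables
    ports_avail = SWITCHES * PORTS_EACH         # 144 ports

    print(f"{ports_need:.0f} cables into {ports_avail} ports "
          f"-> {ports_avail - ports_need:.0f} spare")

All that spare capacity is what lets a 4' cable land in whichever port leaves it at ~3.75' of extension, instead of coiling the slack into the ladder.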
Switch racking - I'd rather have them on shelves than ears. Then you can pull your backwards-facing switch out ass-first from the front of the cabinet and slide the replacement in without moving the cables too far - trying to get the ears past the cabling can be heinous (and most ears cause major cantilevering with deep, heavy switches; after a few years I find things bent from the weight, impinging on the U below). There are thin little 3"-wide rail-edge 'shelves' you can get, but your switch may not give you the spare vertical mm required (about 3-4mm) - it works best with 2U+ gear where there's some play.
Proper cable management trays/guides between the switches are great too (though they eat U). Your density/financials will determine if that's viable; it's usually worth the U for the sanity.
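If you want to put numbers on it, the tradeoff is just lost-U revenue vs. what the saved tech time and reduced outage risk are worth to you - a tiny sketch with entirely hypothetical figures:

    # Is giving up U to cable-management trays worth it? Figures are made up.
    TRAY_U           = 3        # e.g. one 1U tray per switch, three switches
    REVENUE_PER_U_MO = 75.0     # $/U/month your density could otherwise earn
    SANITY_VALUE_MO  = 300.0    # your estimate of saved tech time / outage risk

    cost = TRAY_U * REVENUE_PER_U_MO
    print(f"Trays cost ${cost:.0f}/mo in lost U -> "
          f"{'worth it' if SANITY_VALUE_MO >= cost else 'hard sell'}")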
Out of U for cable management but have spare depth? Bungee cords won't work as anchor points to ziptie to - they sag in the heat over time. But rubber bungees ('tarp straps') work great.
Though not for full 100m ether runs, narrow-gauge ethernet is key. You can fit something like 8-12 of them into the space 4-5 took previously. I've had zero issues with them at rack lengths. Worth the $.
I wish there were thin bicolour ethernet - it's sexy to have all your ether coloured the same (or the same per category - red mgmt, yellow public, white internal) - but then you need to know where the failed port 34 cable is... tracing identically coloured cables can suck: your eyes start playing tricks looking at 30 cables in a twisting bundle that you ziptied too tight trying to be pretty, and under tension, yanking on them may not identify the right cable - even with labels (you stop trusting those anyway after the first couple of errors, or when job security is thin).
I tend to use as many random colours as possible - and note them in the switch configs. If the switch config doesn't match, slow down and check everything twice from scratch. (Keep many colours and lengths handy for new installs! Blue is so ugly - peach ftw, amirite?)
Note that any pattern of colour use will eventually bite you - wire all mgmt ports one colour, or all of one server's ports one colour, and either way you start assuming the colour means the cable is wired a certain way, and that leads to errors. With no pattern, no dangerous assumptions get made - you must check against the switch config. Maxing out the number of different colours also makes identification easier and lowers the odds of neighbouring cables sharing a colour.
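A toy version of that workflow in Python - the colour inventory, hostname, and IOS-style description syntax are all made up for illustration; the point is just to dodge colour clashes with neighbouring ports and record the colour where you'll actually look during an outage, the switch config:

    import random

    # Hypothetical colour inventory and the colours already landed on
    # neighbouring switchports (names and port numbers are invented).
    COLOURS = {"red", "yellow", "white", "green", "orange", "peach", "purple", "grey"}
    in_use  = {33: "yellow", 35: "peach"}

    def pick_colour(port):
        """Pick a random colour that differs from the ports on either side."""
        neighbours = {in_use.get(port - 1), in_use.get(port + 1)}
        return random.choice(sorted(COLOURS - neighbours))

    port   = 34
    colour = pick_colour(port)
    # Paste into the switch config so the cable colour lives next to the thing
    # you'll be staring at when port 34 dies (Cisco IOS-style syntax assumed):
    print(f"interface GigabitEthernet1/0/{port}")
    print(f" description web01 eth1 - {colour} cable")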
If the management layer of your company doesn't have to slave over racks themselves, this is likely not an option - they like the pretty and impractical stuff best, of course.
Labelling only works so well - in 100-120F exhaust paths most glue just melts off. One client whose install I had no input on has 100s of little silvery labels littering the floor and lower U of their cabinet. So pretty! And gooey cables. And thank god they're all white! Makes tracing so easy! :P
(Of course no one ever wires black ethernet, right? That's a capital offence!)
/kc
On Mon, Nov 13, 2017 at 05:05:35PM -0500, Chuck Anderson said:
> On Mon, Nov 13, 2017 at 01:30:25PM -0800, Seth Mattinen wrote:
> > On 11/13/17 12:49, Mike Hammett wrote:
> > > Keep the humans out of the rack and you should be fine.
> >
> > Where should I send the invoice? :-P
> >
> > It's easy to keep a rack nice if you take the time. I've spent hours removing and replacing cables in neatly dressed bundles because equipment changes required a different length/type cable, but sometimes that's what you gotta do to keep things neat and tidy.
>
> Exactly. Most people do not want to spend the time to do it properly.
--
Ken Chase - math@sizone.org Guelph Canada

--
My opinion is mine.