Hopefully this classifies as on-topic... I am discussing with some investors the possible setup of new datacenter space. Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack. Right now we are in spreadsheet mode evaluating different scenarios.

Are there cases where more than 6000W per rack would be needed? (We are not worried about cooling due to the special circumstances of the space.) Would someone pay extra for > 7kW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U?

Cordially

Patrick Giagnocavo
patrick@zill.net
Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.
Which is the hardest part of designing and running a datacenter.
Are there cases where more than 6000W per rack would be needed?
Yes. We are seeing 10kW racks, and requests for 15 to 20kW are starting to come in. Think blade.
(We are not worried about cooling due to the special circumstances of the space.)
You've already lost.
Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?
These days, there is not (or should not be) a connection between rack pricing and what you charge for power. As for how much, ask HP or IBM or whoever how many blades they can shove into 42U.
Alex Rubenstein wrote:
As for how much, ask HP or IBM or whoever how many blades they can shove in 42U.
Coming from the "implementing the server gear" side of things... If we're talking IBM gear (as that's what I know) the magic numbers are:

- 4 x BladeCenter H chassis in a 42U rack
- 14 x blades per chassis
- 56 blades per rack

Each chassis has 4 x 2900W power supplies (plus two 1kW fans), so that's 13600W per chassis, or 54400W per rack of total provisioned power. (There's something to think about - are you talking about providing 6kW of capacity, or 6kW of continual load to each rack?)

Naturally, that's redundant, so the theoretical maximum usage per rack is half the supply capacity, 23200W. Plus, the blades available today don't draw enough to fully load those power supplies. In the config I'm looking at now, a single blade (2x Quad-core 2GHz Intel, 4GB memory, no hard drives) draws 232W max, 160W lightly loaded. Let's pull a number of 195W out of the air to use. The chassis itself draws 420W (assuming 4 I/O modules) plus a hand-waving 400W for the fans, so a magic number of (195*14+400+420=3550W) times 4 gives 14.2kW for a loaded rack.

But you need to make 54.4kW of power available, which is relatively immense. You'll find this requirement in most blade scenarios, so be prepared for it. The plus side is that if you are the hardware provider in a co-lo scenario and you own the chassis, you can meter and bill your customers for the individual blade power (and a magic coefficient for cooling cost) they use if you so decide.

So, as many others have already said, over 8kW in a rack is a no-brainer. Getting those BTUs out of the rack into the datacenter is easy to do (at least on the BladeCenter H). It's getting those BTUs out of the datacenter that's usually a problem, except in your special situation. Which I also am curious about.

M.
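A minimal back-of-the-envelope sketch of the arithmetic in the message above, using the figures it quotes. The 195W per-blade number is the hand-waved estimate from the text, so treat the output as illustrative rather than as vendor data.

```python
# Rack power budgeting with the BladeCenter H figures quoted above.

BLADES_PER_CHASSIS = 14
CHASSIS_PER_RACK = 4

PSU_WATTS = 2900               # four per chassis
PSUS_PER_CHASSIS = 4
FAN_WATTS = 1000               # two blower modules per chassis
FANS_PER_CHASSIS = 2

BLADE_EST_WATTS = 195          # hand-waved between 160 W light and 232 W max
CHASSIS_OVERHEAD_WATTS = 420   # chassis plus 4 I/O modules
FAN_EST_WATTS = 400            # hand-waved running figure for the blowers

# Capacity you must provision per rack (what the supplies and fans can pull)
capacity_per_chassis = PSU_WATTS * PSUS_PER_CHASSIS + FAN_WATTS * FANS_PER_CHASSIS
rack_capacity_w = capacity_per_chassis * CHASSIS_PER_RACK        # 54,400 W

# Load you actually expect to see
load_per_chassis = (BLADE_EST_WATTS * BLADES_PER_CHASSIS
                    + CHASSIS_OVERHEAD_WATTS + FAN_EST_WATTS)
rack_load_w = load_per_chassis * CHASSIS_PER_RACK                # ~14,200 W

print(f"provisioned capacity: {rack_capacity_w / 1000:.1f} kW per rack")
print(f"estimated draw:       {rack_load_w / 1000:.1f} kW per rack")
```

The gap between the two printed numbers is the point being made: you bill and cool against the second figure, but you still have to wire the room for the first.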
Sorry to resurrect a slightly old thread, but I did want to touch on something I noticed while catching up. On Mar 25, 2008, at 6:12 PM, Michael Brown wrote:
Naturally, that's redundant, so the theoretical maximum usage per rack is half the supply capacity, 23200W. Plus, the blades available today don't draw enough to fully load those power supplies. In the config I'm looking at now, a single blade (2x Quad-core 2GHz Intel, 4GB memory, no hard drives) draws 232W max, 160W lightly loaded. Let's pull a number of 195W out of the air to use.
Don't be so sure that's actually redundant. At $JOB->{prev}, we had a fully populated IBM H chassis that had fully populated power supplies where the chassis spent its entire life in an alarm state that there was "insufficient power redundancy" ... the draw of the loaded chassis (14 blades, 2 mgmt cards, 2 switches, 2 FC switches) was more than a single "side" of power could handle. The chassis notified us that if it lost a side of power it was going to throttle back the CPUs to account for the loss. So your theoretical maximum draw is NOT "1/2 the total"... in a nicely populated chassis it will draw more than 1/2 the total and complain the whole time about it. Cheers, D
At 03:50 PM 4/3/2008, Derek J. Balling wrote:
So your theoretical maximum draw is NOT "1/2 the total"... in a nicely populated chassis it will draw more than 1/2 the total and complain the whole time about it.
That should probably have read "in a well designed and fully populated chassis"... I personally know for a fact that the Dell blade chassis can operate fully loaded with only two of four power supplies on the old 10-slot chassis, and with three of six on the new 16-slot chassis. HP also claims the C7000 chassis is fully redundant with only 3 of 6 power supplies; this is true for all configurations I have ever seen.

-Robert

Tellurian Networks - Global Hosting Solutions Since 1995
http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
"Well done is better than well said." - Benjamin Franklin
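A tiny sketch of the redundancy check being argued about here: compare a chassis's worst-case draw against the capacity that survives losing one feed or supply group. Every number below is invented for illustration; substitute the vendor's real figures.

```python
# Does a loaded chassis survive losing one "side" of power without throttling?

def survives_loss(load_watts, psu_watts, psus_remaining):
    """True if the supplies left after a failure can still carry the load."""
    return load_watts <= psu_watts * psus_remaining

# Hypothetical fully loaded chassis: 14 blades at worst-case draw, plus
# management modules, switches and fans (all figures made up for the example).
load = 14 * 232 + 2 * 60 + 2 * 45 + 420 + 400

print(survives_loss(load, psu_watts=2900, psus_remaining=2))  # one side left
print(survives_loss(load, psu_watts=2100, psus_remaining=2))  # weaker supplies
```

Whether the answer is True or False depends entirely on the configuration, which is exactly why some chassis spend their lives in a "insufficient power redundancy" alarm state and others never do.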
On Sat, 22 Mar 2008 22:02:49 EDT, Patrick Giagnocavo said:
Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.
Right now we are in spreadsheet mode evaluating different scenarios.
Are there cases where more than 6000W per rack would be needed?
(We are not worried about cooling due to the special circumstances of the space.)
Ooh. Please share. You're the first case I've seen in a *long* time where getting the BTUs *out* of the rack wasn't more of a challenge than getting the watts *into* the rack. Once you're talking about "datacenter" sized spaces, those BTUs add up, and there are plenty of current spaces where the limit isn't the power feed into the building, it's the room on the roof for the chillers....
There comes a point where you can't physically transfer the energy using air any more - not less you wana break the laws a physics captin (couldn't resist, sorry) - so you move to your DX system, gas, then water, then in-rack (expensive) cooling, water and CO2. Sooner or later we will sink the whole room in oil, much like they used to do with Crays. Alternatively we might need to fit the engineers with crampons, climbing ropes and ice axes to stop them being blown over by the 70 mph winds in your datacenter as we try to shift the volumes of air necessary to transfer the energy back to the HVAC for heat pump exchange to remote chillers on the roof.

In my humble experience, the problems are 1> Heat, 2> Backup UPS, 3> Backup Generators, 4> LV/HV Supply to building. While you will be very constrained by 4 in terms of upgrades unless spending a lot of money, the practicalities of 1, 2 & 3 mean that you will have spent a significant amount of money getting to the point where you need to worry about 4. Given you are not worried about 1, I wonder about the scale of the application or your comprehension of the problem. The bigger trick is planning for upgrades of a live site where you need to increase air con, UPS and generators.

Economically, that 10kW of electricity has to be paid for in addition to any charge for the rack space. Plus margined, credit risked and cash flowed. The relative charge for the electricity consumption has less to do with our ability to deliver and cool it in a single rack than with the cost of having four racks in a 2.5kW-per-rack datacenter and paying for the same amount of electricity. Is the racking charge really the significant expense any more? For the sake of argument, 4 racks at £2500 pa in a 2.5kW-per-rack datacenter or 1 rack at £10,000 pa in a 10kW-per-rack datacenter - which would you rather have? Is the cost of delivering (and cooling) 10kW to a rack more or less than 400% of the cost of delivering 2.5kW per rack? I submit that it is more than 400% (see the sketch following this message). What about the hardware - per MIP / CPU horsepower, am I paying more or less in a conventional 1U pizza-box format or a high density blade format? I submit the blades cost more in capex and there is no opex saving. What is the point of having a high density server solution if I can only half fill the rack?

I think the problem is that people (customers) on the whole don't understand the problem: they can grasp the concept of paying for physical space, but can't wrap their heads around the more abstract concept of the electricity consumed by what you put in the space, and paying for that, to come up with a TCO for comparisons. So they simply see the entire hosting bill and conclude they have to stuff as many processors as possible into the rack space - and if that is a problem, it is one for the colo facility to deliver at the same price. I do find myself increasingly feeling that the current market direction is simply stupid and has had far too much input from sales and marketing people.

Let alone the question of whether the customer's business is efficient in terms of the amount of CPU compute power required for their business to generate $1 of customer sales/revenue. Just because some colo customers have cr*ppy business models delivering marginal benefit for very high compute overheads, and an inability to pay for things in a manner that reflects their worth because they are incapable of extracting the value from them - do we really have to drag the entire industry down to the lowest common denominator of f*ckwit?
Surely we should be asking exactly what is driving the demand for high density computing, and in which market sectors, and whether this is actually the best technical solution to the problem. I don't care if IBM, HP, etc. want to keep selling new shiny boxes each year because they are telling us we need them - do we really?

Kind Regards

Ben
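A rough sketch of the 4 x 2.5kW racks versus 1 x 10kW rack comparison posed above. The cost-per-kW curve and its exponent are invented purely to illustrate the superlinear-cost argument; they are not industry figures.

```python
# Four 2.5 kW racks vs one 10 kW rack, under an assumed superlinear cost curve.
# base_cost_per_kw and the exponent are assumptions for illustration only.

def delivery_cost_gbp(kw_per_rack, base_cost_per_kw=250.0, exponent=1.3):
    """Assumed annual cost to deliver and cool kw_per_rack to a single rack."""
    return base_cost_per_kw * kw_per_rack ** exponent

four_small = 4 * delivery_cost_gbp(2.5)
one_big = delivery_cost_gbp(10.0)

print(f"4 x 2.5 kW racks: ~GBP {four_small:,.0f}/yr to deliver and cool")
print(f"1 x 10 kW rack:   ~GBP {one_big:,.0f}/yr to deliver and cool")
print(f"the dense rack costs {one_big / delivery_cost_gbp(2.5):.0%} "
      f"of one 2.5 kW rack (more than 400% whenever the exponent is above 1)")
```

The shape of the real curve is the whole debate; if delivery and cooling cost scaled linearly with kW, rack density would be a wash.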
Surely we should be asking exactly what is driving the demand for high density computing, and in which market sectors, and whether this is actually the best technical solution to the problem. I don't care if IBM, HP, etc. want to keep selling new shiny boxes each year because they are telling us we need them - do we really?
Perhaps not. But until projects like <http://www.lesswatts.org/> show some major success stories, people will keep demanding big blade servers.

Given that power and HVAC are such key issues in building big datacenters, and that fiber to the office is now a reality virtually everywhere, one wonders why someone doesn't start building out distributed data centers. Essentially, you put mini data centers in every office building, possibly by outsourcing the enterprise data centers. Then you have a more tractable power and HVAC problem. You still need to scale things, but since each data center is roughly comparable in size it is a lot easier than trying to build out one big data center. If you move all the enterprise services onto virtual servers then you can free up space for colo/hosting services. You can even still sell to bulk customers, because few will complain that they have to deliver equipment to three data centers, one two blocks west, and another three blocks north. X racks spread over 3 locations will work for everyone except people who need the physical proximity for clustering type applications.

--Michael Dillon
Surely we should be asking exactly what is driving the demand for high density computing, and in which market sectors, and whether this is actually the best technical solution to the problem. I don't care if IBM, HP, etc. want to keep selling new shiny boxes each year because they are telling us we need them - do we really?
Perhaps not. But until projects like <http://www.lesswatts.org/> show some major success stories, people will keep demanding big blade servers.
Disagreed. Customers who don't run datacenters generally don't understand the issues around high density computing, and most enterprises I deal with don't care about the cost. More and Faster is their vocabulary.
If you move all the enterprise services onto virtual servers then you can free up space for colo/hosting services.
We do quite a bit of VMware and Xen, both our own and our customers'. We have found power consumption still goes up, simply because there is always a backlog of demand for resources. In other words, it's almost as if "if you build it they will come" applies to CPU cycles as well. I have never seen a decrease in customer power consumption when they have virtualized. They still have more iron, with a lot more VMs.
You can even still sell to bulk customers, because few will complain that they have to deliver equipment to three data centers, one two blocks west, and another three blocks north. X racks spread over 3 locations will work for everyone except people who need the physical proximity for clustering type applications.
Send me those customers, because I haven't seen them. Especially the ones with lots of fiber channel and InfiniBand.
Here's another project, which has dubbed itself "teraflops from milliwatts" and which I believe is shipping iron. I have no first-hand experience with their products: http://www.sicortex.com/

--
-Barry Shein

The World | bzs@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD | Login: Nationwide
Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
On Sun, Mar 23, 2008 at 2:15 PM, <michael.dillon@bt.com> wrote:
Given that power and HVAC are such key issues in building big datacenters, and that fiber to the office is now a reality virtually everywhere, one wonders why someone doesn't start building out distributed data centers. Essentially, you put mini data centers in every office building, possibly by outsourcing the enterprise data centers. Then you have a more tractable power and HVAC problem. You still need to scale things, but since each data center is roughly comparable in size it is a lot easier than trying to build out one big data center.
Latency matters. Also, multiple small data centers will be more expensive than a few big ones, especially if you are planning on average load vs peak load heat rejection models.
If you move all the entreprise services onto virtual servers then you can free up space for colo/hosting services.
There is no such thing in my experience. You free up a few thousand cores, they get consumed by the next lower priority project that was sitting around waiting on cpu.
You can even still sell to bulk customers, because few will complain that they have to deliver equipment to three data centers, one two blocks west, and another three blocks north. X racks spread over 3 locations will work for everyone except people who need the physical proximity for clustering type applications.
Racks spread over n locations that aren't within a campus will be more expensive to connect. /vijay
--Michael Dillon
Ben Butler wrote:
There comes a point where you cant physically transfer the energy using air any more - not less you wana break the laws a physics captin (couldn't resist sorry) - to your DX system, gas, then water, then in rack (expensive) cooling, water and CO2. Sooner or later we will sink the hole room in oil, much like they use to do with Cray's.
The problem there is actually the thermal gradient involved. The fact of the matter is you're using ~15C air to keep equipment cooled to ~30C. Your car is probably in the low 20% range as far as thermal efficiency goes, generates on the order of 200kW, and has an engine compartment enclosing a volume of roughly half a rack... All that waste heat is removed by air, the difference being that it runs at around 250C with some hot spots approaching 900C. Increase the width of the thermal gradient and you can pull much more heat out of the rack without moving more air.

15 years ago I would have told you that gallium arsenide would be a lot more common in general purpose semiconductors for precisely this reason, but silicon has proved superior along a number of other dimensions.
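A minimal sketch of the gradient point above: for a fixed load, the air you have to move scales inversely with the inlet/outlet temperature difference. Standard sea-level air properties are assumed.

```python
# Airflow needed to carry a rack's heat at a given inlet/outlet delta-T.
# Q = m_dot * cp * dT, so volumetric flow = Q / (rho * cp * dT).

RHO_AIR = 1.2        # kg/m^3, near sea level
CP_AIR = 1005.0      # J/(kg K)

def airflow_m3_per_s(load_watts, delta_t_c):
    return load_watts / (RHO_AIR * CP_AIR * delta_t_c)

def to_cfm(m3_per_s):
    return m3_per_s * 2118.88    # 1 m^3/s is about 2118.88 CFM

for dt in (10, 15, 25, 40):                  # widen the gradient...
    flow = airflow_m3_per_s(10_000, dt)      # ...for a 10 kW rack
    print(f"dT = {dt:>2} C -> {flow:.2f} m^3/s (~{to_cfm(flow):,.0f} CFM)")
```

Quadrupling the usable temperature difference cuts the required airflow to a quarter, which is why a car engine bay gets away with air cooling at 200kW and a 30C server does not.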
The interesting thing is how in a way we seem to have come full circle. I am sure lots of people can remember large rooms full of racks of vacuum tube equipment, which required serious power and cooling. On one NASA project I worked on, when the vacuum tube stuff was replaced by solid state in the late 1980's, there was lots of empty floor space and we marveled at how much power we were saving. In fact, after the switch there was almost 2 orders of magnitude too much cooling for the new equipment (200 tons to 5 IIRC), and we had to spend good money to replace the old cooling system with a smaller one. Now, we seem to have expanded to more than fill the previous tube-based power and space requirements, and I suspect some people wish they could get their old cooling plants back. Regards Marshall On Mar 23, 2008, at 5:23 PM, Joel Jaeggli wrote
Ben Butler wrote:
There comes a point where you cant physically transfer the energy using air any more - not less you wana break the laws a physics captin (couldn't resist sorry) - to your DX system, gas, then water, then in rack (expensive) cooling, water and CO2. Sooner or later we will sink the hole room in oil, much like they use to do with Cray's.
The problem there is actually the thermal gradient involved. the fact of the matter is you're using ~15c air to keep equipment cooled to ~30c. Your car is probably in the low 20% range as far as thermal efficiency goes, is generating order of 200kw and has an engine compartment enclosing a volume of roughly half a rack... All that waste heat is removed by air, the difference being that it runs a around 250c with some hot spots approaching 900c.
Increase the width of the thermal gradient and you can pull much more heat out of the rack without moving more air.
15 years ago I would have told you that gallium arsenide would be a lot more common in general purpose semiconductors for precisely this reason. but silicon has proved superior along a number of other dimensions.
So perhaps the question isn't so much how many kW I can pack into a 42U rack, but for the data center designer, what's the best price point if real estate is not a significant issue. Or to say it another way, what kW density per rack will give me the lowest priced capital and operating cost per square foot. Does it really matter if you can only offer 5kW/rack if you can price it at 80% of the guy who sells a 10kW/rack product? Or is this a tough point for the sales person to make?

Frank
On Mon, 24 Mar 2008, Frank Bulk - iNAME wrote:
So perhaps the question isn't so much how many kW I can pack into a 42U rack, but for the data center designer, what's the best price point if real estate is not a significant issue. Or to say it another way, what kW density per rack will give me the lowest priced capital and operating cost per square foot. Does it really matter if you can only offer 5kW/rack if you can price it at 80% of the guy who sells a 10kW/rack product? Or is this a tough point for the sales person to make?
While there are certainly customers out there who think along these lines, most of the enterprise customers I've run across in the past who would be in the market for data center colo would just as soon play the how-many-servers-can-i-jam-into-this-rack game, which is one part of the how-many-racks-can-i-jam-into-this-cage game for some folks... You might get some traction with the responsible deployment angle, but I could only guess at how much traction...

jms
On Mon, Mar 24, 2008 at 8:46 PM, Justin M. Streiner <streiner@cluebyfour.org> wrote:
While there are certainly customers out there who think along these lines, most of the enterprise customers I've run across in the past who would be in the market for data center colo would just as soon play the how-many-servers-can-i-jam-into-this-rack game, which is one part of the how-many-racks-can-i-jam-into-this-cage game for some folks...
You might get some traction with the responsible deployment angle, but I could only guess at how much traction...
Speaking as one who used to play both of those games, it's a hard habit to break. The folks paying the bills don't like to see empty space, because they translate that into wasted $$'s. It's especially difficult when trying to justify building out an additional cage (or making the one you have bigger if there's empty adjacent space) because your current one is at max kVA per ft^2 - but has physical room for several more racks. The trick for us was getting enough management clue in place to where you (gasp!) plan ahead for your power needs first and make raw ft^2 the secondary concern. --D
While I enjoy hand waving as much as the next guy... reading over this thread, there are several definitions of sq ft (ft^2) here, and folks are interchanging their uses whether aware of it or not.

1) sq ft = the amount of sq ft your cabinet/cage sits on.
2) sq ft = the amount of sq ft attributed to your cabinet/cage on the data center floor, including aisles and access-ways.
3) sq ft = the amount of sq ft attributed to your cabinet/cage on the data center floor, including aisles, access-ways and on-the-floor cooling equipment.
4) sq ft = the amount of sq ft attributed to your cabinet/cage on the data center floor, including aisles, access-ways and on-the-floor cooling equipment, AND the amount attributed to your cabinet/cage from the equipment room (UPS, batteries, transformers, etc).

The first definition only applies to those renting cabinets. The first/second definitions apply to those renting cabinets and cages with aisles or access-ways in them. The first/second/third definitions apply to operators of datacenters within non-datacenter buildings (where the datacenter is NOT the entire load in the facility) and renters. All the definitions apply to anyone with a dedicated datacenter space (and equipment room) within a building or a stand-alone datacenter. (See the sketch following this message.)

By rough figuring... A 30kW cabinet, while it sounds lovely, means a huge amount of space is going to be turned over to most or all of a dedicated PCU and 1/15th of the infrastructure of a 500kVA UPS (@0.9PF), including batteries, transformers, etc. Assuming power costs and associated maintenance are assigned appropriately to this one cabinet, the amount of square footage associated with it (definition #4) changes by less than 30% whether you are going 30kW in one cabinet or 3kW in each of 10 cabinets.

As an owner/operator of very large dedicated data centers for very large customers of all sorts, I can promise you no one is doing datacenters full (500+ cabinets) of 10kW+ each (production, not theoretical) in a dedicated facility with no other uses to lower the average heat demand. Even smaller numbers probably too. Easy caveat: a "datacenter" that is a fraction of a large building (e.g. a 20,000 sq ft data center within a 250,000 sq ft building) can appear to bend these rules because the overall load (by definition #4) is averaged against it.

There is simply no economic reason to do so (at scale) -- short of water cooling -- there is a fixed amount of space taken up per unit-ton of air cooling (medium-<air>-medium) for heat rejection. Factor in the premiums associated with the highest density equipment (e.g. blades, in-cabinet PDUs, etc) and the economics become even clearer. Even ignoring heat rejection, the battery + UPS gear for 500kVA (even with minimal battery times) is approximately the same size (physically) as the 12 cabinets or so it takes to reach that capacity. [The same applies for flywheel/kinetic systems.] Our friends who do calculus in their heads can already figure out the engineering or business min-max equation to optimize this based on a certain level of redundancy, run-time, etc, and there aren't multiple answers. (Hint: certain variables drop out as rounding errors.)

TANSTAAFL: if you are a 1-4 cabinet (or similarly small) use in a larger datacenter (definitions 1-2), by all means shove as much gear as you can in, as long as there is no additional power premium.
If they are giving you space for power or the premium is too high, take as much space as you can for the amount of power you need -- your equipment and your budgets will thank you. If you are operating a data center without a bigger use in the building to average against, you really don't have many ways to cheat the math here. (e.g. geothermal only provides a delta between definition #3 and #4 and a lower energy premium). Deepak Jain AiNET
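A small sketch making Deepak's four square-footage definitions concrete, as referenced above. Every per-cabinet share below is an assumption chosen only to show how the attributed area grows as the definition widens, not survey data.

```python
# The four "square feet per cabinet" definitions, made explicit.
# All shares are illustrative assumptions.

cabinet_footprint = 10    # definition 1: the tiles the cabinet/cage sits on
aisle_share       = 15    # added for definition 2: aisles and access-ways
crac_share        = 20    # added for definition 3: on-floor cooling units
plant_share       = 45    # added for definition 4: UPS, batteries, transformers

definitions = {
    1: cabinet_footprint,
    2: cabinet_footprint + aisle_share,
    3: cabinet_footprint + aisle_share + crac_share,
    4: cabinet_footprint + aisle_share + crac_share + plant_share,
}

for number, sqft in definitions.items():
    print(f"definition {number}: {sqft} sq ft attributed to the cabinet")
```

The argument in the thread only makes sense under definition #4: the cooling and UPS plant shares scale with kW, not with rack count, which is why cramming the same kW into fewer cabinets saves less building than people expect.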
Thanks for spelling it out in more detail. One point I failed to make was that as power consumption and heat per sq ft increase, the cost to dissipate that heat appears to reach a point on the cost/performance curve where it swings up dramatically. There appears to be a sweet spot where it's cheaper to spread the power consumption/heat dissipation around with more racks than to invest in products that solve those density problems. And that sweet spot is a moving target as vendors come up with products to address the density problems.

So rather than argue about how much we can pack in, perhaps we should find the number with the maximum cost/benefit for the data center owner/operator, taking into account the necessary variables. Previously in the thread the discussion was around identifying the highest number possible. Also, if one designs for the highest density technically possible, they're building an infrastructure that solves expensive power/heat density issues that won't exist for all customers, which translates into higher cost per sq foot, when the sales team may only be able to earn prices that are equivalent to those who designed for 75% of their density capabilities. Again, I'm not sure what that upper-level number is, but it's there.

Is the solution to segregate the data center into different tiers - low power/heat, and those that need higher power/density? Perhaps people shouldn't be selling U's, but selling power consumption and heat dissipation (try and measure that!) and charging a nominal fee for U's.

Please feel free to set me straight as I'm rambling on about something I don't know about. =)

Frank
this has been, to me, one of the most fascinating nanog threads in years. at the moment my own datacenter problem is filtration. isc lives in a place where outside air is quite cool enough for server inlet seven or more months out of the year. we've also got quite high ceilings. a 2HP roof fan will move 10000 cubic feet per minute. we've got enough make-up air for that. but, the filters on the make-up air have to be cleaned several times a week, and at the moment that's a manual operation. mechanical systems, by comparison, only push 20% make-up air, and the filters seem to last a month or more between maintenance events. i'm stuck with the same question that vexes the U S Army when they send the M1A1 into sandstorms, or that caused a lot of shutdowns in NYC in the days after 9/11: what kind of automation can i deploy that will precipitate the particulates so that air can move (for cooling) and so that air won't bring grit (which is conductive)?

-- Paul Vixie
what kind of automation can i deploy that will precipitate the particulates so that air can move (for cooling) and so that air won't bring grit (which is conductive)?
Have you considered a two-step process using water in the first step to remove particulates (water spray perhaps?) and then an industrial air-drier in the second step? Alternatively, have you considered air liquefiers like those used in mining (Draegerman suits) which produce very cold liquid air? The idea would be to spray the liquid air inside the data center rather than blowing in the gaseous form. Of course, I don't know if the economics of this work out, although there are people working on increasing the efficiency of air liquefaction, so there is quite a bit of price variation between older methods and newer ones. --Michael Dillon
This thread begs a question - how much do you think it'd be worth to do things more efficiently? Adrian
I still think the industry needs to standardise water cooling to popularise it; if there were two water ports on all the pizzaboxes next to the RJ45s, and a standard set of flexible pipes, how many people would start using it? There's probably a medical, automotive or aerospace standard out there. On Tue, Mar 25, 2008 at 12:23 PM, Leigh Porter <leigh.porter@ukbroadband.com> wrote:
$5
Adrian Chadd wrote:
This thread begs a question - how much do you think it'd be worth to do things more efficiently?
Adrian
That would be pretty good. But seeing some of the disastrous cabling situations it'd have to be made pretty idiot proof. Nice double sealed idiot proof piping with self-sealing ends..

-- Leigh
A valve in the connector; has to be pushed in by the other connector to let the water flow. Water pressure pushes it shut otherwise so it fails-safe. On Tue, Mar 25, 2008 at 12:35 PM, Leigh Porter <leigh.porter@ukbroadband.com> wrote:
That would be pretty good. But seeing some of the disastrous cabling situations it'd have to be made pretty idiot proof.
Nice double sealed idiot proof piping with self-sealing ends..
-- Leigh
It would sure be nice if along with choosing to order servers with DC or AC power inputs one could choose air or water cooling. Or perhaps some non-conductive working fluid instead of water. That might not carry quite as much heat as water, but it would surely carry more than air and if chosen correctly would have more benign results when the inevitable leaks and spills occur. Of course, my chemistry is a little rusty, so I'm not sure about the prospects for a non-toxic, non-flammable, non-conductive substance with workable fluid flow and heat transfer properties :) A close second might be liquid cooled air tight cabinets with the air/water heat exchangers (redundant pair) at the bottom where leaks are less of an issue (drip tray, anyone? :) )... Less practical but more fun to contemplate would be data centers pressurized with a working gas that offers better heat transfer than oxygen/nitrogen and no oxidation potential. Airlocks and suits for the techs, but no fire worries ever. Heck, just close the room and inject liquid nitrogen under the raised floor to be scavenged overhead and re-compressed, chilled, liquefied and sent round again. Reserve cooling for power outages is just huge dewars full of liquid nitrogen :) Not so serious today, -Dorn On Tue, Mar 25, 2008 at 8:31 AM, Alexander Harrowell <a.harrowell@gmail.com> wrote:
I still think the industry needs to standardise water cooling to popularise it; if there were two water ports on all the pizzaboxes next to the RJ45s, and a standard set of flexible pipes, how many people would start using it? There's probably a medical, automotive or aerospace standard out there.
Once upon a time, Dorn Hetzel <dhetzel@gmail.com> said:
Of course, my chemistry is a little rusty, so I'm not sure about the prospects for a non-toxic, non-flammable, non-conductive substance with workable fluid flow and heat transfer properties :)
Fluorinert - it worked (more or less) for the Cray Triton. -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
I think the modern equivalent is HFE, manufactured by 3M; HFE-7100 is commonly used in the ATE industry for liquid cooling of test heads. It is designed for very low temperatures (-135°C to 61°C) so it might not be suitable for general datacenter use. HFE-7500 looks like a better fit (-100°C to 130°C).
On 25 Mar 2008, at 09:11 , Dorn Hetzel wrote:
It would sure be nice if along with choosing to order servers with DC or AC power inputs one could choose air or water cooling.
Or perhaps some non-conductive working fluid instead of water. That might not carry quite as much heat as water, but it would surely carry more than air and if chosen correctly would have more benign results when the inevitable leaks and spills occur.
The conductivity of (ion-carrying) water seems like a sensible thing to worry about. The other thing is its boiling point. I presume that the fact that nobody ever brings that up means it's a non-issue, but it'd be good to understand why. Seems to me that any large-scale system designed to distribute water for cooling has the potential for hot spots to appear, and that any hot spot that approaches 100C is going to cause some interesting problems. Wouldn't some light mineral oil be a better option than water? Joe
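A quick sketch comparing the flow needed to move the same load with the fluids being discussed here - air, water and mineral oil. The property values are round textbook approximations.

```python
# Volumetric flow needed to move 10 kW with a 10 C temperature rise.
# Property values are rounded textbook figures (approximate).

FLUIDS = {
    #               density kg/m^3, specific heat J/(kg K)
    "air":          (1.2,   1005.0),
    "water":        (998.0, 4180.0),
    "mineral oil":  (850.0, 1900.0),
}

LOAD_WATTS = 10_000
DELTA_T_C = 10.0

for name, (rho, cp) in FLUIDS.items():
    m3_per_s = LOAD_WATTS / (rho * cp * DELTA_T_C)
    litres_per_min = m3_per_s * 60_000
    print(f"{name:>11}: {litres_per_min:>10,.1f} L/min")
```

Water's volumetric heat capacity is why a pair of small hoses can do what otherwise takes tens of thousands of litres of air per minute; oil sits in between, which is part of its appeal as an electrically benign compromise.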
Dorn Hetzel wrote:
Of course, my chemistry is a little rusty, so I'm not sure about the prospects for a non-toxic, non-flammable, non-conductive substance with workable fluid flow and heat transfer properties :)
Mineral oil? I'm not sure about the non-flammable part though. Not all oils burn but I'm not sure if mineral oil is one of them. It is used for immersion cooling though. Justin
While it has the potential to catch fire - it does, however, work fine in my car engine.
Question: what worries you more, fire or leaks? On Tue, Mar 25, 2008 at 3:06 PM, Ben Butler <ben.butler@c2internet.net> wrote:
While it has the potential to catch fire - it does however work fine in my car engine.
Dorn Hetzel wrote:
Of course, my chemistry is a little rusty, so I'm not sure about the prospects for a non-toxic, non-flammable, non-conductive substance with workable fluid flow and heat transfer properties :)
For some of us over-the-edge PC enthusiasts, we use a non-conductive heat transfer fluid for 'water-cooling' our over-clocked CPUs: http://www.dangerden.com/store/home.php?cat=63. If you buy it in gallons, I'm sure you'll get a better price. If I recall, conductivity is somewhat less than water's, but still good enough to do the job. I have a solution in place that has been running continuously for two or three years now. By distributing the heat to larger, slower fans, my room is quieter. If I was really inclined, I could put the fans out my window and have practically dead silence, if it wasn't for the power supply fan. Another poster mentioned dripless quick-disconnects. They do exist.
Well, seeing as most pad-mounted transformers use mineral oil as a heat transfer agent (in applications up to and exceeding 230kV), I don't suspect it is an issue. However, we've all seen nice transformer fires.
Russia (or the USSR at that time) used to use liquid graphite to cool their nuclear reactors, even though it was flammable.... of course that was what they were using in Chernobyl.

-- Brian Raaen
Network Engineer
braaen@zcorum.com
On Mar 25, 2008, at 11:15 AM, Brian Raaen wrote:
Russia (or the USSR at that time) used to use liquid graphite to cool their nuclear reactors, even thought it was flammable.... of course that was what they were using in Chernobyl.
The RBMK-1000 used graphite for moderation and water for cooling. Regards Marshall
Brian Raaen wrote:
Russia (or the USSR at that time) used to use liquid graphite to cool their nuclear reactors, even thought it was flammable.... of course that was what they were using in Chernobyl.
This has diverged far enough that it's now off the topic of cooling. The melting point of carbon, however, is about 3800K... you can get it to ignite in graphite form at roughly half that.
Joel Jaeggli wrote:
Brian Raaen wrote:
Russia (or the USSR at that time) used to use liquid graphite to cool their nuclear reactors, even thought it was flammable.... of course that was what they were using in Chernobyl.
This has diverged far enough that it's now off the topic of cooling. The melting point of carbon however is 3800k...
you can get it to ignite in graphite form at roughly half that.
The graphite was used as a moderator not as a coolant. -- Leigh
Mineral oil? I'm not sure about the non-flammable part though. Not all oils burn but I'm not sure if mineral oil is one of them. It is used for immersion cooling though.
It burns quite well .. http://video.aol.com/video-detail/transformer-explosion/1599831229 Cheers, Michael Holstein Cleveland State University
Or perhaps some non-conductive working fluid instead of water. That might not carry quite as much heat as water, but it would surely
carry more than air and if chosen correctly would have more benign results when the inevitable leaks and spills occur.
Less practical but more fun to contemplate would be data centers pressurized with a working gas that offers better heat transfer than oxygen/nitrogen and no oxidation potential. Airlocks and suits for the techs, but no fire worries ever. Heck, just close the room and inject liquid nitrogen under the raised floor to be scavenged overhead and re-compressed, chilled, liquefied and sent round again. Reserve cooling for power outages is just huge dewars full of liquid nitrogen :)
HCFC-123 is likely what would be used, which means that you would want to limit the amount of time that you spend inside the data center because, with the large number of connections in the facility, leaks will be inevitable and inhaling the gas causes liver damage. Essentially, you are saying that we should get rid of chillers and turn the entire data center into a giant chiller. Instead of being a building with rooms and equipment, the data center becomes a machine and humans only venture inside when the machine is shut down for maintenance.
Not so serious today,
Why not? If you take your pressurized liquid nitrogen scenario and turn it inside out, then it might well be workable and there would be no need for suits. For instance, imagine a cylinder containing the liquid nitro cooling (liquid air might be cheaper) with devices attached all around like the petals on a flower. Each device has heat exchangers for cooling the hottest parts (CPUs) and the heat exchangers are attached to the cooling cylinder. With continued increase in density of cores, this could be feasible. In essence it would be a kind of blade server with the cooling and backplane in a central cylinder. Added benefits might come from supercooling the backplane. Consider what is happening beyond the consumer dual and 8-core (PS3) machines. <http://www.tilera.com/products/boards.php> <http://www.sicortex.com/architecture_tour> --Michael Dillon
On Tue, 25 Mar 2008, Dorn Hetzel wrote:
A close second might be liquid cooled air tight cabinets with the air/water heat exchangers (redundant pair) at the bottom where leaks are less of an issue (drip tray, anyone? :) )...
Something like what you suggest has been around for a year or two now, though using liquid CO2 as the coolant. It doesn't require particularly tight cabs. http://www.troxaitcs.co.uk/aitcs/products/ Tony. -- f.anthony.n.finch <dot@dotat.at> http://dotat.at/ SHANNON ROCKALL: NORTHWESTERLY BACKING SOUTHERLY 5, INCREASING 6 TO GALE 8, PERHAPS SEVERE GALE 9 LATER. MODERATE OR ROUGH, OCCASIONALLY VERY ROUGH. SQUALLY SHOWERS THEN RAIN. MODERATE OR GOOD.
A close second might be liquid cooled air tight cabinets with the air/water heat exchangers (redundant pair) at the bottom where leaks are less of an issue (drip tray, anyone? :) )...
Something like what you suggest has been around for a year or two now, though using liquid CO2 as the coolant. It doesn't require particularly tight cabs.
Is anyone using these over here? This is a far more significant strategy than simply using an alternative to water to carry the heat from the cabinets. The game is PHASE CHANGE, but unlike our traditional, fairly complicated refrigeration systems with oil return issues and artificially high head pressures simply to have a 100 PSI MOPD to keep full flow through the TXV (even with low ambients outside), this is in its simplest form viewed as a PUMPED LIQUID heat pipe system, where there is no need for large pressure drops as the fluid goes around the loop. Your pump only has to cover piping losses and any elevation differences between the colo space and the central machinery. There is NO insulation at all.

The liquid being pumped out to finned coils on the back of each cabinet is at room temperature, and as it grabs heat from the cabinet exhaust air (which is very efficient because you have it HOT and not needlessly diluted with other room air) some of the liquid flashes to gas, and you have a slurry that can easily be engineered to handle any size load you care to put in the rack. The more heat you add, the more gas and the less liquid you get back, but as long as there is still some liquid, the fluid stream is still at the room temperature it was at before entering the coil. It is perfectly happy trying to cool an empty cabinet and does not overcool that area, and can carry as much overload as you are prepared to pay to have built in.

At the central equipment, the liquid goes to the bottom of the receiver ready for immediate pumping again, and the gas is condensed back to liquid on cold coils in this receiver (think of a large traditional shell-and-tube heat exchanger that also acts as a receiver and a slight subcooler for the liquid). The coils can be DX fed with any conventional refrigerant, or could be tied to the building's chilled water supply. Parallel tube bundles can provide redundant and isolated systems, and duplicating this whole system with alternate rows or even alternate cabinets fed from different systems lets you function even with a major failure. Read about their scenarios when a cooling door is open or even removed: the adjacent cabinets just get warmer entering air and can easily carry the load. Enough 55 degree ground water in some places might even let you work with a very big shell-and-tube condenser and NO conventional refrigeration system at all.

If you have every single cabinet packed full, having just two systems each needing full double+ capacity would not be as good as having 3 or 4 interleaved systems, but that is simply a design decision, and one that can be partially deferred. Pipe for 4 interleaved isolated systems, and then run the ODD ones into one condensing/pumping system and the EVEN ones into another. As cabinets fill, and as dollars become available for paranoia, add the other central units, flick a few normally padlocked preprovisioned valves, and you are done. The valves stay for various backup strategies. You can accidentally leak some CO2 from one system to another and then sneak it back. There are NO parallel compressor oil return issues, just a large range between the minimum and maximum acceptable charges of CO2.

The big problem is that CO2 at room temperature is about 1000 PSI, so all this is welded stainless steel and flexible metal hoses. There need not be enough CO2 in any one system to present any suffocation hazard, but you DO want to be totally aware of that in the design.
Unlike regular refrigerants, liquid CO2 is just dirt cheap, and you simply vent it when changing a finned rear door - each has its own valves at the cabinet-top main pipes. You just go slowly so you don't cover everything or anyone with dry ice chips. Here is another site hawking those same Trox systems: http://www.modbs.co.uk/news/fullstory.php/aid/1735/The_next_generation_of_co... Over in Europe they are talking of a demo being easily done if you already have chilled water the demo could use.

A recent trade mag had a small pumped heat-pipe-like R134a system for INSIDE electronic systems - a miniature version of these big CO2 systems. Heat-producing devices could be directly mounted to the evaporator rather than use air cooling fins or a water-based system, and the condenser could be above, or below, or wherever you need to put it, and could function in arbitrary positions in the field. And no heat pipe wicks needed. The fully hermetic pump has a 50K hour MTBF and is in this case pumping R134a. The pump looked like one of those spun-copper totally sealed inline dryers. I suspect it was for high-end computing and military gear, and not home PCs, but it clearly could move a lot of heat from very dense spaces. Parker is so sprawling, I can't now seem to readily find the division that makes that pump and that wants to design and build the whole subsystem for you, but I bet we will be seeing a lot of these as power density goes up.

And on a totally separate rack power topic (code issues aside - that can be changed given need and time), I would LOVE to see some of these 100 - 250V universal switching supplies instead made to run 100 - 300V so they could be run off 277V. It is silly to take 277/480 through wastefully heat-producing delta-wyes just to get 120/208 to feed monster power supplies that really should be fed with what most buildings already have plenty of. The codes could easily recognize high-density controlled-access data centers as an environment where 277V would be OK to use for situations where it isn't OK now. Small devices sharing the same cabinets should all be allowed to also use 277V.

We used to have cabinets of 3-phase 208Y-fed Nortel 200-amp -48VDC rectifiers for our CO batteries, with as many 125KVA delta-wye transformers in front of them as needed for any particular site. These rectifiers are about 98+% efficient (better than the transformers...), but as soon as these rectifiers became available in 480V, we switched and retrofitted everywhere except very small sites. It is just silly not to be using 480 single-phase, or better yet 3-phase, for very large devices, and 277V for smaller but still large devices, where the single-pole breaker gives more circuits per cabinet and 277V being near previous upper voltage limits may mean simple supply changes. And yes, I know the delta-wye gives a lot of transient isolation, and a handy "newly derived neutral" to make a really good single-point grounding system feasible and very local, but 277/480 deserves a better chance.
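To put rough numbers on the pumped-CO2 loop described above, here is a back-of-envelope sketch in Python. The latent heat and liquid density figures are assumptions (roughly right for CO2 near 20°C, but the latent heat falls toward zero at the 31°C critical point), so the output is illustrative only.

```python
# Rough sizing sketch for a pumped liquid-CO2 (two-phase) cooling loop.
# Assumptions (not from the thread): latent heat of vaporization of CO2
# near ~20 C taken as ~150 kJ/kg, and saturated liquid density ~0.77 kg/L.
# The real values depend strongly on temperature, so treat this as a sketch.

LATENT_HEAT_KJ_PER_KG = 150.0   # assumed
LIQUID_DENSITY_KG_PER_L = 0.77  # assumed

def co2_flow_for_load(rack_kw: float, exit_vapor_fraction: float = 0.8) -> dict:
    """Estimate CO2 circulation needed to absorb rack_kw of heat.

    exit_vapor_fraction is the fraction of the pumped liquid allowed to
    flash to gas in the rear-door coil (keeping some liquid guarantees the
    return stream stays at saturation temperature).
    """
    # kW = kJ/s, so mass flow (kg/s) = heat load / (latent heat * vapor fraction)
    mass_flow_kg_s = rack_kw / (LATENT_HEAT_KJ_PER_KG * exit_vapor_fraction)
    volume_flow_l_min = mass_flow_kg_s / LIQUID_DENSITY_KG_PER_L * 60.0
    return {"kg_per_s": mass_flow_kg_s, "litres_per_min": volume_flow_l_min}

if __name__ == "__main__":
    for kw in (6, 10, 20, 30):
        f = co2_flow_for_load(kw)
        print(f"{kw:>2} kW rack: {f['kg_per_s']:.3f} kg/s "
              f"(~{f['litres_per_min']:.1f} L/min of liquid CO2)")
```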
There are vendors working on this, but the point here is that unlike the medical, automotive or aerospace industries, computing (in general) platforms aren't regulated the same way... you won't see random gear hanging off the inside of an MRI (in general), or in an airplane, etc. Computer vendors make lots of random sizes and depths of boxes. Want to get really ambitious? Let's find a set of rails that works with all rackmountable equipment and cabinets before we get crazy with the water cooling. The point is that water has lots of issues, water quality being one of them. It's fine to "toy" with water cooling a home clocked-up PC. When you have experience water cooling mainframes or using large chiller plants (1000+ tons) for years on end, there is a lot of discipline required to "do it right" -- a discipline that many shops and operators haven't needed up to this point. Deepak Jain AiNET Alexander Harrowell wrote:
I still think the industry needs to standardise water cooling to popularise it; if there were two water ports on all the pizzaboxes next to the RJ45s, and a standard set of flexible pipes, how many people would start using it? There's probably a medical, automotive or aerospace standard out there.
adrian@creative.net.au (Adrian Chadd) writes:
This thread begs a question - how much do you think it'd be worth to do things more efficiently?
this is a strict business decision involving sustainability and TCO. if it takes one watt of mechanical to transfer heat away from every watt delivered, whereas ambient air with good-enough filtration will let one watt of roof fan transfer the heat away from five delivered watts, then it's a no-brainer. but as i said at the outset, i am vexed at the moment by the filtration costs. -- Paul Vixie
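The 1:1 mechanical versus 1:5 fan-only ratio above turns into an annual cost comparison easily enough. Here is a sketch with made-up inputs: the IT load and the electricity price are assumptions, not figures from the thread.

```python
# Compare annual energy cost of mechanical cooling (1 W of cooling per 1 W
# of IT load) vs. filtered ambient air (1 W of fan per 5 W of IT load).
# The facility load and price per kWh are hypothetical.

IT_LOAD_KW = 200.0        # assumed facility IT load
PRICE_PER_KWH = 0.10      # assumed electricity price, USD
HOURS_PER_YEAR = 8760

def annual_cost(it_kw: float, cooling_watts_per_it_watt: float) -> float:
    total_kw = it_kw * (1.0 + cooling_watts_per_it_watt)
    return total_kw * HOURS_PER_YEAR * PRICE_PER_KWH

mechanical = annual_cost(IT_LOAD_KW, 1.0)      # 1 W of cooling per IT watt
ambient    = annual_cost(IT_LOAD_KW, 1.0 / 5)  # 1 W of fan per 5 IT watts

print(f"mechanical cooling: ${mechanical:,.0f}/yr")
print(f"ambient + fans:     ${ambient:,.0f}/yr")
print(f"difference:         ${mechanical - ambient:,.0f}/yr "
      "(budget available for filtration before it stops paying off)")
```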
i am vexed at the moment by the filtration costs.
What is it that is clogging your filters? Dust? Pollen? Small animals?? We're in a similar situation to you, though even better as we're blessed by even cooler ambients and never see 100°F, or even close to it. So we're using make-up air 12 months of the year and really only go fully mechanical in cooling on summer afternoons. We change our filters monthly most of the year, and in spring when the pollen count skyrockets we change from box-pleat to bag filters and change them every week or so. Dust is never an issue up here... it rains way too much. --chuck @ d.f in seattle
Paul Vixie wrote:
this is a strict business decision involving sustainability and TCO. if it takes one watt of mechanical to transfer heat away from every watt delivered, whereas ambient air with good-enough filtration will let one watt of roof fan transfer the heat away from five delivered watts, then it's a no-brainer. but as i said at the outset, i am vexed at the moment by the filtration costs.
Have you made any calculations if geo-cooling makes sense in your region to fill in the hottest summer months or is drilling just too expensive for the return? Pete
On Tue, Mar 25, 2008 at 5:00 PM, Paul Vixie <paul@vix.com> wrote:
Have you made any calculations if geo-cooling makes sense in your region to fill in the hottest summer months or is drilling just too expensive for the return?
i'm too close to san francisco bay.
Paul, Why is that bad? I thought ground-source HVAC systems worked better if the ground was saturated with water. Better thermal conductivity than dry soil. My problem finding someone to install a ground-source system was that everyone for miles is on city water. You have to be able to drill a hole in the ground and the folks familiar with well-drilling equipment are three hours away. Regards, Bill Herrin -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
i'm too close to san francisco bay.
Why is that bad? I thought ground-source HVAC systems worked better if the ground was saturated with water. Better thermal conductivity than dry soil.
aside from the corrosive nature of the salt and other minerals, there is an unbelievable maze of permits from various layers of government since there's a protected marshland as well as habitat restoration within a few miles. i think it's safe to say that Sun Quentin could not be built under current rules.
My problem finding someone to install a ground-source system was that everyone for miles is on city water. You have to be able to drill a hole in the ground and the folks familiar with well-drilling equipment are three hours away.
i could drill in the warehouse, i suppose, and truck the slurry out by night.
Paul Vixie wrote:
aside from the corrosive nature of the salt and other minerals, there is an unbelievable maze of permits from various layers of government since there's a protected marshland as well as habitat restoration within a few miles. i think it's safe to say that Sun Quentin could not be built under current rules.
The ones I have are MDPE (Medium Density Polyethylene) and I haven't understood that the plastic would have corrosive features. Obviously it can come down to regulation depending on what you use as a cooling agent but water is very effective if there is no fear of freezing (I use ethanol for that reason). The whole system is closed circuit, I'm not pumping water out of the ground but circulating the ethanol in the vertical ground piping of approximately 360 meters. The amount of slurry that came out of the hole was in order of 5-6 cubic meters. Cannot remember exactly what the individual parts cost but the total investment was less than $10k. (drilling, piping, circulation, air chiller, fluids, etc.) for a system with somewhat over 4kW of cooling capacity. (I'm limited by the airflow, not by the ground hole if the calculations prove correct) Pete
I believe some of the calculations for hole/trench sizing per ton used for geothermal exchange heating/cooling applications rely on the seasonal nature of heating/cooling. I have heard that if you either heat or cool on a continuous permanent basis, year-round, then you need to allow for more hole or trench since the cold/heat doesn't have an off-season to equalize from the surrounding earth. I don't have hard facts on hand, but it might be a factor worth verifying. On Wed, Mar 26, 2008 at 2:23 AM, Petri Helenius <petri@helenius.fi> wrote:
Paul Vixie wrote:
aside from the corrosive nature of the salt and other minerals, there is an unbelievable maze of permits from various layers of government since there's a protected marshland as well as habitat restoration within a few miles. i think it's safe to say that Sun Quentin could not be built under current rules.
The ones I have are MDPE (Medium Density Polyethylene) and I haven't understood that the plastic would have corrosive features. Obviously it can come down to regulation depending on what you use as a cooling agent but water is very effective if there is no fear of freezing (I use ethanol for that reason). The whole system is closed circuit, I'm not pumping water out of the ground but circulating the ethanol in the vertical ground piping of approximately 360 meters. The amount of slurry that came out of the hole was in order of 5-6 cubic meters. Cannot remember exactly what the individual parts cost but the total investment was less than $10k. (drilling, piping, circulation, air chiller, fluids, etc.) for a system with somewhat over 4kW of cooling capacity. (I'm limited by the airflow, not by the ground hole if the calculations prove correct)
Pete
Dorn Hetzel wrote:
I believe some of the calculations for hole/trench sizing per ton used for geothermal exchange heating/cooling applications rely on the seasonal nature of heating/cooling.
I have heard that if you either heat or cool on a continuous permanent basis, year-round, then you need to allow for more hole or trench since the cold/heat doesn't have an off-season to equalize from the surrounding earth.
I don't have hard facts on hand, but it might be a factor worth verifying.
That is definitely a factor. I do know that you can run such systems 24/7 for multiple months, but whether the number is 3, 6 or 8 with the regular sizing I don't know. Obviously it also depends on the target temperature for incoming air: if you shoot for 12-13°C the warming of the hole cannot be more than a few degrees, but for 17-20°C one would have double the margin to play with. It's also (depending on your kWh cost) economically feasible to combine geothermal pre-cooling with "traditional" chillers to take the outside air first from 40°C to 25°C and then chill it further more expensively. This also works the other way around for us in the colder climates where you actually need to heat up the inbound air. That way you'll also accelerate the cooling of the hole.
I'm sure somebody on the list has the necessary math to work out how many joules one can push into a hole for one degree temperature rise. Pete
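Since the joules-per-degree question was raised: a very crude lumped sketch of the heat capacity of the rock around a borehole. The rock properties and the "thermally active" radius are assumptions; real borehole sizing uses conductivity and time-dependent models rather than a single heat-capacity number.

```python
import math

# Crude lumped estimate of the heat capacity of rock surrounding a borehole.
# Assumptions (not from the thread): granite-like rock, density ~2700 kg/m3,
# specific heat ~800 J/(kg*K), and a 5 m "thermally active" radius.

ROCK_DENSITY = 2700.0       # kg/m3, assumed
ROCK_SPECIFIC_HEAT = 800.0  # J/(kg*K), assumed
ACTIVE_RADIUS_M = 5.0       # assumed radius of rock that warms appreciably

def joules_per_kelvin(depth_m: float) -> float:
    """Heat needed to raise the surrounding rock cylinder by 1 K."""
    volume = math.pi * ACTIVE_RADIUS_M ** 2 * depth_m   # m3
    mass = volume * ROCK_DENSITY                        # kg
    return mass * ROCK_SPECIFIC_HEAT                    # J/K

if __name__ == "__main__":
    depth = 180.0   # metres of borehole (e.g. 360 m of pipe, down and back)
    j_per_k = joules_per_kelvin(depth)
    load_w = 4000.0  # continuous rejected heat, roughly the 4 kW system above
    seconds_per_degree = j_per_k / load_w
    print(f"{j_per_k / 1e9:.1f} GJ per kelvin of rock temperature rise")
    print(f"at {load_w / 1000:.0f} kW continuous, that is one degree roughly every "
          f"{seconds_per_degree / 86400:.0f} days (ignoring conduction away)")
```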
Paul,

Use a multi-stage filter system, with large particle filters in front and an ionizing stage to remove the smaller particles that are still big enough to deposit dust. Clean-room filters would be overkill.

John (ISDN) Lee

________________________________ From: owner-nanog@merit.edu on behalf of Paul Vixie Sent: Tue 3/25/2008 2:17 AM To: nanog@merit.edu Subject: Re: rack power question

this has been, to me, one of the most fascinating nanog threads in years. at the moment my own datacenter problem is filtration. isc lives in a place where outside air is quite cool enough for server inlet seven or more months out of the year. we've also got quite high ceilings. a 2HP roof fan will move 10000 cubic feet per minute. we've got enough make-up air for that. but, the filters on the make-up air have to be cleaned several times a week, and at the moment that's a manual operation. mechanical systems, by comparison, only push 20% make-up air, and the filters seem to last a month or more between maintenance events. i'm stuck with the same question that vexes the U S Army when they send the M1A1 into sandstorms, or that caused a lot of shutdowns in NYC in the days after 9/11: what kind of automation can i deploy that will precipitate the particulates so that air can move (for cooling) and so that air won't bring grit (which is conductive)? -- Paul Vixie
On Monday 24 March 2008, Deepak Jain wrote:
While I enjoy hand waving as much as the next guy... reading over this thread, there are several definitions of sq ft (ft^2) here and folks are interchanging their uses whether aware of it or not. [snip] A 30KW cabinet, while it sounds lovely, means a huge amount of space is going to be turned over to most or all of a dedicated PCU and 1/15th of the infrastructure of a 500KVA UPS (@0.9PF), including batteries, transformers, etc. [snip] Even ignoring heat rejection, the battery + UPS gear for 500KVA (even with minimal battery times) is approximately the same size (physically) as the 12 cabinets or so it takes to reach that capacity. [same applies for flywheel/kinetic systems]
This is certainly a fascinating thread. One thing I haven't seen discussed, though, is the other big issue with high-density equipment, and that is weight.

Those raised floors have a weight limit. In our case, our floors, built out in the early 90's, have a 1,500 lb per-square-inch point load rating, and a 7,000 pound per pedestal max weight. The static load rating of 300 pounds per square foot on top of the point load rating doesn't sound too great, but it's ok; we just have to be careful. Our floors are concrete-in-steel, on 24 inch pedestals, with stringers.

In contrast, a 42U rack loaded with 75 pound 1U servers is going to weigh upwards of 3,150 pounds (if you figure 300 pounds for the rack and the PDUs in the rack, make that 3,450 pounds). When we get to heavier-than-75-pound 1U servers, things are going to get dicey. Also in contrast, a fully loaded EMC CX700 is about 2,000 pounds.

It sounds more and more like simply charging for rack-occupied square footage is an unsustainable business model. The four actual billables are power, cooling (which could be considered power), bandwidth, and weight.

When we see systems as dense as a Cray 2, but with modern ICs, we'll be treated to Fluorinert waterfalls again. :-)

-- Lamar Owen Chief Information Officer Pisgah Astronomical Research Institute 1 PARI Drive Rosman, NC 28772 (828)862-5554 www.pari.edu
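A quick sanity check of the weight figures above, as a sketch; the floor area attributed to each rack is an assumption used only to convert pounds into pounds per square foot.

```python
# Rack weight vs. raised-floor static load, per the figures above.
# Assumed footprint: roughly 8 sq ft of floor attributable to each rack,
# including clearance -- an assumption, adjust to your own layout.

SERVER_WEIGHT_LB = 75            # heavy 1U server, from the thread
SERVERS_PER_RACK = 42
RACK_AND_PDU_LB = 300            # empty rack plus PDUs, from the thread
FOOTPRINT_SQFT = 8.0             # assumed attributable floor area
FLOOR_RATING_LB_PER_SQFT = 300   # static load rating quoted above

total_lb = SERVER_WEIGHT_LB * SERVERS_PER_RACK + RACK_AND_PDU_LB
psf = total_lb / FOOTPRINT_SQFT

print(f"loaded rack: {total_lb} lb")
print(f"over {FOOTPRINT_SQFT} sq ft: {psf:.0f} lb/sq ft "
      f"(floor rated {FLOOR_RATING_LB_PER_SQFT} lb/sq ft static)")
```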
At 10:15 AM 3/26/2008, Lamar Owen wrote:
One thing I haven't seen discussed, though, is the other big issue with high-density equipment, and that is weight.
Those raised floors have a weight limit. In our case, our floors, built out in the early 90's, have a 1500 lb per square inch point load rating, and 7,000 pound per pedestal max weight. The static load rating of 300 pounds per square foot on top of the point load rating doesn't sound too great, but it's ok; we just have to be careful. Our floors are concrete-in-steel, on 24 inch pedestals, with stringers.
I don't know about others, but we don't use raised floors. If you look at the airflow required and how high your raised floor actually has to be (5-6 ft) in our case, it simply doesn't make sense. We use doors at the ends of aisles, blanking panels, and a lexan cover over all aisles. We sequester all air and force the air to flow through the equipment. This typically cuts energy used for cooling roughly by 30-45% We have seen dual 20 ton Lieberts used for a double row (typically 20-22 racks per row) actually cycle on and off once air is no longer allowed to mix. We typically will also use two Challenger 3000 5 ton units in the middle of the row for a total of 50 tons of cooling and about 150KW of electrical use for 35-40 cabinets. That is a mix of some cabinets with fewer servers and some with high density 10 slot dual quad core blade chassis units. We also like to build our datacenters on 8-12" slabs at or slightly above ground level so we don't really need to worry about weight loads either. Not possible if you are on the 20th floor of headquarters, but something to consider when talking about greenfield datacenter development. -Robert Tellurian Networks - Global Hosting Solutions Since 1995 http://www.tellurian.com | 888-TELLURIAN | 973-300-9211 "Well done is better than well said." - Benjamin Franklin
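As a rough cross-check of the 50-ton / 150 kW figures above: one ton of refrigeration removes about 3.517 kW of heat, so the headroom works out as follows (a sketch that ignores fan, lighting and sequestration losses).

```python
# Cross-check cooling capacity (tons) against electrical load (kW).
# 1 ton of refrigeration = 12,000 BTU/h = ~3.517 kW of heat removal.

KW_PER_TON = 3.517

def cooling_headroom(tons: float, electrical_kw: float) -> float:
    """Cooling margin as a fraction of the electrical load.

    Assumes essentially all electrical load ends up as heat in the room,
    and ignores fan, lighting and duct/sequestration losses.
    """
    capacity_kw = tons * KW_PER_TON
    return (capacity_kw - electrical_kw) / electrical_kw

if __name__ == "__main__":
    tons, load_kw = 50, 150   # figures from the post above
    margin = cooling_headroom(tons, load_kw)
    print(f"{tons} tons = {tons * KW_PER_TON:.0f} kW of cooling "
          f"vs {load_kw} kW of load -> {margin:+.0%} headroom")
```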
On Sat, 22 Mar 2008, Patrick Giagnocavo wrote:
Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?
As you recognize, it's not an engineering question; it's an economic question. Notice how Google's space/power philosophy changed between leveraging other people's space/power, and now that they own their own space/power. Existing equipment could exceed 20kW in a rack, and some folks are planning for equipment exceeding 30kW in a rack. But things get more interesting when you look at the total economics of a data center. 8kW/rack is the new "average," but that includes a lot of assumptions. If someone else is paying, I want it and more. If I'm paying for it, I discover I can get by with less.
On Sat, 22 Mar 2008, Patrick Giagnocavo wrote:
Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?
As you recognize, it's not an engineering question; it's an economic question. Notice how Google's space/power philosophy changed between leveraging other people's space/power, and now that they own their own space/power.
Existing equipment could exceed 20kW in a rack, and some folks are planning for equipment exceeding 30kW in a rack.
But things get more interesting when you look at the total economics of a data center. 8kW/rack is the new "average," but that includes a lot of assumptions. If someone else is paying, I want it and more. If I'm paying for it, I discover I can get by with less.
That may not be the correct way to look at it. There's a very reasonable argument to be made that the artificial economic models used by colocation providers have created this monster to begin with. The primary motivation for many customers to put more stuff in a single rack is that the cost for a rack subsidizes at least a portion of the power and cooling costs. A single rack with two 20A circuits typically costs less than two racks with a 20A circuit each. To some extent, this makes sense. However, it often costs *much* less for the single rack with two 20A circuits.

Charging substantially less for rack space, even offset by higher costs for power, would encourage a lot of colo customers to "spread the load" around and not feel as obligated to maximize the use of space. That would in turn reduce the tendency for there to be excessive numbers of hot spots. The economic question of how to build your pricing model ultimately becomes an engineering question, because it becomes progressively more difficult to provide power and cooling as density increases.

Or, to quote you, in an entirely different context:
If I'm paying for it, I discover I can get by with less.
The problem is that this is currently true for values of "it" where "it" equals "racks." ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
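The pricing asymmetry described above is easy to illustrate with hypothetical list prices; the rack and circuit prices below are made up, not quotes from any provider.

```python
# Illustration of the pricing asymmetry described above: hypothetical list
# prices where rack space is expensive relative to power, nudging customers
# toward dense, hot racks instead of spreading the load.

RACK_PRICE = 800.0      # assumed monthly price per rack
CIRCUIT_PRICE = 250.0   # assumed monthly price per 20A/120V circuit

def monthly_cost(racks: int, circuits: int) -> float:
    return racks * RACK_PRICE + circuits * CIRCUIT_PRICE

dense  = monthly_cost(racks=1, circuits=2)   # one hot rack, two circuits
spread = monthly_cost(racks=2, circuits=2)   # same load spread over two racks

print(f"one rack, two circuits: ${dense:,.0f}/mo")
print(f"two racks, one each:    ${spread:,.0f}/mo")
print(f"penalty for spreading the load: ${spread - dense:,.0f}/mo")
```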
On Sat, 22 Mar 2008, Joe Greco wrote:
Charging substantially less for rack space, even offset by higher costs for power, would encourage a lot of colo customers to "spread the load" around and not feel as obligated to maximize the use of space. That would in turn reduce the tendency for there to be excessive numbers of hot spots.
I wonder if we're to the point yet where we should just charge for power and give the space away "free".... When I'm shopping for colo that's pretty much the way I look at it. Power determines space. I need 80,000W of power at the breaker, so I need 800 sq ft x $15 in facility A, or 320 sq ft @ $40 in facility B. I can fit my 8 racks into either the 320 sq ft or into the 800. If I'm doing the 800, I'll probably spend a bit more up front and use 12 or 14 racks, to keep my density down. A bit more cost up front, but in the grand scheme of things 4 or 6 extra racks ($6,000 to $10,000) don't directly hurt too much. (80kW worth of power usually means you've got well north of $2M worth of hardware and software being stuffed into the space in my experience.. but maybe that's because we're an Oracle shop. ;) Of course, I suppose for those customers still doing super-low-density boxes (webhosting with lots and lots of desktops), that model wouldn't work as well. ramble. .d --- david raistrick http://www.netmeister.org/news/learn2quote.html drais@icantclick.org http://www.expita.com/nomime.html
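The 80 kW example above in runnable form, with the per-rack split left adjustable; the dollar figures are the ones given in the post, not market data.

```python
# "Power determines space": the comparison above of two facilities priced
# by square footage, for the same 80 kW of breaker capacity.

def monthly_space_cost(sqft: float, price_per_sqft: float) -> float:
    return sqft * price_per_sqft

facility_a = monthly_space_cost(800, 15)   # 800 sq ft at $15/sq ft
facility_b = monthly_space_cost(320, 40)   # 320 sq ft at $40/sq ft

print(f"facility A (800 sq ft @ $15): ${facility_a:,.0f}/mo")
print(f"facility B (320 sq ft @ $40): ${facility_b:,.0f}/mo")

# Spreading 80 kW over more racks in facility A lowers density per rack:
for racks in (8, 12, 14):
    print(f"  {racks} racks -> {80_000 / racks / 1000:.1f} kW per rack")
```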
Basically, that is the state of things. You're paying for power (which is also cooling) and bandwidth/connectivity. Globix is only letting us run one 30A 240V circuit per rack for "cooling reasons", however even with our 6850s we manage to populate a good portion of the rack. However, this means that for redundancy we cannot put anyone's partner in crime in the same cabinet. Though Globix is the typical cold-in-front, hot-in-back setup, they still seem to be under-capacity when it comes to cooling... I think the end of the dot-com boom put a dent in their Liebert budget. It amazes me places like Hurricane can't get enough space when there are once-decent shops like Globix with so much unused space. Plan your power requirements carefully, and plan on the ability to upgrade in the future. With the current trend of high-capacity blades, it seems that it would not be impossible to find 20-25kW per cabinet not long from now. Power distribution (if done right) is easy; cooling that density is the fun part. What's your cooling plan? -Patrick
On Sun, 23 Mar 2008, david raistrick wrote:
I wonder if we're to the point yet where we should just charge for power and give the space away "free"....
There's at least one small example of someone doing that already. http://www.jump.net.uk/colo.html Tony. -- f.anthony.n.finch <dot@dotat.at> http://dotat.at/ BAILEY: CYCLONIC BECOMING SOUTHEASTERLY 5 TO 7, OCCASIONALLY GALE 8, PERHAPS SEVERE GALE 9 LATER. MODERATE OR ROUGH, OCCASIONALLY VERY ROUGH. RAIN OR SHOWERS. MODERATE OR GOOD.
PG> Date: Sat, 22 Mar 2008 22:02:49 -0400 PG> From: Patrick Giagnocavo PG> Hopefully this classifies as on-topic... PG> PG> I am discussing with some investors the possible setup of new PG> datacenter space. You might also try the isp-colo.com list. PG> Are there cases where more than 6000W per rack would be needed? It depends how one differentiates between "want" and "need". PG> (We are not worried about cooling due to the special circumstances PG> of the space.) ixp.aq? ;-) PG> Would someone pay extra for > 7KW in a rack? They should. If they need more than 6kW, their alternative is to pay for a second rack, which hardly would be free. PG> What would be the maximum you could ever see yourself needing in PG> order to power all 42U ? 1. For colo, think 1U dual-core servers with 3-4 HDD; 2. For routers, Google: juniper t640 kw. HTH, Eddy -- Everquick Internet - http://www.everquick.net/ A division of Brotsman & Dreger, Inc. - http://www.brotsman.com/ Bandwidth, consulting, e-commerce, hosting, and network building Phone: +1 785 865 5885 Lawrence and [inter]national Phone: +1 316 794 8922 Wichita ________________________________________________________________________ DO NOT send mail to the following addresses: davidc@brics.com -*- jfconmaapaq@intc.net -*- sam@everquick.net Sending mail to spambait addresses is a great way to get blocked. Ditto for broken OOO autoresponders and foolish AV software backscatter.
On Sat, Mar 22, 2008 at 11:19 PM, Edward B. DREGER <eddy+public+spam@noc.everquick.net> wrote:
[ clip ]
PG> (We are not worried about cooling due to the special circumstances PG> of the space.)
ixp.aq? ;-)
I'm not worried about cooling either: http://www.businessweek.com/magazine/content/08_13/b4077060400752.htm?campai... -M<
At 10:02 PM -0400 3/22/08, Patrick Giagnocavo wrote:
Hopefully this classifies as on-topic...
I am discussing with some investors the possible setup of new datacenter space.
Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.
Right now we are in spreadsheet mode evaluating different scenarios.
Are there cases where more than 6000W per rack would be needed?
10 kW per rack (~400 watts/sq ft) is a common design point being used by the large-scale colocation/REIT players. It's quite possible to exceed that with blade servers or high-density storage (Hitachi, EMC, etc.), but it'd take unusual business models today to exceed it on every rack.
(We are not worried about cooling due to the special circumstances of the space.)
So, even presuming an abundance of cold air right outside the facility, you are still going to move the equipment-generated heat to chillers or cooling towers. It is quite likely that your HVAC plant could be your effective limit on the ability to add power drops.
Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?
Again, you can find single-rack, 30" deep storage arrays/controllers that will exceed 20KW, but the hope is that you've got a cabinet or two of less dense equipment surrounding them. The best thing to do is find someone in the particular market segment you're aiming for and ask them for some averages and trends, since it's going to vary widely depending on webhosting/enterprise data center/content behemoth. /John
This greatly depends on what you want to do with the space. If you're putting in co-lo space by the square-footage footprint then your requirements will be much less. If you expect a large percentage of it to be leased out to an enterprise then you should expect the customers to use every last U in a cabinet before leasing the next cabinet in the row, i.e. your power usage will be immense.

I did something similar about 2 years ago. We were moving a customer from one facility to another. We mapped out each cabinet including server models. I looked up maximum power consumption for each model including startup consumption. The heaviest-loaded cabinet specced out at 12,000W. The cabinet was full of old 1U servers. New 1U servers are the worst-case scenario by far. 12kW is rather low IMHO. Some industry analysts estimate that the power requirements for high-density applications scale as high as 40kW. http://www.servertechblog.com/pages/2007/01/cabinet_level_p.html

There are a few things to remember. Code only permits you to load a circuit to 80% of its maximum-rated capacity. The remaining 20% is the safety margin required by the NEC. Knowing this, the 12kW specified above requires 7x 20A 120V circuits or 5x 30A 120V circuits (a quick sketch of this arithmetic follows after the quoted message below). You can get 20A and 30A horizontal PDUs for both 120V and 240V. There are also 208V options. You can also get up to 40A vertical PDUs. One word of caution about the vertical PDUs: if your cabinets aren't deep enough in the rear (think J Lo), the power cabling will get in the way of the rails and other server cabling. There are others but they are less common. Also remember that many of the larger servers (such as the Dell 6850s or 6950s) are 240V and will require a pair of dedicated circuits (20A or 30A).

I would also recommend that you look into in-row power distribution cabinets like the Liebert FDC. This means shorter home-runs for the large number of circuits you'll be putting in (saving you a bundle in copper too). It also means less under-floor wiring to work around, making future changes much easier. Changes in distribution cabinets are also much easier, safer and less prone to accidents/mistakes than they are in distribution panels.

Grounding is a topic that is worthy of its own book. Consult an electrician used to working with data centers. Don't overlook this critical thing. Standby power sources fall into this topic as well. How many 3-phase generators are you going to need to keep your UPSs hot?

I'm curious what your cooling plans are. I would encourage you to consider geothermal cooling though. The efficiencies that geothermal brings to the table are worth your time to investigate.

Best of luck, Justin

Patrick Giagnocavo wrote:
Hopefully this classifies as on-topic...
I am discussing with some investors the possible setup of new datacenter space.
Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.
Right now we are in spreadsheet mode evaluating different scenarios.
Are there cases where more than 6000W per rack would be needed?
(We are not worried about cooling due to the special circumstances of the space.)
Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?
Cordially
Patrick Giagnocavo patrick@zill.net
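Here is the circuit-count arithmetic referred to above (the NEC 80% continuous-load derating), generalized to a few common circuit sizes; a sketch of the rule of thumb, not electrical advice.

```python
import math

# How many branch circuits does a given rack load need, applying the NEC
# 80% continuous-load limit per circuit?

def circuits_needed(load_watts: float, amps: float, volts: float,
                    derate: float = 0.80) -> int:
    usable_watts = amps * volts * derate
    return math.ceil(load_watts / usable_watts)

if __name__ == "__main__":
    load = 12_000   # the 12 kW cabinet from the post above
    for amps, volts in [(20, 120), (30, 120), (20, 208), (30, 208)]:
        n = circuits_needed(load, amps, volts)
        print(f"{load / 1000:.0f} kW on {amps}A/{volts}V circuits: {n} circuits "
              f"({amps * volts * 0.8:.0f} W usable each)")
```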
On Sunday 23 March 2008, Justin Shore wrote:
There are a few things to remember. Code only permits you to load a circuit to 80% of its maximum-rated capacity. The remaining 20% is the safety margin required by the NEC. Knowing this that means that the 12Kw specified above require 7x 20a 120v circuits or 5x 30a 120v circuits.
Cord connected loads can be 50A easily enough; something like a NEMA L21-60P can give you 18KW in one plug (after 80% derating); if you could use 277V the L22-60P is available to get you almost 40KW on one plug (again, after the 80% is factored in; it's almost 50KW at 100% rating). Hubbell makes 60 and 100A plugs and receptacles if 40KW isn't enough. PDU's for these are more scarce, but I'm sure Marway would build to suit. We have a few of the older Hubbell 50A twistloks here that were used for some sort of signal processing equipment back in the day.
Also remember that many of the larger servers (such as the Dell 6850s or 6950s) are 240v and will require a pair of dedicated circuits (20a or 30a).
The 6950 can run on 120VAC. That is one of the primary reasons we bought 6950's with Opterons instead of 6850's with Xeons; I only had 120VAC capable UPS's at the time. With router densities going way up, and heating going along with them, this facilities issue can even impact the network operator.
I would also recommend that you look into in-row power distribution cabinets like the Liebert FDC.
We have Liebert PPA's here. Two 125's and a 50.
Grounding is a topic that is worthy of its own book. Consult an electrician used to working with data centers. Don't overlook this critical thing.
Ground reference grid. See Cisco's 'Building the Best Data Center for your Business' book and/or Sun's Blueprint series datacenter book for more good information. Also be thoroughly familiar with NEC Article 645. While this discussion might seem out of the ordinary for a network operator's group, it is a very good discussion. Another good resource for datacenter/commcenter information is www.datacenterknowledge.com; at least I've found it to be. -- Lamar Owen Chief Information Officer Pisgah Astronomical Research Institute 1 PARI Drive Rosman, NC 28772 (828)862-5554 www.pari.edu
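The L21-60 / L22-60 figures mentioned earlier can be reproduced by applying the same 80% rule to a three-phase wye feed; a sketch, assuming 120/208V and 277/480V wye voltages for those connectors.

```python
# Capacity of a 3-phase wye cord-connected feed, before and after the 80%
# continuous-load derating (per-leg line-to-neutral voltage x 3 legs).

def three_phase_wye_watts(amps: float, line_to_neutral_volts: float,
                          derate: float = 1.0) -> float:
    return 3 * amps * line_to_neutral_volts * derate

for name, volts in [("L21-60 (120/208V)", 120), ("L22-60 (277/480V)", 277)]:
    full = three_phase_wye_watts(60, volts)
    usable = three_phase_wye_watts(60, volts, derate=0.80)
    print(f"{name}: {usable / 1000:.1f} kW usable ({full / 1000:.1f} kW at 100%)")
```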
What is the purpose of the datacenter: computing, datacom/telco, or both? AC or DC power feeds, or both? Backup power or naked? Dual feed from the power company with transfer switch, or power with generator backup? Are you dual-feeding the racks? Do you require NEBS-compliant racks to make it through a shake and bake (a seismic event)?

Many centers run DC 50-volt, multi-hundred-amp battery and inverter systems. For AC, higher-voltage three-phase is more efficient: 220 or 440 volt, 60 cycle. Power delivery can be at 10 - 12k volts with stepdown transformers in the facility. If the building feed is for the entire facility then you need home runs from the main power panels to your power backup and protection circuits. When I worked for a certain large fiber-backbone-based provider of circuits and colo, for each amount of rack space and racks you would get so much DC and AC power, and to get more you would pay extra. All of the colo power had multi-hour battery backup and a generator would kick in. They had a DC distribution plant with AC inverters.

Your largest issue may be grounding and the ground plan for the building and your datacenter in the building. If the building does not have a good ground plane, and most do not, you may have to retrofit new grounding pads by digging outside the building or through the floor. You need to measure the potential to determine if there are any ground loops in the building, i.e. you want the ground to be the same for all parts of the building. You need to put power and transient monitoring equipment on your power sources to verify no power spikes or large EMI are coming into the building. No carpet in the room, or you can have bad ESD blowing your equipment. Ground the cabinets and have grounding straps for anyone working on the equipment. Check power conditioning equipment for being able to handle brownout conditions as well as actual power outages. In certain areas of the country lightning protection may have to be enhanced for the building. Without adequate high-voltage and current shunts and filters, your equipment can be wiped out on a regular basis.

You want to locate the datacenter below ground but not in the basement. You want it in the interior of the building for better lightning and storm protection. You do not want it in a hundred-year flood plain, or you may need to seal it against incoming water. You do not want to locate it near or below building plumbing. You will want non-reversible drains that can drain water out but not back up and flood the facility. You do not want to locate it close to the elevators, building HVAC or other sources of large EM spikes. You may want to add EMI shielding for the room to reduce either EMI leaving or entering the room.

If the power requirements are large enough you will need to use a chiller system to adequately cool the room. This is putting water in your datacenter, which is also not a good idea. During power outages you need to continue to power the HVAC and building control or your facility can go down.

You will need to review structural plans for the building to see if the floors can handle the extra load or if the floors need to be reinforced. For security you want reinforced concrete walls and floors for the room. If the current floors and walls are inadequate you may need to build a room within the room. If the walls are standard concrete block and steel, you can run reinforcing rods and concrete into the blocks. You want steel doors with magnetic locks that can withstand sledge hammers and people driving into them.
Add video surveillance, biometric readers and other sensors for your security systems. Before you can do any construction of course you will need to get the appropriate city and county permits and permission from the building owners. If you engineer the facility correctly it can take significant investment for a 5, 10 or 15 year investment period. IMHO make sure you really want to do this. Good luck John (ISDN) Lee ________________________________ From: owner-nanog@merit.edu on behalf of Patrick Giagnocavo Sent: Sat 3/22/2008 10:02 PM To: nanog@nanog.org Subject: rack power question Hopefully this classifies as on-topic... I am discussing with some investors the possible setup of new datacenter space. Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack. Right now we are in spreadsheet mode evaluating different scenarios. Are there cases where more than 6000W per rack would be needed? (We are not worried about cooling due to the special circumstances of the space.) Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ? Cordially Patrick Giagnocavo patrick@zill.net
On Sat, 22 Mar 2008, Patrick Giagnocavo wrote:
I am discussing with some investors the possible setup of new datacenter space.
Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.
Right now we are in spreadsheet mode evaluating different scenarios.
Are there cases where more than 6000W per rack would be needed?
Is this just for servers, or could there be network gear in the racks as well? We normally deploy our 6509s with 6000W AC power supplies these days, and I do have some that can draw close to or over 3000W on a continuous basis. A fully populated 6513 with power-hungry blades could eat 6000W. It's been a while since I've tumbled the numbers, but I think a 42U rack full of 1U servers or blade servers could chew through 6000W and still be hungry. Are you also taking into account a worst-case situation, i.e. everything in the rack powering on at the same time, such as after a power outage?
(We are not worried about cooling due to the special circumstances of the space.)
Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?
I don't know what you mean by 'extra', but I'd imagine that if someone needs 7KW or more in a rack, then they'd be prepared to pay for the amount of juice they use. This also means deploying a metering/monitoring solution so you can track how much juice your colo customers use and bill them accordingly. Power consumption, both direct (by the equipment itself) and indirect (cooling required to dissipate the heat generated by said equipment) is a big issue in data center environments these days. Cooling might not be an issue in your setup, but it is a big headache for most large enterprise/data center operators. jms
On Sun, 23 Mar 2008, Justin M. Streiner wrote:
and I do have some that can draw close to or over 3000W on a continuous basis. A fully populated 6513 with power hungry blades could eat 6000W.
Easily. The HP blades I have right now are 14 servers in 10u, 6-7,000W. Breaker on it needs to be for over 10,000W. (30x208x1.73 for 30A 3 phase) With our 1u servers, we're able to get about 12 or so in a rack with a 20A 208V single phase (my exact budget numbers are behind a vpn I don't feel like firing up...;) plus a pair of switches. At 370W (peak), i'd need 15540W to power 42 of them, 27,972W at the breaker (I prefer 75 to 80% of breakered capacity vs the NEC's 85%). Works out to something like 4 30A 208v single phase circuits and 1 20A. So 29120W at the breaker. That's a lot of hot air. ;) ...david --- david raistrick http://www.netmeister.org/news/learn2quote.html drais@icantclick.org http://www.expita.com/nomime.html
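The per-rack arithmetic above, in runnable form: peak draw for a rack of 1U servers and the breakered capacity to provision against it. The single-circuit-size simplification is an assumption; the actual deployment described mixes 30A and 20A circuits and reserves room for switches.

```python
import math

# Rack power plan for 1U servers: peak draw vs. breakered capacity,
# following the style of calculation in the post above.

def rack_plan(servers: int, peak_w_each: float,
              circuit_amps: float = 30, circuit_volts: float = 208,
              max_loading: float = 0.80) -> dict:
    peak_load = servers * peak_w_each
    circuit_capacity = circuit_amps * circuit_volts       # single phase
    usable_per_circuit = circuit_capacity * max_loading
    circuits = math.ceil(peak_load / usable_per_circuit)
    return {
        "peak_load_w": peak_load,
        "circuits": circuits,
        "breakered_w": circuits * circuit_capacity,
    }

if __name__ == "__main__":
    plan = rack_plan(servers=42, peak_w_each=370)
    print(f"peak load: {plan['peak_load_w']:,} W")
    print(f"circuits:  {plan['circuits']} x 30A/208V")
    print(f"breakered: {plan['breakered_w']:,} W")
```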
In a message written on Sat, Mar 22, 2008 at 10:02:49PM -0400, Patrick Giagnocavo wrote:
Are there cases where more than 6000W per rack would be needed?
For a router/switch data points (this is NANOG, after all): http://www.cisco.com/en/US/prod/collateral/routers/ps5763/prod_brochure0900a... The CRS-1 in 16 slot or fabric chassis configuration takes a full rack and needs ~11,000W. 6509-E's take dual 6000W power supplies. They are 15U, and I have seen 3 of them stacked in a 48U cabinet (obviously doesn't work in a 42U rack). That's 18,000W draw in a single cabinet. I'm afraid 6000W is on the low end, by today's standards, and some of the new 1RU multi-system chassis or blade servers can make these numbers look puny. For instance: http://www.themis.com/prod/hardware/res-12dcx.htm Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W * 42 = 25,200. What do you expect your customers to bring? How long do you expect your data center to last? Not that long ago people were building 5000W/rack data centers; often those places today have large empty spaces, but are at their power and/or cooling limits. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Leo Bicknell wrote:
Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W * 42 = 25,200.
Supermicro has the "1U Twin" which is 980W for two dual-slot machines in 1U form factor; http://www.supermicro.com/products/system/1U/6015/SYS-6015TW-TB.cfm If you can accommodate that, it should be pretty safe for anything else. Pete
Leo Bicknell wrote:
Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W * 42 = 25,200.
Supermicro has the "1U Twin" which is 980W for two dual-slot machines in 1U form factor; http://www.supermicro.com/products/system/1U/6015/SYS-6015TW-TB.cfm
If you can accommodate that, it should be pretty safe for anything else.
My desktop has a 680 Watt power supply, but according to a meter I once connected, it is only running at 350 to 400 Watts. So if a server has a 980W power supply, does the rack power need to be designed to handle multiples of such a beast, even though the server may not come close (because it may not be fully loaded with drives or whatever)? Wouldn't it be better to do actual measurements to see what the real draw might be?
On Sun, 23 Mar 2008, Ray Burkholder wrote:
My desktop has a 680 Watt power supply, but according to a meter I once connected, it is only running at 350 to 400 Watts. So if a server has a 980W power supply, does the rack power need to be designed to handle multiples of such a beast, even though the server may not come close (because it may not be fully loaded with drives or whatever)? Wouldn't it be better to do actual measurements to see the real draw might be?
This depends on who's providing the power. If it's your power and your servers, you can "know" that your 980W supplies are really only using 600W, be happy, and plan accordingly if you upgrade later. If you're providing the power, but it's someone else's gear, you better have good communication when it comes to power requirements/utilization, because what happens when they install more drives/processors next month, and those systems that were using 600W suddenly are using 800W each? When providing/planning UPS power, if you sell a 120V 20A circuit, do you budget 120V 20A of UPS power for that customer, or 16A (80%), or even slightly more than 20A (figuring worst case, they're going to overload their circuit at some point) when deciding how full that UPS is? ---------------------------------------------------------------------- Jon Lewis | I route Senior Network Engineer | therefore you are Atlantic Net | _________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
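The UPS-budgeting question above is really a choice of how much capacity to reserve per sold circuit. A sketch of how three policies change the customer count on one UPS; the UPS size and the "measured typical draw" are made-up numbers.

```python
# How many 20A/120V colo customers "fit" on a UPS under different
# capacity-reservation policies? The UPS size is a hypothetical example.

UPS_CAPACITY_W = 80_000      # assumed usable UPS output
CIRCUIT_W = 20 * 120         # 20A at 120V = 2400 W breaker rating

policies = {
    "reserve full breaker (2400 W)":  CIRCUIT_W,
    "reserve NEC 80% (1920 W)":       CIRCUIT_W * 0.80,
    "reserve measured ~50% (1200 W)": CIRCUIT_W * 0.50,  # assumed typical draw
}

for name, reserved_w in policies.items():
    customers = int(UPS_CAPACITY_W // reserved_w)
    print(f"{name}: {customers} customers per UPS")
```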
At 01:14 PM 3/23/2008, Ray Burkholder wrote:
My desktop has a 680 Watt power supply, but according to a meter I once connected, it is only running at 350 to 400 Watts. So if a server has a 980W power supply, does the rack power need to be designed to handle multiples of such a beast, even though the server may not come close (because it may not be fully loaded with drives or whatever)? Wouldn't it be better to do actual measurements to see the real draw might be?
The startup draw can be quite a bit more. I think before all those fancy power saving features kick in, some of the servers we have can draw quite a bit on initial bootup as they spin the fans 100% and spin up disks etc. I also find the efficiencies of boards really vary. In our spam scanning cluster we used some "low end" RS480 boards by ECS (AMD Socket 939). Cool to run to the point where on the bench you would touch the various heat sinks and wonder if it was powered up. This compared to some of our Tyan 939 "server boards" which could blister your finger if you touched the heat sink too long. ---Mike
At 2:14 PM -0300 3/23/08, Ray Burkholder wrote:
My desktop has a 680 Watt power supply, but according to a meter I once connected, it is only running at 350 to 400 Watts. So if a server has a 980W power supply, does the rack power need to be designed to handle multiples of such a beast, even though the server may not come close (because it may not be fully loaded with drives or whatever)? Wouldn't it be better to do actual measurements to see the real draw might be?
Yes, if you perform the measurements both at peak cpu load and during power-up (quite a bit of well-known gear maxes out its power draw only during the power-on sequence). Also, you're still going to want to size the power drop so that the measured load won't exceed 80% capacity due to code. /John
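A sketch of the sizing logic described above: measure at peak load and at power-on, then pick a drop whose 80% point covers the worse of the two. The candidate circuit sizes and the example measurements are placeholders.

```python
# Size a power drop from measured draw rather than nameplate, keeping the
# worst observed draw (peak load or power-on, when fans and disks spin up)
# under 80% of the circuit rating.

CIRCUIT_OPTIONS = [          # (amps, volts) commonly available drops
    (20, 120), (30, 120), (20, 208), (30, 208),
]

def pick_circuit(measured_peak_w: float, measured_poweron_w: float):
    worst = max(measured_peak_w, measured_poweron_w)
    for amps, volts in CIRCUIT_OPTIONS:
        if worst <= amps * volts * 0.80:
            return amps, volts
    return None  # needs more than one circuit or a bigger drop

if __name__ == "__main__":
    # placeholder measurements for a half-populated rack
    choice = pick_circuit(measured_peak_w=2600, measured_poweron_w=3100)
    print("suggested drop:",
          f"{choice[0]}A/{choice[1]}V" if choice else "split the load")
```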
jcurran@mail.com (John Curran) writes:
Also, you're still going to want to size the power drop so that the measured load won't exceed 80% capacity due to code.
that's true of output breakers, panel busbars, and wire. on the other hand, transformers (e.g., 480->208 or 12K->480) are rated at 100%, as are input breakers and of course generators. -- Paul Vixie
participants (46)
- Adrian Chadd
- Alex Rubenstein
- Alexander Harrowell
- Barry Shein
- Barton F Bruce
- Ben Butler
- Brian Raaen
- Chris Adams
- chuck goolsbee
- david raistrick
- Deepak Jain
- Derek J. Balling
- Dorn Hetzel
- Duane Waddle
- Edward B. DREGER
- Frank Bulk - iNAME
- Joe Abley
- Joe Greco
- Joel Jaeggli
- John Curran
- John Lee
- Jon Lewis
- Justin M. Streiner
- Justin Shore
- Lamar Owen
- Leigh Porter
- Leo Bicknell
- Marshall Eubanks
- Martin Hannigan
- Michael Brown
- Michael Holstein
- michael.dillon@bt.com
- Mike Tancsa
- Patrick Clochesy
- Patrick Giagnocavo
- Paul Vixie
- Paul Vixie
- Petri Helenius
- Ray Burkholder
- Robert Boyle
- Ryan Otis
- Sean Donelan
- Tony Finch
- Valdis.Kletnieks@vt.edu
- vijay gill
- William Herrin