Date: Sat, 31 Mar 2007 19:31:53 -0700
From: Jay Hennigan <jay@west.net>
Subject: Re: PG&E on data centre cooling..
John Kinsella wrote:
I sorta wonder why the default is lights on, actually...I used to always love walking into dark datacenters and seeing the banks of GSRs (always thought they had good Blink) and friends happily blinking away.
Consider the power consumption per square foot of the gear in a typical data center, then add in the power needed to keep it cool. I suspect that the cost of energy to keep the lights on will be down in the noise.
In addition:

1) If the lighting is 'already there', figure the cost of re-wiring to 'sensor-based' switching. The parts aren't terribly expensive, but consider the amount of labor required, particularly if the desired switched lighting 'zones' don't match the existing circuit wiring. Don't forget the maintenance costs, either: you're probably going to have to replace bulbs more frequently -- on/off cycles _are_ added 'stress' on bulbs.

2) If it is new construction, figure the differential cost in parts, labor, *and* maintenance, of sensor-based lighting switching. This is lower than 1), but still 'non-trivial'.

Now, estimate how much energy will be saved, and how long it will take for that savings to pay back the cost of the investment.

"Secondary" savings from reduction in HVAC load? How many kW/sq.ft. does the gear eat, versus how many watts/sq.ft. for lighting? ['Office grade' lighting is under 2 watts/sq.ft. (and may be significantly less) using conventional fluorescents; high-intensity halogen can be lower. 'Residential level' general lighting can easily be under 1 watt/sq.ft.]

It's not like you're going to reduce the load enough to shut down one of the chillers. :)
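As a rough sketch of that estimate, here it is worked through in Python. Every figure below -- floor area, loads, tariff, retrofit cost, how many hours the sensors actually keep the lights off -- is an assumed placeholder for illustration, not a number from this thread:

# Back-of-the-envelope payback estimate for sensor-based lighting switching
# in a data center.  All inputs are assumptions; substitute your own floor
# area, tariff, and retrofit quote.

room_sqft = 10_000              # assumed raised-floor area
it_load_w_per_sqft = 100.0      # assumed gear load, i.e. 0.1 kW/sq.ft.
lighting_w_per_sqft = 2.0       # 'office grade' lighting figure from the post
lights_off_fraction = 0.75      # assumed share of hours the sensors keep lights off
hvac_w_per_w = 0.3              # assumed cooling watts spent per watt of heat removed
dollars_per_kwh = 0.12          # assumed electricity tariff
retrofit_cost = 20_000          # assumed parts + labor for sensor switching

HOURS_PER_YEAR = 8760

# Lighting as a slice of the total load ("down in the noise" or not?)
it_kw = room_sqft * it_load_w_per_sqft / 1000.0
lighting_kw = room_sqft * lighting_w_per_sqft / 1000.0
share = lighting_kw / (it_kw + lighting_kw)
print(f"Gear: {it_kw:.0f} kW   Lighting: {lighting_kw:.0f} kW   ({share:.1%} of load)")

# Savings = lighting energy avoided plus the cooling no longer needed to remove it.
saved_kwh = lighting_kw * (1 + hvac_w_per_w) * lights_off_fraction * HOURS_PER_YEAR
saved_dollars = saved_kwh * dollars_per_kwh
print(f"Avoided energy: {saved_kwh:,.0f} kWh/yr  (${saved_dollars:,.0f}/yr)")
print(f"Simple payback: {retrofit_cost / saved_dollars:.1f} years")

With these made-up inputs the lighting is about two percent of the total load and the payback comes out around a year; the only real point is that the answer swings entirely on the assumed retrofit cost and on how many hours the lights are currently burning for nobody.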
MythBusters proved that turning the lights off is more cost-efficient than leaving them on.

-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com

----- Original Message -----
From: "Robert Bonomi" <bonomi@mail.r-bonomi.com>
To: <nanog@merit.edu>
Sent: Saturday, March 31, 2007 11:41 PM
Subject: Re: PG&E on data centre cooling..