RE: rack power question
A very interesting thread... I believe that the cost of power will continue to climb well into the future, and that it would be foolish to build any new infrastructure without incorporating the ability to pay for what you use. That means being able to measure the power consumption of each server, or to aggregate it per customer. Then factor in your cost of providing that power and the associated cooling load, add your margin, and bill the user accordingly. Paying for what you use is inherently fair - and I think of the colo provider also as a technically competent provider of "clean", highly reliable power.
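To make the "cost plus cooling plus margin" idea concrete, here is a rough sketch of how such a bill could be computed; the utility rate, cooling overhead factor, and margin below are illustrative assumptions, not anyone's actual figures:

    # A rough sketch of the billing model described above. The utility
    # rate, cooling overhead factor, and margin are made-up example
    # numbers, not anyone's actual pricing.

    def monthly_power_bill(kwh_used, utility_rate_per_kwh=0.12,
                           cooling_overhead=1.6, margin=0.20):
        """Bill metered consumption plus cooling load plus margin.

        kwh_used             -- energy metered at the customer's outlets
        utility_rate_per_kwh -- what the provider pays the utility (assumed)
        cooling_overhead     -- multiplier covering the HVAC load
                                (assumed: 0.6 kWh of cooling per kWh of IT load)
        margin               -- markup added on top of the provider's cost
        """
        cost_to_provider = kwh_used * utility_rate_per_kwh * cooling_overhead
        return cost_to_provider * (1 + margin)

    # Example: a rack drawing a steady 2 kW over a 30-day month (~1440 kWh)
    print(round(monthly_power_bill(2 * 24 * 30), 2))   # -> 331.78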
Agreed. About four years ago, we saw the writing on the wall. The days of $750 racks that include 20 amps of 120 V are long gone. That was fine when the customer consumed 2 amps; no one really thought about the cross-subsidization issues. Today, we sell racks for $500 but include no energy; every outlet to every rack in our datacenter is metered, and we sell energy at $n/amp. It's that simple. In effect, you can have as many outlets as you like (for a one-time charge), but you pay for the consumption. No question the cost of energy is going up and up; we've seen a 75% to 100% increase in 5 years in Northern NJ. This is why we plan on investing heavily in solar in the next year or so. With payback as fast as 6 to 8 years, you are crazy not to look into this.
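For a sense of scale on the per-amp metering and the 6-to-8-year solar payback, here is some back-of-the-envelope arithmetic; every rate and cost in it is an illustrative assumption, not the actual figures from this facility:

    # Back-of-the-envelope numbers for metered per-amp pricing and a
    # simple solar payback estimate. All rates and costs are assumptions.

    VOLTS = 120

    def monthly_charge_per_amp(rate_per_kwh=0.15):
        """Cost of one amp drawn continuously at 120 V for a 30-day month."""
        kwh_per_month = VOLTS / 1000 * 24 * 30      # ~86 kWh per amp-month
        return kwh_per_month * rate_per_kwh

    def solar_payback_years(installed_cost, annual_kwh, rate_per_kwh=0.15):
        """Simple payback: installed cost divided by annual avoided energy spend."""
        return installed_cost / (annual_kwh * rate_per_kwh)

    print(round(monthly_charge_per_amp(), 2))               # -> 12.96 ($/amp/month)
    print(round(solar_payback_years(500_000, 450_000), 1))  # -> 7.4 years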
If you don't have a pay-as-you-go billing model, what incentive is there for users to consolidate apps onto fewer boxes or to enable the power-saving features of the box (or operating system), which are becoming more widely available over time? Answer: none - and human nature will simply be lazy about power saving.
Even so, this doesn't seem to factor into behavior. Customers have computing needs. Rarely do you see a customer say, "Wow, this amp is costing me $n, so I am not going to run this SQL server."
About 8 months ago we were faced with an expansion issue where our datacenter upgrade was delayed due to permits. At the time Sun had just announced their Blackbox, now called the S20. During the road trip I got to walk through one of these. I found the cooling aspect very interesting: it works on front-to-back cooling, but what I would call radiators are sandwiched between each set of racks. From my non-technical point of view on HVAC, this seemed to eliminate much of the wasted cooling seen in large-room datacenters. Is this something we might see in future fixed datacenters, or is it limited to the portable datacenter due to technical limitations? Due to cost we didn't get to use one, but I found the idea very interesting.

Derrick
I take exception to "wasted cooling"... are you saying the laws of thermodynamics don't apply to heat generated by servers? I can understand and appreciate the thought process of bad airflow design, i.e., no hot or cold rows, or things of that nature, but you can't really waste AC. We're doing a small test deployment of a dozen or so Chatsworth chimney-style cabinets with ducted returns. When the system is up and running I will comment on it more, but it seems to make a lot more sense than what we all have been doing for the last 10 years. As for the Sun S20, is it still painted black? :)