It's more or less the truth, though. Only on rare occasions, such as the cluster/fail-over scenario given, can you actually supply less power to certain machines, and power use is largely unrelated to their actual utilisation. Keep an eye on your UPS load during peak hours and you'll see the load rising when traffic and server utilisation rise, but compared to the baseline power needed to feed the servers these fluctuations are peanuts.

You supply a server with enough power to run... how is this waste exactly...? If anyone is wasting anything, it's perhaps hardware manufacturers that don't design efficiently enough, but power that you provide and that's used (and paid for) by your customers is not wasted IMO.
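To put a rough number on it, here's a minimal back-of-envelope sketch in Python. The per-server wattages (200 W idle, 300 W at full load) and the linear power model are my own assumptions for illustration, not measurements from any real facility; the 720 MWh/month figure is Grisha's.

# Linear power model: draw = idle baseline + a utilisation-proportional part.
# IDLE_W and PEAK_W are assumed, illustrative values, not measurements.
IDLE_W = 200.0   # assumed per-server idle draw (W)
PEAK_W = 300.0   # assumed per-server draw at 100% utilisation (W)

def server_draw_w(utilisation: float) -> float:
    """Per-server draw under a simple linear power model."""
    return IDLE_W + utilisation * (PEAK_W - IDLE_W)

for u in (0.02, 0.50, 1.00):
    total = server_draw_w(u)
    dynamic = total - IDLE_W
    print(f"utilisation {u:4.0%}: {total:5.1f} W total, "
          f"{dynamic:5.1f} W ({dynamic / total:.1%}) driven by load")

# Grisha's 720 MWh/month is roughly 986 kW of continuous facility draw.
print(f"average facility draw: {720_000 / 730:.0f} kW  (720,000 kWh / 730 h)")

At 2% average utilisation the load-driven portion comes out to about 1% of the total draw under this model, which is why I call the fluctuations peanuts: the other 99% is baseline you have to supply regardless.

Cheers,
Erik

On Tue, 2004-10-26 at 21:07, Alex Rubenstein wrote: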
That's an insane statement.
Are you saying, "You are only wasting money on things if you aren't profitable"?
/action shakes head.
On Tue, 26 Oct 2004, james edwards wrote:
Sorry, this is somewhat OT.
I'm looking for information on energy consumption vs. percent utilization. In other words, if your datacenter consumes 720 MWh per month, yet on average your servers are 98% underutilized, you are wasting a lot of energy (a hot topic these days). Does anyone here have any real data on this?
Grisha
It is only waste if the P&L statement is showing no profit.
-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --

--
Erik Haagsman
Network Architect
We Dare BV
tel: +31(0)10 7507008
fax: +31(0)10 7507005
http://www.we-dare.nl