Steven M. Bellovin wrote:
In message <Pine.WNT.4.61.0410261429110.3340@vanadium.hq.nac.net>, Alex Rubenstein writes:
Hello,
I've studied power usage and such in datacenters quite a bit over the last year or so.
I'm looking for information on energy consumption vs. percent utilization. In other words, if your datacenter consumes 720 MWh per month, yet on average your servers are 98% underutilized, you are wasting a lot of energy (a hot topic these days). Does anyone here have any real data on this?
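(For what it's worth, here's a rough back-of-the-envelope split of that 720 MWh into a fixed "just powered on" baseline and a load-proportional part, assuming a simple linear power model; the idle-draw fraction below is an assumption, not measured data:)

    # Back-of-the-envelope split of the monthly bill into a fixed baseline
    # and a load-proportional part, using a linear power model:
    #   P = P_idle + (P_peak - P_idle) * utilization
    # The idle fraction is an assumed figure, not a measurement.
    monthly_energy_mwh = 720.0   # total monthly consumption from the question
    utilization = 0.02           # servers ~98% underutilized on average
    idle_fraction = 0.60         # assumed: an idle server draws ~60% of its peak

    avg_draw = idle_fraction + (1.0 - idle_fraction) * utilization  # relative to peak
    baseline_mwh = monthly_energy_mwh * idle_fraction / avg_draw
    load_mwh = monthly_energy_mwh - baseline_mwh

    print(f"Baseline (idle) energy:   ~{baseline_mwh:.0f} MWh/month")
    print(f"Load-proportional energy: ~{load_mwh:.0f} MWh/month")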
I've never done a study on power use vs. CPU utilization, but my guess is that the heat generated by a PC remains fairly constant -- in the grand scheme of things -- no matter what your utilization is.
I doubt that very much, or we wouldn't have variable speed fans. I've monitored CPU temperature when doing compilations; it goes up significantly. That suggests that the CPU is drawing more power at such times.
From running a colo in a place with ridiculously high energy costs (Zurich, Switzerland), I can tell you that the energy consumption of routers/telco gear (70%) and servers (30%) changes significantly throughout the day. It pretty much follows the traffic graph. There is a solid base load just from the equipment being powered up, and from there it goes up as much as 20-30% depending on the routing/computing load of the boxes. To simplify things, you can say that you spend a certain number of "mWh" (milliwatt-hours) over the base load per packet switched/routed or HTTP request answered. I haven't tried to calculate how much energy routing a packet on a Cisco 12k or Juniper M40 costs, though. It would be very interesting if someone (a student, perhaps) did that calculation.

-- Andre
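(If anyone wants to try that calculation, here is a minimal sketch of the arithmetic behind the "mWh per packet over base load" idea; the power and traffic figures are made up for illustration, not measurements from a 12k or M40:)

    # Marginal energy per packet above base load, estimated from two power
    # readings taken at different traffic levels. All figures below are
    # illustrative assumptions, not measurements from any particular router.
    base_power_w = 350.0      # assumed draw near idle traffic
    busy_power_w = 430.0      # assumed draw at the daily traffic peak
    busy_pps = 150_000.0      # assumed packets per second at that peak

    extra_power_w = busy_power_w - base_power_w
    joules_per_packet = extra_power_w / busy_pps   # W / (pkt/s) = J per packet
    mwh_per_packet = joules_per_packet / 3.6       # 1 mWh = 3.6 J

    print(f"~{mwh_per_packet:.6f} mWh per packet above base load")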