On 2019-12-18 15:57, Rod Beck wrote:
> This led me to wonder what is the inefficiency of these servers in data
> centers. Every time I am in a data center I am impressed by how much
> heat comes off these semiconductor chips. Looks to me may be 60% of the
> electricity ends up as heat.
What are you expecting the remaining 40% of the electricity to end up as?
There is another efficiency number that many datacenters look at, which
is PUE, Power Usage Effectiveness. That is a measure of the total energy
used by the DC compared to the energy used for "IT load", the difference
going to cooling/ventilation, UPSes, lighting, and similar overhead.
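As a rough illustration (numbers entirely made up), PUE is just the
ratio of total facility energy to IT load energy:

    # Hypothetical figures, only to show how the ratio works.
    total_facility_kw = 1200.0   # IT load + cooling + UPS losses + lighting
    it_load_kw        = 1000.0   # power actually delivered to the servers
    pue = total_facility_kw / it_load_kw
    print(pue)                   # 1.2

So a PUE of 1.2 means the facility spends an extra 20% on top of what
the IT equipment itself draws.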
However, there are several deficiencies with this metric, for example:
- IT load is just watts (or joules) pushed into your servers, and does
  not account for whether you are running old, inefficient Cray 1 machines
  or modern AMD EPYC / Intel Skylake PCs.
- Replace the fans in your servers with larger, more efficient fans in
  the rack doors, and the IT load decreases while the DC "losses" increase,
  leading to a higher (worse) PUE, even though you may have lowered your
  total energy usage (see the sketch after this list).
- Get your cooling water as district cooling instead of running your own
  chillers, and you are no longer using electricity for the chillers,
  improving your PUE. Chillers are still running somewhere, using energy,
  but that energy does not show up on your DC's electricity bill...
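To make the fan example above concrete, here is a small sketch with
made-up numbers showing how total energy can go down while the PUE
number gets worse:

    # Before: 30 kW of server-internal fans are counted as "IT load".
    it_before, overhead_before = 1000.0, 200.0      # kW
    pue_before = (it_before + overhead_before) / it_before   # 1.20

    # After: those fans are replaced by 20 kW of rack-door fans,
    # which count as facility overhead instead of IT load.
    it_after, overhead_after = 970.0, 220.0         # kW
    pue_after = (it_after + overhead_after) / it_after       # ~1.23

    # Total draw dropped from 1200 kW to 1190 kW, yet PUE went up.

A similar accounting effect is behind the district cooling example: the
chiller energy is still being spent somewhere, it just no longer shows
up in the numerator.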
This doesn't mean that the PUE value is *entirely* worthless. It did
help put efficiency into focus. There used to be datacenters that had
PUE numbers close to, or even over, 2.0, due to having horribly
inefficient cooling systems, UPSes and so on. But once you get down
to the 1.2-1.3 range or below, you really need to look at the details
of *how* the DC achieved the PUE number; a single number doesn't capture
the nuances.