No doubt. Not trying to repeal the second law of thermodynamics. πŸ™‚

I visited Boltzmann's grave in Vienna and this equation was on it: S = k log W. Would not want to disturb his sleep. πŸ˜ƒ


From: Ben Cannon <ben@6by7.net>
Sent: Wednesday, December 18, 2019 8:11 PM
To: Rod Beck <rod.beck@unitedcablecompany.com>
Cc: Thomas Bellman <bellman@nsc.liu.se>; NANOG Operators' Group <nanog@nanog.org>
Subject: Re: Energy Efficiency - Data Centers
 
It is overwhelmingly disposed of as heat; even the useful work ends up as heat.  The amount of energy leaving a DC in fiber cables, etc., is perhaps a millionth of one percent.

Even in your lightbulb example, if the light is used inside a room, it gets turned back into heat once it hits the walls. 

So in a closed system, it’s all heat.   

Now, power is lost before it can be used for compute/routing, mostly in power conversions, of which there are many in most DCs.  Companies like Facebook and Amazon have done a lot of work to remove excess power conversion steps, to chase better PUE (Power Usage Effectiveness) and get more electricity to the computers before losing it as excess heat in voltage conversions.  There’s still room for improvement here, and the power wasted here goes directly to heat before doing any other useful work.
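As a rough, hypothetical illustration (the per-stage efficiencies below are assumptions, not measurements from any real facility), the conversion losses compound multiplicatively down the chain:

    # Hypothetical conversion chain from utility feed to chip.
    # Per-stage efficiencies are illustrative assumptions.
    stages = {
        "UPS (double conversion)": 0.94,
        "PDU / transformer": 0.98,
        "Server PSU (AC -> 12 V)": 0.94,
        "VRM (12 V -> ~1 V core)": 0.90,
    }

    delivered = 1.0
    for name, efficiency in stages.items():
        delivered *= efficiency

    print(f"Power reaching the chips: {delivered:.0%}")        # ~78%
    print(f"Lost as heat in conversions: {1 - delivered:.0%}")  # ~22%

Removing even one stage from a chain like this is worth several percent of the total feed, which is why those companies redesigned their power paths.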

Source: I have a C-20 HVAC license and own and operate 2 datacenters.

-Ben.


-Ben Cannon
CEO 6x7 Networks & 6x7 Telecom, LLC 



On Dec 18, 2019, at 11:06 AM, Rod Beck <rod.beck@unitedcablecompany.com> wrote:

I was reasoning from the analogy that an incandescent bulb is less efficient than an LED bulb because it generates more heat - more of the electricity goes into the infrared spectrum than the useful visible spectrum. Similar to the way that an electric motor is more efficient than a combustion engine. 




From: Thomas Bellman
Sent: Wednesday, December 18, 2019 7:47 PM
To: Nanog@nanog.org
Cc: Rod Beck
Subject: Re: Energy Efficiency - Data Centers

On 2019-12-18 15:57, Rod Beck wrote:

> This led me to wonder what is the inefficiency of these servers in data
> centers. Every time I am in a data center I am impressed by how much
> heat comes off these semiconductor chips. Looks to me may be 60% of the
> electricity ends up as heat.

What do you expect the remaining 40% of the electricity to end up as?

In reality, at least 99% of the electricity input to a datacenter ends up
as heat within the DC.  The remaining <1% turns into things like:

 - electricity and light leaving the DC in network cables (but will
   turn into heat in the cable and at the receiving end)
 - sound energy (noise) that escapes the DC building (but will turn
   into heat later on as the sound attenuates)
 - electric and magnetic potential energy in the form of stored bits
   on flash memory, hard disks and tapes (but that will turn into heat
   as you store new bits over the old bits)

(I'm saying <1%, but I'm actually expecting it to be *much* less than
one percent.)
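
A toy estimate of just the fiber term, with every number assumed purely for illustration, shows how tiny that fraction plausibly is:

    # Toy estimate: optical power leaving a DC on fiber vs. facility power.
    # All numbers are assumptions for illustration.
    facility_power_w = 1_000_000   # 1 MW facility
    optical_links = 5_000          # transceivers sending light out of the DC
    launch_power_w = 1e-3          # ~1 mW (0 dBm) per transmitter

    optical_out_w = optical_links * launch_power_w   # 5 W
    fraction = optical_out_w / facility_power_w      # 5e-06

    print(f"Optical power leaving on fiber: {optical_out_w:.0f} W")
    print(f"Fraction of facility power: {fraction:.4%}")   # 0.0005%

Even with generous assumptions, the energy leaving as light is orders of magnitude below one percent, consistent with Ben's "millionth of one percent" figure.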

This is basic physics.  First law of thermodynamics: you can't destroy
(or create) energy, just convert it.  Second law: all energy turns into
heat energy in the end. :-)


You are really asking the wrong question.  Efficiency is not measured
by how little of the input energy is turned into heat, but by how much
*utility* you get out of a certain amount of input energy.  In the case
of a datacenter, utility might be measured in number of database
transactions performed, floating point operations executed, scientific
articles published in Nature (by academic researchers using your HPC
datacenter), or advertisements pushed to the users of your search engine.
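
For instance, a minimal sketch of that kind of utility-per-energy metric (the workload and energy figures are made up):

    # Sketch: measure efficiency as useful work per joule, not heat avoided.
    # Workload and energy figures are hypothetical.
    energy_joules = 3.6e9   # 1 MWh consumed by the DC over some period
    transactions = 5.0e9    # database transactions completed in that period

    print(f"Transactions per joule: {transactions / energy_joules:.2f}")
    # Comparing this across hardware generations, for the same workload,
    # is what "efficiency" should mean here.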


There is another efficiency number that many datacenters look at, which
is PUE, Power Usage Effectiveness.  That is a measure of the total energy
used by the DC compared to the energy used for "IT load".  The difference
is cooling/ventilation, UPSes, lighting, and similar stuff.
However, there are several deficiencies with this metric, for example
(a small numeric sketch follows the list):

 - IT load is just watts (or joules) pushed into your servers, and does
   not account for whether you are using old, inefficient Cray-1 machines
   or modern AMD EPYC / Intel Skylake PCs.

 - Replace fans in servers with larger, more efficient fans in the rack
   doors, and the IT load decreases while the DC "losses" increase,
   leading to higher (worse) PUE, even though you might have lowered your
   total energy usage.

 - Get your cooling water as district cooling instead of running your own
   chillers, and you are no longer using electricity for the chillers,
   improving your PUE.  The district's chillers still run and use energy,
   but that energy does not show up on your DC's electricity bill...
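
Here is the promised sketch, with invented numbers, of both the PUE definition and the rack-door-fan effect from the second bullet above:

    # PUE = total facility energy / IT equipment energy (dimensionless).
    # All figures below are made up to illustrate the fan example.
    def pue(total_kw: float, it_kw: float) -> float:
        return total_kw / it_kw

    # Before: server fans draw 50 kW and count as IT load.
    print(f"Before: PUE = {pue(1250, 1000):.3f}, total = 1250 kW")  # 1.250

    # After: fans move to the rack doors (facility side) and, being
    # larger, draw only 30 kW.  Total energy drops, yet PUE gets worse.
    print(f"After:  PUE = {pue(1230, 950):.3f}, total = 1230 kW")   # ~1.295

Lower total energy, higher PUE: the metric rewards where the watts are counted, not how many are used.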

This doesn't mean that the PUE value is *entirely* worthless.  It did
help in putting efficiency into focus.  There used to be datacenters
that had PUE numbers close to, or even over, 2.0, due to having horribly
inefficient cooling systems, UPSes, and so on.  But once you get down
to the 1.2-1.3 range or below, you really need to look at the details
of *how* the DC achieved the PUE number; a single number doesn't capture
the nuances.


        /Bellman