Energy efficiency is a hobby of mine and most of my properties embody Passive House technology. This led me to wonder: how inefficient are the servers in data centers? Every time I am in a data center I am impressed by how much heat comes off these semiconductor chips. It looks to me like maybe 60% of the electricity ends up as heat.

Regards,

Roderick.

Roderick Beck
VP of Business Development
United Cable Company
www.unitedcablecompany.com
New York City & Budapest
rod.beck@unitedcablecompany.com
36-70-605-5144
On 18 Dec 2019, at 15:57, Rod Beck <rod.beck@unitedcablecompany.com> wrote:
Energy efficiency is a hobby of mine and most of my properties embody Passive House technology. This led me to wonder: how inefficient are the servers in data centers? Every time I am in a data center I am impressed by how much heat comes off these semiconductor chips. It looks to me like maybe 60% of the electricity ends up as heat.
Less than a 100,000th of the energy in a datacenter is used to run the applications, as summarised in a graph in the talk linked below. The full talk by Amory Lovins of the Rocky Mountain Institute: https://youtu.be/wY_js13AuRk?t=1343

My research group has come up with supporting evidence for these claims. Our Wafer Scale Integration and new operating system software can actually achieve these savings.

Merik Voswinkel
Metamorph research institute
On Wed, Dec 18, 2019 at 8:32 AM merik@fiberhood.nl <merik@fiberhood.nl> wrote:
The full talk by Amory Lovins of the Rocky Mountain Institute: https://youtu.be/wY_js13AuRk?t=1343
Hi Merik,

This aligns with what I'd expect. Essentially every watt of electricity into the data center is a watt of heat that must be removed from the data center. Did you know some computer room air conditioners actually cool the air at fixed compression and then re-heat it with a resistive electric element to reach the desired cooling output? Insane!

Regards,
Bill Herrin

--
William Herrin
bill@herrin.us
https://bill.herrin.us/
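To see why that reheat arrangement is so wasteful, here is a toy model in Python; the 100 kW coil, compressor COP of 3, and 30 kW of reheat are illustrative assumptions, not figures from any particular CRAC unit.

    # Toy model of a fixed-compression CRAC that reheats to trim its output.
    # Assumed figures: 100 kW cooling coil, compressor COP of 3, and 30 kW
    # of resistive reheat to hit a 70 kW net cooling target.

    coil_cooling_kw = 100.0
    cop = 3.0            # kW of heat moved per kW of compressor power
    reheat_kw = 30.0

    compressor_kw = coil_cooling_kw / cop         # electricity for the compressor
    net_cooling_kw = coil_cooling_kw - reheat_kw  # what the room actually gets
    total_electric_kw = compressor_kw + reheat_kw

    # A right-sized unit delivering the same 70 kW of net cooling
    # would draw only net_cooling_kw / cop of electricity.
    right_sized_kw = net_cooling_kw / cop

    print(f"fixed-compression + reheat: {total_electric_kw:.1f} kW electric")
    print(f"right-sized unit:           {right_sized_kw:.1f} kW electric")

Under these assumptions the fixed-compression unit draws almost three times the electricity of a right-sized one for the same net cooling.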
I guess that is one reason why Google built a huge data center in Finland: access to very cool water, not to mention good wholesale electricity rates. And yes, since the electricity is not converted into mechanical work, it must all end up as heat.

Regards,

Roderick.

________________________________
From: William Herrin <bill@herrin.us>
Sent: Wednesday, December 18, 2019 6:10 PM
To: merik@fiberhood.nl <merik@fiberhood.nl>
Cc: Rod Beck <rod.beck@unitedcablecompany.com>; nanog@nanog.org <Nanog@nanog.org>
Subject: Re: Energy Efficiency - Data Centers

On Wed, Dec 18, 2019 at 8:32 AM merik@fiberhood.nl <merik@fiberhood.nl> wrote:
The full talk by Amory Lovins of the Rocky Mountain Institute: https://youtu.be/wY_js13AuRk?t=1343
Hi Merik,

This aligns with what I'd expect. Essentially every watt of electricity into the data center is a watt of heat that must be removed from the data center. Did you know some computer room air conditioners actually cool the air at fixed compression and then re-heat it with a resistive electric element to reach the desired cooling output? Insane!

Regards,
Bill Herrin

--
William Herrin
bill@herrin.us
https://bill.herrin.us/
In our current project we deploy a distributed datacenter in people's homes with 120 Gbps fiber. We connect the water cooling of the CPUs and GPUs directly into the heating of homes and offices. Power comes from $0.02 per kWh solar and wind in the neighbourhood. The waste heat is not wasted; it is sold to the homes and offices. This saves up to 95% in energy at 30% of the capex.
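As a rough illustration of that energy balance, here is a sketch; the 1 kW node draw and 90% heat-capture fraction are assumed for illustration, not Fiberhood figures.

    # Rough energy balance for a heat-reusing home server node (a sketch).
    # All figures below are illustrative assumptions, not Fiberhood data.

    server_draw_kw = 1.0      # assumed electrical draw of one home node (kW)
    capture_fraction = 0.90   # assumed fraction of heat captured by water cooling

    # Nearly all electrical input becomes heat; the water loop captures most of it.
    heat_delivered_kw = server_draw_kw * capture_fraction

    # That heat displaces heating the home would otherwise have bought, so the
    # net "wasted" energy is only what the water loop fails to capture.
    net_waste_kw = server_draw_kw - heat_delivered_kw

    print(f"Heat delivered to the home: {heat_delivered_kw:.2f} kW")
    print(f"Net waste per node:         {net_waste_kw:.2f} kW")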
On 18 Dec 2019, at 18:10, William Herrin <bill@herrin.us> wrote:
On Wed, Dec 18, 2019 at 8:32 AM merik@fiberhood.nl <merik@fiberhood.nl> wrote:
The full talk by Amory Lovins of the Rocky Mountain Institute: https://youtu.be/wY_js13AuRk?t=1343
Hi Merik,
This aligns with what I'd expect. Essentially every watt of electricity into the data center is a watt of heat that must be removed from the data center. Did you know some computer room air conditioners actually cool the air at fixed compression and then re-heat it with a resistive electric element to reach the desired cooling output? Insane!
Regards,
Bill Herrin

--
William Herrin
bill@herrin.us
https://bill.herrin.us/
On 2019-12-18 15:57, Rod Beck wrote:
This led me to wonder: how inefficient are the servers in data centers? Every time I am in a data center I am impressed by how much heat comes off these semiconductor chips. It looks to me like maybe 60% of the electricity ends up as heat.

What are you expecting the remaining 40% of the electricity to end up as?
In reality, at least 99% of the electricity input to a datacenter ends up as heat within the DC. The remaining <1% turns into things like:

- electricity and light leaving the DC in network cables (but it will turn into heat in the cable and at the receiving end)
- sound energy (noise) that escapes the DC building (but it will turn into heat later on as the sound attenuates)
- electric and magnetic potential energy in the form of stored bits on flash memory, hard disks and tapes (but that will turn into heat as you store new bits over the old bits)

(I'm saying <1%, but I'm actually expecting it to be *much* less than one percent.)

This is basic physics. First law of thermodynamics: you can't destroy (or create) energy, just convert it. Second law: all energy turns into heat energy in the end. :-)

You are really asking the wrong question. Efficiency is not measured in how little of the input energy is turned into heat, but in how much *utility* you get out of a certain amount of input energy. In the case of a datacenter, utility might be measured in the number of database transactions performed, floating point operations executed, scientific articles published in Nature (by academic researchers using your HPC datacenter), or advertisements pushed to the users of your search engine.

There is another efficiency number that many datacenters look at, which is PUE, Power Usage Effectiveness. That is a measure of the total energy used by the DC compared to the energy used for "IT load", the difference being cooling/ventilation, UPSes, lighting, and similar stuff. However, there are several deficiencies with this metric, for example:

- IT load is just watts (or joules) pushed into your servers, and does not account for whether you are using old, inefficient Cray 1 machines or modern AMD EPYC / Intel Skylake PCs.
- Replace the fans in servers with larger, more efficient fans in the rack doors, and the IT load decreases while the DC "losses" increase, leading to a higher (worse) PUE, even though you might have lowered your total energy usage.
- Get your cooling water as district cooling instead of running your own chillers, and you are no longer using electricity for the chillers, improving your PUE. Chillers are still being run, using energy, but that energy does not show up on your DC's electricity bill...

This doesn't mean that the PUE value is *entirely* worthless. It did help in putting efficiency into focus. There used to be datacenters with PUE numbers close to, or even over, 2.0, due to having horribly inefficient cooling systems, UPSes and so on. But once you get down to the 1.2-1.3 range or below, you really need to look at the details of *how* the DC achieved its PUE number; a single number doesn't capture the nuances.

/Bellman
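A minimal sketch of the PUE arithmetic behind the rack-door-fan example above; the 1000 kW IT load, 200 kW overhead, and fan wattages are made-up numbers chosen only to show the effect.

    # PUE = total facility energy / IT equipment energy.
    # Illustrative numbers only, showing the rack-door-fan deficiency
    # described above: total energy drops, yet PUE gets worse.

    def pue(it_load_kw, overhead_kw):
        """Power Usage Effectiveness for a given IT load and facility overhead."""
        return (it_load_kw + overhead_kw) / it_load_kw

    # Before: server-internal fans count as part of the IT load.
    before = pue(it_load_kw=1000, overhead_kw=200)   # PUE 1.20

    # After: 50 kW of small server fans replaced by 30 kW of larger,
    # more efficient rack-door fans, which count as facility overhead.
    after = pue(it_load_kw=950, overhead_kw=230)     # PUE ~1.24

    print(f"before: {before:.2f}, after: {after:.2f}")
    # Total energy fell from 1200 kW to 1180 kW, but PUE rose from 1.20 to 1.24.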
On Wed, Dec 18, 2019 at 10:48 AM Thomas Bellman <bellman@nsc.liu.se> wrote:
On 2019-12-18 15:57, Rod Beck wrote:
This led me to wonder: how inefficient are the servers in data centers? Every time I am in a data center I am impressed by how much heat comes off these semiconductor chips. It looks to me like maybe 60% of the electricity ends up as heat.

What are you expecting the remaining 40% of the electricity to end up as?
There is another efficiency number that many datacenters look at, which is PUE, Power Usage Effectiveness. That is a measure of the total energy used by the DC compared to the energy used for "IT load", the difference being cooling/ventilation, UPSes, lighting, and similar stuff. However, there are several deficiencies with this metric, for example:
- IT load is just watts (or joules) pushed into your servers, and does not account for whether you are using old, inefficient Cray 1 machines or modern AMD EPYC / Intel Skylake PCs.
- Replace the fans in servers with larger, more efficient fans in the rack doors, and the IT load decreases while the DC "losses" increase, leading to a higher (worse) PUE, even though you might have lowered your total energy usage.
- Get your cooling water as district cooling instead of running your own chillers, and you are no longer using electricity for the chillers, improving your PUE. Chillers are still being run, using energy, but that energy does not show up on your DC's electricity bill...
This doesn't mean that the PUE value is *entirely* worthless. It did help in putting efficiency into focus. There used to be datacenters with PUE numbers close to, or even over, 2.0, due to having horribly inefficient cooling systems, UPSes and so on. But once you get down to the 1.2-1.3 range or below, you really need to look at the details of *how* the DC achieved its PUE number; a single number doesn't capture the nuances.
Google has some information on PUE at https://www.google.com/about/datacenters/efficiency/ -- the tl;dr is that we have a datacenter PUE of 1.06 and a campus (including power substation) PUE of 1.11. By comparison, most large datacenters average around 1.67.

Damian
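To put those numbers in perspective, here is a quick calculation; the 1.06 and 1.67 PUE figures are from the message above, while the 10 MW IT load is an assumed scale.

    # What the PUE gap means in practice. The 1.06 and 1.67 figures are from
    # the message above; the 10 MW IT load and hours are assumed for scale.

    it_load_mw = 10.0
    hours_per_year = 24 * 365

    for label, pue in [("Google DC", 1.06), ("typical large DC", 1.67)]:
        overhead_mwh = it_load_mw * (pue - 1.0) * hours_per_year
        print(f"{label}: {overhead_mwh:,.0f} MWh/year of non-IT overhead")

    # At 10 MW of IT load, the difference is roughly 53,000 MWh per year.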
The laws of thermodynamics dictate that nearly 100% of the electricity consumed by a piece of equipment (let's use a high-powered 2RU router as an example) comes off as heat, unless it's doing mechanical physical work like lifting a load or spinning a fan. Some infinitesimal portion leaves as photons down the fiber (see the sketch after the quoted message below).

On Wed, Dec 18, 2019 at 6:58 AM Rod Beck <rod.beck@unitedcablecompany.com> wrote:
Energy efficiency is a hobby of mine and most of my properties embody Passive House technology. This led me to wonder: how inefficient are the servers in data centers? Every time I am in a data center I am impressed by how much heat comes off these semiconductor chips. It looks to me like maybe 60% of the electricity ends up as heat.
Regards,
Roderick.
Roderick Beck
VP of Business Development
United Cable Company
www.unitedcablecompany.com
New York City & Budapest
rod.beck@unitedcablecompany.com
36-70-605-5144
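Here is the back-of-the-envelope sketch referenced above; the 2 kW router draw, 32 ports, and ~0 dBm (1 mW) launch power per transceiver are assumed but typical figures.

    # How "infinitesimal" is the light leaving on fiber? A sketch with
    # assumed but typical figures: a 2RU router drawing 2 kW, with 32
    # optics each launching about 1 mW (0 dBm) into the fiber.

    router_draw_w = 2000.0    # assumed electrical input (W)
    ports = 32                # assumed number of optical ports
    launch_power_w = 1e-3     # ~0 dBm per transceiver, a common figure

    optical_out_w = ports * launch_power_w
    fraction = optical_out_w / router_draw_w

    print(f"Optical power leaving: {optical_out_w*1e3:.0f} mW "
          f"({fraction:.2e} of input)")  # about 0.0016% of the input power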
participants (7)

- Damian Menscher
- Eric Kuhnke
- Ethan O'Toole
- merik@fiberhood.nl
- Rod Beck
- Thomas Bellman
- William Herrin