Re: Energy Efficiency - Data Centers
Well, the fact that a data center generates a lot of heat means it is consuming a lot of electricity. It is probably a major operating expense. And while it may be efficient given current technology standards, it naturally leads to the question of how we can do better. Thermodynamics is only part of the picture. The other part is economics. If you see a lot of heat being produced and it is not the intended output, then it is a natural focus for improvement. My guess is that a lot of corporate research is going into trying to reduce chip electricity consumption. So my gut feeling might still be relevant. It is about the level of energy consumption, not just the fact that electricity becomes disorderly molecular gyrations.

________________________________
From: Thomas Bellman
Sent: Wednesday, December 18, 2019 9:57 PM
To: Nanog@nanog.org
Cc: Rod Beck
Subject: Re: Energy Efficiency - Data Centers

On 2019-12-18 20:06 CET, Rod Beck wrote:
I was reasoning from the analogy that an incandescent bulb is less efficient than an LED bulb because it generates more heat - more of the electricity goes into the infrared spectrum than the useful visible spectrum. Similar to the way that an electric motor is more efficient than a combustion engine.
Still, you should not look at how much heat you get, but at how much utility you get. Which for a lighting source would be measured in lumens within the visible spectrum.

If you put 300 watts of electricity into a computer server, you will get somewhere between 290 and 299 watts of heat from the server itself. The second largest power output will be the kinetic energy of the air that the fans in the server push; I'm guesstimating that to be somewhere between 1 and 10 watts (hence my uncertainty about the direct heat output above). Then you get maybe 0.1 watts of sound energy (noise) and other vibrations in the rack. And finally, less than 0.01 watts of light in the network fibers from the server (assuming dual 40G or dual 100G network connections, i.e. 8 lasers).

Every microwatt of electricity put into the server in order to toggle bits, keep bits at their current value, transport bits within and between CPU, RAM, motherboard, disks, and so on, will turn into heat *before* leaving the server. The only exception is the light put into the network fibers, and that will be less than 10 milliwatts for a server. All inefficiencies in power supplies, power regulators, fans, and other stuff in the server will become heat, within the server.

So your estimate of 60% heat, i.e. 40% *non*-heat, is off by at least a factor of ten. And the majority of the kinetic energy of the air pushed by the server will have turned into heat after just a few meters...

So, if you look at how much heat is given off by a server compared to how much power is put into it, then it is 99.99% inefficient. :-) But that's just the wrong way to look at it.

In a lighting source, you can measure the amount of visible light given off in watts. In an engine (electrical, combustion or otherwise), you can measure the amount of output in watts. So in those cases, efficiency can be measured in percent, as the input and the output are measured in the same units (watts). But often a light source is better measured in lumens, not watts. Sometimes the torque, measured in newton-meters, is more relevant for an engine. Or thrust, measured in newtons, for a rocket engine. Then, dividing the output (lm, Nm, N) by the input (W) does not give a percentage.

Similarly, the relevant output of a computer is not measured in watts, but in FLOPS, database transactions/second, or web pages served per hour.

Basically, the only time the amount of heat given off by a computer is relevant is when you are designing and dimensioning the cooling system. And then the answer is always "exactly as much as the power you put *into* the computer". :-)

/Bellman
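As a back-of-the-envelope illustration of that power budget, here is a minimal Python sketch using the rough guesstimates from the mail above (none of these are measured values):

# Back-of-the-envelope power budget for a 300 W server, using the rough
# guesstimates from the mail above (not measured values).

POWER_IN_W = 300.0

# Non-heat outputs leaving the chassis (upper-end guesses):
kinetic_air_w = 10.0    # air pushed out by the fans (1-10 W range above)
sound_w = 0.1           # acoustic noise and other vibrations
fiber_light_w = 0.01    # laser light into the network fibers

heat_w = POWER_IN_W - kinetic_air_w - sound_w - fiber_light_w
print(f"Heat output: {heat_w:.2f} W ({100 * heat_w / POWER_IN_W:.1f}% of input)")
# -> about 290 W, i.e. ~96.6% even with the most generous non-heat guesses;
#    with only 1 W of air movement it is ~99.6%.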
On 2019-12-18 22:14 CET, Rod Beck wrote:
Well, the fact that a data center generates a lot of heat means it is consuming a lot of electricity.
Indeed, they are consuming lots of electricity. But it's easier to measure that by putting an electricity meter on the incoming power line (and the power company will insist on that anyway :-) than to measure the heat given off.
It is probably a major operating expense.
There's no "probably" about it. It *is* a major operating expense. When we bought our latest HPC clusters last year, the estimated cost of power and cooling over five years, was ca 25% of the cost of the clusters themselves (hardware, installation, and hardware support). And yes, the cost for cooling is a large part of "power and cooling", but it scales pretty linearly with electricity consumption. Since every joule electricity consumed is one joule heat that needs to be removed.
And while it may be efficient given current technology standards, it naturally leads to the question of how we can do better.
Absolutely. But we are not looking at heat dissipation, but at power consumption. Heat only comes into it for the cooling, but since every joule of electricity consumed becomes one joule of heat to be removed by cooling, they are one and the same.
Thermodynamics is only part of the picture. The other part is economics. If you see a lot of heat being produced and it is not the intended output, then it is a natural focus for improvement. My guess is that a lot of corporate research is going into trying to reduce chip electricity consumption.
Intel, AMD, ARM, IBM (Power), they are all trying to squeeze out more performance, less power usage, and more performance per watt. This affects not only datacenter operating costs, but also things like battery life in laptops, tablets and phones. The same goes for computer manufacturers like HPE, Dell, SuperMicro, Apple, and so on (but they are mostly beholden to the achievements of the CPU and RAM manufacturers).

Datacenter operators are trying to lower their power consumption by having more efficient UPSes, more efficient cooling systems, and by buying more efficient servers and network equipment.

(And as a slight aside, you can sometimes use the heat produced by the datacenter, and removed by the cooling system, in useful ways. I know one DC that used their heat to melt snow in the parking lot during winter. I have heard of some DC that fed their heat into a greenhouse next door. And some people have managed to heat their offices with warm water from their DC, although that is generally not very easy, as the output water from DCs tends to have too low a temperature to be used that way.)
So my gut feeling might still be relevant. It is about the level of energy consumption, not just the fact that electricity becomes disorderly molecular gyrations.
Energy consumption is very important. Efficiency is very important. My point is that efficiency is measured in *utility* produced per input power (or input euros, or input man-hours). *In*efficiency is, or at least should be, measured in needed input per utility produced, not in heat left over afterwards.

/Bellman
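A minimal sketch of measuring efficiency the way suggested above, as utility per unit of input rather than as a heat percentage (the workload numbers are invented for the example):

# Minimal illustration of measuring efficiency as utility per unit of input,
# rather than as a heat percentage.  The workload numbers are invented.

def perf_per_watt(useful_output_per_s, input_power_w):
    """Utility produced (e.g. requests/s) per watt of input power."""
    return useful_output_per_s / input_power_w

old_server = perf_per_watt(useful_output_per_s=2000, input_power_w=400)
new_server = perf_per_watt(useful_output_per_s=3000, input_power_w=300)

print(f"Old server: {old_server:.1f} requests/s per watt")  # 5.0
print(f"New server: {new_server:.1f} requests/s per watt")  # 10.0
# Both servers turn essentially all of their input power into heat, yet the
# second one is twice as efficient in the sense that actually matters.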