On 2019-12-18 22:14 CET, Rod Beck wrote:
> Well, the fact that a data center generates a lot of heat means it is consuming a lot of electricity.
Indeed, they are consuming lots of electricity. But it's easier to measure that by putting an electricity meter on the incoming power line (and the power company will insist on that anyway :-) than to measure the heat given off.
> It is probably a major operating expense.
There's no "probably" about it. It *is* a major operating expense. When we bought our latest HPC clusters last year, the estimated cost of power and cooling over five years was ca. 25% of the cost of the clusters themselves (hardware, installation, and hardware support). And yes, the cost of cooling is a large part of "power and cooling", but it scales pretty linearly with electricity consumption, since every joule of electricity consumed is one joule of heat that needs to be removed.
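To make that 25% figure concrete, here is a back-of-the-envelope sketch in Python. All the numbers below (cluster power draw, PUE, electricity price, hardware cost) are made up for illustration, not our actual procurement figures, but they land in roughly the same ballpark:

    # Back-of-the-envelope estimate; every number here is assumed, not measured.
    it_load_kw   = 500            # average IT power draw of the cluster (assumed)
    pue          = 1.4            # total facility power / IT power (assumed)
    price_eur    = 0.08           # electricity price, EUR per kWh (assumed)
    hours_5y     = 5 * 365 * 24   # five years of continuous operation

    energy_kwh   = it_load_kw * pue * hours_5y   # total energy over five years
    cost_eur     = energy_kwh * price_eur        # electricity + cooling bill
    hardware_eur = 10_000_000                    # hardware cost (assumed)

    print(f"5-year power+cooling: {cost_eur/1e6:.1f} MEUR "
          f"({100*cost_eur/hardware_eur:.0f}% of hardware cost)")

With those assumed inputs the sketch comes out at about 2.5 MEUR, i.e. roughly a quarter of the hardware cost, which is the order of magnitude I was talking about.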
> And while it may be efficient given current technology standards, it naturally leads to the question of how we can do better.
Absolutely. But we are not looking at heat dissipation, we are looking at power consumption. Heat only comes into it for the cooling, but since every joule of electricity consumed becomes one joule of heat to be removed by cooling, they are one and the same.
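To illustrate why the two are interchangeable: every watt that goes into the racks comes back out as heat, and the cooling plant then pays some extra electricity on top to remove it. The COP below is just an assumed figure for a typical chilled-water plant, not a measured one:

    # Conservation of energy: IT power in equals heat out, 1:1.
    it_load_kw  = 700                     # electricity going into the racks (assumed)
    heat_kw     = it_load_kw              # same amount comes out as heat
    cooling_cop = 4.0                     # assumed coefficient of performance of the cooling plant
    cooling_kw  = heat_kw / cooling_cop   # extra electricity spent removing that heat
    total_kw    = it_load_kw + cooling_kw

    print(f"IT load {it_load_kw} kW -> heat {heat_kw} kW -> "
          f"cooling draw {cooling_kw:.0f} kW -> total {total_kw:.0f} kW")

Make the cooling plant worse (lower COP) and the cooling share of the power bill grows accordingly, but the heat to be removed is always exactly the electricity consumed.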
> Thermodynamics is only part of the picture. The other part is economics. If you see a lot of heat being produced and it is not the intended output, then it is a natural focus for improvement. My guess is that a lot of corporate research is going into trying to reduce chip electricity consumption.
Intel, AMD, ARM, IBM (Power): they are all trying to squeeze out more performance, less power usage, and more performance per watt. This affects not only datacenter operating costs, but also things like battery life in laptops, tablets and phones. So are computer manufacturers like HPE, Dell, SuperMicro, Apple, and so on (but they are mostly beholden to the achievements of the CPU and RAM manufacturers). Datacenter operators are trying to lower their power consumption by having more efficient UPSes, more efficient cooling systems, and by buying more efficient servers and network equipment.

(And as a slight aside, you can sometimes use the heat produced by the datacenter, and removed by the cooling system, in useful ways. I know one DC that used its heat to melt snow in the parking lot during winter. I have heard of a DC that fed its heat into a greenhouse next door. And some people have managed to heat their offices with warm water from their DC, although that is generally not very easy, as the output water from DCs tends to have too low a temperature to be used that way.)
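Coming back to the performance-per-watt point above: that is the metric the CPU vendors actually compete on, rather than heat output. A toy comparison, with invented numbers rather than any vendor's real specifications:

    # Two hypothetical CPU generations; figures are invented, not vendor data.
    cpus = {"older generation": (1500, 205),   # (GFLOPS, watts)
            "newer generation": (2600, 225)}

    for name, (gflops, watts) in cpus.items():
        print(f"{name}: {gflops} GFLOPS at {watts} W = {gflops/watts:.1f} GFLOPS/W")

The newer part in this made-up example draws slightly more power but does far more work per watt, which is exactly the kind of improvement that shows up as a lower power bill per unit of computation.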
> So my gut feeling might still be relevant. It is about the level of energy consumption, not just the fact that electricity becomes disorderly molecular gyrations.
Energy consumption is very important. Efficiency is very important. My point is that efficiency is measured in *utility* produced per unit of input power (or input euros, or input man-hours). *In*efficiency is, or at least should be, measured in the input needed per unit of utility produced, not in the heat left over afterwards.

/Bellman