Herb Zien is CEO of LiquidCool Solutions, a technology development firm with patents surrounding cooling electronics by total immersion in a dielectric fluid.
Predicting future data center space, power and cooling requirements has always been a challenge, recently compounded by the advent of enormous cloud computing facilities. Guides have been developed that provide best practices and standards for energy-efficient data center design, but prescriptive design parameters have not been able to keep pace with the growth of equipment densities and thermal output. Although data center design standards are becoming more dynamic and balanced between the need for reliability and energy efficiency, they do not fully consider data center optimization and new, disruptive technologies.
The fact is, today’s data center design makes no sense at all.
Most data centers are large air-conditioned rooms with hot and cold aisles, where cold air is forced up through holes in the raised floor. They require humidity control. As supply temperatures are raised to reduce mechanical refrigeration, fans must work harder, offsetting some of the energy savings.
Free cooling is used where possible, but it can introduce dirty outside air, triggering its own set of maintenance issues. Adiabatic walls that cool by evaporation increase water usage, and the saturated air must be reheated to bring humidity down to acceptable levels.
Circulating air removes low-grade heat, but it does not cope well with point sources. To accommodate these limitations, large cloud data centers deploy hundreds of low-power racks instead of dozens of high-power racks. The result is an unnecessarily large white space footprint and higher maintenance and energy costs. Like an automobile idling at a stop sign, an idle server draws almost half of its full-load power, and in large arrays many servers are literally doing nothing but wasting energy.
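The scale of that idle waste is easy to quantify. A minimal sketch, using the "almost half of full load" figure above plus assumed, illustrative numbers (a 500 W full-load server and a hypothetical array of 1,000 idle machines):

```python
# Illustrative estimate of energy wasted by idling servers.
# Assumed figures: 500 W full-load draw; idle at ~50% of full load,
# per the "almost half" observation; 1,000 idle servers.
FULL_LOAD_W = 500
IDLE_FRACTION = 0.5
IDLE_SERVERS = 1000
HOURS_PER_YEAR = 8760

idle_kw = IDLE_SERVERS * FULL_LOAD_W * IDLE_FRACTION / 1000
annual_mwh = idle_kw * HOURS_PER_YEAR / 1000

print(f"Idle draw:    {idle_kw:.0f} kW")      # 250 kW
print(f"Annual waste: {annual_mwh:.0f} MWh")  # 2190 MWh
```

Even under these modest assumptions, a thousand idle servers burn roughly a quarter of a megawatt continuously, before counting the cooling energy spent removing that heat.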
Air is a thermal insulator with an extremely low heat capacity and virtually no thermal mass, and cold air sinks. Contact between air and electronics promotes oxidation and corrosion. Pollutants in the air can cause additional damage. Fans are inefficient and can fail, affecting reliability. Earplugs are required in some data centers due to excessive fan noise. Heat generation at the device level is bumping up against the thermodynamic limit.
This legacy design approach is completely unnecessary.
It’s common knowledge that liquids conduct heat better than gases. With the right liquid cooling technology - and the devil is in the details - enterprise and cloud data centers would look far different than they do today. There would be no high ceilings and no raised floors, no chiller room, no CRAC units and no hot aisles. Racks would be fewer and denser. Electronics would be isolated from the environment so there would be no need for outside air or humidity control. The white space would be 60 percent smaller and power demand 40 percent lower than a conventional air-cooled data center. Capital and operating costs for infrastructure would also be significantly lower, and the total spend on racks and servers would be less than today’s.
The industry continues to tweak a data center cooling design that never worked very well, settling for “less worse” results. By pivoting to liquid cooling technology, data center design could realize a performance improvement at lower capital cost ... and without any noise.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.