In today's digitally driven world, nothing runs without data. From streaming services and social media platforms to e-commerce and cloud computing, the relentless surge in data generation isn't slowing down anytime soon. Latest estimates place the volume of data generated per day at 328.77 million terabytes, with global projections for 2025 reaching 180 zettabytes.
For context, 1 zettabyte equals 1 billion terabytes of data — and as astounding as these numbers are, they are expected to swell even higher. This is unsurprising, especially when you consider that more data has been created in the last few decades than in all of prior human history.
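As a rough back-of-the-envelope sketch (the daily rate and the 2025 projection come from different estimates, so this is only an illustration of scale, not a forecast), the daily figure can be annualized in a few lines of Python:

```python
# Back-of-the-envelope check on the data-volume figures cited above.
# Inputs from the article: ~328.77 million TB generated per day;
# 180 ZB projected for 2025.
TB_PER_ZB = 1e9          # 1 zettabyte = 1 billion terabytes

daily_tb = 328.77e6      # terabytes generated per day
annual_zb = daily_tb * 365 / TB_PER_ZB

print(f"Annualized volume at today's rate: {annual_zb:.0f} ZB/year")  # ~120 ZB
```

At roughly 120 ZB per year, today's daily rate sits comfortably below the 180 ZB projected for 2025, which is consistent with the continued growth the estimates describe.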
This exponential growth, however, comes with a price — a parallel surge in the demand for data centers to store, process, and distribute all this data.
And as demand for data centers grows, so too does the demand for energy to run those data centers and to keep them cool so they don't overheat and become damaged or crash. This is why the US Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E) created the COOLERCHIPS program: "to develop transformational, highly efficient, and reliable cooling technologies for data centers."
The Challenges of Keeping the Data Center Cool
Storing data comes at a cost, especially in terms of its environmental footprint, which spans energy consumption, water usage, and even the lifecycle of the hardware itself. A comprehensive review of data center energy estimates authored by David Mytton and Masao Ashtine found that data centers consume approximately 1% to 2% of global energy and 2% of total electricity used in the US.
However, a significant portion of that energy is dedicated to one critical function: cooling. Powerful computers generate a lot of heat as they process and store massive amounts of data, and if that heat is not controlled, overheating can damage server equipment and cause crashes.
When asked about the current challenges with cooling data centers, Moshe Tanach, founder and CEO at NeuReality, told Data Center Knowledge: "Data center cooling energy is a side-effect of the compute infrastructure's power consumption and heat dissipation. As much as it's a critical piece in data centers, the larger problem to address is the source of the heat — the compute infrastructure."
The biggest contributors to the growing amount of heat that data centers produce, according to Tanach, are deep learning accelerator (DLA) systems like GPUs, Tensor Processing Units (TPUs), and others. "And it is going to get worse when generative AI and large language models [LLMs] widen their deployment," he added.
Cooling is critical to data center performance, similar to how you'd need a fan or air conditioning to keep yourself cool on a hot summer day or else you could suffer from heat exhaustion. This is why, on average, more than 40% of the energy consumed by these data centers goes into powering cooling and ventilation systems. And whether it is a computer room air conditioning (CRAC) unit, a computer room air handler (CRAH) unit, rear door heat exchangers, or direct expansion (DX) cooling, these cooling systems are not cheap.
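That 40% figure maps roughly onto power usage effectiveness (PUE), the industry's standard ratio of total facility energy to IT energy. A minimal sketch of the arithmetic follows; the 40% cooling share is the only input taken from the article, and treating all non-cooling energy as IT load is a simplifying assumption (it ignores power-conversion losses and lighting, so the real PUE would run slightly higher):

```python
# Illustrative PUE estimate from the cooling share cited above.
# Assumption: everything that is not cooling/ventilation counts as IT load.
cooling_share = 0.40               # share of total facility energy (from the article)
it_share = 1.0 - cooling_share     # simplifying assumption

pue = 1.0 / it_share               # PUE = total facility energy / IT energy
print(f"Implied PUE: {pue:.2f}")   # ~1.67
```

An implied PUE near 1.67 is well above the figures the big hyperscalers report for their best facilities, which is exactly the efficiency gap the operators quoted here are trying to close.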
Vladimir Galabov, director of Cloud and Data Center Research at research firm Omdia, agreed that a big challenge for data center cooling is reducing energy consumption of the air conditioners and fans cooling servers. "For over a decade the largest data center operators have been experimenting with ways to cool servers more efficiently. One avenue of experimentation has been the mechanical aspect of cooling — fans. Server fans consume a substantial share of the electricity a server consumes," Galabov told Data Center Knowledge. "Eliminating fans altogether and using liquid cooling only, or using a combination of cooling technologies like a rear door heat exchanger plus direct-to-chip cooling is another avenue being explored.
"I expect that optimization around the mechanical components of computing and power conversion would be two key data center efficiency avenues that will be explored by data center operators," he added. "It is better to improve efficiency of non-critical components rather than using a less powerful processor."
A Global Market Insights report on the data center cooling market valued the market at $10 billion in 2022. The report also noted that the urgent need to reduce the carbon footprint of data centers (currently more than 1% of global energy-related GHG emissions and 3% in the US) is driving the implementation of energy-efficient cooling systems.
Moving Away From Legacy Data Center Designs
Apart from energy usage, the direct and indirect consumption of water, whether for generating electricity or for cooling data centers, also poses a climate risk if left unchecked. This was echoed by Bruno Berti, senior vice president of product and go-to-market at NTT Global Data Centers, who told Data Center Knowledge that legacy data center designs use "evaporative cooling technologies that waste a lot of water." While Berti acknowledged that these legacy technologies are "very efficient and help to get data centers cool, [they are] obviously very wasteful from the perspective of water," adversely impacting the climate, he added.
This is why NTT Global Data Centers moved away from evaporative cooling technologies and techniques to air-cooled chillers — a closed water system that doesn't waste water, according to Berti.
The COOLERCHIPS Program
Berti stressed the importance of leveraging advanced cooling solutions like air-cooled chillers that can improve data center cooling while reducing costs.
This is where the COOLERCHIPS (which stands for Cooling Operations Optimized for Leaps in Energy, Reliability, and Carbon Hyperefficiency for Information Processing Systems) program can help. ARPA-E designed the COOLERCHIPS program to revolutionize data center cooling by leveraging cutting-edge technologies and implementing energy-efficient strategies to reduce the total amount of energy needed to cool data centers in any US location.
With funding of $42 million for the program, 15 projects located at universities, businesses, and national labs will receive grants to develop high-performance, energy-efficient cooling solutions for data centers. Some of the recipients include Nvidia, the University of California, Hewlett-Packard (HP), the University of Arkansas, Intel Federal, and Purdue University, with funding ranging from $1.2 million to $5 million for each recipient.
These projects cover the development of technologies like secondary cooling loop components, cooling system software, cooling systems for modular/edge data centers, and even support facilities for testing the new technologies. All of these are designed to reduce the power used for cooling to a mere 5% of a data center's total energy consumption, down from the 33% to 40% currently used for cooling.
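The scale of that target is easy to underestimate. A short illustrative calculation (assuming, as a simplification, that the non-cooling load stays fixed while cooling falls from 40% of total facility energy to 5%) shows the overall effect:

```python
# Illustrative effect of cutting cooling from 40% to 5% of total
# facility energy, with the non-cooling load held fixed at 1 unit.
# Simplification: all non-cooling energy is lumped together as IT load.
it_load = 1.0

total_before = it_load / (1 - 0.40)   # cooling is 40% of the total
total_after  = it_load / (1 - 0.05)   # cooling is 5% of the total

saving = 1 - total_after / total_before
print(f"Facility energy before: {total_before:.2f} units")
print(f"Facility energy after:  {total_after:.2f} units")
print(f"Overall reduction: {saving:.0%}")  # ~37%
```

Under these assumptions, hitting the 5% target would shave total facility energy by roughly a third, which is why ARPA-E frames the program as transformational rather than incremental.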
In turn, this reduction will lower the operational carbon footprints of these data centers and contribute to environmental sustainability.
"Creating solutions to cool data centers efficiently and reduce the associated carbon emissions supports the technological breakthroughs needed to fight climate change and secure our clean energy future," said US Secretary of Energy Jennifer Granholm.
For Galabov, "Any research and funding that can enable data efficiency is most welcome." But, he added, "how successful the program will be depends on the attractiveness of the innovations that get developed as a result of the funding."
One big concern that Galabov doesn't see many COOLERCHIPS projects addressing is the availability of non-toxic and affordable fluids for liquid cooling. "This is an area that requires development," he said. "This would be something that we all need to care about."
Because data centers use such a tremendous amount of electricity, even small changes can make a big difference, Galabov concluded. In an industry that spends $3 trillion yearly on electricity bills, even a 5% or 10% reduction in electricity consumption will lead to huge savings in data center costs and improve efficiency, he said.