Will liquid cooling ever become a mainstream data center technology? Liquid solutions have shown modest growth in recent years, but resistance remains high in some quarters. A recent SearchDataCenter survey found that 65 percent of respondents said they would never use liquid cooling in their data center.
But the growth of high-density blade server installations and virtualization may prompt a change of heart over time, according to Fred Stack, the vice president of marketing at Liebert Precision Cooling, a unit of Emerson Network Power. “The high-density cooling solutions are no longer a small experiment or niche market,” said Stack. “It is becoming a principal design embraced by many industries out there. There are still a fair number of data center operators that tend to spread out rather than design for high capacity. There’s some that have that extra (data center) capacity, but that number is shrinking fast. Some people are limited by the (power available from the) utility company.”
Since cooling accounts for a large share of the energy used in many data centers, it is a critical focus for managers seeking energy-efficiency gains. But there are many strategies to choose from when optimizing data center cooling. At what level do you optimize, for instance: the room level, the rack level, or even the chip level? There are also choices of cooling medium: air or liquid? And if it’s liquid cooling, water or refrigerant?
There’s no “one approach fits all” solution, according to Stack. “One of the first questions is whether you’re going to solve the problem on a rack basis or a room basis,” said Stack. “Then there’s the question of what kind of fluid to use. A very high percentage of the market is going to open architectures and pump refrigerants.”
Liebert offers products to support both rack-level (closed) architectures and water cooling. But Stack said Liebert sees refrigerant-based cooling in an open architecture as the best approach for the majority of the growing number of high-density installations.
“An open architecture is the most forgiving,” said Stack. “There are places where closed architecture is the right approach. These tend to be focused situations dealing with a small number of particularly high density racks. For any application that’s gone high density, more than 8 kilowatts a rack, you get into fluid cooling.”
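Stack’s 8-kilowatt rule of thumb can be sketched as a toy decision helper. This is purely illustrative; the function name and the return strings are hypothetical, and only the 8 kW/rack figure comes from Stack’s comments:

```python
def suggest_cooling(rack_kw: float) -> str:
    """Toy illustration of Stack's rule of thumb: racks drawing
    more than about 8 kW call for some form of fluid cooling."""
    if rack_kw <= 8:
        # Below the threshold, conventional air cooling generally suffices.
        return "air cooling"
    # Above ~8 kW per rack, fluid cooling comes into play. Per Stack,
    # open refrigerant-based architectures suit most installations;
    # closed rack-level systems fit a small number of very dense racks.
    return "fluid cooling (open refrigerant or closed rack-level)"

print(suggest_cooling(5))   # a typical low-density rack
print(suggest_cooling(12))  # a high-density blade rack
```

Real deployments would of course weigh many more factors (utility power limits, floor space, existing chilled-water plant) than a single per-rack number.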
As for the cooling medium, Stack referenced the ongoing debate about “data center hydrophobia.”