
Chill Off 2: Detailed Data on Cooling Choices


When the first Data Center Chill Off was held in 2008, one of the most intriguing aspects was the ability to compare leading vendors and products with one another. When the findings of the Chill Off 2 were announced Thursday at the Data Center Efficiency Summit in San Jose, Calif., the results highlighted the potential of a prototype from Clustered Systems. But the larger focus of the 18-month project was comparing different approaches to cooling a data center.

On that front, the Chill Off 2 final report – Evaluation of Rack-Mounted Computer Equipment Cooling Solutions – is a comprehensive resource for data center operators exploring the most efficient way to cool their servers. The project evaluated 11 cooling technologies from eight different vendors to determine which were the most efficient.

Importantly, the evaluations were conducted using identical conditions in the same facility (an Oracle/Sun Microsystems data center), across a variety of temperature ranges for supply air and water, and with several different metrics. The goal was to provide the fairest and most detailed comparisons possible, adjusting for variables that could influence decisions about the effectiveness of each solution.

The types of cooling systems tested were:

  • Rack cooler with air-to-water heat exchanger
  • Row-level rack cooler with air-to-refrigerant or air-to-water heat exchanger
  • Rack rear-door passive cooler with air-to-refrigerant or air-to-water heat exchanger
  • A prototype direct-touch cooling system using refrigerant
  • A container-type enclosure cooled with chilled water

Some broad trends and opportunities emerged. "Encouraging the use of higher chilled water temperatures, higher server air inlet temperatures and increased use of free cooling will yield improved energy efficiency," according to the report, written by Henry Coles of Lawrence Berkeley National Laboratory.

Not surprisingly, chillers were identified as major energy guzzlers. Chillers, which are used to refrigerate water, are widely used in data center cooling systems but require a large amount of electricity to operate. With the growing focus on power costs, many data centers are reducing their reliance on chillers to improve the energy efficiency of their facilities.

Gains from Warmer Chiller Set Points
"The largest consumer of power per the amount of IT power cooled is the power needed to make the chilled water," the report read. "Small increases in the chilled water supply set point can provide large energy savings. Depending upon system design, the chilled water distribution pump power is not a large component of overall energy efficiency but savings can be easily achieved if the supplied water supply delta pressure is reduced to the lowest required level."

The Chill Off 2 Energy Efficiency (COEE) metric found that all of the closely coupled solutions tested were more energy efficient than traditional cooling using a hot aisle/cold aisle arrangement with computer room air conditioners (CRACs) located around the perimeter of the data center. The in-rack and in-row cooling technologies were tightly bunched in the COEE tests, with rear-door cooling units using slightly less energy than the in-rack and in-row units. The tests did not examine the costs of the various technologies.
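The summary above does not spell out the COEE formula, which is defined in the full report. As a rough illustration only, metrics of this kind typically express cooling power drawn per unit of IT power cooled, where lower is better. The sketch below is a hypothetical calculation under that assumption, with made-up numbers; it is not the report's actual methodology or data.

```python
# Hypothetical illustration of a cooling-efficiency ratio similar in spirit
# to the Chill Off 2 Energy Efficiency (COEE) metric: cooling power drawn
# per kW of IT load cooled (lower is better). The exact COEE definition is
# in the full report; the values below are invented for illustration only.

def cooling_efficiency_ratio(cooling_kw: float, it_kw: float) -> float:
    """Return cooling power per kW of IT load (dimensionless ratio)."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return cooling_kw / it_kw

# Compare a perimeter CRAC layout against a rear-door heat exchanger
# (illustrative values, not results from the report).
solutions = {
    "perimeter CRAC (hot/cold aisle)": cooling_efficiency_ratio(38.0, 100.0),
    "rear-door heat exchanger": cooling_efficiency_ratio(22.0, 100.0),
}

for name, ratio in sorted(solutions.items(), key=lambda kv: kv[1]):
    print(f"{name}: {ratio:.2f} kW of cooling per kW of IT")
```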

There's a ton of data in the report. In addition, Data Center Pulse has posted videos of most of the cooling solutions that were tested.

Taken together, the 70-page report and videos serve as a comprehensive resource for data center managers seeking to stay current on the latest cooling options. The effort was put together by the Silicon Valley Leadership Group, Data Center Pulse, and Lawrence Berkeley National Laboratory.
