Clustered Systems is Hot Property in Chill Off 2

The Chill Off 2 spent 18 months evaluating cooling technologies from many of the data center industry’s largest vendors. But in the end, tiny startup Clustered Systems stole the spotlight with a prototype that earned the best scores for energy efficiency.

No official “winner” was announced for the Chill Off 2, which was sponsored by the Silicon Valley Leadership Group (SVLG) and conducted by Data Center Pulse and Lawrence Berkeley National Laboratory (LBNL). Instead, the results presented at Thursday’s Data Center Efficiency Summit in San Jose, Calif., focused on the small performance differences among the approaches, with rear-door liquid cooling products faring slightly better than rack-level and row-level solutions.

Trends Highlighted, Rather Than Vendors
All three of these “closely-coupled” solutions outperformed traditional perimeter-based computer room air conditioners (CRACs). Performances by individual vendors (which included IBM, APC by Schneider, Emerson Network Power, Sun/Oracle, Coolcentric and Rittal) were not discussed at the SVLG summit, but were included in a 70-page report (PDF) on the LBNL web site. (Note: We’ll have additional coverage of the full Chill Off 2 results in the coming days.)

But the 36-server rack from Clustered Systems had a clear advantage in the metrics used by Data Center Pulse and LBNL, offering energy savings of 12 to 16 percent compared to the other cooling approaches. The servers built by the Menlo Park, Calif., company have no fans; instead, they cool processors with a cold plate containing tubing filled with liquid coolant. By removing fans and dedicating more power to processors, the company says its product will support power densities of up to 80 kilowatts per rack.
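To put those figures in perspective, here is a back-of-the-envelope sketch (not from the LBNL report) of what 12 to 16 percent savings could mean over a year. It assumes a rack running continuously at the 80-kilowatt density cited above, and that the savings percentage applies to total rack energy use; the report's actual metric may be defined differently.

```python
# Illustrative arithmetic only: assumes a fully loaded 80 kW rack
# running year-round, with the quoted 12-16% savings applied to
# total rack energy. These assumptions are not from the report.

RACK_POWER_KW = 80            # assumed: the 80 kW/rack density cited
HOURS_PER_YEAR = 24 * 365     # 8,760 hours

annual_kwh = RACK_POWER_KW * HOURS_PER_YEAR   # 700,800 kWh/year

for savings in (0.12, 0.16):
    saved = annual_kwh * savings
    print(f"{savings:.0%} savings = {saved:,.0f} kWh/year")
```

Under these assumptions, that works out to roughly 84,000 to 112,000 kilowatt-hours saved per rack per year.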

The Chill Off 2 accidentally showcased the potential for Clustered Systems to achieve even greater efficiency. During the testing, a chiller disruption caused the temperature of the chilled water used in the prototype’s cooling distribution unit (CDU) to rise from 44 degrees F to 78 degrees F. During the 46-minute cooling outage, CPU temperatures in the Clustered Systems rack rose, but the servers continued to operate.

“The observations during the use of 78F (25.5C) chilled water temperature indicate that the Clustered Systems design potentially can be operated with very low-cost cooling water, providing additional energy savings compared to the test results,” said the report on the Clustered Systems technology, prepared by Lawrence Berkeley National Laboratory.

Next Up: DOE-Funded Platform at SLAC
Clustered Systems’ technology will now be deployed in an ultra-dense liquid-cooled server platform at the Stanford Linear Accelerator Center (SLAC). The two-rack system will include hundreds of processor cores and power density of 80 kilowatts per cabinet. The project, which earned a $2.8 million grant from the U.S. Department of Energy, will also incorporate technology from the Edison Materials Technology Center and Emerson Network Power’s Liebert unit.

During the Chill Off 2 process, Data Center Pulse co-founder Dean Nelson reviewed the Clustered Systems technology with CEO Phil Hughes, who shows off the server design in the 36-server rack dubbed the “BitFridge.” The video runs about 9 minutes.

For additional background on Clustered Systems, see an earlier video from 2009.


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Comments


  1. So what do they do? They sell the metal plates and the rack, and you also need some pipe connected to the rack?

  2. It appears the servers must be modified. How? Their website does not discuss this at all. Is it practical as a retrofit?

  3. Just a quick update: the technology was licensed to Liebert (Emerson Network Power) and should be in production by year end. Evaluation units are available now. Server retrofit is a screwdriver operation. Heat sinks are replaced with heat risers that bring heat to the lid level, where it is conducted to a cold plate pressed onto the lid (easy to explain, but it needed considerable invention to make it work cheaply and simply).