Appro Introduces Liquid-Cooled Supercomputer

Appro introduced the new Appro Xtreme-Cool Supercomputer, which features an energy-efficient design using warm-water liquid-cooling heat exchangers and no chillers. The system also captures 80 percent of its heat into the warm water for possible reuse. The company will showcase the system at the SC12 event next week in Salt Lake City.

The new Xtreme-Cool supercomputer is composed of blade nodes that are typically installed in a cluster. The liquid cooling installed in the nodes is connected to the Coolant Distribution Unit (CDU) via tubes with drip-free quick connects. Leak detection and prevention are integrated in the system for an extra measure of protection. Integrated remote power and temperature monitoring and reporting is also provided.

“Appro’s new Xtreme-Cool Supercomputer is aimed squarely at the worldwide high performance computing market, which reached a record $10.3 billion in 2011 and is predicted by IDC to exceed $14 billion by 2016,” said Earl Joseph, IDC HPC Program Vice President. “Appro’s new product is designed to address key customer requirements such as less or no air-conditioning in the datacenter with warm liquid-cooling heat exchanger technology, which enables direct cooling of the compute processor and memory combined with power and temperature monitoring software. This has the potential to improve price/performance and TCO for dense, large-scale cluster environments.”

Using a higher water temperature in a cooling system allows the chillers to run less, or to be eliminated entirely. Higher inlet water temperature maximizes the number of hours in which “free cooling” is possible through the use of water-side economizers. Warm-water cooling works best in a tightly designed and controlled environment that brings the cooling as close as possible to the heat-generating components.
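The relationship between inlet water temperature and free-cooling hours can be illustrated with a short sketch. This is not Appro software; the hourly wet-bulb temperature model and the 4°C economizer approach temperature are assumptions chosen for illustration only.

```python
import math

# Assumed approach temperature between outdoor wet-bulb and supplied water
ECONOMIZER_APPROACH_C = 4.0

def free_cooling_hours(hourly_wet_bulb_c, inlet_water_c):
    """Count hours where a water-side economizer alone can supply
    water at or below the required inlet temperature."""
    return sum(1 for wb in hourly_wet_bulb_c
               if wb + ECONOMIZER_APPROACH_C <= inlet_water_c)

# Hypothetical year of hourly wet-bulb temperatures (simple seasonal model)
temps = [10 + 12 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

chilled = free_cooling_hours(temps, inlet_water_c=15)  # traditional chilled water
warm = free_cooling_hours(temps, inlet_water_c=35)     # warm-water cooling

print(warm >= chilled)  # → True: warmer inlet water means more free-cooling hours
```

Under this toy climate model, the warm-water system can run chiller-free for every hour of the year, while the chilled-water system falls back to mechanical cooling for a large fraction of it.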

The Xtreme-Cool is targeted at medium-to-large data centers with HPC deployments of up to 25 petaflops of computing performance. Each node holds two processors from the Intel Xeon processor E5 family, with approximately 80 nodes per standard 42U rack. The system also supports hybrid processing based on Intel Xeon processors paired with Intel Xeon Phi coprocessors, and it uses 480-volt power distribution with a choice of 208- or 277-volt power supplies for further energy savings.

“Customers who are pressing the state of the art in scientific discovery are looking for not only outstanding performance and energy efficiency, but also programmability and manageability,” said Dr. Rajeeb Hazra, VP Intel Architecture Group and GM Technical Computing, Datacenter and Connected Systems Group. “The Appro Xtreme-Cool meets those needs by combining the power of the Intel Xeon processor E5 family with the programmability and energy efficiency of Intel Xeon Phi coprocessors, based on the Intel Many Integrated Core (MIC) architecture. This combination of technologies establishes a new standard for both programmer productivity and performance per watt.”

NOAA Selects Appro

Appro also announced the delivery of a 113.2-teraflop Xtreme-X supercomputer, under a subcontract from CSC, to the National Oceanic and Atmospheric Administration’s (NOAA) Hurricane Forecast Improvement Project (HFIP). The $317 million contract was awarded to CSC in 2011 and included the requirement to build a supercomputer for modeling weather patterns.

“By installing the Appro Xtreme-X Supercomputer as part of the NOAA’s Hurricane Forecast Improvement Project (HFIP), CSC and Appro are working together to improve the reliability, fault tolerance and redundancy of the HPC solution, as well as flexibility for system scalability for future installations,” said Steve Baxter, program manager of CSC’s North American Public Sector.

The Appro Xtreme-X Supercomputer configuration for NOAA features a single-rail QDR InfiniBand interconnect and dual-socket, 8-core Intel Xeon processor E5 family nodes, providing a total of 113.2 TFlops of computing performance and 10.9 TB of memory.
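A back-of-envelope check makes the NOAA configuration concrete. The article does not state the clock speed or node count, so the 2.6 GHz clock and 8 double-precision FLOPs per cycle per core used below are assumptions typical of Sandy Bridge-era Xeon E5 parts, not published specifications.

```python
# Assumed per-node configuration (only socket and core counts come from the article)
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 8
CLOCK_GHZ = 2.6       # assumption: typical Xeon E5 base clock
FLOPS_PER_CYCLE = 8   # assumption: AVX, double precision

# Peak per-node performance in TFlops
node_peak_tflops = (SOCKETS_PER_NODE * CORES_PER_SOCKET
                    * CLOCK_GHZ * FLOPS_PER_CYCLE) / 1000

nodes = round(113.2 / node_peak_tflops)   # system total: 113.2 TFlops
mem_per_node_gb = 10.9 * 1024 / nodes     # system total: 10.9 TB of memory

print(nodes, round(mem_per_node_gb, 1))
```

Under these assumptions the system works out to roughly 340 nodes at about 33 GB of memory each, which is consistent with the quoted aggregate figures.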

About the Author

John Rath is a veteran IT professional and regular contributor at Data Center Knowledge. He has served many roles in the data center, including support, system administration, web development and facility management.
