Blue Waters: Awesome Power, Awesome Efficiency
June 24th, 2010 By: John Rath
Described as an “unrivaled national asset,” the Blue Waters supercomputer was unveiled Wednesday on the campus of the University of Illinois, where the National Center for Supercomputing Applications (NCSA) hosted the second biennial Building the Data Center of the Future workshop on HPC data centers. Blue Waters is a petascale supercomputer project in which awesome power is matched with awesome efficiency.
The HPC Frontier
Set to go online sometime in 2011, Blue Waters will set the mark for a new level of performance, with an expected peak of 10 petaflops. No stranger to the frontier of computing firsts, the NCSA teamed up with IBM and the Great Lakes Consortium for Petascale Computation not only to raise the HPC (High Performance Computing) bar a notch, but to take data center technologies along with it.
With a $208 million award from the National Science Foundation (NSF), the Blue Waters team went to work with an early, unique collaboration that brought the NCSA, IBM and data center teams together to ensure the supercomputer would be a unique tool for open scientific research, and that it would reside in a data center matching that blend of power and efficiency. The 88,000 square foot National Petascale Computing Facility (NPCF) houses 114 compute racks, each packed with an impressive array of technology.
Blue Waters contains over 300,000 cores, more than a petabyte of memory, more than 10 petabytes of storage and half an exabyte of archive storage, and is expected to sustain more than one petaflop on scientific applications. To support the supercomputer, the data center receives a 24 megawatt power feed from the University of Illinois' Abbott Power Plant and 5,400 tons of chilled water from the university chiller plant, and has a six-foot raised floor housing over 81 miles of cabling. Blue Waters is expected to consume approximately 15 megawatts of power.
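As a rough sanity check on those figures, the standard conversion of one ton of refrigeration to 12,000 BTU/h (about 3.517 kW) puts the chiller plant's capacity comfortably above the expected IT load. A back-of-envelope sketch (the conversion constant is standard; reading the difference as usable headroom is an assumption, since the chilled water also serves other facility loads):

```python
# Sanity check on NPCF chilled-water capacity vs. expected IT load.
# Standard conversion: 1 ton of refrigeration = 12,000 BTU/h ~= 3.517 kW.
KW_PER_TON = 3.517

chiller_tons = 5_400   # chilled-water capacity cited in the article
it_load_mw = 15        # expected Blue Waters power draw, in megawatts

cooling_capacity_mw = chiller_tons * KW_PER_TON / 1_000
print(f"Cooling capacity: {cooling_capacity_mw:.1f} MW")            # ~19 MW
print(f"Headroom over IT load: {cooling_capacity_mw - it_load_mw:.1f} MW")
```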
IBM Power 7
The Blue Waters supercomputer contains a lot of … water. IBM’s Michael J. Ellsworth explained IBM’s return to water-cooled chips and the reasons water cooling achieves such great efficiency with the extreme density of the Power7 architecture: an order of magnitude lower unit thermal resistance, roughly 3,500 times the heat-carrying capacity of air, total control of coolant flow, and lower operating temperatures.
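The “3,500 times” figure can be roughly reproduced from textbook material properties, comparing the volumetric heat capacity of water and air at approximately room conditions. A back-of-envelope sketch (the property values are standard approximations, not figures from the presentation):

```python
# Rough check of the ~3,500x heat-carrying claim: volumetric heat
# capacity of water vs. air at roughly room temperature and pressure.
water_j_per_m3_k = 4186 * 998   # specific heat J/(kg*K) * density kg/m^3
air_j_per_m3_k = 1005 * 1.2     # same units for air

ratio = water_j_per_m3_k / air_j_per_m3_k
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume than air")
```

The result lands within a few percent of the 3,500x figure cited in the talk.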
Set to debut with the 2011 launch of Blue Waters, the IBM Power platform has evolved dramatically since the P6 in 2008, when 12 racks occupying over 1,000 square feet of floor space would draw 864 kW. A single rack on the P7 platform will draw 175 kW and be 100 percent water cooled.
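Those two data points imply a substantial jump in per-rack power density. A quick sketch of the arithmetic (the kW and rack counts are from the article; the per-rack comparison is an inference):

```python
# Per-rack power density, P6 (2008) vs. P7, using the article's figures.
p6_total_kw, p6_racks = 864, 12
p7_rack_kw = 175

p6_rack_kw = p6_total_kw / p6_racks   # 72 kW per P6 rack
print(f"P6: {p6_rack_kw:.0f} kW/rack, P7: {p7_rack_kw} kW/rack "
      f"({p7_rack_kw / p6_rack_kw:.1f}x the per-rack density)")
```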
Ellsworth explained that with the proper hookups, a P7 node could reside in the conference room where he was speaking and operate as designed.
Blue Waters will take advantage of the most advanced technologies under development at IBM, including an advanced processor and memory subsystem, a new interconnect, a parallel file system, an operating system, a programming environment, and system administration tools. These technologies are embodied in the PERCS system design (Productive, Easy-to-use, Reliable Computing System). The complete details of the Blue Waters computing system can be found in a POWER7 architecture presentation here.
No Generators or UPS
When the NPCF (National Petascale Computing Facility) was being planned, a partnership with the University of Illinois netted a collaboration with the university chiller plant and Abbott Power Plant. A hard look at Abbott's historical performance and uptime resulted in a decision to forgo UPS and generators at the NPCF.
Around 70 percent of the year, the three on-site cooling towers provide water chilled by Mother Nature. The University chiller plant upgrades included a thermal storage tank, which has already delivered a significant cost savings. See this Data Center Knowledge article about other data centers utilizing this technology.
The in-cabinet Water Conditioning Units transfer 100 percent of the heat to water, eliminating the need for Computer Room Air Handlers (CRAHs). A few facility CRAH units were installed for other supercomputers in the NPCF, and for instances when maintenance is performed on Blue Waters nodes with the rear door open.
The University of Illinois recently pledged to take steps toward carbon neutrality, reduced energy use and overall improved sustainability in the future. This was in reaction to the May 15th release of the Office of Sustainability Illinois Climate Action Plan (iCAP).
The 15 megawatts of power for Blue Waters is delivered over four individual 480-volt AC feeds directly into every compute node. A PUE below 1.2 and LEED Gold certification are expected once the system comes online in 2011. A separate operations grant from the NSF will cover regular utility bills for Blue Waters.
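PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a sub-1.2 figure means less than 20 percent overhead for cooling and power distribution. A minimal sketch, with a hypothetical overhead number invented only to illustrate a result under the 1.2 target:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical illustration: a 15 MW IT load plus 2.5 MW of cooling and
# distribution overhead (the overhead figure is invented for this example).
it_kw = 15_000
overhead_kw = 2_500
print(f"PUE = {pue(it_kw + overhead_kw, it_kw):.2f}")  # 1.17, under the 1.2 goal
```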
The foresight, collaboration and passion behind the Blue Waters project are clearly evident in all aspects of the supercomputer, the NPCF, and the work of the University of Illinois and NCSA staff. The scientific and research communities will benefit from this national asset for many years to come.
Daniel Golding, posted June 24th, 2010
Well, when you eliminate UPS units and generator sets and use an outside chilled water plant, you get a very low “effective” PUE, but that’s largely fraudulent. I can put 10 servers in a warehouse and declare a very low PUE, but that doesn’t make it an industry standard data center, either.
Geoff Cruickshank, posted June 24th, 2010
I’m hearing you, Daniel. I was involved in what was supposed to be the first Green 6 star building in Darwin, Australia. After completion it was discovered that the formula for calculating the rating relied more upon where the power feeds came from (i.e. tenant or landlord) than upon the actual efficiency of the equipment. VERY fraudulent in my opinion. The result was a rating of 2 stars and a lawsuit from the customer.
Daniel, I could not agree more. Perhaps the article is not complete. The PUE is skewed in this example. In addition, this type of power and cooling architecture is not viable (economically or technologically) for 95% of today’s data centers.
Although I love the innovation at work here, it just doesn’t establish any obtainable benchmarks for present day critical facilities.