Supercomputers are extremely cool and help us tackle some of humanity’s biggest hopes and dreams, from mapping the human genome to finding the Higgs boson, the “God particle.” But they are also extremely expensive to operate, because they consume enormous amounts of energy.
The Texas Advanced Computing Center in Austin is home to not one but a series of energy-guzzling supercomputers. TACC is where the University of Texas at Austin keeps the big computing brains its researchers use to work on problems like the influenza A virus, which claims hundreds of thousands of lives around the world annually, or malfunctions of the NMDA brain receptor, linked to Parkinson’s, Alzheimer’s, and schizophrenia.
A recently announced project by TACC and a Japanese government research and development organization aims to demonstrate that much of the energy those supercomputers require can be generated by solar panels. The project also aims to demonstrate an atypical but reportedly more efficient power distribution scheme: feeding high-voltage DC power directly to servers.
An Uncommon Combination
Solar power, especially on-site solar power, is not a mainstay at data centers today by any means, but there are now numerous sizable deployments around the world. Examples of the biggest ones include Apple’s two 20 MW on-site solar farms at its Maiden, North Carolina, data center, and a 14 MW solar installation powering the QTS data center campus in Princeton, New Jersey.
The biggest challenge with on-site solar installations is that they require huge amounts of real estate to generate energy at data center scale. Solar is also an intermittent energy source, while data centers need a steady supply of electricity, so solar in data centers has to be combined with grid power, large-scale energy storage, or both.
No Boom for High-Voltage DC in Data Centers
But solar power lends itself especially well to use with high-voltage DC power distribution systems, since that’s the kind of current photovoltaic plants generate. A typical low-voltage AC distribution system in a data center receives 480V AC power from a utility feed, converts it to DC to charge UPS batteries, converts it back to AC on the UPS output, and then steps it down to 208V AC in a power distribution unit before pushing it to the server power supply, where it’s converted once again to 12V DC power for consumption by the computer components.
One argument for high-voltage DC power is elimination of all those conversion steps, since each one of them results in energy losses and reduces energy efficiency. Another argument is that a simpler system with fewer conversion points is more reliable because there are fewer components that can fail.
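The compounding effect of those conversion steps can be sketched with a quick calculation. The per-stage efficiency figures below are illustrative assumptions, not measurements from TACC or any vendor; the point is only that end-to-end efficiency is the product of every stage in the chain, so removing stages helps.

```python
# Illustrative only: hypothetical per-stage efficiencies for a
# conventional AC distribution chain vs. a shorter 380V DC chain.
# These numbers are assumptions for demonstration, not real data.
AC_STAGES = {
    "utility 480V AC -> UPS DC (rectifier)": 0.96,
    "UPS DC -> AC (inverter)": 0.96,
    "480V -> 208V step-down (PDU)": 0.98,
    "208V AC -> 12V DC (server PSU)": 0.90,
}
DC_STAGES = {
    "utility AC -> 380V DC (rectifier)": 0.96,
    "380V DC -> 12V DC (server PSU)": 0.94,
}

def chain_efficiency(stages):
    """End-to-end efficiency is the product of per-stage efficiencies."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

ac = chain_efficiency(AC_STAGES)  # ~0.81 with these assumed figures
dc = chain_efficiency(DC_STAGES)  # ~0.90 with these assumed figures
```

With these assumed numbers the four-stage AC chain delivers roughly 81 percent of utility power to the components, while the two-stage DC chain delivers roughly 90 percent; the actual gap depends entirely on the real hardware involved.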
The arguments against the alternative power distribution method include the fact that most hardware on the market doesn’t come with power supplies that can take high-voltage DC, and the increased risk of potentially deadly arc flash created by bringing high-voltage electricity to the IT racks, where data center technicians work. Yet another argument is that modern AC power distribution systems have become so efficient that whatever efficiency gains DC systems offer may be negligible in comparison.
TACC Hopes to Achieve 15 Percent Energy Savings
In high-power-density data centers like the ones that house supercomputers at TACC, however, the efficiency gains of a DC system can add up to a lot of savings, says Dan Stanzione, the center’s executive director. TACC’s newest supercomputer, Stampede, can require as much as 5 MW to operate, although it runs at about 3 MW on a normal day.
In addition to Stampede, TACC has three more supercomputing systems, as well as numerous storage and cloud computing clusters. Needless to say, Stanzione’s power bill is huge. “An enormous amount of our cost has to do with data center power,” he says.
This is why TACC has done as much as it could to increase energy efficiency of its data center power and cooling systems, and why the project with Japan’s New Energy and Industrial Technology Development Organization and NTT Facilities is so interesting to the center. If the experimental setup proves to be as effective as expected, TACC stands to save a lot of money by implementing it at a larger scale in the future.
The proof-of-concept project is fairly small, consisting of 250 kW of photovoltaic generation capacity that will be deployed over a university parking lot, providing shade for about 60 parking spaces, Stanzione says. In tandem with a utility feed, it will power an HPC cluster of about 10,000 CPU cores with a 200 kW power requirement.
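A rough back-of-the-envelope calculation shows why the utility feed remains essential even though the array's nameplate capacity exceeds the cluster's draw. The capacity factor below is an assumed figure typical of fixed solar panels, not a number from the project.

```python
# Rough sketch, not project data: why a 250 kW array can't carry a
# 200 kW around-the-clock load on its own. The ~20% capacity factor
# is an assumed annual average for fixed photovoltaic panels.
ARRAY_KW = 250
CAPACITY_FACTOR = 0.20          # assumed; varies by site and season
CLUSTER_KW = 200
HOURS_PER_YEAR = 8760

solar_kwh = ARRAY_KW * CAPACITY_FACTOR * HOURS_PER_YEAR  # annual generation
cluster_kwh = CLUSTER_KW * HOURS_PER_YEAR                # annual consumption
share = solar_kwh / cluster_kwh  # fraction of cluster energy solar covers
```

Under these assumptions the array generates about 438,000 kWh a year against roughly 1.75 million kWh of cluster consumption, covering only about a quarter of the load, so grid power makes up the difference.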
Besides the potential power-savings benefits for TACC, Stanzione hopes to publish results of the experiment. The plan is to deploy the compute cluster with a traditional AC power distribution scheme first to establish a baseline and then convert to high-voltage DC and compare the two sets of data, he says.
The goal is to achieve 15 percent energy savings, which he admits is ambitious. But even if the project demonstrates only 5 percent savings, at TACC’s level of energy consumption that would translate into a substantial cost reduction.
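At TACC scale those percentages are easy to put in perspective. Taking the roughly 3 MW that Stampede draws on a normal day and an assumed illustrative electricity rate (not TACC's actual rate), the annual figures work out as follows:

```python
# Back-of-the-envelope savings at TACC scale. The load figure comes
# from Stampede's typical ~3 MW draw; the electricity rate is an
# assumed illustrative figure, not TACC's actual rate.
AVG_LOAD_MW = 3.0
HOURS_PER_YEAR = 8760
RATE_PER_KWH = 0.07             # assumed $/kWh for illustration

annual_kwh = AVG_LOAD_MW * 1000 * HOURS_PER_YEAR  # 26,280,000 kWh
for pct in (0.05, 0.15):
    saved_dollars = annual_kwh * pct * RATE_PER_KWH
    print(f"{pct:.0%} savings: ~${saved_dollars:,.0f} per year")
```

With these assumptions, even the modest 5 percent outcome is worth on the order of $90,000 a year for a single machine's load, and the 15 percent target roughly triples that.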
NTT Facilities, a data center design, construction, and management company that’s a subsidiary of Japan’s telecommunications giant NTT, will act as the overall integrator and supplier of the power distribution system (in this case 380V DC). The company has been putting a lot of effort into expanding its business in the US market, which included acquiring Massachusetts-based data center infrastructure specialist Electronic Environments Corp. last year.
Energy Research on Japanese Government’s Dime
NEDO is footing the bill, which amounts to $13 million, including $4 million in computing equipment. This is not the first US project NEDO has invested in. The organization has developed an energy management system demonstration project for the electrical grid in Hawaii and a smart-home demonstration project in New Mexico, and has pursued net-zero-energy nanotechnology together with the State University of New York.
The data center industry is notoriously, and justifiably, conservative when it comes to adopting new technologies, especially critical power. Because their job is to keep servers humming 24-7-365, data center operators generally prefer tried-and-true solutions. It is proof-of-concept deployments like the one at TACC that help new ways of thinking about data center energy make the transition from thinking to reality.