Oak Ridge: The Frontier of Supercomputing

The Oak Ridge Leadership Computing Facility is on the frontier of supercomputing, forging a path toward "exascale" computing. The data center houses three of the world's most powerful supercomputers, including a machine that looms as the once and future king of the supercomputing realm. DCK recently had a look inside the Oak Ridge data center.

Rich Miller

September 10, 2012

7 Min Read

Some of the cabinets for the Jaguar supercomputer at Oak Ridge National Laboratory, currently the sixth-fastest machine in the world. An upgrade is underway that will transform Jaguar into the 20-petaflop Titan. (Photo: Rich Miller)

OAK RIDGE, Tenn. - At first glance, the data hall within Oak Ridge National Laboratory resembles many raised-floor environments. But a stroll past the dozens of storage cabinets reveals three of the world's most powerful supercomputers, including a machine that looms as the once and future king of the supercomputing realm.

The Oak Ridge Leadership Computing Facility (OLCF) is on the frontier of supercomputing, forging a path toward "exascale" computing. The data center features an unusual concentration of computing horsepower, focusing 18 megawatts of electrical capacity on a 20,000-square-foot raised-floor area. "The power demands are about what you would see for a small town," says Rick Griffin, Senior Electrical Engineer at Oak Ridge National Laboratory (ORNL).
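
To put that concentration in perspective, here is a quick back-of-the-envelope calculation. The 18 megawatts and 20,000 square feet come from the figures above; the comparison density for a typical enterprise raised floor is an assumption, not a number from this article:

```python
# Back-of-the-envelope power density for the OLCF data hall.
total_power_watts = 18_000_000   # 18 MW, per the article
floor_area_sqft = 20_000         # raised-floor area, per the article

density = total_power_watts / floor_area_sqft
print(f"Power density: {density:.0f} W/sq ft")  # 900 W/sq ft

# Assumed density for a dense enterprise raised floor (~150 W/sq ft).
typical_density = 150
print(f"Roughly {density / typical_density:.0f}x a dense enterprise floor")
```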

That power sustains three Cray systems that rank among the top supercomputers in the latest Top 500 list: NOAA's Gaea (33rd), the University of Tennessee's Kraken system (21st) and ORNL's Jaguar, which is currently ranked sixth at 2.37 petaflops, but topped the list when it made its debut in November 2009. (See our photo feature, Inside the Oak Ridge Supercomputing Facility, for more).

Jaguar is currently undergoing a metamorphosis into Titan, an upgraded Cray XK7 system. When it goes live late this year, Titan will be capable of a peak performance of up to 20 petaflops - or 20 million billion calculations per second. Titan will be accelerated by a hybrid computing architecture, teaming traditional central processing units (CPUs) from AMD with the latest high-speed graphics processing units (GPUs) from NVIDIA to create a faster and more efficient machine.

The Road to Exascale

At 20 petaflops, Titan would be significantly more powerful than the current Top 500 champ, the Sequoia supercomputer at Lawrence Livermore National Laboratory, which clocks in at 16.3 petaflops. The data center team at Oak Ridge expects that Titan will debut as the fastest machine within the Department of Energy, which operates the most powerful research supercomputers in the U.S.

But Titan is just a first step toward the goal of creating an exascale supercomputer - one able to deliver a million trillion (10^18) calculations each second - by 2018.

Jaguar is being upgraded in several phases. Each node's dual six-core AMD Opteron chips have been upgraded to a single 16-core Opteron CPU, while Jaguar's SeaStar interconnect has been updated with Cray's new Gemini interconnect. In the current phase, NVIDIA Tesla 20-series GPUs are being added to the system; these will later be upgraded to NVIDIA's brand new Kepler architecture. Upon completion, Titan will feature 18,688 compute nodes with 299,008 CPU cores (one 16-core Opteron per node), and at least 960 of those nodes will also house GPUs to add more parallel computing power.
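
A quick arithmetic check of those figures, using only numbers stated above plus the standard definition of an exaflop as 10^18 operations per second:

```python
# Sanity-check Titan's published figures.
nodes = 18_688                 # compute nodes, per the article
cores_per_cpu = 16             # one 16-core AMD Opteron per node
print(nodes * cores_per_cpu)   # 299008 - matches the stated core count

peak_flops = 20 * 10**15       # Titan's 20-petaflop peak target
exaflop = 10**18               # the exascale threshold
print(exaflop // peak_flops)   # 50 - an exascale machine is ~50x Titan
```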

Cooling 54 Kilowatts per Cabinet

Each of Titan's 200 cabinets will require up to 54 kilowatts of power, an intensely high-density load. The system is cooled with an advanced cooling system developed by Cray, which uses both water and refrigerant. The ECOphlex (short for PHase-change Liquid EXchange) cooling system uses two cooling loops, one filled with R-134a refrigerant and the other with chilled water. Cool air flows vertically through the cabinet from bottom to top. As the air reaches the top of the cabinet, the servers' waste heat boils the R-134a, which absorbs the heat through its change of phase from liquid to gas. The refrigerant gas then returns to a heat exchanger inside a Liebert XDP pumping unit, where the chilled water loop condenses it back into a liquid.
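
As an illustration of why phase change is so effective, here is a rough sizing sketch for a single 54 kW cabinet. The latent-heat value for R-134a and the air-side temperature rise are assumed ballpark figures, not numbers from the article:

```python
# Rough sketch: refrigerant flow needed to absorb one cabinet's heat
# load purely through the liquid-to-gas phase change.
cabinet_load_kw = 54.0           # per-cabinet load, from the article

# Assumed latent heat of vaporization for R-134a at typical
# evaporator conditions (~190 kJ/kg); not a figure from the article.
latent_heat_kj_per_kg = 190.0

refrigerant_flow = cabinet_load_kw / latent_heat_kj_per_kg
print(f"~{refrigerant_flow:.2f} kg/s of R-134a per cabinet")  # ~0.28 kg/s

# Contrast with moving the same heat in air alone, assuming a
# 12-degree-C air-side temperature rise (also an assumption).
air_cp_kj_per_kg_k = 1.005       # specific heat of air
delta_t_k = 12.0
air_flow = cabinet_load_kw / (air_cp_kj_per_kg_k * delta_t_k)
print(f"vs ~{air_flow:.1f} kg/s of air for the same load")    # ~4.5 kg/s
```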

ORNL estimates that the efficiency of ECOphlex saved it at least $1 million in annual cooling costs on Jaguar. The advanced nature of the ECOphlex design will allow Jaguar's existing cooling system to handle the upgrade to Titan, accommodating a roughly 10-fold increase in computing power within the same 200-cabinet footprint.

Upon completion, Titan will require between 10 and 11 megawatts of power. Oak Ridge houses 140 additional cabinets for the other systems in its facility, and currently has 14 megawatts of total power available for IT loads. Another 4.2 megawatts is dedicated to Oak Ridge's chiller plant.
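
Those figures hang together: multiplying the per-cabinet maximum by the cabinet count lands inside the stated range, using only numbers from the article:

```python
# Cross-check: 200 cabinets at up to 54 kW each vs. the stated
# 10-11 MW total for Titan.
cabinets = 200
kw_per_cabinet = 54
peak_mw = cabinets * kw_per_cabinet / 1000
print(f"Peak draw if every cabinet hits its max: {peak_mw} MW")  # 10.8 MW
```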


Oak Ridge's three supercomputers - Gaea, Kraken and Jaguar - all currently rank among the top 33 supercomputers in the world. (Photo: Rich Miller)

More Power, Scotty!

As Oak Ridge continues to expand its technical computing operations, it will need additional space and power for both its supercomputers and its in-house computing needs. An upgrade is in the works that will provide Oak Ridge with an additional 20 megawatts of power for IT loads and 6 megawatts of chiller capacity.

Jaguar and the other supercomputers at Oak Ridge provide researchers with the ability to tackle computational problems that would be impossible on other systems. Scientists are using these machines for breakthrough research in astrophysics, quantum mechanics, nuclear physics, climate science and alternative energy.

While the powerful systems housed at Oak Ridge demand unusual power and cooling densities, the nature of their workloads allows for simpler infrastructure. "Because of our financial and footprint constraints, we have to be really focused on keeping things simple," said Griffin. "We don't need to keep these things on at any cost, so we don't need a Tier IV system. HPC used for research can recover from power outages. The biggest problem with the power going off is restarting stuff and hardware problems (from a hard stop)."

Operational Focus on Reliability

Even though Oak Ridge may not have the same uptime requirements as a major bank or stock exchange, reliability still matters. At a recent meeting of the Tennessee chapter of AFCOM, Griffin and Scott Milliken, Computer Facility Manager at Oak Ridge, discussed some of the operational strategies the lab employs to maintain high reliability.

The ORNL team works to rigorously commission, test, inspect and maintain electrical and mechanical equipment. That includes infrared and acoustical scans of electrical and mechanical rooms, power testing using load banks, simulations of power losses, predictive and preventive maintenance, and maintaining an inventory of spare parts on-site for critical components.

Griffin said Oak Ridge also performs detailed power quality monitoring to guard against equipment challenges related to "dirty power," and specs its equipment to ride through a range of power quality events in the electrical system. "Nowadays, power supplies can handle a lot of things on power quality events," said Griffin.

On the user side, no single computing job can run for more than 24 hours, so the work lost to any power outage is bounded.
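
A sketch of that bound, where the checkpoint interval and the full-machine job size are illustrative assumptions rather than details from the article:

```python
# Illustrative: how a 24-hour job cap (and periodic checkpoints, a
# common HPC practice assumed here) bound the work lost to an outage.
max_walltime_h = 24           # site policy, per the article
job_nodes = 18_688            # a hypothetical full-machine job
checkpoint_interval_h = 2     # assumed checkpoint cadence

# Worst case without checkpoints: the entire run is lost.
lost_no_ckpt = max_walltime_h * job_nodes
# With checkpoints: only the work since the last checkpoint is lost.
lost_with_ckpt = checkpoint_interval_h * job_nodes

print(f"Lost node-hours, no checkpoints: {lost_no_ckpt:,}")    # 448,512
print(f"Lost node-hours, 2h checkpoints: {lost_with_ckpt:,}")  # 37,376
```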

Focusing Redundancy on Most Critical Systems

The Oak Ridge data center focuses its redundant infrastructure on key systems that manage a graceful shutdown of power and a quick restart of cooling. A 1,000 kVA uninterruptible power supply (UPS) system backs up the disk storage systems and some chillers, allowing the lab to maintain cooling in critical areas of the facility. ORNL also has worked with vendors on a quick-start system for its chillers, which allows it to produce chilled water within five minutes of a restart - a key consideration in limiting how long cabinets go without cooling.
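
For scale, a UPS's kVA rating converts to usable watts through the load's power factor. The 0.9 figure below is an assumed value; the article gives only the kVA rating:

```python
# Rough usable capacity of the 1,000 kVA UPS.
ups_kva = 1000                # rating, per the article
assumed_power_factor = 0.9    # assumed; not a figure from the article
usable_kw = ups_kva * assumed_power_factor
print(f"~{usable_kw:.0f} kW available for storage and select chillers")
```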

On the efficiency front, the lab has implemented cold aisle containment around its storage gear and in some areas of the supercomputing installations. It will also raise the temperature in its cold aisles to 65 degrees - still chilly by most standards, but up from the original 55 degrees.

The lab is currently nearing completion of an additional 20,000-square-foot data hall that will be dedicated to its enterprise computing needs. As Oak Ridge's in-house workloads migrate to the new space, more room will be freed up for future supercomputers.

"We envision two systems beyond Titan to achieve exascale performance by about 2018," wrote Jeff Nichols, Associate Laboratory Director for Computing and Computational Sciences. "The first will be an order of magnitude more powerful than Titan, in the range of 200 petaflops. This system will be an exascale prototype, incorporating many of the hardware approaches that will be incorporated at the exascale. We hope to scale this solution up to the exascale."
