Stampede Supercomputer Beefs Up With Phi Coprocessors


The Stampede supercomputer is housed in nearly 200 cabinets in a new data center at the Texas Advanced Computing Center in Austin. (Photo: TACC)

At the Texas Advanced Computing Center, Phi is enabling a Stampede of data. The TACC is in the process of launching Stampede, the seventh-fastest supercomputer in the world. Much of that horsepower is provided by Intel’s new Xeon Phi coprocessor.

The Stampede system spans nearly 200 cabinets at the TACC facility in Austin, Texas. It recorded a speed of 2.6 petaflops in the most recent Top 500 ranking, but is expected to have an upper range closer to 10 petaflops upon full deployment. The system was showcased this week in a keynote presentation at the Gartner Data Center Conference in Las Vegas, cited as an example of the future path for high performance computing (HPC).

The Stampede system marks the first Top 10 appearance for a supercomputer using Xeon Phi, a coprocessor using Intel’s Many Integrated Core (MIC) architecture for highly parallel workloads.

Coprocessors supplement the performance of the primary processor, and have become a common feature in the fastest supercomputers. The Xeon Phi coprocessor will compete with graphics processing units (GPUs) from NVIDIA, which have boosted the performance of some of the leading systems in the Top 500 in recent years. Phi coprocessors, loaded with as many as 50 cores apiece, will account for about 8 petaflops of Stampede’s overall 10 petaflops.

Computing Power to Answer Big Questions

It’s cool technology, but it’s in service to a larger goal, as Stampede’s computing power will be applied to pressing scientific questions, including understanding natural disasters and environmental threats.

“Our mission is really to enable discoveries that advance science and advance society,” said Tommy Minyard, the Director of Advanced Computing Systems at TACC. Creating supercomputers that are faster and more efficient is a key part of that effort. “We’re seeing we can do a lot more in a smaller footprint.”

In addition to its peak performance of 10 petaflops, Stampede will be equipped with 272 terabytes (272,000 gigabytes) of total memory, and 14 petabytes (14 million gigabytes) of disk storage.

The Texas Advanced Computing Center has a rich tradition in supercomputing, and is home to two other powerful systems, named Lonestar and Ranger. In recent years, its supercomputers have supported research with broad implications, which Minyard described in his keynote:

  • Ranger has run hurricane forecasting simulations for NOAA that require up to 40,000 cores to calculate the many variables in predicting the track and intensity of a major hurricane.
  • During the BP oil spill in the Gulf of Mexico, TACC received emergency funding to observe and predict the movement of the oil spill using satellite photos and high-resolution maps.
  • The Austin center has also run outbreak modeling simulations to predict the potential spread of the H1N1 swine flu.
  • TACC conducted earthquake and tsunami hazard analysis based on data harvested during the Japanese earthquake of 2011.

Researchers from around the country can arrange access to Stampede through the National Science Foundation, which funded Stampede’s construction and allocates time on the system to researchers.

When the NSF awarded the grant to fund Stampede, the TACC didn’t have a facility to house it. The center got to work quickly, building an 11,000 square foot expansion of its data center, along with a new chiller plant it will share with the University of Texas. Included in the project was a large thermal storage tank, which is “charged” with 45-degree water by running a chiller every evening when power rates are lower. That chilled water is used in the data center cooling system during the day, providing about 6 hours worth of chilled water.

Inside the data center, TACC implemented a design using hot-aisle containment and in-row cooling units, which allow it to support densities of up to 40 kW per cabinet. The power distribution brings 415V to the rack and 240V to the servers. Between the Stampede facility and the 4,000 square foot data hall housing Ranger, TACC has a power capacity of approximately 10 megawatts.

Backed by 102,400 CPU Cores

Inside the cabinet, Stampede is powered by an x86 cluster featuring Dell dual-socket C8220x “Zeus” servers with 32 GB of RAM. Each socket holds an 8-core, 2.7 GHz Intel Xeon E5 processor. With 6,400 nodes, the system brings 102,400 cores to bear on a task. Each Phi coprocessor includes at least 50 more cores.

Then there’s the 75 miles of InfiniBand cabling. “Cabling has been one of the biggest aspects of the design,” said Minyard. “It takes a lot of man-hours. We try to carefully label each cable.”

InfiniBand was used to support the fastest interconnections possible between the processors and coprocessors. “With typical Ethernet the lowest latencies are in milliseconds,” said Minyard. “We need to be down to microsecond latencies. Our point to point latencies are about 1 microsecond.”

Minyard projects Stampede is likely to have a lifespan of about four years.

“The challenge in high performance computing is keeping up with the leading edge,” he said. “After four years, fewer people want to use your system because it’s no longer the fastest. So we try to keep up with the latest processor technology.”

But there will be plenty of life left in the Stampede cabinets. The system has been designed so that TACC will be able to upgrade both the processors and coprocessors to create a more powerful supercomputer. A similar strategy was used this year by Oak Ridge National Laboratory to transform its Jaguar system into the 17-petaflop Titan, which is now the world’s most powerful machine.

The cabinets, filled with Dell dual-socket servers and Intel Xeon Phi coprocessors, are interspersed with in-row cooling units to support densities of 40 kilowatts per cabinet. (Photo: TACC)

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.
