Texas Stampede: TACC’s 10 Petaflop Supercomputer


The Texas Advanced Computing Center (TACC) at The University of Texas at Austin announced it will build and support a world-class supercomputer with comprehensive computing and visualization capabilities as part of a National Science Foundation grant. The new system will be called Stampede and will be housed on the same campus as TACC’s Ranger Supercomputer.

Powered by Dell and Intel

The Dell- and Intel-powered system will be part of the National Science Foundation’s eXtreme Digital (XD) program, which enables scientists to interactively share computing resources, data and expertise. Set to be operational by January 2013, Stampede will be funded by an initial $27.5 million NSF award, with the NSF expected to invest $50 million over the next four years.

“Stampede will be one of the most powerful systems in the world and will be uniquely comprehensive in its technological capabilities,” said TACC Director Jay Boisseau. “Many researchers will leverage Stampede not only as part of their breakthrough scientific research, but for all of their scientific research, including visualization, data analysis and data-intensive computing. We expect the Stampede system to be an exemplar for supporting both simulation-based science and data-driven science.”

10 Petaflops

Stampede will be powered by several thousand Dell “Zeus” servers with 8-core processors from the forthcoming Intel Xeon E5 family (Sandy Bridge), and each server will have 32GB of memory. The production system will offer almost 2 petaflops of peak performance.
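For a sense of where a figure like "almost 2 petaflops" comes from, peak performance scales as nodes × sockets × cores × FLOPs-per-cycle × clock. The sketch below is a hypothetical back-of-the-envelope estimate, not the announced configuration: the node count, socket count, and clock rate are assumptions (the announcement says only "several thousand" servers), while the 8-core Sandy Bridge part and its AVX throughput of 8 double-precision FLOPs per core per cycle are real characteristics of that processor family.

```python
# Hypothetical peak-FLOPS estimate for a Sandy Bridge cluster.
# Node count, sockets, and clock are assumptions, not announced figures.
nodes = 6000            # assumed; article says only "several thousand"
sockets_per_node = 2    # assumed dual-socket Dell servers
cores_per_socket = 8    # 8-core Xeon E5 (Sandy Bridge), per the announcement
flops_per_cycle = 8     # AVX: 8 double-precision FLOPs per core per cycle
clock_hz = 2.7e9        # assumed clock rate

peak_flops = (nodes * sockets_per_node * cores_per_socket
              * flops_per_cycle * clock_hz)
print(f"Estimated peak: {peak_flops / 1e15:.2f} petaflops")
```

With these assumed values the estimate lands in the ~2-petaflop ballpark the article cites; the real figure depends on the actual node count and clock speed, which the announcement does not give.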

The cluster will also include an innovative new capability: Intel Many Integrated Core (MIC) co-processors codenamed “Knights Corner,” providing an additional 8 petaflops of performance. Stampede will also offer 128 next-generation NVIDIA graphics processing units (GPUs) for remote visualization, 16 Dell servers with 1 terabyte of shared memory and 2 GPUs each for large data analysis, and a high-performance Lustre file system for data-intensive computing. All components will be integrated with an InfiniBand FDR 56Gb/s network for extreme scalability. Altogether, Stampede will have a peak performance of 10 petaflops, 272 terabytes (272,000 gigabytes) of total memory, and 14 petabytes (14 million gigabytes) of disk storage.
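The headline figures hang together: the roughly 2 petaflops from the Xeon E5 base system plus the 8 petaflops from the MIC co-processors give the 10-petaflop peak, and the memory and storage totals convert exactly as stated (the article uses decimal units, 1 TB = 1,000 GB and 1 PB = 1,000,000 GB). A quick sanity check:

```python
# Sanity-check the aggregate figures quoted in the announcement.
cpu_petaflops = 2.0     # Xeon E5 base system (peak)
mic_petaflops = 8.0     # Knights Corner co-processors (peak)
total_petaflops = cpu_petaflops + mic_petaflops
print(total_petaflops)  # 10.0 petaflops, as announced

memory_tb = 272
disk_pb = 14
# Decimal units, as the article uses: 1 TB = 1,000 GB; 1 PB = 1,000,000 GB
print(memory_tb * 1_000)       # 272000 GB of total memory
print(disk_pb * 1_000_000)     # 14000000 GB of disk storage
```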

Stampede will support more than a thousand projects in computational and data-driven science and engineering from across the U.S. It will also allow researchers to develop advanced methods for petascale computing, including Intel MIC architecture optimization, and will foster new expertise in data-intensive computing. Finally, Stampede will be used to help train the next generation of researchers in advanced computational science and technology, expanding the use of advanced computing across disciplines and into new communities and domains.

“Intel is proud to be a core part of enabling the next-generation of scientific discovery for XSEDE’s users,” said Anthony Neal-Graves, vice president and general manager of Workstations and MIC Computing at Intel. “Our goal is to provide consistency with the next-generation of Intel processors, co-processors and software so that our nation’s best scientists can focus on scientific discovery and not computer science.”

The University of Texas at Austin has pledged additional support for the project, including a new data center to house Stampede, set to break ground in November 2011 at the J.J. Pickle Research Campus.

About the Author

John Rath is a veteran IT professional and regular contributor at Data Center Knowledge. He has served many roles in the data center, including support, system administration, web development and facility management.


One Comment

  1. Gordon White

    Sounds like a great opportunity for a modular data center (see Dell's installation at U of Colorado). As a Texas taxpayer, I think it should be required to have a very low PUE! Sadly, the word on the street is that the folks at the University were not interested in a modular data center. Can't wait to hear what PUE they end up with using high density racks in a traditional brick & mortar DC.