Sequoia Supercomputer Breaks 1 Million Core Barrier

The Stanford Center for Turbulence Research (CTR) has set a new record in computational science, using the Sequoia supercomputer with more than one million computing cores to solve a complex fluid dynamics problem: the prediction of noise generated by a supersonic jet engine.

John Rath

January 31, 2013

2 Min Read

The Sequoia supercomputer at Lawrence Livermore National Laboratory recently harnessed more than 1 million compute cores to run a complex fluid dynamics simulation. (Image: LLNL)

The Stanford Center for Turbulence Research (CTR) has set a new record in computational science, using the Sequoia supercomputer with more than one million computing cores to solve a complex fluid dynamics problem: the prediction of noise generated by a supersonic jet engine. Installed at Lawrence Livermore National Laboratory (LLNL), Sequoia was named the most powerful supercomputer in the world on the June 2012 Top500 list and moved to number two in November 2012.

With a total of 1,572,864 compute cores installed, research associate Joseph Nichols was able to show for the first time that million-core fluid dynamics simulations are possible, while also contributing to research aimed at designing quieter aircraft engines. Predictive simulations let researchers peer inside and measure processes occurring within the harsh aircraft exhaust environment, which is otherwise inaccessible to experimental equipment. The data gleaned from these simulations are driving computation-based scientific discovery as researchers uncover the physics of noise.

"Computational fluid dynamics (CFD) simulations are incredibly complex," said Parviz Moin, the Director of CTR. "Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed."

Recently, Stanford researchers and LLNL computing staff have been working closely to iron out the last few wrinkles. They were glued to their terminals during the first "full-system scaling" run to see whether it would achieve stable run-time performance. They watched eagerly as the first CFD simulation passed through initialization, then were thrilled as the code's performance continued to scale up to and beyond the all-important one-million-core threshold and the time-to-solution declined dramatically.
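
As a rough illustration of why that scaling milestone matters (this is not the CTR solver, and the parallel fraction below is an assumed figure), a simple Amdahl's-law sketch shows how time-to-solution keeps falling with core count only when almost none of the code remains serial:

```python
# Hypothetical strong-scaling sketch (Amdahl's law); not the actual CTR/LLNL solver code.

def speedup(cores: int, parallel_fraction: float) -> float:
    """Estimated speedup on `cores` cores when `parallel_fraction` of the
    work parallelizes perfectly and the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Illustrative core counts, ending at Sequoia's 1,572,864 cores.
for cores in (1_024, 131_072, 1_048_576, 1_572_864):
    s = speedup(cores, parallel_fraction=0.999999)  # assumed, not a measured value
    print(f"{cores:>9,} cores -> ~{s:,.0f}x speedup vs. one core")
```

With that assumed fraction, the sketch still shows returns diminishing near a million cores, which is why sustaining performance gains past that threshold was the result the teams were watching for.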
