GPUs Increasingly Rule the Supercomputer World

NVIDIA GPUs now power five of the world’s seven fastest systems and 17 of the 20 most energy-efficient systems on the Green500 list, published by the non-profit that ranks supercomputers • The world's fastest supercomputer, the Department of Energy's Summit, utilizes 27,648 Volta Tensor Core GPUs • The rate of performance gains for GPUs far outpaces traditional CPUs

Christine Hall

June 27, 2018

NVIDIA Volta Tensor Core

Although ARM stole the show last week, when Hewlett Packard Enterprise and Sandia National Laboratories announced Astra, a system in the works that will be the world's largest ARM-based supercomputer, GPUs are basking in the spotlight this week with the release of the Top500 supercomputer list.

GPUs power many of the world's supercomputers (in tandem with CPUs) and have become core to AI-capable supercomputers. The edition of Top500 that came out Monday marked the first time that GPUs were responsible for a bigger share of total performance represented on the list than CPUs, according to a Top500 blog post.

None of this was lost on NVIDIA, of course, since it's the company that invented GPUs back in 1999.

At the International Supercomputing Conference now underway in Frankfurt, the company was quick to point out that a large number of systems on the Top500 list employ its Volta Tensor Core GPUs, including Summit, currently the world's fastest supercomputer, and Sierra, the third-fastest, both of which are located in the US. For good measure, Japan's fastest supercomputer, AI Bridging Cloud Infrastructure, or ABCI, which ranks fifth overall, is also built on Tensor Core technology. The same is true of Piz Daint, Europe's fastest, and ENI HPC4, the world's fastest industrial supercomputer.

NVIDIA boasts that the majority of computing performance added to the new TOP500 list comes from its GPUs. In total, GPUs now power five of the world’s seven fastest systems, as well as 17 of the 20 most energy efficient systems on the new Green500 list. In case you're counting, Summit is equipped with 27,648 Volta Tensor Core GPUs.

"The new TOP500 list clearly shows that GPUs are the path forward for supercomputing in an era when Moore’s Law has ended," Ian Buck, VP and GM of accelerated computing at NVIDIA, said in a statement. "With the invention of our Volta Tensor Core GPU, we can now combine simulation with the power of AI to advance science, find cures for disease, and develop new forms of energy. These new AI supercomputers will redefine the future of computing."

The same parallel nature of GPUs that makes them ideal for graphics-intensive applications such as gaming is behind their usefulness in high-performance computing. A single GPU can have thousands of simple shader cores, designed to render multiple pixels simultaneously. When put to use in HPC, these shader cores can be used to process multiple streams of data at the same time, enabling supercomputers to handle huge workloads at speeds that would be difficult if not impossible using even the most powerful CPUs.
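The data-parallel pattern described above can be sketched in plain Python (an illustration only, not NVIDIA's API): the classic SAXPY kernel computes each output element from only the matching input elements, so on a GPU every element could be handled by a different core simultaneously.

```python
# Illustrative sketch of a data-parallel kernel. In this sequential Python
# version the elements are processed one at a time; on a GPU, thousands of
# cores would each compute one output element in parallel, because no
# element depends on any other.

def saxpy(a, x, y):
    # SAXPY ("single-precision a*x plus y") is a standard HPC benchmark
    # kernel: out[i] = a * x[i] + y[i] for every index i.
    return [a * xi + yi for xi, yi in zip(x, y)]

result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)  # [12.0, 24.0, 36.0]
```

Because each iteration of the loop is independent, the same computation maps directly onto the thousands of shader cores in a single GPU, which is what makes these chips so effective for HPC workloads.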

An early demonstration of this was seen in 2012 when an Oak Ridge National Laboratory CPU supercomputer, Jaguar, was upgraded by the Department of Energy to become Titan, a hybrid HPC cluster that included both GPUs and CPUs. As a result, the system went from 1.76 petaflops to a theoretical peak of 27 petaflops.

These performance increases become more important as AI is increasingly added to the mix.

"Summit's unprecedented computing power will aid scientists in researching energy, advanced materials, artificial intelligence, astrophysics, and medicine in ways that were not previously possible," Thomas Zacharia, director of the Oak Ridge National Laboratory, said in a statement.

The performance curve for GPUs has also been increasing at a rate that far outpaces traditional CPUs. For example, at a recent press briefing, NVIDIA's Buck pointed out that a single Summit node is 150x faster than the previous best.

About the Author(s)

Christine Hall

Freelance author

Christine Hall has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she's published and edited the website FOSS Force. Follow her on Twitter: @BrideOfLinux.
