Exascalar Results from November 2012: Part 1

Exascalar defines supercomputing leadership through a combination of Top500 (performance) and Green500 (efficiency) data to understand both evolution and revolution in supercomputing architecture and scale. Winston Saunders of Intel looks at the latest information on efficiency and performance in the top supercomputers in the world.

Industry Perspectives

January 2, 2013


Winston Saunders has worked at Intel for nearly two decades and currently leads server and data center efficiency initiatives. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter. He previously wrote about Exascalar in a June article titled “Exascalar 2012: HPC Performance Meets Efficiency.”




With the recent publication of the Top500 and Green500 lists of the world’s most powerful and efficient supercomputers, it’s time to pull together another look at Exascalar. As I noted in this blog post, "The biggest challenge facing high-performance technical computing is to deliver an Exaflop per second. What makes the problem challenging is not just the achievement of that scale of computing performance, but to do it within a 'reasonable' power budget of 20MW as Kirk Skaugen recently announced."

Supercomputing: Both Performance & Efficiency

Recall that Exascalar defines supercomputing leadership through a combination of Top500 (performance) and Green500 (efficiency) data to understand both evolution and revolution in supercomputing architecture and scale. Exascalar is the “logarithmic distance” to 10^18 flops in a 20 megawatt (MW) power envelope. As the current analysis shows, the current leader has an Exascalar of 2.22 (i.e., a factor of 166 from the goal). The leading Exascalar in June 2011 was 2.75, or about a factor of four farther from the goal in combined performance and efficiency.
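To make the definition concrete, here is a minimal Python sketch. It assumes the “logarithmic distance” is a Euclidean distance in log space between a system’s (performance, efficiency) point and the Exascale goal of 10^18 flops at 20 MW; the article quotes the results rather than the formula, but this reading reproduces the 2.22 figure for the November 2012 leader from its published Top500/Green500 numbers.

```python
import math

# Exascale goals: 10^18 flops within a 20 MW power envelope.
PERF_GOAL = 1e18             # flops
EFF_GOAL = PERF_GOAL / 20e6  # flops/Watt = 5e10, i.e. 50,000 Mflops/Watt

def exascalar(perf_flops, eff_mflops_per_watt):
    """Logarithmic distance from a system to the Exascale goal.

    The performance and efficiency scalars are the log10 ratios of the
    goals to the system's values; combining them as a Euclidean distance
    is one reading consistent with the numbers quoted in the article.
    """
    perf_scalar = math.log10(PERF_GOAL / perf_flops)
    eff_scalar = math.log10(EFF_GOAL / (eff_mflops_per_watt * 1e6))
    return math.hypot(perf_scalar, eff_scalar)

# November 2012 leader: 17.59 Pflops at 2,142.77 Mflops/Watt
# (figures from the published lists).
score = exascalar(17.59e15, 2142.77)
print(round(score, 2))  # ~2.22; 10**score gives the "factor of 166" (within rounding)
```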

On this round, I plan to break up the discussion into three blogs: the first will focus on the Exascalar analysis itself, the second will dig a little deeper into the data and underlying trends, and the third will discuss some recent, and potentially controversial, insights into Exascalar taxonomy.

Exascalar Top 10

The November 2012 Exascalar (Performance-Efficiency Scalar) Top 10 list is shown below. The biggest change is at the top of the list: the new DOE/SC/Oak Ridge National Laboratory system posts a best-ever Exascalar of 2.22. Since Exascalar is logarithmic, this equates to about a factor of 166 from the Exascalar goals in efficiency and performance. In June 2012 the peak Exascalar was 2.26, so the new leader represents about a 10 percent improvement over the June 2012 list.
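Because the scale is logarithmic, a gap between two Exascalar values converts to a multiplicative factor as 10 raised to the difference. A quick check of the June-to-November change (my own arithmetic, using the values above):

```python
# Gap between the June 2012 (2.26) and November 2012 (2.22) leading Exascalars.
factor = 10 ** (2.26 - 2.22)
print(f"{(factor - 1) * 100:.0f}% improvement")  # ~10%
```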




This time I’ve included the Performance and Efficiency scalars (the log of performance and efficiency relative to the Exascale goals) as an aid to understanding how Exascalar is calculated. I feel this provides greater insight into what constitutes leadership than the ordinal rankings of efficiency and performance do. I’ll discuss this in greater detail in my next blog.
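As a concrete illustration of those two scalars (my own arithmetic, using the leader’s published figures of 17.59 Pflops and roughly 2,143 Mflops/Watt, and the efficiency goal of 50,000 Mflops/Watt implied by 10^18 flops at 20 MW):

```python
from math import log10

# Performance scalar: log distance to the 10^18 flops goal.
perf_scalar = log10(1e18 / 17.59e15)    # ~1.75
# Efficiency scalar: log distance to the 50,000 Mflops/Watt goal.
eff_scalar = log10(50_000 / 2142.77)    # ~1.37
print(round(perf_scalar, 2), round(eff_scalar, 2))
```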

In all, there are four new or changed systems on the list. The Forschungszentrum Juelich system had a big jump in performance since the last publication, and the Leibniz Rechenzentrum system improved from an Exascalar of 3.12 to 3.10. A new entry is the Texas Advanced Computing Center at the University of Texas system, which rounds out the Top 10 list at an Exascalar of 3.28. The tenth-place score is over a 20 percent improvement since June.




The Exascalar graph of the data shows the familiar triangular shape. There are good reasons for this shape (though ideally it will evolve into a trapezoid, as I will discuss in the third blog of this series). The familiar column at an efficiency of about 2,000 Mflops/Watt, due primarily to the BlueGene/Q family of computers, remains prominent. However, there are some new entrants in this efficiency range, which will become apparent in the next blog.

The green trend-line of the Top Exascalar, extending back to 2007 in the above figure, shows the expected proportionality to efficiency and underscores the primary dependence of performance on efficiency. The Top 10 Exascalar value, shown as the red arc in the figure, improved from 3.39 in June 2012 to its current value of 3.28. Note that most of the systems in the Top 10 benefit from high efficiency, and all but four are in the dominant population of highest efficiency.

In the next two blogs, I’ll look into some of the trends behind Exascalar and also some insights into the taxonomy of Exascalar. Until then, your comments (add them below) are appreciated.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
