China’s Tianhe-1A Achieves 2.507 Petaflops

With the biannual Top500 supercomputer list just around the corner, the National Supercomputing Center in Tianjin, China announced that its new Tianhe-1A supercomputer has set a new performance record of 2.507 petaflops on the LINPACK benchmark.

The benchmark results have been submitted to Top500.org, where the current top spot from the June 2010 list is held by Jaguar, the Oak Ridge National Laboratory Cray XT5-HE Opteron system. Jaguar clocked in at 1.7 petaflops (1 petaflop is 10^15 floating point operations per second), and China’s Nebulae jumped to number two on the June list at 1.271 petaflops.
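For a sense of scale, the comparison above can be sketched with a few lines of Python. The scores are the LINPACK figures reported in the article; the conversion simply applies the convention that 1 petaflop is 10^15 floating point operations per second.

```python
# Convention used in the article: 1 petaflop = 10**15 FLOPS
PETA = 10**15

# LINPACK scores mentioned above, in petaflops
scores_pflops = {
    "Tianhe-1A": 2.507,
    "Jaguar": 1.7,
    "Nebulae": 1.271,
}

# Express each score in raw floating point operations per second
for name, pflops in scores_pflops.items():
    print(f"{name}: {pflops * PETA:.3e} FLOPS")

# Tianhe-1A's margin over the June 2010 leader
margin = scores_pflops["Tianhe-1A"] / scores_pflops["Jaguar"]
print(f"Tianhe-1A outperforms Jaguar by a factor of about {margin:.2f}")
```

By this measure, Tianhe-1A's 2.507 petaflops is roughly 1.47 times Jaguar's 1.7 petaflops.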

The second-generation Tianhe (pronounced “tee-awn-hoo-wa”) system is named after the Milky Way galaxy. A heterogeneous computer, Tianhe-1A uses a proprietary interconnect to couple massively parallel GPUs with multi-core CPUs, enabling significant advantages in performance, size and power.

GPU Powered

Tianhe-1A combines 7,168 NVIDIA Tesla M2050 GPUs, 14,336 CPUs, and 262 terabytes of distributed memory. “The performance and efficiency of Tianhe-1A was simply not possible without GPUs,” said President Guangming Liu of the National Supercomputer Center in Tianjin. “The scientific research that is now possible with a system of this scale is almost without limits; we could not be more pleased with the results.”

Tianhe-1
The original Tianhe-1 was developed by China's National University of Defense Technology in Changsha, Hunan, and is used for petroleum exploration and aircraft simulation. It held the number five position on the November 2009 Top500 list at 563 teraflops.

What other surprises will be in the November 2010 Top500 List?  Stay tuned next month to find out!

About the Author

John Rath is a veteran IT professional and regular contributor at Data Center Knowledge. He has served many roles in the data center, including support, system administration, web development and facility management.
