IBM Designs a “Performance Beast” for AI

Powered by its latest Power9 chip and new bandwidth highways, the AC922 is IBM’s latest and most powerful hardware for HPC and AI.

Wylie Wong, Regular Contributor

December 12, 2017

IBM engineer Stefanie Chiras holds a Power9 chip above an AC922 server in Austin. (Photo: Jack Plunkett/Feature Photo Service for IBM)

Companies running AI applications often need as much computing muscle as researchers who use supercomputers do. IBM’s latest system is aimed at both audiences.

The company last week introduced its first server powered by the new Power9 processor designed for AI and high-performance computing. The powerful technologies inside have already attracted the likes of Google and the US Department of Energy as customers.

The new IBM Power System AC922 is equipped with two Power9 CPUs and two to six NVIDIA Tesla V100 GPUs. It includes three new interfaces that IBM executives claim make it 3.7 times faster than Intel x86-based systems.

“An almost four times speed-up in AI and HPC applications is a game changer,” said Sumit Gupta, VP of HPC, AI, and machine learning at IBM’s Cognitive Systems business unit. “When data scientists and developers create AI models, they can bring down the time it takes to train a model from two days to half a day. It’s a big productivity enhancement.”

The AC922 is the first server to support next-generation interfaces that increase memory bandwidth and accelerate the movement of data, he said. Those are PCI-Express 4.0 and NVLink 2.0. The latter connects the Power9 CPU to NVIDIA's GPUs, providing twice the bandwidth of previous versions. The system also supports OpenCAPI, an open standard that provides a high-speed interface for FPGA accelerators and other devices, he said.


IBM previously partnered with NVIDIA to better compete against Intel in the server market, where x86 systems still dominate. NVIDIA GPUs are used to accelerate x86 systems as well. Compute heavyweights are all fighting for a piece of the growing AI hardware and software market, which IDC says will balloon from about $12 billion this year to $57.6 billion in 2021.

Peter Rutten, a research manager for Servers and Compute Platforms at IDC, said IBM’s Power9 announcement is significant not just for AI workloads but for other data center needs as well.

“This is a performance beast for AI and HPC. It is also very suitable for hyperscale deployments such as Google, who are installing Power9 in their data centers,” he said.

Chirag Dekate, a Gartner research director for HPC, machine learning, and emerging compute technologies, said the impact of the new interfaces -- NVLink 2.0 and PCIe 4.0 -- is similar to adding new lanes on a highway to reduce traffic jams during rush hour.

“Applications in AI and HPC that leverage accelerators are often constrained by data bandwidth issues,” Dekate said. Those issues limit both application scalability and performance. The next-generation interfaces dramatically increase bandwidth between CPUs and accelerators (GPUs or FPGAs), enabling the system to utilize more of the accelerators’ power. This can have a “dramatic impact” on applications like deep neural networks (the most widely used form of AI), computational fluid dynamics, and HPC, all of which are typically constrained by data bandwidth.
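Dekate's highway analogy can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative, not from the article; the bandwidth figures are the commonly cited theoretical peaks for each interconnect (roughly 16 GB/s per direction for PCIe 3.0 x16, 32 GB/s for PCIe 4.0 x16, and about 150 GB/s per direction to a V100 over six NVLink 2.0 links), and real-world throughput is lower.

```python
# Back-of-the-envelope comparison of host-to-GPU transfer time for a
# 10 GB working set over three interconnects. Bandwidth values are
# theoretical peaks per direction, commonly cited for each link type.

PEAK_BANDWIDTH_GBS = {
    "PCIe 3.0 x16": 16.0,           # ~16 GB/s (theoretical peak)
    "PCIe 4.0 x16": 32.0,           # ~32 GB/s (theoretical peak)
    "NVLink 2.0 (6 links)": 150.0,  # ~150 GB/s to a Tesla V100
}


def transfer_seconds(gigabytes: float, bandwidth_gbs: float) -> float:
    """Idealized transfer time: data size divided by peak bandwidth."""
    return gigabytes / bandwidth_gbs


working_set_gb = 10.0
for link, bw in PEAK_BANDWIDTH_GBS.items():
    print(f"{link}: {transfer_seconds(working_set_gb, bw):.3f} s")
```

On these idealized numbers, the same 10 GB working set that ties up a PCIe 3.0 link for over half a second moves in well under a tenth of a second over NVLink 2.0, which is why accelerator-bound workloads see such large gains from the wider interfaces.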


The Power9 processor’s new NVLink and OpenCAPI interfaces also support memory coherence, which provides developers with a unified memory view and boosts performance, IBM’s Gupta said.

IBM promises that its AC922 server will improve performance for popular AI frameworks, such as Chainer, TensorFlow, and Caffe. It is geared toward enterprise customers, such as banks that may want deep learning insights for real-time fraud detection, or manufacturing and industrial firms that can use it to find defects in their products or infrastructure, Gupta said.

Besides Google, the US Department of Energy is among the first customers of the Power9 chip, he said, using it in two new supercomputers. The Summit HPC system at Oak Ridge National Laboratory is expected to reach speeds of 200 petaflops, while the Sierra supercomputer at Lawrence Livermore National Laboratory is expected to reach 125 petaflops.

IDC’s Rutten believes Power9 has potential for broad market reach.

Organizations are moving away from homogeneous computing systems -- in which all compute is standardized on a single architecture -- toward heterogeneous ones. Customers increasingly choose the right processor for the right task -- be it x86, ARM, or Power -- and combine it with accelerators, such as GPUs, FPGAs, manycore processors, or ASICs.

“Modern workloads, and especially AI and HPC, which is moving into the data center, demand this heterogeneous approach,” he said. “Power9 has distinct advantages for data-heavy workloads that demand serious performance and I/O. At the same time, Power9 runs the same Linux that IT and developers work with on other architectures.”

About the Author(s)

Wylie Wong

Regular Contributor

Wylie Wong is a journalist and freelance writer specializing in technology, business and sports. He previously worked at CNET, Computerworld and CRN and loves covering and learning about the advances and ever-changing dynamics of the technology industry. On the sports front, Wylie is co-author of Giants: Where Have You Gone, a where-are-they-now book on former San Francisco Giants. He previously launched and wrote a Giants blog for the San Jose Mercury News, and in recent years, has enjoyed writing about the intersection of technology and sports.
