Tencent, China’s answer to Facebook, which this week became the country’s most valuable publicly traded company, is integrating new servers powered by OpenPower, IBM’s alternative to Intel’s ubiquitous x86 chip architecture, into its hyperscale data centers.
That’s according to IBM. Yesterday, the company rolled out a new OpenPower chip and three new OpenPower servers, including the one being deployed at Tencent data centers to run big data workloads. OpenPower is at the center of IBM’s current server play, focused on high-horsepower workloads as opposed to the general-purpose x86 market the company got out of in 2014 when it sold its commodity server business to Lenovo.
OpenPower is a chip architecture developed through a foundation whose members include Google, Nvidia, and Broadcom, among many others. Unlike Intel, which keeps x86 exclusive to itself, IBM licenses the architecture to other companies wishing to build chips based on it.
Tencent, according to IBM, recently tested a large cluster of its new servers, finding that they performed three times faster than the x86-based infrastructure it had in place, with fewer servers. The announcement didn’t specify, however, what kind of x86 servers the cluster was compared against or how old they were.
The results were convincing enough for Tencent to go ahead with a production deployment of the IBM servers, but it’s unclear how big that deployment will be. According to IBM, the Chinese company is “integrating the new servers into its hyperscale data centers for big data workloads.”
Social networking giant Tencent drives revenue growth through advertising and gaming on its messaging apps WeChat and QQ. On Monday, when its shares rose 4.2 percent, the company’s market value reached $256.6 billion, surpassing China Mobile as the most valuable company in China and joining the likes of Apple and Alphabet on the list of 10 largest public companies in the world.
If the scale of Tencent’s deployment is significant, it should be concerning for Intel, whose x86 architecture has dominated the data center market for years with little trouble from challengers. Two other hyperscale data center operators, Google and Rackspace, have been co-designing a server based on IBM’s Power9 CPU, with plans to open source the server architecture through the Open Compute Project. Founded by Facebook, OCP has become the primary hub for open source hardware design for hyperscale data centers. IBM announced Power9 last month, but systems based on these 14nm chips are not expected to ship until sometime in 2017.
Internet giants like Tencent, Google, and Facebook, which operate some of the world’s largest data centers, are now the primary growth drivers for suppliers in the data center hardware food chain, and the emergence of a serious challenger to Intel in this space would threaten to slow the chip giant’s future revenue gains.
And OpenPower isn’t the only challenger gathering steam. At the high end of the data center market, Intel is also being challenged by Nvidia.
Much of the computing might in IBM’s new Linux servers comes from Nvidia GPUs, which act as accelerators working in tandem with the Power8 CPUs. The most powerful of the three new models comes with the GPU maker’s latest and greatest interconnect technology, NVLink, which links the new Power8 processor to Tesla P100 Pascal GPUs.
Bill Boday, IBM’s senior offering manager for Linux on Power, said Power8 is the “first CPU designed for acceleration,” meaning acceleration via GPUs, while NVLink provides five times the bandwidth of PCIe interconnects.
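The “five times the bandwidth” figure is easy to sanity-check with a back-of-envelope calculation. The numbers below are assumptions not stated in the article: a PCIe 3.0 x16 slot delivers roughly 16 GB/s per direction, NVLink 1.0 delivers 20 GB/s per direction per link, and a Tesla P100 exposes four NVLink links.

```python
# Back-of-envelope check of IBM's "5x the bandwidth of PCIe" claim.
# Figures assumed (not from the article): PCIe 3.0 x16 and NVLink 1.0 specs.

PCIE3_X16_GBPS = 16.0      # PCIe 3.0 x16: ~16 GB/s per direction
NVLINK1_LINK_GBPS = 20.0   # NVLink 1.0: 20 GB/s per direction, per link
LINKS_PER_P100 = 4         # Tesla P100 provides four NVLink links

# Aggregate NVLink bandwidth to a single P100
nvlink_total = NVLINK1_LINK_GBPS * LINKS_PER_P100

# Ratio versus a single PCIe 3.0 x16 connection
speedup = nvlink_total / PCIE3_X16_GBPS

print(f"NVLink aggregate: {nvlink_total:.0f} GB/s ({speedup:.1f}x PCIe 3.0 x16)")
```

Under those assumptions the aggregate works out to 80 GB/s, which is where the 5x multiple over a single PCIe 3.0 x16 connection comes from.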
Nvidia’s latest Pascal processors also power its first supercomputer, DGX-1. The chipmaker recently gifted a DGX-1 system to OpenAI, the artificial-intelligence research non-profit founded by Elon Musk.
Read more: Why Nvidia Gave Elon Musk's AI Non-Profit a Supercomputer
The IBM system with NVLink is the S822LC for High Performance Computing. It pairs two Power8 CPUs with four Tesla P100 GPUs, tightly integrated via NVLink.
The other two systems (the S822LC for Big Data and the S821LC) are compatible with Nvidia GPUs but connect them to the CPUs via PCIe.
Intel’s answer to high-octane, GPU-accelerated computing is Xeon Phi. The next-gen part in this line, Knights Mill, is the first designed specifically with machine learning in mind and is expected to ship next year.