It is essentially the data bus of IBM’s Power processor line: the Coherent Accelerator Processor Interface. It became necessary as the accelerator industry expanded well beyond simply making the CPU faster. Over a decade ago, the graphics processor industry, led at the time by Nvidia, opened up a route for parallel processing in ordinary x86 boxes, with functionality far beyond the shading of triangles in 3D space.
Suddenly — or more accurately, from a historical vantage point, once again — hardware could be programmed to assist software, making it orders of magnitude faster. Now that the once-inexorable path of Moore’s Law has collided with a solid wall erected by the laws of physics, the way forward for CPU makers such as IBM and Intel is to leverage the workload acceleration capability of FPGA chips.
FPGAs can’t accelerate everything, but they can be programmed to expedite a surprisingly large number of common workloads. Intel bet its future on the technology last year, spending $16.7 billion to acquire Altera.
For IBM to compete, it has to open up its intellectual property base even further, to build software ecosystem-like activity around the very core of its hardware. But as IBM has learned more than once, calling something “open” and sending out invitations doesn’t mean people show up to the party.
Raise your hand if you remember Micro Channel Architecture.
“CAPI needs to be open, because it’s a spec dealing with actual electrical signals and bus protocols that will be baked into hardware for several product generations,” explained Marko Insights principal analyst Kurt Marko, in a message to Data Center Knowledge. “As a physical hardware spec, details can't be abstracted away in a virtual layer or API, but must be precisely followed by products expecting to work with other CAPI-compliant devices.”
If an FPGA industry is to grow and thrive, it needs a set of standards its members can contribute to, as opposed to having those specs dictated to them through a megaphone. It’s the need for a level playing field that brought competitors HPE and Dell EMC, AMD and Nvidia, together with Mellanox and Xilinx to form the OpenCAPI Consortium. The consortium was announced last October 14, and on Thursday the IEEE announced that OpenCAPI had been enrolled as its latest federation member program.
That membership gives the group the clearance it needs to advance OpenCAPI as an international standard, through the IEEE Industry Standards and Technology Organization (IEEE-ISTO).
“OpenCAPI will offer its first bus architecture specification,” read a statement Thursday from the IEEE, “that will provide an open, architecture agnostic, high performance pathway between the microprocessor and different types of technology — advanced memory, accelerators, networking and storage — to more tightly integrate their functions within servers. This data-centric approach to server design, which puts the compute power closer to the data, removes inefficiencies in traditional system architectures to help eliminate system bottlenecks and significantly improve server performance.”
Marko helped correct a common misprint about OpenCAPI’s data rate: with 8 serial lanes each signaling at 25 GHz, the aggregate data rate is 25 gigabytes per second, not 25 gigabits. In his communication with us, he acknowledged that IBM’s original CAPI started out proprietary. But in the modern hardware market, he stated, “most hardware specs are open to contributions from any stakeholder, since the inventor realizes that the best way to industry acceptance is through cooperation. Few vendors outside Apple (see Lightning) can force a hardware interface through on sheer market dominance.
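The arithmetic behind that correction is easy to verify: eight lanes at 25 gigabits per second each yield 200 gigabits per second in aggregate, which divided by eight bits per byte comes to 25 gigabytes per second. A quick sketch (the lane count and per-lane rate are taken from the figures quoted above; this ignores line-coding and protocol overhead):

```python
# Back-of-the-envelope check of the OpenCAPI data rate discussed above.
# Assumes 8 serial lanes, each signaling at 25 Gbit/s; raw rate only,
# with no allowance for encoding or protocol overhead.
lanes = 8
gbits_per_lane = 25                        # Gbit/s per lane

aggregate_gbits = lanes * gbits_per_lane   # total Gbit/s across all lanes
aggregate_gbytes = aggregate_gbits / 8     # 8 bits per byte

print(f"{aggregate_gbits} Gbit/s aggregate = {aggregate_gbytes:.0f} GB/s")
```

Hence the distinction Marko draws: 25 is the right number, but the unit is gigabytes, not gigabits.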
“Historically, many proposals were submitted to the IEEE,” continued Marko. “But lately, ad hoc industry consortia like the USB-IF and the PCI-SIG have proven to be more nimble and responsive to technological change. I see IBM trying to do something similar with CAPI. If other silicon vendors adopt it and you see, for example, ARM server SoCs with CAPI interfaces, then you'll know it's working.”