Custom AI Hardware: A New Front in the Cloud Wars

Alibaba Cloud joins the race for the cloud platform with the best AI processor.

Yevgeniy Sverdlik

November 6, 2019


Low prices, high performance, a wide range of tools, and sheer scale are no longer enough to convince customers that your cloud platform is better than the competition’s. The world’s largest cloud platforms are now also competing on who can design the best processors for machine learning.

In September, Alibaba Cloud launched Hanguang 800, its custom machine learning chip, following announcements along similar lines by some of its biggest rivals, Amazon Web Services and Google Cloud.

This custom hardware powers cutting-edge features in one of the fastest growing segments of the cloud services market: Platform-as-a-Service. According to IHS Markit | Technology, PaaS grew 41 percent in the first half of 2019. This segment is where machine learning and artificial intelligence techniques are “most heavily used,” Devan Adams, a principal analyst at IHS, said in a recent announcement.

The only segment of the market that grew faster than PaaS was what IHS calls Cloud-as-a-Service, or CaaS. According to the market research group (part of the Informa Tech family that also includes Data Center Knowledge), CaaS includes all the services an Infrastructure-as-a-Service offering does, plus management of server and cloud operating systems. It is a middle ground between IaaS and PaaS: the provider manages more of the customer’s stack than an IaaS provider does but less than a PaaS provider would.


Custom AI hardware and teams of highly skilled experts like data scientists are how cloud providers increasingly differentiate themselves, Adams said. He tied the growth in PaaS usage to the rising adoption of AI and ML techniques.

Alibaba’s new AI chip is for “inference,” a subset of ML workloads. Inference is when a system makes decisions by applying a model that has already been “trained,” typically on a different type of hardware.
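To make the distinction concrete, here is a minimal sketch (illustrative only, not Alibaba’s actual stack or model): “training” iteratively fits a model’s parameters, a compute-heavy phase, while “inference” simply applies the finished parameters to new inputs, a much lighter workload that chips like Hanguang 800 and Inferentia are built to accelerate.

```python
def train(samples, epochs=200, lr=0.01):
    """Fit y = w * x by gradient descent (the compute-heavy phase)."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            w -= lr * 2 * (w * x - y) * x  # gradient step on squared error
    return w

def infer(w, x):
    """Apply the already-trained parameter to a new input (the deployed phase)."""
    return w * x

# Training repeatedly sweeps the data; inference is one multiply.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # learns w ≈ 2
print(round(infer(w, 5.0)))  # → 10
```

In production, inference is the phase that runs constantly at scale (every search query, recommendation, or image classification), which is why providers find it worthwhile to design dedicated silicon for it.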

AWS introduced its custom ML inference chip, called Inferentia, last November.

Alphabet’s Google Cloud has been running its custom Tensor Processing Unit ASICs for ML workloads since around 2015.
