Microsoft First to Bring AMD's EPYC Processors to Its Cloud Platform

Azure putting EPYC into production might be just what AMD needs to spark data center adoption of its Zen-based processors.

Christine Hall

December 6, 2017

Microsoft's custom cloud servers, open sourced through the Open Compute Project, as seen at the OCP Summit 2017 (Photo: Yevgeniy Sverdlik)

Chip maker AMD just got some much-needed traction and credibility with Tuesday's announcement that Microsoft Azure has deployed EPYC processors in its data centers. Redmond gets a feather in its cap for being the first public cloud to adopt AMD's Zen-based processors.

Azure is currently using the processors in its Lv2 VM family, designed for storage-optimized workloads, but word on the street is that it won't be long before the company rolls them out in cloud servers designed for other purposes.

This is a big win for AMD. Its EPYC processors were just released in June, and although early benchmarks have been impressive, they haven't yet had a chance to prove themselves under heavy fire in 24/7 data center operations. That they've passed Microsoft's testing and been given the green light for production in the second-largest public cloud is a big PR plus.

Now all that's left is to see how well they hold up in real-life conditions.

"We are extremely excited to be partnering with Microsoft Azure to bring the power of AMD EPYC processors into their data center," Scott Aylor, AMD's corporate VP and GM of enterprise solutions, said in a statement. "There is tremendous opportunity for users to tap into the capabilities we can deliver across storage and other workloads through the combination of AMD EPYC processors on Azure. We look forward to the continued close collaboration with Microsoft Azure on future instances throughout 2018."


This comes at a particularly bad time for Intel, which has what amounts to a monopoly on servers. A couple of weeks back the company's products came under scrutiny when the Management Engine, its processor-beneath-the-processor, was found to have several severe security vulnerabilities, which had OEMs scrambling to patch firmware. Intel is also under fire from companies that would like to weaken the chip maker's grip on the data center market with servers running the ARM architecture. Currently, at least three Linux server distributions support ARM, and Microsoft has been working with ARM chip makers Cavium and Qualcomm on server designs for its cloud services.

Redmond has indicated it expects ARM servers to eventually provide more than half of its cloud data center capacity. If that turns out to be the case, and if its experience with AMD is fruitful, Intel will be shipping a lot fewer chips to the company that's quickly absorbing more and more enterprise workloads.

Today's news is also a plus for the open hardware movement. The Lv2-Series instances running AMD's chips are based on Microsoft’s Project Olympus server platform, which was introduced about a year ago as Redmond’s next-generation hyper-scale cloud hardware design. This design serves as a new model for open source hardware development within the Open Compute Project.


"We’ve enjoyed a deep collaboration with AMD on our next generation open source cloud hardware design called Microsoft’s Project Olympus," said Corey Sanders, Microsoft Azure's director of compute. "We think Project Olympus will be the basis for future innovation between Microsoft and AMD, and we look forward to adding more instance types in the future benefiting from the core density, memory bandwidth and I/O capabilities of AMD EPYC processors."

Azure's new Lv2-Series instances are running the AMD EPYC 7551 processor, with a base core frequency of 2.2 GHz and a maximum single-core turbo frequency of 3.0 GHz. With support for 128 PCIe lanes per processor, AMD says it provides over 33 percent more connectivity than competing two-socket solutions, enough to directly address a record number of NVMe drives.
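As a rough sanity check on that 33 percent figure, the arithmetic works out if a competing two-socket platform of the era exposed about 96 usable PCIe lanes (an assumed number for illustration; actual lane counts vary by platform and configuration):

```python
# Back-of-the-envelope check of the ">33% more connectivity" claim.
epyc_lanes = 128       # PCIe lanes on a single-socket AMD EPYC 7551 (from the article)
competing_lanes = 96   # ASSUMPTION: usable lanes on a rival two-socket box

extra = epyc_lanes / competing_lanes - 1
print(f"EPYC offers {extra:.0%} more PCIe lanes")  # → EPYC offers 33% more PCIe lanes
```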

The Lv2 VMs will be available in sizes from eight up to 64 vCPUs, with the largest able to directly access up to 4TB of memory. They will support Azure premium storage disks by default and, according to AMD, will support accelerated networking capabilities for what the company claims is the highest throughput of any cloud.

About the Author(s)

Christine Hall

Freelance author

Christine Hall has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she's published and edited the website FOSS Force. Follow her on Twitter: @BrideOfLinux.
