Microsoft's custom cloud servers, open sourced through the Open Compute Project, as seen at the OCP Summit 2017

Microsoft First to Bring AMD's EPYC Processors to Its Cloud Platform

Azure putting EPYC into production might be just what AMD needs to spark data center adoption of its Zen-based processors.

Chip maker AMD just got some much-needed traction and cred with Tuesday's announcement that Microsoft Azure has deployed EPYC processors in its data centers. Redmond gets a feather in its cap for being the first public cloud to adopt AMD's Zen-based processors.

Azure is currently using the processors in its Lv2 VM family, designed for storage-optimized workloads, but word on the street is that it won't be long before the chips roll out in cloud servers designed for other purposes.

This is a big win for AMD. Its EPYC processors were just released in June, and although early benchmarks have been impressive, they haven't yet had a chance to prove themselves under heavy fire in 24/7 data center operations. That they've passed Microsoft's testing and been given the green light for production in the second-largest public cloud is a big PR plus.

Now all that's left is to see how well they hold up in real-life conditions.

"We are extremely excited to be partnering with Microsoft Azure to bring the power of AMD EPYC processors into their data center," Scott Aylor, AMD's corporate VP and GM of enterprise solutions, said in a statement. "There is tremendous opportunity for users to tap into the capabilities we can deliver across storage and other workloads through the combination of AMD EPYC processors on Azure. We look forward to the continued close collaboration with Microsoft Azure on future instances throughout 2018."

This comes at a particularly bad time for Intel, which has what amounts to a monopoly on server processors. A couple of weeks back the company's products came under scrutiny when its Management Engine, the processor-beneath-the-processor embedded in its chips, was found to have several severe security vulnerabilities, which had OEMs scrambling to patch firmware. Intel is also under fire from companies that would like to weaken the chip maker's grip on the data center market with servers running the ARM architecture. Currently, at least three Linux server distributions support ARM, and Microsoft has been working with ARM chip makers Cavium and Qualcomm on server designs for its cloud services.

Redmond has indicated it expects ARM servers to eventually provide more than half of its cloud data center capacity. If that turns out to be the case, and if its experience with AMD is fruitful, Intel will be shipping a lot fewer processors to the company that's quickly absorbing more and more enterprise workloads.

Today's news is also a plus for the open hardware movement. The Lv2-Series instances running AMD's chips are based on Microsoft’s Project Olympus server platform, which was introduced about a year ago as Redmond’s next-generation hyper-scale cloud hardware design. This design serves as a new model for open source hardware development within the Open Compute Project.

"We’ve enjoyed a deep collaboration with AMD on our next generation open source cloud hardware design called Microsoft’s Project Olympus," said Corey Sanders, Microsoft Azure's director of compute. "We think Project Olympus will be the basis for future innovation between Microsoft and AMD, and we look forward to adding more instance types in the future benefiting from the core density, memory bandwidth and I/O capabilities of AMD EPYC processors."

Azure's new Lv2-Series instances are running the AMD EPYC 7551 processor, with a base core frequency of 2.2 GHz and a maximum single-core turbo frequency of 3.0 GHz. With support for 128 lanes of PCIe connectivity per processor, AMD says it provides over 33 percent more connectivity than available two-socket solutions, enough to directly address a record number of NVMe drives.
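
For context on that figure, here is a minimal back-of-the-envelope check. It assumes the comparison point is a two-socket Intel Xeon platform exposing 48 PCIe 3.0 lanes per socket, the baseline AMD has typically cited; those assumed numbers are illustrative and do not come from the announcement itself.

```python
# Back-of-the-envelope check of AMD's "over 33 percent more connectivity" claim.
# Assumption (not from the announcement): the comparison is a two-socket Xeon
# system exposing 48 PCIe 3.0 lanes per socket.

epyc_lanes_per_socket = 128                        # single EPYC 7551 socket
xeon_lanes_per_socket = 48                         # assumed competing part
two_socket_xeon_lanes = 2 * xeon_lanes_per_socket  # 96 lanes total

extra = epyc_lanes_per_socket / two_socket_xeon_lanes - 1
print(f"EPYC advantage: {extra:.0%}")              # -> roughly 33%

# At 4 lanes per NVMe drive, that difference translates directly into how many
# drives a single host can attach without resorting to PCIe switches.
lanes_per_nvme = 4
print(epyc_lanes_per_socket // lanes_per_nvme, "NVMe drives per EPYC socket")
print(two_socket_xeon_lanes // lanes_per_nvme, "on the assumed two-socket Xeon")
```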

The Lv2 VMs will be available in sizes starting at eight and going up to 64 vCPUs, with the largest able to directly access up to 4TB of memory. They will support Azure premium storage disks by default and, according to AMD, will support accelerated networking capabilities for the highest throughput of any cloud.
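
For readers who want to check availability once the instances go live, here is a hedged sketch using the Azure management SDK for Python. It assumes the azure-identity and azure-mgmt-compute packages, a placeholder subscription ID, and that the new sizes follow Azure's usual Standard_L<cores>s_v2 naming; none of those details come from the announcement itself.

```python
# Hedged sketch: list Lv2-family VM sizes visible in a given region.
# Assumes azure-identity and azure-mgmt-compute are installed and that the
# sizes follow the Standard_L<cores>s_v2 naming convention (an assumption).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for size in client.virtual_machine_sizes.list(location="westus2"):
    # Filter for the storage-optimized Lv2 family (e.g. Standard_L8s_v2 ... Standard_L64s_v2).
    if size.name.startswith("Standard_L") and size.name.endswith("s_v2"):
        print(f"{size.name}: {size.number_of_cores} vCPUs, "
              f"{size.memory_in_mb / 1024:.0f} GiB RAM, "
              f"{size.max_data_disk_count} data disks")
```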
