AMD has added three new chips to its Epyc 2 lineup of data center processors codenamed “Rome.” The new parts, unveiled Tuesday, are optimized for workloads the company said were already big in the enterprise and would only continue to grow for the foreseeable future.
The new second-generation Epyc chips, based on AMD’s Zen 2 cores manufactured using TSMC’s 7nm process technology – well ahead of Intel’s current capabilities – come in eight-core, 16-core, and 24-core flavors. They are designed for optimal cost and performance with hyperconverged infrastructure, commercial HPC applications, and database workloads, Dan McNamara, senior VP and general manager of AMD’s server business unit, said in a press briefing.
McNamara joined AMD this January, leaving a senior executive role at Intel. He’d been at Intel since 2015, when the chip giant acquired Altera in a $16.7 billion bet on reconfigurable processors called FPGAs. McNamara had worked at Altera for 11 years prior to the acquisition, according to his LinkedIn profile.
He said he expected enterprise data centers to go through a “really big modernization phase” in the near future, driven by the need to quickly gain insight from analyzing vast amounts of data, maximize return on data center investment, and run commercial HPC applications.
“Everyone’s looking at how do you modernize your on-prem data centers,” he said.
Hewlett Packard Enterprise, Dell EMC, and Lenovo announced rack servers powered by the new AMD chips. Supermicro announced the first ever Epyc-powered blade server.
HPE and Nutanix announced a joint hyperconverged infrastructure appliance, but HPE also said its own hyperconverged platform, SimpliVity, would support the new processors as well. VMware announced support by its vSAN hyperconverged platform.
IBM Cloud announced an Epyc 2-powered bare-metal server offering.
The first batch of AMD’s Epyc 2 server chips, unveiled in August 2019, was aimed primarily at the higher end of the data center market: hyperscale cloud platforms and some of the world’s fastest supercomputers. AMD executives did talk about the lineup’s performance with enterprise workloads at the launch event in San Francisco, but the focus was clearly on the big operators.
With the three new parts, all the focus is on “per-core performance leadership for the enterprise,” McNamara said. Big design considerations included helping enterprises reduce the cost of per-core software licensing by maximizing single-core performance, which AMD did by increasing the amount of cache available to each core, and improving connectivity between the CPU and other components.
The 16-core, $3,100 Epyc 7F52 part – McNamara said 16 cores was a sweet spot of sorts in the enterprise server market – offers 16MB of cache per core. An AMD presentation deck compared this part with the 16-core Intel Xeon Gold 6246R (2.2MB per core, at $3,286) and the 16-core Xeon Gold 6242 (1.4MB per core, at $2,529).
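For readers who want to reproduce the comparison, the figures quoted above can be tabulated in a short script. Note these are list prices and vendor-quoted per-core cache figures from AMD's deck, not independently measured numbers, and real-world pricing varies:

```python
# Cache-per-core and list-price-per-core comparison, using only the
# figures cited in the article (AMD's own presentation deck).
chips = {
    "AMD Epyc 7F52":         {"cores": 16, "cache_per_core_mb": 16.0, "price_usd": 3100},
    "Intel Xeon Gold 6246R": {"cores": 16, "cache_per_core_mb": 2.2,  "price_usd": 3286},
    "Intel Xeon Gold 6242":  {"cores": 16, "cache_per_core_mb": 1.4,  "price_usd": 2529},
}

for name, c in chips.items():
    price_per_core = c["price_usd"] / c["cores"]
    print(f"{name}: {c['cache_per_core_mb']:.1f} MB cache/core, "
          f"${price_per_core:.2f} list price per core")
```

By these numbers, the Epyc 7F52 offers roughly seven times the cache per core of the Gold 6246R at a slightly lower list price, which is the comparison AMD's licensing-cost argument rests on.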
Asked how AMD went about selecting the Intel parts to compare its new products to, McNamara said the aim was to select the best-matching Intel chips from the most recent Cascade Lake refresh. Aaron Grabein, an AMD PR manager, added that parts with equivalent core counts were chosen for comparison.
Up front, an AMD Epyc 2 processor may cost more or less than a comparable Intel part, depending on the model. But McNamara claimed that all three of the new AMD chips delivered better performance per “CPU dollar” spent and lower total cost of ownership, because they were more energy efficient: requiring less power and cooling per unit of compute muscle translates into operational cost savings for the data center.
Here’s the AMD slide comparing performance per CPU dollar of the latest Epyc 2 processors to select Xeon Gold and Xeon Platinum parts by Intel: