Intel Sapphire Rapids data center processor (Image: Intel)

DCK Must Know: Top Data Center News This Week – August 20, 2021

Intel previews new hardware, Facebook pledges to go water positive, Google plans $1B Ohio expansion, and more.

Welcome to the week’s roundup of all the biggest news in the data center industry, curated, distilled, and put in context by Data Center Knowledge.

Intel Embraces Chiplets, Doubles Down On IPUs, Takes On Nvidia With Data Center GPUs

On Thursday, a few weeks after CEO Pat Gelsinger pledged that Intel would keep pursuing Moore’s Law “until the periodic table is exhausted,” the company previewed the upcoming data center and consumer-device hardware it has been designing in that pursuit.

The data center products revealed at the annual Intel Architecture Day included the next-generation Intel Xeon Scalable processor, codenamed “Sapphire Rapids;” new Infrastructure Processing Unit (IPU) accelerators; and a new GPU designed to directly take on Nvidia in the AI GPU accelerator market.

Sapphire Rapids, the next-gen Xeon Scalable CPU, is Intel’s first data center part built of multiple tiles, or “chiplets,” as they’re often called in the industry. AMD’s Epyc processors, responsible for the big bite AMD recently took out of Intel’s data center market share, are also designed using chiplets.

  • Creating a single logical processor by integrating four tiles in a single package enables the designers to increase core counts, cache, memory, and IO without having to deal with “the physical constraints that would otherwise be imposed on the architecture and would have led to difficult compromises,” Nevine Nassif, chief engineer of Sapphire Rapids, explained.

Intel introduced three new IPUs: two FPGA-based parts, Oak Springs Canyon and Arrow Creek, and an ASIC-based part, Mount Evans.

Dedicated ASIC-based IPUs are optimized for maximum performance, while FPGA IPUs are designed for re-programmability to enable custom offloads, Guido Appenzeller, CTO of Intel’s Data Platforms Group, said.

Intel’s bet on IPU accelerators stems from its need to cater to cloud providers. It’s been working on IPUs with Microsoft, JD.com, Baidu, and VMware. Intel developed the Mount Evans IPU “hand-in-hand with a top cloud provider,” Naru Sundar, Mount Evans’s chief architect, said.

  • As Appenzeller explained, offloading infrastructure management processing overhead from the CPU leaves more CPU capacity that can be used by cloud customers, thereby increasing revenue per processor (a back-of-the-envelope sketch after this list illustrates the math).
  • The approach also enables tenants to take full control of the CPU (to bring their own hypervisor, for example), while still giving the cloud provider the ability to confine that hypervisor to a specific network segment or storage volume.
  • Finally, IPUs enable “diskless architecture.” Instead of attaching a storage disk to every server (which is likely to be underutilized), IPUs give you the ability to create a shared storage service that presents a virtual NVMe storage device to each workload.
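
To make the revenue argument concrete, here’s a back-of-the-envelope sketch in Python. Every number in it — core count, per-vCPU pricing, and the overhead fractions with and without an IPU — is an assumption made for illustration, not a figure from Intel or any cloud provider.

```python
# Hypothetical illustration of IPU offload economics; none of these
# numbers come from Intel or a cloud provider.
CORES_PER_HOST = 64
PRICE_PER_VCPU_HOUR = 0.04   # assumed on-demand price, USD

OVERHEAD_NO_IPU = 0.30       # assumed share of cores lost to infrastructure tasks
OVERHEAD_WITH_IPU = 0.05     # assumed residual overhead after offloading to an IPU

def hourly_revenue(overhead_fraction: float) -> float:
    """Revenue from the cores the provider can actually rent out."""
    sellable_cores = CORES_PER_HOST * (1 - overhead_fraction)
    return sellable_cores * PRICE_PER_VCPU_HOUR

before = hourly_revenue(OVERHEAD_NO_IPU)
after = hourly_revenue(OVERHEAD_WITH_IPU)
print(f"Without IPU: ${before:.2f}/hr; with IPU: ${after:.2f}/hr "
      f"({(after / before - 1) * 100:.0f}% more revenue per processor)")
```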

As Raja Koduri, Intel senior VP and general manager of its Accelerated Computing Systems and Graphics Group, put it, Intel has had “almost a decade-long problem. We were behind on throughput compute density and support for high-bandwidth memories.”

He didn’t name the company Intel has been behind in this area, but it was clearly Nvidia, which largely owes its massive lead in the AI accelerator market to the throughput advantage of its data center GPUs.

The GPU Intel unveiled Thursday is designed to finally solve that problem. Built using Intel’s new Xe-HPC architecture, the Ponte Vecchio GPU offers more than 45 TFLOPS FP32 throughput, 5 TBps-plus memory fabric bandwidth, and 2 TBps-plus connectivity bandwidth, according to the company.

  • In an internal test, Intel claimed, the Ponte Vecchio GPU beat Nvidia’s A100 GPU at training a ResNet-50 v1.5 neural network.
  • Speaking with ZDNet, Koduri said “nobody beat Nvidia on a training benchmark [so far], and we have demonstrated that today.”

Google to Spend $1B to Triple Its Ohio Data Center Capacity

Google said Thursday that it’s planning to spend $1 billion to triple the capacity of its data center campus in New Albany, Ohio. Its initial investment in the campus was $600 million, the company said when it broke ground in New Albany in 2019.

  • As usual, the announcement was accompanied by statements of praise and enthusiastic support from local, state, and federal elected officials, who often use such announcements to score economic-development points with their constituents.

New Albany, a Columbus suburb, has grown into a major hyperscale data center hotspot. Facebook and AWS have built data centers there, and so have specialist developers, such as Compass Datacenters and Stack Infrastructure.

  • Google said in its announcement that it’s also bought 618 acres in Columbus and nearby Lancaster for potential data center construction in the future. Those sites and its New Albany campus add up to more than 1,000 acres of land in the region.

First Live Data Center Conference in the US Since Start of the Pandemic

Orlando’s Orange County Convention Center this week hosted the first live, in-person data center industry conference in the US since the start of the pandemic, and the event was a success. The Data Center World 2021 expo floor was full of vendor booths and conference attendees checking out the wares, keynotes were well attended, people mingled in the hallways, and the atmosphere, save for face masks and pervasive hand sanitizer dispensers, felt much the way Data Center World used to feel pre-2020.

There were keynotes on the future of virtual, augmented, and other “realities” in the workplace (by Toshi Anderson Hoo, director of the Institute for the Future think tank’s Emerging Media Lab), the state of play in quantum computing (by Celia Merzbacher, executive director of SRI International’s Quantum Economic Development Consortium, or QED-C), the state of play in cybersecurity (by JetBlue CISO Timothy Rohrbacher), and what businesses can learn from sports teams about applying data analytics to measure performance (by Christina Chase, lecturer and managing director of the MIT Sports Lab).

One of the new things at the show this year was a noticeable presence of data center market analysts from Omdia, the large technology market research organization that the event’s organizer, Informa Tech, formed last year to unify four of its research brands: Ovum, IHS Markit Technology, Tractica, and Heavy Reading.

  • There were more than 900 people in attendance.
  • The event took place over four days (Monday through Thursday).
  • Data Center World is closely tied to AFCOM, its sister industry association for data center and IT professionals.
  • Data Center World, AFCOM, Omdia, and Data Center Knowledge are all part of Informa Tech.

Facebook Pledges to Become “Water Positive” by 2030

Back in May, the City Council of Mesa, Arizona, voted to approve a massive Facebook data center construction project in the town. The project drew controversy because of the water the future facility would need to draw for cooling in a place that struggles with drought.

This week, Facebook announced a commitment to become “water positive,” or restore more water than it consumes globally by 2030. It plans to achieve the goal by funding water restoration projects in water-stressed regions and by improving water efficiency at its facilities.

Total water withdrawal to cool Facebook data center campuses in the US (including an “East Coast Leased Data Center Facility”) was 3 million cubic meters (790 million gallons) in 2020, according to the company’s 2020 sustainability report.

  • But the company said it also restored almost 5.8 million cubic meters of water in “high water-stressed regions” that year (see the quick balance check after this list).
  • Notably, the company’s leased East Coast data center footprint resulted in more water withdrawn than any other Facebook data center campus on the list: 645,000 cubic meters (170.4 million gallons). The company has multiple data center leases in Northern Virginia, where its largest landlords are CloudHQ and Digital Realty Trust.
  • Facebook helps fund restoration projects for watersheds local to its data centers. As of the end of 2020, the company had contracted 10 water projects in four regions where water supplies are stressed, according to its sustainability report.
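
For a sense of how the “water positive” arithmetic works, here is the balance implied by the two 2020 figures above. One caveat: the withdrawal number covers US campuses while the restoration number covers high water-stressed regions, so this is an illustration of the metric rather than a rigorous global balance.

```python
# "Water positive" simply means restoring more water than you withdraw.
# Figures are the 2020 numbers cited in Facebook's sustainability report.
WITHDRAWN_M3 = 3_000_000   # withdrawal across US data center campuses
RESTORED_M3 = 5_800_000    # restoration in high water-stressed regions

net = RESTORED_M3 - WITHDRAWN_M3
status = "water positive" if net > 0 else "water negative"
print(f"Net balance: {net:+,} cubic meters -> {status}")
```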

Facebook said its data centers’ Water Usage Effectiveness ratio in 2020 was 0.30, while the industry average WUE that year was 1.80. WUE is defined as liters of water used to cool 1kWh of IT load; the sketch below puts that difference in absolute terms.

  • Its primary data center cooling design relies on the combination of outside air and evaporative cooling. Outside air gets pushed through a moist medium and gets cooler as the moisture evaporates.
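
To put the two WUE ratios in absolute terms, here is the definition applied to a hypothetical facility. The 0.30 and 1.80 figures are the ones cited above; the 10 MW IT load and year-round operation are assumptions made purely for illustration.

```python
# WUE = liters of water used per kilowatt-hour of IT load.
HOURS_PER_YEAR = 8760
IT_LOAD_KW = 10_000  # assumed 10 MW of IT load, running year-round

def annual_water_liters(wue: float) -> float:
    """Annual cooling-water use implied by a given WUE."""
    return wue * IT_LOAD_KW * HOURS_PER_YEAR

for label, wue in (("Facebook-reported", 0.30), ("industry average", 1.80)):
    print(f"WUE {wue:.2f} ({label}): "
          f"{annual_water_liters(wue) / 1e6:.0f} million liters/year")
```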

Facebook also uses water to maintain a certain level of humidity inside its data centers required for the IT equipment to function for the duration of its warranty. The company recently made a big efficiency improvement in this area.

  • It has traditionally maintained 20 to 80 percent relative humidity (at 65-80F air temperature) in its data centers to keep servers happy. But in a pilot project last year its engineers discovered that they could keep the humidity level as low as 13 percent, resulting in 40 percent water savings (the sketch after this list shows why a lower humidity floor means less water).
  • Since then, Facebook has been implementing this approach in its existing data centers and made it a standard for new ones.
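
Here is a rough psychrometric sketch of why a lower relative-humidity floor saves water, using the standard Magnus approximation for saturation vapor pressure. The supply-air temperature and the dryness of the incoming air are assumptions, so the computed saving won’t match Facebook’s 40 percent figure exactly; the point is that a lower humidity target means less water has to be evaporated into each cubic meter of air.

```python
import math

def saturation_vapor_pressure_hpa(t_c: float) -> float:
    """Magnus approximation, good from roughly -45C to 60C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def absolute_humidity_g_per_m3(t_c: float, rh_pct: float) -> float:
    """Grams of water vapor per cubic meter of air at t_c degrees C and rh_pct% RH."""
    e_hpa = saturation_vapor_pressure_hpa(t_c) * rh_pct / 100
    return 216.7 * e_hpa / (t_c + 273.15)

T_SUPPLY_C = 24.0   # assumed ~75F supply air, within the 65-80F window above
RH_INCOMING = 8.0   # assumed very dry incoming air, percent

for rh_target in (20.0, 13.0):
    added = (absolute_humidity_g_per_m3(T_SUPPLY_C, rh_target)
             - absolute_humidity_g_per_m3(T_SUPPLY_C, RH_INCOMING))
    print(f"Humidifying to {rh_target:.0f}% RH adds {added:.2f} g of water per m3 of air")
```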