How DPUs, IPUs, and CXL Can Improve Data Center Power Efficiency

DPU, IPU, and CXL technologies that offload switching and networking tasks from server CPUs have the potential to significantly improve data center power efficiency.

Salvatore Salamone, Managing editor

April 10, 2023


Data Processing Units (DPUs), Infrastructure Processing Units (IPUs), and Compute Express Link (CXL) technologies, which offload switching and networking tasks from server CPUs, have the potential to significantly improve data center power efficiency. In fact, the National Renewable Energy Laboratory (NREL) believes enterprises that use such techniques and focus on power reduction can realize about a 33 percent power efficiency improvement.

Improving power efficiency with new networking technologies

In the past year, several new technologies have emerged that rethink data center power consumption. While they take different approaches, the solutions all seek to improve energy efficiency by offloading switching and networking chores from CPUs, much as GPUs and hardware-based encryption reduce the load on CPUs (and thus drive down overall power usage). Here are some of the major developments to watch:

What are Data Processing Units (DPUs)?

The Data Processing Unit (DPU) is a relatively new technology that offloads processing-intensive tasks from the CPU onto a separate card in the server. Essentially, a DPU is a mini onboard server that is highly optimized for network, storage, and management tasks. (A server's general-purpose CPU was never designed for these types of intensive data center workloads and can often bog down a server.)


What impact can DPUs have? “The use of hardware acceleration in a DPU to offload processing-intensive tasks can greatly reduce power use, resulting in more efficient or, in some cases, fewer servers, a more efficient data center, and significant cost savings from reduced electricity consumption and reduced cooling loads,” said Zeus Kerravala, founder and principal analyst with ZK Research.

How much power can DPUs save? A report by NVIDIA, which offers the NVIDIA BlueField DPU, estimates that offloading network, storage, and management tasks can reduce server power consumption by up to 30 percent. The report also noted that the power savings increase as server load increases and can amount to $5 million in electricity costs for a large data center with 10,000 servers over the servers' 3-year lifespan. There would be additional savings in cooling, power delivery, rack space, and server capital costs.
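To see how a figure of that magnitude arises, the claim can be sketched as back-of-the-envelope arithmetic. The per-server wattage, electricity price, and PUE below are illustrative assumptions of ours, not values from the NVIDIA report; only the 30 percent offload figure and the 10,000-server, 3-year scenario come from the text above.

```python
# Back-of-the-envelope estimate of electricity savings from DPU offload.
# SERVER_WATTS, PRICE_PER_KWH, and PUE are assumed values for illustration.

SERVERS = 10_000            # fleet size from the cited scenario
SERVER_WATTS = 400          # assumed average draw per server (illustrative)
DPU_SAVINGS = 0.30          # "up to 30 percent" reduction cited in the report
PRICE_PER_KWH = 0.10        # assumed electricity price, $/kWh (illustrative)
PUE = 1.5                   # assumed power usage effectiveness (cooling overhead)
YEARS = 3
HOURS = YEARS * 365 * 24    # 26,280 hours over the servers' lifespan

# Energy saved at the IT load, scaled up by PUE to include facility overhead.
saved_kwh = SERVERS * SERVER_WATTS * DPU_SAVINGS / 1000 * HOURS * PUE
saved_dollars = saved_kwh * PRICE_PER_KWH

print(f"Estimated savings: ${saved_dollars:,.0f} over {YEARS} years")
```

With these assumed inputs the estimate lands in the same ballpark as the report's $5 million figure, which is the point of the exercise: a modest per-server reduction compounds quickly across a large fleet and its cooling overhead.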

What are Infrastructure Processing Units (IPUs)?

Infrastructure services such as virtual switching, security, and storage can consume a significant number of CPU cycles. Infrastructure Processing Units (IPUs) accelerate these tasks, freeing up CPU cores for improved application performance and reduced power consumption.


Last year, Intel and Google Cloud launched a co-designed chip, code-named Mount Evans, to make data centers more secure and efficient. The chip takes over the work of packaging data for networking from CPUs. It also offers better security between different apps and users that may be sharing CPUs.


About the Author(s)

Salvatore Salamone

Managing editor, Network Computing

Salvatore Salamone is the managing editor of Network Computing. He has worked as a writer and editor covering business, technology and science; written three business technology books; and served as an editor at IT industry publications including Network World, Byte, Bio-IT World, Data Communications, LAN Times and InternetWeek.

