Inside the CERN data center in Meyrin, Switzerland, 2017 (photo: Dean Mouhtaropoulos/Getty Images)

Is RDMA the Future of Data Center Storage Fabrics?

Big data analytics, ever larger databases, and dense workload consolidation are driving the rise of RDMA.

Flash, NVMe, and storage-class memory like Intel’s Optane are speeding up storage significantly, and they’re a good fit for the workloads that are driving the greatest demand for storage, such as containers, big data, machine learning, and hyperconverged infrastructure, all of which use file and object storage rather than the block storage of SANs. But as well as high-speed storage, those workloads also need lower-latency networking that doesn’t clog up the CPU, especially for hyperconverged systems using software-defined networking.

Ethernet is a well-known protocol, and installing Ethernet networking isn’t fraught with the complexity and fragility of Fibre Channel, where you have to worry about things like the bend radius of the cable during installation and limits on cable length. You’re not limited to 10Gb/s, either; you can move to 25Gb/s, 40Gb/s, or 100Gb/s.

But TCP has a lot of overhead, and that means the CPU spends a lot of time managing network transfers for write-intensive workloads like databases, reducing the overall performance of those workloads. That’s where RDMA (Remote Direct Memory Access) comes in.
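
To make the difference concrete, here is a minimal sketch using the Linux libibverbs API (an assumption for illustration; the article doesn’t name a specific RDMA stack, and a host with an RDMA-capable NIC and the rdma-core libraries is required). It shows only the setup step that matters for the CPU argument: registering an application buffer so the network adapter can move data in and out of it directly, with no copies through kernel socket buffers. Queue-pair creation, the connection exchange, and the RDMA write itself are deliberately omitted.

/* Minimal, illustrative libibverbs sketch (assumes an RDMA-capable NIC
 * and the rdma-core userspace libraries). Build with:
 *   gcc rdma_sketch.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    /* Enumerate the RDMA devices visible to this host. */
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    /* Open the first adapter and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devices[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a 1 MiB application buffer with the NIC. After this, the
     * adapter can read and write the buffer directly; the remote key
     * (rkey) is what a peer would use to target it with RDMA writes,
     * without involving this machine's CPU on the data path. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "Memory registration failed\n");
        return 1;
    }

    printf("Registered %zu bytes on %s (lkey=0x%x, rkey=0x%x)\n",
           len, ibv_get_device_name(devices[0]), mr->lkey, mr->rkey);

    /* A real application would now create queue pairs, exchange the rkey
     * and buffer address with its peer, and post RDMA work requests
     * (e.g. IBV_WR_RDMA_WRITE) for the NIC to execute. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}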

“As you do a lot of writes to the system, the performance of hyperconverged systems goes down. If you have a single node that’s doing 70 percent writes, and those have to be copied to the other node as fast as possible, if you’re not doing RDMA the CPU has to get involved, and on a 10Gb NIC those writes can easily consume a core or two on a modern processor,” Jeff Woolsey, principal program manager for Windows Server, noted recently at Microsoft’s Ignite conference.

Because RDMA moves data directly between the memory of the machines involved, it reduces the load on the CPU, and with fast storage like NVMe or SCM it brings latencies down from milliseconds to microseconds.
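
A quick back-of-the-envelope illustration of why that matters (the latency figures below are hypothetical, not measurements from the article): with a single outstanding request, per-I/O latency alone caps throughput, so shaving latency from milliseconds to tens of microseconds multiplies the IOPS you can achieve.

/* Hypothetical latencies, for illustration only: how per-I/O latency
 * caps throughput when one request is outstanding at a time. */
#include <stdio.h>

int main(void)
{
    double latency_us[] = { 5000.0, 1000.0, 100.0, 20.0 }; /* 5 ms down to 20 us */
    for (int i = 0; i < 4; i++) {
        double iops = 1e6 / latency_us[i]; /* requests per second at queue depth 1 */
        printf("%7.0f us per I/O  ->  ~%7.0f IOPS at queue depth 1\n",
               latency_us[i], iops);
    }
    return 0;
}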

“The rise of RDMA use in storage networks (in particular NVMe over Fabric, since that seems to be the one that is going to become the standard in this arena) is needed to meet ever more stringent storage performance requirements of big data analytics -- in particular real-time workloads -- ever larger databases, and denser workload consolidation,” Eric Burgener, research director for enterprise storage at IDC, told us.

Although he added the caution that while “NVMe is the future, it is still immature,” Burgener predicted that “Over the next three to four years, external storage platforms will be moving more in the direction of NVMe in the protocol stack, with devices, array backplanes, controllers, and fabric.”

Earlier this year, Intel put RDMA directly onto the motherboard of a Xeon server. DataON is using those in its new Lightning TracSystem Windows Server 2016 hyperconverged systems, which have Optane SSDs, NVMe for the fast cache tier, and an SMB3 RDMA fabric.

Now Hewlett Packard Enterprise is using Mellanox’s RDMA-based Ethernet Storage Fabric in its new StoreFabric M-Series storage-optimized switches, giving customers the kind of storage management tools they’re used to with Fibre Channel.

Also at the Ignite conference, a company called Chelsio showed Windows Server 2016 Storage Replica running at 25Gbps over SMB3 with RDMA network cards across a 50km loop, without needing the usual extension equipment for that kind of metro-scale connection.

RDMA is even supported in Windows 10; Hollywood post-production house Create replaced most of its Mac workstations and its six-figure Fibre Channel SAN with Windows PCs and Windows Server Storage Spaces Direct on NVMe, using RDMA to get fast network access to 4K files for video editing and VR creation.

The trade-offs you have to make to get an efficient storage system have changed, analyst Robin Harris of StorageMojo told us. “Fibre Channel made sense because storage arrays were very costly, and sharing them was required to make the investment pay. Today, we have individual PCIe cards that are as fast as most storage arrays and offer considerably more bandwidth for a fraction of an array cost. Thus, the conversation has shifted from how to share an expensive resource to how to share an inexpensive resource, in this case, IOPS.

“Given the primacy of cost in this scenario, the economics favor the lowest-cost network. Will Ethernet be that low-cost network? History strongly suggests it will. Once the protocols are sorted out and the technology is broadly available, I expect most greenfield storage networks will be Ethernet-based, with RDMA a critical factor in providing an economical and low-latency solution for sharing server-based storage.”

The fact that HPE is putting RDMA in a switch is a validation of the technology, Harris said. “The bigger question, from a market perspective, is whether other vendors will follow HPE’s lead.”

Stretching Out Fibre Channel

Even though next-generation architectures aren’t being developed on top of Fibre Channel, that doesn’t mean it’s going away, of course. “I do believe the heyday of Fibre Channel has passed. But there is a large installed base that will continue to power a shrinking Fibre Channel market for at least the next decade,” Harris said.

Burgener agreed: “The use of Ethernet for storage is clearly on the rise, but according to our revenue forecasts Fibre Channel is clearly larger in 2017 ($9.6B for Ethernet, $11.7B for Fibre Channel). NVMe can be run over either FC or Ethernet, and I think what will happen is that Ethernet will continue to slowly encroach on FC revenues, but our forecasts still show FC ahead of Ethernet in terms of revenues ($9.8B [Ethernet] vs $11.5B [FC]) in 2021.

“Enterprise data centers tend to have more of an investment in Fibre Channel and keep buying that so they can continue to leverage their existing Fibre Channel networks and management expertise. Smaller, newer shops that started with Ethernet stay with that approach as they grow,” Burgener told us.

“It’s rare that a customer would shift away from Ethernet towards Fibre Channel, but you will see Ethernet-based storage networks appear in shops that already have a large investment in Fibre Channel for new or departmental projects.”

For smaller systems where the cost of networking could be prohibitive (say, a node in a branch office or anywhere else you want a few terabytes of storage for edge computing), Thunderbolt is proving an interesting alternative to RDMA. Intel’s Thunderbolt 3 USB-C adapters are now certified for Windows Server 2016, and DataON has come out with a two-node hyperconverged storage system that delivers 40Gbps and 200K IOPS with only 20 percent CPU usage. That’s not quite as good as RDMA, but it’s more CPU-efficient than standard Ethernet, and with 10TB of storage it costs only about $9,000.
