As more IT organizations start to replicate data between servers, Linbit announced this week that its software can now replicate data using remote direct memory access (RDMA).
With the release of the company's DRBD9 open source software, IT organizations can eliminate reliance on TCP/IP in data replication between servers, says Greg Eckert, business development manager for Linbit.
“Now we can bypass the CPU using RDMA,” he says. “A lot of organizations have come to realize that TCP/IP is a bottleneck when it comes to high-performance replication.”
Capable of interconnecting more than 30 geographically distributed storage nodes in real time across any network environment, the data replication software runs on servers and, Eckert says, eliminates the need for expensive replication software on storage systems.
Linbit claims that, using PCIe storage combined with InfiniBand network cards, DRBD9 is 100 percent faster than replication over IP-based networks while simultaneously reducing CPU load by 50 percent. To accomplish that, Eckert says, DRBD9 sits above the disk scheduler on a server to take control of the data replication process on Linux systems.
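In practice, choosing RDMA over TCP in DRBD9 is a configuration change rather than a code change. A minimal, hypothetical two-node resource file might look roughly like the following sketch; the hostnames, devices, and addresses here are invented, and the drbd.conf man page remains the authoritative reference for the exact syntax:

```
resource r0 {
  net {
    transport rdma;    # use the RDMA transport instead of the default TCP
    protocol C;        # fully synchronous replication
  }
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on alpha {
    address 10.0.0.1:7789;
    node-id 0;
  }
  on bravo {
    address 10.0.0.2:7789;
    node-id 1;
  }
}
```

Because the transport is selected per connection, an administrator can keep TCP for wide-area links while using RDMA on the InfiniBand-connected local pairs.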
A new DRBD Manage application enables deployments in a matter of minutes and exposes a set of APIs to facilitate integration with other management frameworks, such as OpenStack.
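A cluster bootstrap with the drbdmanage command-line tool might look roughly like the sketch below. The node names, addresses, and volume size are invented, and the exact subcommands and flags can vary by drbdmanage version, so treat this as illustrative rather than authoritative:

```
# Initialize the drbdmanage control volume on the first node
drbdmanage init 10.0.0.1

# Join an additional storage node to the cluster
drbdmanage add-node bravo 10.0.0.2

# Create a replicated 8 GB volume and deploy it to two nodes
drbdmanage add-volume vol0 8GB --deploy 2
```

The same operations are exposed through the tool's APIs, which is what frameworks such as OpenStack hook into.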
Not only does a software-based approach decouple data replication from any hardware upgrade cycle, the Linbit software also doesn't require IT organizations to buy commercial software licenses.
Eckert says adoption of RDMA inside cloud computing environments makes clouds the first place that will witness adoption of DRBD9. But he adds it's only a matter of time before traditional enterprise IT organizations begin to rely on DRBD9, along with the DRBD Proxy software, to replicate data directly between servers, both inside a data center and across geographically distributed data centers.
Replication software is a critical component of just about any disaster recovery strategy. The challenge many organizations now face is that the amount of data that needs to be protected is growing by leaps and bounds. Keeping up not only with that volume of data, but also with the velocity at which it is being created, is tough.
IT organizations may still need to rely on storage systems to replicate data in some instances. But in high-performance computing scenarios, responsibility for replication appears to be shifting to the server.