
Mellanox Embraces Next-Gen Data Center Ethernet

Rolls out switches that support connectivity from 10 GbE to 100 GbE

Looking to accelerate the shift to 25 GbE, 50 GbE, and 100 GbE architectures, Mellanox Technologies has unveiled a new high-end switch alongside a new line of faster network adapters.

Gilad Shainer, vice president of marketing at Mellanox, said making the shift to 25 GbE and 50 GbE is now a no-brainer because those technologies have reached essentially the same price points as the older 10 GbE and 40 GbE data center Ethernet technologies.

As IT organizations make that switch at the adapter level, Shainer added, a migration to 100 GbE switches becomes all but inevitable. In fact, Big Data applications rapidly emerging inside the data center, which depend on access to large amounts of network bandwidth to succeed, are increasingly forcing network upgrades.

“The key to being able to use data is to actually move it,” said Shainer. “You need to be able to move data fast enough to support all these new applications.”

Built around a Spectrum integrated circuit developed by Mellanox, the company’s line of data center Ethernet switches can be configured to support 10 GbE, 25 GbE, 40 GbE, 50 GbE, and 100 GbE connectivity at full wire speed, with non-blocking throughput of 6.4 Tbps. The switches can also be programmed via an open source API that Mellanox co-authored and contributed to the Open Compute Project as the Switch Abstraction Interface (SAI) specification.
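
For developers, SAI exposes the switch through a plain C interface defined by the OCP project. Below is a minimal, illustrative sketch of how an application might bring up SAI and query the switch function table; the stubbed profile callbacks are placeholders, and exact entry points and type names vary between SAI versions and vendor implementations.

    #include <stdio.h>
    #include <sai.h>  /* Switch Abstraction Interface headers (OCP SAI project) */

    /* Stub profile callbacks; a real application would return vendor-specific
     * key/value settings (config file paths, etc.) from these. */
    static const char *profile_get_value(sai_switch_profile_id_t profile_id,
                                         const char *variable)
    {
        (void)profile_id; (void)variable;
        return NULL;
    }

    static int profile_get_next_value(sai_switch_profile_id_t profile_id,
                                      const char **variable,
                                      const char **value)
    {
        (void)profile_id; (void)variable; (void)value;
        return -1;  /* no more key/value pairs */
    }

    static const sai_service_method_table_t services = {
        .profile_get_value = profile_get_value,
        .profile_get_next_value = profile_get_next_value,
    };

    int main(void)
    {
        /* Initialize the vendor's SAI implementation (Mellanox ships its
         * own for Spectrum-based switches). */
        if (sai_api_initialize(0, &services) != SAI_STATUS_SUCCESS) {
            fprintf(stderr, "sai_api_initialize failed\n");
            return 1;
        }

        /* Look up the function table for switch-level operations. */
        sai_switch_api_t *switch_api = NULL;
        if (sai_api_query(SAI_API_SWITCH, (void **)&switch_api) != SAI_STATUS_SUCCESS) {
            fprintf(stderr, "sai_api_query failed\n");
            return 1;
        }

        /* From here a real application would create/initialize the switch
         * and then configure ports, VLANs, and routes via further tables. */
        printf("SAI ready; switch API table at %p\n", (void *)switch_api);

        sai_api_uninitialize();
        return 0;
    }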

The switches support twice as many virtual machines as the previous generation and can be configured with 32 100 GbE ports, 32 40/56 GbE ports, 64 10 GbE ports, 64 25 GbE ports, or 64 50 GbE ports. The 6.4 Tbps figure follows directly from those port counts: 32 ports at 100 Gbps is 3.2 Tbps in each direction, or 6.4 Tbps full duplex.

Meanwhile, like other Mellanox adapters, the ConnectX-4 Lx 10/25/40/50 GbE adapter supports Mellanox Multi-Host technology, which enables multiple compute and storage hosts to connect to a single adapter. ConnectX-4 Lx also includes native hardware support for RDMA over Converged Ethernet (RoCE), stateless offload engines, and GPUDirect.

The end result, Shainer said, is 2.5 times greater performance in the same adapter footprint.
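
Native RoCE support means the adapter is driven through the standard RDMA verbs interface rather than a proprietary API. As a rough illustration, the sketch below uses libibverbs to enumerate RDMA devices and register a memory buffer for direct NIC access; the buffer size is arbitrary, and a complete RoCE application would go on to create queue pairs and exchange connection details with a peer. Registering memory up front is what lets the NIC move data without copying through the kernel, which is where RoCE gets its latency advantage.

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>  /* libibverbs: link with -libverbs */

    int main(void)
    {
        /* Enumerate RDMA-capable devices; a RoCE-enabled ConnectX adapter
         * shows up here just like an InfiniBand HCA. */
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }
        printf("found %d RDMA device(s); using %s\n",
               num_devices, ibv_get_device_name(devices[0]));

        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (!ctx) {
            fprintf(stderr, "failed to open device\n");
            return 1;
        }

        /* Allocate a protection domain and register a buffer so the NIC
         * can DMA into it directly, bypassing the kernel. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (!pd) {
            fprintf(stderr, "failed to allocate protection domain\n");
            return 1;
        }
        size_t len = 4096;  /* illustrative buffer size */
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "memory registration failed\n");
            return 1;
        }
        printf("registered %zu-byte region, rkey=0x%x\n", len, mr->rkey);

        /* Queue pairs, completion queues, and the actual RDMA write/read
         * would follow here in a complete application. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }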

Given that most IT organizations have not yet made the move to 10 GbE and 40 GbE, the chances that both of those technologies will soon be orphaned inside the data center are fairly high. In fact, as demand for 25 GbE and 50 GbE continues to expand, it may not be long before those technologies are less expensive than 10 GbE and 40 GbE.

Whatever the outcome, IT organizations that have taken their time when it comes to upgrading their data center environments may very well soon find themselves enjoying a significant second-mover advantage.

TAGS: Networks