Ethernet cables lead to a server at the Rittal stand at the 2013 CeBIT technology trade fair in Hanover, Germany. (Photo by Sean Gallup/Getty Images)

How Open Source is Changing Data Center Networking


This month, we focus on the open source data center. From innovation at every physical layer of the data center coming out of Facebook’s Open Compute Project to the revolution in the way developers treat IT infrastructure that’s being driven by application containers, open source is changing the data center throughout the entire stack. This March, we’ll zero in on some of those changes to get a better understanding of the pervasive open source data center.

The perfect data center masks the complexity of the hardware it houses from the requirements of the software it hosts. Compute, memory, and storage capacities are all presented to applications and services as contiguous pools. Provisioning these resources has become so automated that it’s approaching turnkey simplicity.
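To make that pooling idea concrete, here is a minimal, hypothetical sketch of what "provisioning from a contiguous pool" amounts to behind the scenes: a caller asks for capacity and a first-fit placement routine quietly decides which physical host actually serves it. The Host class, provision function, and the capacities shown are invented for illustration and are not drawn from any particular scheduler or orchestrator.

```
from dataclasses import dataclass

@dataclass
class Host:
    """One physical server contributing capacity to the shared pool."""
    name: str
    free_cores: int
    free_gb: int

def provision(pool, cores, gb):
    """First-fit placement: pick any host with enough spare capacity.

    The caller sees a single contiguous pool; which host actually
    serves the request is hidden behind this one call.
    """
    for host in pool:
        if host.free_cores >= cores and host.free_gb >= gb:
            host.free_cores -= cores
            host.free_gb -= gb
            return host.name
    raise RuntimeError("pool exhausted")

pool = [Host("rack1-node%d" % i, free_cores=32, free_gb=256) for i in range(4)]
print(provision(pool, cores=8, gb=64))    # e.g. "rack1-node0"
print(provision(pool, cores=28, gb=128))  # lands on a different host
```

Real schedulers layer constraints, priorities, and live migration on top of this, but the user-facing contract is the same: request capacity, receive it, and never worry about which box it came from.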

This is the part of the story where the open source movement stands up, takes a bow, and thanks its various supporters, agents, and parents for making everything possible for it. To say that open source efforts are responsible for the current state of data center networking would be like crediting earthquakes for the shapes of continents. Yes, they play an undeniably formative role. But the outcome is more often the result of all the elements — many of them random — they put into play.

One of these elements is a trend started by virtualization — the decoupling of software from the infrastructure that supports it. Certain open source factions may be taking credit for the trend of disaggregation now, but the next few years of history may record it as something more akin to a gathering storm.

“A very fundamental architectural principle that we believe in is, first of all, we want a platform in the future that allows hardware innovation and software innovation to grow independently,” said Equinix CTO Ihab Tarazi, in a keynote address to the Linux Foundation’s Open Networking Summit in Santa Clara, California, earlier this month.

Ihab Tarazi, CTO, Equinix (Photo: Scott Fulton III)

“I don’t think the industry has that today,” Tarazi continued. “Today, if you innovate for hardware, you’re still stuck with this specific software platform, and vice versa; and all the new open source software may not have… a data center, without customized adoption in specific areas by service providers. So what we want to create in our data center is the platform that allows the new explosion of hardware innovation that’s coming up everywhere, in optics and switching and top-of-rack — all of that, to have a home, to be able to connect to a platform independently of software. And also we want all the software innovation to happen independently of the hardware, and be able to be supported.”

From Many Routes to One CORD

It isn’t so much that open source, in and of itself, is enabling this perfect decoupling of hardware from software that Tarazi envisions. Rather, it’s the innovation in data center automation and workload orchestration happening within the open source space over just the past three years that is compelling the proprietors of the world’s largest data centers to rethink their entire philosophy about the architecture, dynamics, and purpose of networks. Telecommunications providers in particular now perceive their networks as data centers, and not just in the figurative sense.

Read more: Telco Central Offices Get Second Life as Cloud Data Centers

“We want to scale out [the network] in the same way that we scale out compute and storage,” explained AT&T SDN and NFV engineer Tom Anschutz, speaking at the event. It’s part of AT&T’s clearest signal to date that it’s impressed by the inroads made by Docker and the open source champions of containerization at orchestrating colossal enterprise workloads at scale. But it wants to orchestrate traffic in a similar way, or as similar as physics will allow, and it wants open source to solve that problem, too.

Last June, AT&T went all-in on this bet, joining with the Open Networking Lab (ON.Lab) and the Open Network Operating System (ONOS) Project to form what’s now called Central Office Re-imagined as a Datacenter (CORD, formerly “Re-architected”). Its mission is to make telco infrastructure available as a service in an analogous fashion to IaaS for cloud service providers.

Anschutz showed how a CORD architecture could, conceivably, enable traffic management with the same dexterity with which CSPs manage workloads. Network traffic enters and exits the fabric of these re-imagined data centers through standardized interfaces, he explained, and may take any number of paths within the fabric, whose logic is adjusted in real time to suit the needs of the application.
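To illustrate what "any number of paths, adjusted to suit the application" can mean in practice, here is a small, hypothetical Python sketch of weighted multipath selection: each flow is hashed onto one of several equal-cost spine paths, with per-application weights that a controller could change on the fly. The spine names, weights, and pick_path function are invented for illustration and are not drawn from CORD, ONOS, or AT&T's implementation; real fabrics make this decision in switch hardware or in the SDN controller.

```
import hashlib
import random

# Hypothetical spine switches offering equal-cost paths through the fabric.
SPINES = ["spine1", "spine2", "spine3", "spine4"]

# Per-application weights a controller might adjust in real time.
# These values are invented for the example.
APP_WEIGHTS = {
    "bulk-backup": [1, 1, 1, 1],   # spread evenly across the fabric
    "voice":       [4, 4, 1, 1],   # bias toward the two least-loaded spines
}

def pick_path(flow_5tuple, app):
    """Hash a flow onto one spine, biased by the application's weights."""
    digest = hashlib.sha256(repr(flow_5tuple).encode()).digest()
    rng = random.Random(digest)  # deterministic per flow, so packets stay in order
    return rng.choices(SPINES, weights=APP_WEIGHTS[app], k=1)[0]

flow = ("10.0.0.5", "10.0.1.9", 6, 49152, 443)  # src, dst, proto, sport, dport
print(pick_path(flow, "voice"))
```

The point of the sketch is the division of labor: the forwarding decision stays simple and fast, while the policy (the weights) lives in software that can be updated as application needs change.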

“Because there’s multiple paths, you can also have LAN links that exceed the individual thread of capacity within the fabric,” he said, “so you can create very high-speed interfaces with modest infrastructure. We can add intelligence to these types of switches and devices that mimic what came before, so control planes, management planes, and so forth can be run in virtual machines, with standard, Intel-type processors.”
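As a back-of-the-envelope illustration of Anschutz's point about exceeding the capacity of any single path, consider four equal-cost fabric links bonded into one logical interface; the figures below are invented for the example and are not AT&T's.

```
# Hypothetical: four equal-cost 25 Gbps paths through the fabric,
# presented to the application as a single logical interface.
paths = 4
per_path_gbps = 25

aggregate_gbps = paths * per_path_gbps
print(aggregate_gbps)  # 100 -> the logical link exceeds any one path's capacity
```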

