Tech Titans Back OpenFlow Networking Standard

The world’s largest data center operators are joining forces to back an open networking standard, a move they say will accelerate innovation and make it easier to manage their far-flung networks.

Google, Microsoft, Facebook and Yahoo have teamed with telecom giants Verizon and Deutsche Telekom to form the Open Networking Foundation (ONF), which will advance the development of a new open source networking protocol called OpenFlow.

The group’s goal is to boost Software-Defined Networking (SDN), which separates the programming of routers and switches from the underlying hardware. This approach could simplify the management of global networks of data centers, making it easier to redirect traffic around hardware failures. It may also bring energy savings by making it easier to identify underused devices and shut them off until they are needed again.
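The energy-saving idea above — spotting underused devices from a centralized view — can be sketched in a few lines. This is a hypothetical illustration only: the statistics format, switch names, and threshold are made up for the example and do not come from any real controller API.

```python
# Hypothetical sketch: with SDN's centralized view of the network, an
# operator script could flag nearly idle switches as power-down candidates.
# The stats layout and the 5% threshold are assumptions for illustration.

def find_underused(port_utilization, threshold=0.05):
    """Return switch IDs whose busiest port is below `threshold` utilization."""
    return sorted(
        switch for switch, ports in port_utilization.items()
        if max(ports.values(), default=0.0) < threshold
    )

# Utilization (fraction of line rate) reported per switch port.
stats = {
    "sw1": {"p1": 0.62, "p2": 0.40},
    "sw2": {"p1": 0.01, "p2": 0.02},   # nearly idle -> shutdown candidate
    "sw3": {"p1": 0.00, "p2": 0.03},
}

print(find_underused(stats))  # ['sw2', 'sw3']
```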

From the Academy to the Data Center

The OpenFlow standard is the result of a six-year research collaboration between Stanford University and the University of California at Berkeley. It allows users to manage network equipment using software running on separate servers that communicate with the switches, rather than on the switch or router itself.

“OpenFlow really has the potential to be a very important shift in how people look at networks,” said Urs Hoelzle, Senior Vice President of Engineering at Google, who will be the president and chairman of the ONF. “It can help make complicated networks simpler. Software-Defined Networking will allow networks to evolve and improve more quickly than they can today. Over time, we expect SDN will help networks become both more secure and more reliable.”

OpenFlow support can be added to commercial Ethernet switches, routers and wireless access points and provides a standardized hook to allow users to customize features of their network – without requiring vendors to expose the inner workings of their devices.
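The "standardized hook" OpenFlow provides is essentially a programmable flow table: packet-header matches mapped to actions, with the table populated by external software. The toy data structure below models that idea; it is not the OpenFlow wire protocol or any vendor's API, and the field names are assumptions for illustration.

```python
# Illustrative model of an OpenFlow-style flow table: entries pair a
# header match with a list of actions, and external software programs them.
# This is a toy sketch, not the wire protocol or a real switch API.

from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict          # header fields to match, e.g. {"dst_ip": "10.0.0.2"}
    actions: list        # e.g. [("output", 3)]
    priority: int = 0

def lookup(flow_table, packet):
    """Return the actions of the highest-priority entry matching the packet."""
    candidates = [
        e for e in flow_table
        if all(packet.get(k) == v for k, v in e.match.items())
    ]
    if not candidates:
        return [("send_to_controller",)]   # table miss: ask the controller
    return max(candidates, key=lambda e: e.priority).actions

table = [
    FlowEntry(match={"dst_ip": "10.0.0.2"}, actions=[("output", 3)], priority=10),
    FlowEntry(match={}, actions=[("drop",)], priority=0),   # default: drop
]

print(lookup(table, {"dst_ip": "10.0.0.2", "src_ip": "10.0.0.1"}))
# [('output', 3)]
```

The table-miss behavior — punting unknown packets to the controller — is what lets operators customize forwarding without vendors exposing their devices' internals.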

Network Vendors Join Foundation

With the largest purchasers of network equipment now throwing their support behind SDN, networking vendors have greater motivation to add support for OpenFlow. That’s reflected in the ONF membership list. Look beyond the six founders, and the membership reads like a Who’s Who of the networking world: Cisco, Brocade, Juniper Networks, HP, Broadcom, Ciena, Riverbed Technology, Force10, Citrix, Dell, Ericsson, IBM, Marvell, NEC, Netgear, NTT and VMware.

“With broad industry support from technology leaders and networking experts, the ONF brings new opportunities and flexibility to the future of networking,” said Jonathan Heiliger, ONF Founding Board Member and Vice President of Technical Operations at Facebook. “We’re actively encouraging new members to join us in this endeavor.”

The new foundation’s first task will be to lead the ongoing development of the OpenFlow standard and encourage its adoption by freely licensing it to all member companies. ONF will then begin the process of defining global management interfaces.

“In the first year, ONF will focus most of its attention on the OpenFlow standard,” said Nick McKeown of Stanford University, who helped develop OpenFlow and will be a board member of ONF. “It will look like a small standards body, with user groups bringing people together around interoperability and compliance. There will be many discussions with vendors about how to fit their capabilities into the data center company’s needs. Beyond that will be a discussion leading to higher-level interfaces and abstractions.”

Seeking Simpler Networks

Software-Defined Networking has been emerging for some time, mostly at large data center companies like Google that have built custom solutions to centralize their network management. But the process left them yearning for a better way.

“Networks are very complicated,” said Hoelzle. “One of the reasons these networks are complicated is that they have a lot of protocols and they don’t always play well together. OpenFlow breaks that model. In the OpenFlow network, all the intelligence will be in a central point, so it’s easier to do complex things. Deciding what to do when an element fails is now trivial. It pushes millions of lines of code out to the network.”
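Hoelzle's point about failure handling can be made concrete: a controller that holds the whole topology just recomputes paths when a link dies, instead of every switch running its own distributed routing protocol. The sketch below uses a plain breadth-first search over an invented four-switch topology; all names are hypothetical.

```python
# Sketch of "all the intelligence in a central point": the controller keeps
# the full topology and recomputes a path when a link fails.
# The topology and switch names are made up for illustration.

from collections import deque

def shortest_path(links, src, dst):
    """BFS over an undirected link set; returns a node list, or None."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:       # walk parent pointers back to src
                path.append(node)
                node = seen[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen[nxt] = node
                queue.append(nxt)
    return None

links = {("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4")}
print(shortest_path(links, "s1", "s4"))                    # one of two equal-cost paths
print(shortest_path(links - {("s2", "s4")}, "s1", "s4"))   # ['s1', 's3', 's4']
```

When the s2–s4 link is removed, the controller's next lookup simply returns the surviving route — no distributed reconvergence protocol involved.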

Scott Shenker, a professor at Cal-Berkeley and an ONF board member, called OpenFlow a response to broadly felt pain. “Most of the companies we talked to were experiencing the same frustrations,” said Shenker. “We started talking to a lot of companies, and it became very obvious that everyone was thinking along similar lines. The biggest thing about the data center is the scale and the detailed need for control.”

Business Model is ‘Broken’

Amazon Web Services is not among the ONF members. But Amazon researcher James Hamilton has been a prominent voice in calling for a better approach to data center networking.

“The network equipment business model is broken,” said Hamilton. “We love the server business model where we have competition at the CPU level, more competition at the server level, and an open source solution for control software. In the networking world, it’s a vertically integrated stack and this slows innovation and artificially holds margins high. It’s a mainframe business model.”

A broad shift to open networking would disrupt that model. While the network equipment vendors are participating in the consortium, they have much at stake. Making it easier to replace a portion of network switches with commodity servers could mean fewer switches sold. But Stanford’s McKeown believes there are areas where the equipment vendors can benefit from OpenFlow, especially in the need for network operating systems.

“The network-wide OS could come from many sources,” said McKeown. “Big data center operators might develop their own. But the equipment vendors are really well placed to do this as well.”



About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Comments



  1. Anonymous

    They talk of a "network-wide OS" like it is reasonable or even desirable. The OS and hardware integration is what separates a decent network vendor from something like Vyatta/Quagga. Sure, Vyatta may work in a small datacenter but no big ISP is going to use it as a BRAS for example. For something that's pushing Tb/s (intelligently) you just can't rely on "general-purpose OS" slapped onto random hardware. You need millions of dollars in research for hardware/software integration and testing. I don't think this mix-and-match is ever going to fly anywhere but the simplest of operations in the networking world. A standard configuration API, on the other hand, could do wonders. Maybe we could call it the Simple Network Management Protocol? Wait a second ...

  2. Gopal Agrawal

    There is simply no reason for every router out there to be running copies of OSPF, RIP, BGP etc. Makes them heavyweight. The whole purpose of OpenFlow is to move the control plane out of the switch. Leave only the data forwarding mechanism within a "lightweight" switch and let external controllers make the decision on forwarding table entries. External controllers in different domains will have to coordinate with each other as well (perhaps via a hierarchical mechanism or P2P type mechanism). Further, these forwarding table entries are based on combinations of L2-L3-L4 fields and far more flexible than, say, a very expensive multi-layer switch. There are still a lot of new protocols and policy databases to be experimented with to make OpenFlow switches effective, but I'm sure that will evolve over time. SNMP can still be an integral part of the switches/routers. Most likely there will be translation going on somewhere between the SNMP and OF packet formats if the vendor is inclined to retain SNMP for managing various other issues not defined within the scope of OpenFlow. The key to OpenFlow is the ability to tell the switch .... "If packet has source IP = and protocol is ftp, then replicate this packet on ports 5 and 6, then change vlan# to 15 and send out to port 7". I don't think current commodity routers are capable of doing that.

  3. Travis Marlow

    Isn't this just like PBB-TE, where you had "dumb" network equipment with a centralized and separate control plane that existed within some set of service provisioning servers? The difference, it seems, is that this is an "open standard" and now you have to let programmers rule your network. Not convinced yet.

  4. Aparna

    Agree with Travis.....not convinced yet about the need for OpenFlow. What is the real "need" to let programmers control the network? For example, referring to the example mentioned by Gopal Agrawal .... "If packet has source IP = .... then change vlan# to 15 and send out to port 7" ... what is the real use case for switches to support such arbitrary commands? The only use case seems to be for experimentation, but is that a strong enough use case for all networking equipment to start supporting OpenFlow?
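The replicate-and-retag rule Gopal Agrawal describes in comment 2 can be sketched as a match-plus-actions pair. The source IP was elided in the comment, so the address below is a made-up placeholder from the RFC 5737 documentation range, and the VLAN/port handling is a toy model, not a real switch API.

```python
# Toy model of the commenter's rule: if src IP matches and the packet is FTP,
# replicate it to ports 5 and 6, then retag to VLAN 15 and send out port 7.
# The source IP is a hypothetical placeholder (the original elided it).

FTP_PORT = 21

def apply_rule(packet):
    """Return a list of (vlan, egress_port) pairs for the emitted copies."""
    out = []
    if packet["src_ip"] == "192.0.2.10" and packet["dst_port"] == FTP_PORT:
        out.append((packet["vlan"], 5))   # replicate with original VLAN tag
        out.append((packet["vlan"], 6))
        out.append((15, 7))               # then retag to VLAN 15, forward on 7
    return out

pkt = {"src_ip": "192.0.2.10", "dst_port": 21, "vlan": 4}
print(apply_rule(pkt))  # [(4, 5), (4, 6), (15, 7)]
```

Note the action ordering: the copies to ports 5 and 6 go out before the VLAN rewrite, so only the port-7 copy carries the new tag — which matches the sequential phrasing of the comment.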