Multiple trends are simultaneously altering our collective vision of what a data center is and what it is becoming, and those trends are not necessarily acting in concert. We thought software-defined networking would make it easier for data centers to stage workloads more efficiently on a Layer 3 that was more effectively decoupled from Layer 2. But then NFV came along, and suddenly telcos are introducing the rest of the world to a completely new way to envision the role of the data plane in SDN.
It’s not as easy to predict where data center technology is going when all the trends converge. At the OpenStack Summit in Austin, Texas, a few weeks ago, network functions virtualization stole the show. Attendance at sessions with even the slightest relationship to NFV was as much as two orders of magnitude higher than at those dealing with ordinary OpenStack administration. IT professionals are curious whether this new methodology for workload orchestration will have any impact, directly or indirectly, on data center architecture.
NFV came about as a result of communications providers’ shared need to automate the provisioning of customer services deployed on commodity servers. Virtualization was essentially a means to an end; NFV’s initial goal was automation. What makes NFV attractive to data centers outside of telcos is that high-level automation. What makes it risky is the degree to which NFV would reshape data centers to make that automation feasible.
Read more: Telco Central Offices Get Second Life as Cloud Data Centers
The Four-Step Program
It would be technically inaccurate to say that Tom Nadeau wrote the definitive book on SDN, because he actually wrote or co-wrote several (with co-author Ken Gray). He’s currently busy completing the book on NFV, due out in August from Morgan Kaufmann/Elsevier. At Cisco, Nadeau’s achievements included serving as principal architect for the MIBs for the MPLS protocol; at Juniper, he led the SDN development effort. Now with Brocade, he is the driving force behind VNF Manager, a commercial implementation of OpenStack’s Tacker component that stages virtual network functions on an NFV platform.
Related: Specialized Data Center Network Gear on Its Way Out
In an interview with Data Center Knowledge at the recent OpenStack Summit in Austin, he told us he disagreed with the opinion held by a number of OpenStack contributors that NFV will be a methodology confined mainly to telcos.
“If you step forward in the evolutionary progress of virtualization, Step 1 is what we’re doing with virtual machines,” said Nadeau. “OpenStack deploys a virtual machine, and there you have it. If you look at the cost model around that, it’s going to be difficult to make that cost-effective in the long run. Where you need to go is Step 2, which is containers; Step 3, microkernels; Step 4, Platform-as-a-Service.”
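Nadeau’s progression can be sketched as an ordered model, each step shrinking the unit of deployment. A minimal Python sketch (the one-line descriptions are our own paraphrase of the quote, not Nadeau’s wording):

```python
from dataclasses import dataclass


@dataclass
class Step:
    number: int
    name: str
    unit_of_deployment: str


# Nadeau's four-step evolution of virtualization, as described above.
EVOLUTION = [
    Step(1, "virtual machines", "full guest OS per workload"),
    Step(2, "containers", "shared kernel, isolated user space"),
    Step(3, "microkernels", "minimal bootstrap loader per server"),
    Step(4, "Platform-as-a-Service", "application code only"),
]

for step in EVOLUTION:
    print(f"Step {step.number}: {step.name} -- {step.unit_of_deployment}")
```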
Arguably, the whole point of OpenStack is to enable data centers to deploy resources using a service model inspired by Amazon. Even internally within organizations, users should be able to provision the services and applications that apply to them through a self-service portal. From that perspective, Nadeau’s Step 4 (which his forthcoming book will flesh out) was the goal all along.
But it’s Steps 2 and 3 that tell the full story here. For data centers to deliver services to their users the way modern telcos do, Nadeau plots a course that leads not only to Docker-style containerization but to minimal bootstrap loaders, capable of provisioning and managing servers at startup until a full OS kernel is available. The microkernel-style tool in this picture is Razor, which its proponents describe as a “provisioning engine.” In practice, Razor enables OpenStack deployments on bare-metal servers.
Coupled with a container environment, Razor changes the picture of the optimal physical server in a private cloud. It now looks a lot more like a product of the Open Compute Project, the effort by Facebook, Microsoft, and now Google to socialize a “plain vanilla” server specification for the data center.
Related: Equinix, AT&T, Verizon Join Facebook's Open Source Data Center Project
Nadeau said VNF Manager will be developed with this trajectory in mind: weaning data centers off reliance on first-generation virtual machines, such as the VMware variety, and toward a highly scalable NFV platform where heterogeneous workloads may co-exist. In such an environment, he said, it should not matter much to the admin where network functions are deployed.
“In fact, the more you cater to the enterprise environment, the more ubiquitous that’s going to be,” he continued. “I think enterprises, large and small, have a need for these things.”
This, Too, Shall PaaS
Regardless of their size, Nadeau believes, enterprises are driving a more cloud-native application development model. That drive is pressing all clouds, including the private variety championed by OpenStack, toward a single path to PaaS-style provisioning, one that works for everyone, including the largest customers.
“If you can be part of that heterogeneous PaaS model that has physical elements at the end, and maybe some virtualized functions and then applications running over a message bus in the middle — which is the enterprise application model — it all makes sense,” said Nadeau.
“Plain vanilla” OpenStack, consisting of just the open source components without any vendor support baked in, does enable containerization today, by way of a component called Magnum. In a containerized model, lightweight containers stored in an open source format rely upon the Linux kernel for virtualization rather than upon a hypervisor. While this represents the state of OpenStack today, Nadeau acknowledged that most industries that have adopted OpenStack remain at Step 1 of their journey to the PaaS model.
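The distinction between kernel-based containers and hypervisor-based VMs can be illustrated minimally: containerized workloads are ordinary processes sharing the host’s one kernel, whereas each VM boots its own guest kernel. A small Python sketch (no container runtime required; it simply shows that separate processes on one host all see the same kernel release):

```python
import platform
import subprocess
import sys

# The kernel release seen by this process.
host_kernel = platform.release()

# Spawn a separate process -- analogous to launching another container
# on the same host -- and ask which kernel it sees.
child_kernel = subprocess.check_output(
    [sys.executable, "-c", "import platform; print(platform.release())"],
    text=True,
).strip()

# Unlike VMs, which each boot a guest kernel under a hypervisor,
# both processes report the same, single host kernel.
print(host_kernel == child_kernel)  # True
```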
“They’re still using this model of what I call ‘aggregated virtual machines,’” he explained. Software-defined networks rely upon virtualized appliances such as virtual routers — which happen to be Brocade’s key products. But these vRouters, vSwitches, and vNICs are too often deployed within VMs, introducing tremendous overhead and making automated deployment difficult.
What’s more, such an environment is more difficult to scale up — by some accounts, more so than a physical environment, by virtue of all the automation instructions that have to be accounted for. With the more clearly defined NFV that Nadeau and his colleagues seek, virtual network appliances would lose the “appliance” motif — the notion that they’re effectively emulators for physical devices, floating blithely in a VM envelope.
“As we progress down that roadmap,” Nadeau predicted, “you’ll see disaggregated VMs, and more and more disaggregation, until we get to that point at the end where you have just the nuggets of the functionality that you would need from that router, and maybe from somewhere else... database functions, for example, or analysis functions, and combining them together to create the service.”
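The end state Nadeau describes, combining just the “nuggets” of functionality into a service, resembles function chaining. A minimal Python sketch of the composition idea (the function names and the dict-based packet model are hypothetical, purely illustrative):

```python
# Hypothetical disaggregated network functions: each is a small,
# composable transformation on a packet (modeled here as a dict).
def route(pkt):
    # Hypothetical routing nugget: attach a next hop.
    return {**pkt, "next_hop": "10.0.0.1"}


def firewall(pkt):
    # Hypothetical filtering nugget: drop (return None) anything
    # not destined for the allowed port.
    return pkt if pkt.get("dst_port") == 443 else None


def chain(*functions):
    """Compose individual function 'nuggets' into one service."""
    def service(pkt):
        for fn in functions:
            if pkt is None:
                break  # an earlier function dropped the packet
            pkt = fn(pkt)
        return pkt
    return service


# Combine routing and filtering into a single service, rather than
# deploying each as a monolithic virtual appliance in its own VM.
service = chain(route, firewall)
allowed = service({"src": "192.168.1.5", "dst_port": 443})
blocked = service({"src": "192.168.1.6", "dst_port": 80})
```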
Once we have more microkernels and microservices running in this NFV environment, we asked Nadeau, will we have inadvertently weaned ourselves from OpenStack as we have come to know it? Or will OpenStack take on a new mission?
“I think a lot of what exists today in OpenStack is fine and will be preserved,” he responded. “There are things that you need to wrap around OpenStack to augment it, to make it suitable for that microservices/PaaS model going forward. And a lot of guys are doing that today: Cloud Foundry, Open Baton — there’s a variety of these things happening. And there are people seeing that.
“What I see happening in this technology, there are service providers and enterprises, and they’re all coming this way,” Nadeau continued, locking his fingers together to illustrate his point. “I think a lot of what both sides of this coin want to do, eventually, is the same thing.”