Kent Christensen leads Datalink’s virtualization practice, directing the adoption of virtualization hardware and software technologies and services.
If you’re an IT person involved in data center operations, you’ve heard a few things about the software-defined data center (SDDC). This is one of the latest acronyms touted by a variety of industry vendors and open source organizations. In fact, just about every major vendor involved in servers, cloud, networking or storage (or the software to manage any of these areas) has its own vision for SDDC. Big SDDC proponents include VMware, Cisco and OpenStack, among others.
What SDDC is all about
Currently, the SDDC acronym is easier to spell out than it is to define. It is a concept that is as much about IT architectural theory and philosophy as it is about the high-level technical platform or ‘dashboard’ you may ultimately deploy to automate, monitor and manage your emerging service-oriented (ITaaS) or cloud architecture.
The overall SDDC vision goes something like this: Someday, you can have software (via a software-based control plane or overarching management console) automate the running of just about everything in the data center—compute, network, storage and so on. Said software will also logically abstract (or virtualize) features of the underlying hardware so that you might, conceivably, use various commodity hardware components. Your software-based controls for all these moving parts of the data center will move up the stack to reside in a universal software platform.
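To make that abstraction idea concrete, here is a deliberately toy sketch. Nothing in it corresponds to any real vendor API; the class names (VendorAArray, VendorBArray, ControlPlane) and their methods are entirely hypothetical. The point is only to illustrate the SDDC premise: a software control plane exposes one uniform provisioning call while hiding which commodity hardware actually serves the request.

```python
# Hypothetical sketch of the SDDC abstraction idea -- not any real product's API.
# Two "commodity" storage backends with different native interfaces sit behind
# a single software control plane that callers talk to instead.

class VendorAArray:
    def carve_lun(self, gb):             # vendor A's imagined native call
        return f"vendorA-lun-{gb}gb"

class VendorBArray:
    def allocate_volume(self, size_gb):  # vendor B's imagined native call
        return f"vendorB-vol-{size_gb}gb"

class ControlPlane:
    """Software layer that hides which hardware fulfills the request."""
    def __init__(self, backends):
        self.backends = backends

    def provision_storage(self, gb):
        backend = self.backends[0]  # trivial placement policy for illustration
        if isinstance(backend, VendorAArray):
            return backend.carve_lun(gb)
        return backend.allocate_volume(gb)

plane = ControlPlane([VendorAArray(), VendorBArray()])
print(plane.provision_storage(100))  # caller never sees the vendor-specific call
```

Swap the backend list and the caller's code does not change; that substitutability is exactly the "commodity hardware under a universal software platform" promise the SDDC vision rests on.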
For purveyors of SDDC, this vision is the ultimate Holy Grail for how a next-generation data center should operate.
Sounds good so far. So, what’s the problem?
The challenge with SDDC is that most of the vision is still just that: a vision, with few clear, real-world use cases. Early adopters tend to be hyper-scale cloud providers like Google, Amazon and Microsoft, who use their own homegrown SDDC constructs. Other highly competitive companies or cloud service providers may also see the need to gain extra competitive edge with a faster move to embrace SDDC. But these organizations remain a relatively rare breed.
Some of what I’ve described about SDDC might sound a lot like what your IT organization is already doing. Many are doing advanced server virtualization with advanced management of dynamic workloads. On the storage and network side, many are also doing something similar, with software policy-based functionality that helps automate and manage many virtual components of traditional hardware.
Many have progressed from pockets of virtualization to the wider development of virtual data centers (VDC) and the use of converged infrastructures (CI), fabric architectures or unified “pods.” These unite many layers of the compute/network/storage stack together, often with deep integration by vendors that strives to automate many previously manual operations associated with resource configuration, provisioning and monitoring.
Are all of these SDDC? They are part of it. The missing piece remains the higher layer of automation, orchestration and management that ties it all together. At this point, this piece is more vision than reality for most of today’s data centers.
Move forward or wait?
Some vendors would argue they have SDDC’s missing pieces right now and can give you the exact blueprint of steps needed to bring it to your data center. Even if you aren’t ready yet to jump feet-first into the land of SDDCs, you’ve probably already begun the journey.
We tend to see SDDC as part of an incremental journey, just like IT’s journey to VDCs, private cloud, and ultimately, as brokers of an ITaaS-based hybrid cloud world that offers an efficient mix of internal and external cloud services. In this evolution, we see organizations developing their hybrid strategy as part of a larger SDDC push toward data center automation and orchestration.
Knowing SDDC is part of the journey, what advice can we offer?
- Study the visions of key, early SDDC proponents like VMware, Cisco and OpenStack. These have high-level ideals in common, but their execution surrounding SDDC is very different. In the case of OpenStack, you are dealing with open source software that may offer less vendor lock-in but may still be somewhat immature for enterprise deployment. On the VMware and Cisco side, study how much vendor lock-in might be involved if you go with one vision or the other and want to experiment or switch later to another SDDC management layer.
- There are a lot of ponies in this race. Pick yours carefully. You may find you’ve already bought into the current vision of your main hypervisor vendor or your main networking or storage vendor. Or, you might be an early fan of open source methods. You may even find you like a vendor’s emerging roadmap that gets you from where you are to that vendor’s vision of SDDC.
- Before you make large investments in the higher-level abstraction of SDDC, consider a smaller pilot or trial period. Remember: whoever controls the abstraction layer gains inherent control of your data center. And as your data center transforms into more of a hybrid cloud architecture, whoever controls that abstraction layer will also have more inherent control over your cloud operations. This harks back to some of the early points I made in 2012 when I urged readers to own their own cloud.
By all means, move forward toward the utopian ideal of SDDC. But, as the construction signs say: Caution. Proceed with Care.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.