Just imagine: commodity servers and off-the-shelf drives as far as the eye can see, all managed by virtual servers and logical controllers. We're not quite there yet, but software-defined technologies are certainly pushing the modern data center in this direction.
Over the course of a few years, data center giants like Facebook, Google, and Amazon began developing their own networking, servers, and even storage platforms. Why? Because it simply made sense for them.
- They had the manpower to support hardware systems.
- They had developer support internally to create software and code.
- They were able to create management run-books to control parts, assets, and the overall data center.
- They had a well-developed cloud management layer capable of dynamic scaling.
Now let's take a look at the modern organization. We've seen how server virtualization has impacted enterprises all over the world. With these advancements, other pieces of the data center were bound to catch up with the virtualization revolution as well. We saw this with software-defined networking, and now we're seeing it with storage.
Software-defined storage is a lot more than a buzzword. It's a way for organizations to manage heterogeneous storage environments under one logical layer. When converged networking, storage, and compute intersect with software-defined technologies, you have the building blocks for a commodity data center.
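To make the "one logical layer" idea concrete, here is a minimal sketch of a pool that presents heterogeneous backends (SAN, NAS, direct-attached SSD) behind a single provisioning interface. All class and method names here are hypothetical and for illustration only; this is not any vendor's actual API.

```python
# Illustrative sketch only: a logical layer pooling heterogeneous
# storage backends behind one interface. Names are hypothetical.

class Backend:
    def __init__(self, name, kind, capacity_gb):
        self.name = name              # e.g. "san-01"
        self.kind = kind              # e.g. "SAN", "NAS", "DAS-SSD"
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class LogicalPool:
    """Presents many backends as one pool of capacity."""

    def __init__(self, backends):
        self.backends = backends

    def total_free_gb(self):
        return sum(b.free_gb() for b in self.backends)

    def provision(self, size_gb):
        # Simple placement policy: pick the backend with the most free space.
        target = max(self.backends, key=Backend.free_gb)
        if target.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        return target.name


pool = LogicalPool([
    Backend("san-01", "SAN", 1000),
    Backend("nas-01", "NAS", 500),
    Backend("ssd-01", "DAS-SSD", 200),
])
print(pool.total_free_gb())   # 1700
print(pool.provision(300))    # san-01 (it has the most free space)
```

The consumer of the pool never sees which backend actually holds the data; that decision is a policy inside the logical layer, which is the essence of the software-defined approach.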
But this conversation isn't entirely about software-defined storage. Today, we look at three technologies that are directly enabling commodity storage and data center platforms to become realities. They involve your hypervisor, the software layer, and new kinds of physical commodity platforms.
Let's look at these three technologies (although there are more doing SDS) and see what they really give you.
- Your next-generation hypervisor. One of the most powerful server virtualization technologies now acts as a converged storage hub for your data needs. For example, vSAN gives you a platform capable of abstracting and pooling server-side flash and disk to deliver high performance, resiliency, and persistence at the logical storage tier. If you're an organization running multiple data centers on top of the vSphere hypervisor, vSAN should be a consideration. Through this layer, you're essentially creating a hardware-independent architecture where everything is controlled from the hypervisor. VM-centric policies can be applied, you can scale up and out without disruption, and you can even build redundancy into hardware that can be commodity. The amazing piece here is that vSphere can also help you manage SDN, allowing for true software-defined data center capabilities. Still, from a storage perspective, you abstract the hardware and let your VMware hypervisor take over. This type of virtual control can scale private and even public cloud environments.
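The core idea behind VM-centric policies is that resiliency is declared per VM rather than per LUN or array: a policy such as "tolerate one host failure" drives how many hosts hold replicas of that VM's data. Here is a hedged sketch of that logic; the function and host names are illustrative, not the actual vSAN API.

```python
# Illustrative sketch of a VM-centric storage policy: the policy
# attached to a VM (not to a LUN) decides replica placement.
# Names are hypothetical, not the vSAN API.

def place_replicas(hosts, failures_to_tolerate):
    """Return the hosts that will hold copies of a VM's data.

    Tolerating N host failures requires N + 1 replicas.
    """
    replicas_needed = failures_to_tolerate + 1
    if replicas_needed > len(hosts):
        raise ValueError("not enough hosts to satisfy the policy")
    return hosts[:replicas_needed]


cluster = ["esx-01", "esx-02", "esx-03", "esx-04"]
print(place_replicas(cluster, failures_to_tolerate=1))
# ['esx-01', 'esx-02']
```

Because the policy travels with the VM, changing its resiliency means editing one setting rather than re-architecting the underlying array.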
- The software control layer. Your workloads simply require resources to run. Your challenge revolves around effectively presenting those resources to your applications and users. The future of commodity storage won't really care what kind of hardware is running underneath; it won't care what kind of hypervisor you're running, either. The concepts behind USX are similar to vSAN: USX runs as a virtual appliance, consolidates storage resources, and allows for unified management. One of the biggest differences is that USX is being designed around hypervisor agnosticism. Currently it works on VMware, but future releases aim to take the hypervisor question completely out of the equation. In this scenario, imagine running two data centers on different server virtualization platforms: one on XenServer, the other on VMware, both with USX running on top. The USX virtual machine will allow data, policies, and entire workloads to pass between them regardless of the underlying hypervisor. On top of all of this, you have a hybrid cloud model with software-defined storage managing, pooling, accelerating, and optimizing existing SAN, NAS, RAM, and any type of DAS (SSD, flash, SAS). Software-defined technologies aren't limited to private data center platforms. USX, for example, integrates with OpenStack and even VMware's vCAC automation technologies. Administrators can have heterogeneous storage platforms in multiple locations, running on different hypervisors, all managed by one logical storage solution.
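Hypervisor agnosticism is, structurally, an adapter pattern: the control layer speaks to each platform through a thin adapter, so a workload and its storage policy can move while only the adapter changes. The sketch below illustrates the shape of that design; the class names are hypothetical and are not how USX is actually implemented.

```python
# Illustrative sketch of hypervisor agnosticism via an adapter pattern.
# Class names and methods are hypothetical, not any product's API.

class VMwareAdapter:
    platform = "VMware vSphere"

    def attach(self, vm, volume):
        return f"{self.platform}: attached {volume} to {vm}"


class XenServerAdapter:
    platform = "Citrix XenServer"

    def attach(self, vm, volume):
        return f"{self.platform}: attached {volume} to {vm}"


class StorageControlLayer:
    """One control plane; the hypervisor underneath is interchangeable."""

    def __init__(self, adapter):
        self.adapter = adapter

    def migrate(self, vm, volume, new_adapter):
        # Data and policy move with the workload; only the adapter changes.
        self.adapter = new_adapter
        return self.adapter.attach(vm, volume)


layer = StorageControlLayer(VMwareAdapter())
print(layer.adapter.attach("web-01", "vol-7"))
print(layer.migrate("web-01", "vol-7", XenServerAdapter()))
```

The point of the pattern is that nothing above the control layer changes when the hypervisor does, which is exactly the portability the article describes between XenServer and VMware data centers.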
- The commodity data center layer. Let me give you a quick example. Cumulus Networks has its own Linux distribution, Cumulus Linux, designed to run on industry-standard networking hardware. Basically, it's a software-only solution that provides the ultimate flexibility for modern data center networking designs and operations with a standard operating system: Linux. Furthermore, Cumulus can run on "bare-metal" network hardware from vendors like Quanta, Accton, and Agema. Here's the big part: customers can purchase hardware at costs far lower than the incumbents'. And because Cumulus Linux uses industry-standard switching and routing protocols, hardware running it can sit right alongside existing systems. Hardware vendors like Quanta are now making a direct impact on the commodity conversation. Why? They can provide vanity-free servers with storage options capable of supporting a much more commoditized data center architecture.
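Because Cumulus Linux is a standard Linux distribution, switch ports are configured like ordinary Linux network interfaces rather than through a proprietary CLI. A minimal, purely illustrative interfaces fragment might look like the following (the `swpN` names follow Cumulus's switch-port convention, but the addresses and topology here are made up for this example):

```text
# Illustrative fragment only; addresses and topology are invented.
auto swp1
iface swp1
    address 10.0.1.1/30

auto swp2
iface swp2
    address 10.0.2.1/30
```

This is the practical meaning of "industry-standard protocols" in the paragraph above: the same tooling and configuration idioms an administrator already uses on Linux servers apply to the switch.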
There's a reason we looked at these three technologies. One offers direct integration with an existing, powerful hypervisor model; another abstracts the hypervisor and acts as its own VM; and the physical data center piece allows all of this commoditization to actually happen. The point is that these virtual machines and services simply don't care about the brand feeding them storage resources. To them, a flash array is a flash array. These logical controllers care about efficiency, resiliency, and your ability to manage your data center more easily.
Before everyone jumps into the commodity data center argument, there are a couple of things to be aware of. Many organizations are simply not ready to take on the hardware management project. And there is the very real fact that software-defined storage is barely entering its 2.0 days. But the thought process is nevertheless interesting. In working on a number of projects across various industries, we're seeing many more organizations introduce "white-box" hardware and have it managed by a virtual machine. This is now happening at the storage layer in IT shops much smaller than Amazon's.
Over the next few years, many more organizations are going to offset part of their data centers with commodity gear managed at the virtual layer. It simply makes sense: everything from management to workload migration to optimization is controlled from one plane. The big question revolves around adoption and its pace. How fast will your organization adopt a software-defined technology? Or maybe it already has.