As the data center and the cloud continue to evolve, IT shops are constantly looking for ways to make their infrastructure operate more efficiently. We saw this happen at the server level with virtualization, and many other physical aspects of the modern data center are being abstracted as well: first compute, then networking, and now storage.
Software-defined technology is much more than just an IT buzzword. It’s a new way to control resources on a truly distributed plane. The ability to abstract powerful physical components into logical services and features can help a cloud platform scale and become more robust. It also allows the data center to control key resources more efficiently. One of those resources, which was beginning to sprawl physically quite a bit, was storage. Storage admins would have to buy bigger controllers, more disks and additional shelves just to keep up with modern data and cloud demands. Something had to give.
The rise of software-defined storage got a lot of people really excited. So much so that several players have already dived right into the SDS pool.
Other vendors have jumped into the mix as well. Nutanix, for example, was recently granted a patent for its software-defined storage solution. The patent provides clarity as to how software-defined storage solutions are optimally designed and implemented, detailing how a system of distributed nodes (servers) provides high-performance shared storage to virtual machines (VMs) by utilizing a “Controller VM” that runs on each node. Ultimately, this presents the entire data center with powerful, scalable technologies.
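To make the pattern concrete, here is a rough sketch of the per-node Controller VM idea described above. This is not Nutanix’s actual implementation or API; every class and method name is illustrative. The point is simply that each node runs its own controller, and a guest VM’s I/O is served from local disks when the data lives there, or fetched from a peer node otherwise.

```python
# Illustrative sketch only: each node runs a Controller VM; reads are
# served locally when possible, otherwise fetched from a peer node.

class ControllerVM:
    def __init__(self, node_name):
        self.node_name = node_name
        self.local_extents = set()   # data blocks held on this node's disks

    def read(self, extent_id, cluster):
        if extent_id in self.local_extents:
            return f"{extent_id} served locally on {self.node_name}"
        # Fall back to whichever peer node holds the extent.
        for peer in cluster:
            if extent_id in peer.local_extents:
                return f"{extent_id} fetched from {peer.node_name}"
        raise KeyError(extent_id)

cluster = [ControllerVM("node-a"), ControllerVM("node-b")]
cluster[0].local_extents.add("vm1-disk0")
cluster[1].local_extents.add("vm2-disk0")

print(cluster[0].read("vm1-disk0", cluster))  # served locally on node-a
print(cluster[0].read("vm2-disk0", cluster))  # fetched from node-b
```

The takeaway from the sketch: data locality keeps most I/O on the node where the VM runs, and the shared pool is still reachable when it isn’t.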
With all of that in mind, why should the traditional storage infrastructure worry? Well, there are some real reasons why SDS is already making an impact on today’s storage ecosystem.
- Software-defined storage is agnostic. SDS doesn’t really care what storage you have sitting on the back end. It could be DAS, FC, FCoE, or iSCSI. The virtual service or appliance simply asks you to point the storage repository to the SDS VM and it’ll do the rest. The storage can be a spinning-disk or flash array. Applications and data requests are then passed into the appropriate storage pool. The beauty is the intelligence, control and scale that can be achieved with SDS. Effectively, administrators can gain unified control over a heterogeneous storage environment.
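A minimal sketch of that backend-agnostic pooling, assuming a hypothetical SDS control plane (none of these class names correspond to any vendor’s product): heterogeneous backends register with one pool, and provisioning requests are routed without the caller knowing or caring what sits underneath.

```python
# Hypothetical sketch of an SDS layer pooling heterogeneous backends
# behind one interface. All names here are illustrative.

class Backend:
    """One physical storage target, regardless of protocol or media."""
    def __init__(self, name, protocol, media, capacity_gb):
        self.name = name
        self.protocol = protocol    # e.g. "DAS", "FC", "FCoE", "iSCSI"
        self.media = media          # "flash" or "spinning"
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

class StoragePool:
    """Unified control plane over whatever backends are registered."""
    def __init__(self):
        self.backends = []

    def register(self, backend):
        # The SDS layer doesn't care what the backend is; it just pools it.
        self.backends.append(backend)

    def provision(self, size_gb, prefer_media=None):
        # Route the request to the backend with the most free space,
        # optionally preferring flash for latency-sensitive workloads.
        candidates = [b for b in self.backends
                      if b.free_gb() >= size_gb
                      and (prefer_media is None or b.media == prefer_media)]
        if not candidates:
            raise RuntimeError("no backend can satisfy the request")
        target = max(candidates, key=lambda b: b.free_gb())
        target.used_gb += size_gb
        return target.name

pool = StoragePool()
pool.register(Backend("das-local", "DAS", "spinning", 2000))
pool.register(Backend("iscsi-array", "iSCSI", "flash", 500))
print(pool.provision(100, prefer_media="flash"))  # routes to the flash backend
```

The unified-control claim above boils down to exactly this: one `provision` call, any number of dissimilar backends behind it.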
- More logical control, fewer physical requirements. You’re basically offloading a lot of the controller functionality onto virtual appliances. This means that storage controllers can be a bit smaller, utilizing less disk. Intelligent storage controls, routing and optimization can all happen at the virtual level while still interacting with a number of different underlying storage platforms. This logical layer can span multiple data centers and aggregate various storage environments under one roof. From there, administrators are able to better control storage resources and optimize utilization without having to buy additional disks.
- Making storage smarter and more distributed. The ability to control all storage components from a virtual machine has quite a few benefits. One of these is the ability to create a directly extensible storage infrastructure. With a virtual storage controller layer utilizing SDS, you’re able to aggregate your storage environment and then distribute it from data center to cloud. Ultimately, SDS platforms won’t care which hypervisor you’re using or which physical controllers you have. They only need to be presented with the appropriate resources. From there, the VMs will be able to communicate with one another while still living on heterogeneous platforms.
- Say hello to commodity storage. This is a big one. A major conversation point in the industry is the boom in commodity hardware usage. This is happening at the server level and at other layers of the data center as well. Big shops like Google are already building a lot of their own hardware as well as services. With the concepts around software-defined storage, there isn’t anything stopping anyone from buying a few bare-metal servers and filling them up with their own spinning-disk and flash storage. From there, they can deploy an SDS solution which can manage all of these disks and resources. In fact, you can replicate the same methodology over several data centers and cloud locations. These distributed locations could all be running commodity storage and hardware while being controlled by a singular SDS layer. That’s pretty powerful stuff, and certainly potentially disruptive to the traditional storage methodology.
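The commodity-hardware scenario above can be sketched as well. This is a hedged illustration, not any product’s placement algorithm: bare-metal nodes each contribute local capacity, and a single SDS layer aggregates them and spreads replicas across data centers so one site failure can’t take out every copy.

```python
# Illustrative sketch: commodity nodes aggregated by one SDS layer that
# places replicas across failure domains (here, data centers).

class Node:
    def __init__(self, name, datacenter, capacity_gb):
        self.name = name
        self.datacenter = datacenter
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

class SDSCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def total_free_gb(self):
        return sum(n.free_gb() for n in self.nodes)

    def place_replicas(self, size_gb, copies=2):
        # Greedily pick the emptiest node in each distinct data center,
        # so no two copies share a failure domain.
        chosen, used_dcs = [], set()
        for node in sorted(self.nodes, key=lambda n: n.free_gb(), reverse=True):
            if node.datacenter in used_dcs or node.free_gb() < size_gb:
                continue
            node.used_gb += size_gb
            chosen.append(node.name)
            used_dcs.add(node.datacenter)
            if len(chosen) == copies:
                return chosen
        raise RuntimeError("not enough independent failure domains")

cluster = SDSCluster([
    Node("n1", "dc-east", 4000),
    Node("n2", "dc-east", 4000),
    Node("n3", "dc-west", 3000),
])
print(cluster.place_replicas(500))  # one replica per data center
```

Swapping a node in or out is just editing the list the cluster is built from, which is the whole appeal of commodity hardware under an SDS layer.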
Before anyone in the storage community gets nervous, we need to remember that some of the big players are already deploying their own version of a software-defined storage solution. NetApp delivers software-defined storage with clustered Data ONTAP, OnCommand management, and FlexArray software. Similarly, EMC’s ViPR technology probably comes the closest of the big storage vendors when it comes to SDS. Its model introduces a lightweight, software-only product that transforms existing storage into a simple, extensible, and open platform.
Despite these advancements, many IT shops are already re-evaluating their storage situation. Before they purchase a new controller or an extra set of shelves, administrators are looking at the software-defined option. Regardless of the chosen direction, one thing is for sure: storage is evolving, very rapidly, to meet the demands of the user and the cloud.
Data is becoming much more critical in an ever-connected world. Now, storage environments need to be smarter, easier to manage, and highly efficient. In some cases, this might mean the introduction of a software-defined storage platform which goes on to manage the rest of your storage infrastructure. Either way, examine your storage infrastructure carefully and make sure it aligns directly with your business vision.