
As Software-Defined Storage Gains, is Physical Storage in Trouble?


As the evolution of the data center and cloud continues, IT shops are continuously looking for ways to make their infrastructure operate more efficiently. We saw this happen at the compute layer with server virtualization, and other physical aspects of the modern data center have begun to be abstracted as well: first networking, and now storage.

Software-defined technology is much more than an IT buzzword. It’s a new way to control resources on a truly distributed plane. The ability to abstract powerful physical components into logical services and features can help a cloud platform scale and become more robust. It also allows the data center to control key resources more efficiently. One of those resources, storage, had begun to sprawl physically quite a bit: storage admins had to buy bigger controllers, more disks and additional shelves just to keep up with modern data and cloud demands. Something had to give.

The rise of software-defined storage got a lot of people really excited. So much so that several players have already dived right into the SDS pool.

Other vendors have jumped into the mix as well. Nutanix, for example, was recently granted a patent for its software-defined storage solution. The patent provides clarity as to how software-defined storage solutions are optimally designed and implemented, detailing how a system of distributed nodes (servers) provides high-performance shared storage to virtual machines (VMs) by utilizing a “Controller VM” that runs on each node. Ultimately, this presents the entire data center with powerful, scalable technologies.
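To make the per-node controller idea concrete, here is a deliberately simplified sketch, not Nutanix’s actual implementation or API: each node runs a controller that serves its local disks, the controllers replicate data to a peer, and together they present one shared pool that any node’s VMs can read from. All class and method names here are hypothetical.

```python
# Toy sketch of the per-node "Controller VM" pattern (illustrative only):
# a controller on each node serves local storage, and the controllers
# federate into a single shared namespace for guest VMs.

class ControllerVM:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.local_store: dict[str, bytes] = {}  # stands in for local disks

class Cluster:
    """Controllers cooperate so every node sees the same shared pool."""
    def __init__(self, nodes):
        self.controllers = {n: ControllerVM(n) for n in nodes}

    def write(self, node_id: str, key: str, data: bytes):
        # Guest I/O lands on the local controller first (data locality),
        # then is copied to one peer controller for resilience.
        local = self.controllers[node_id]
        local.local_store[key] = data
        peer = next(c for c in self.controllers.values() if c is not local)
        peer.local_store[key] = data

    def read(self, node_id: str, key: str) -> bytes:
        local = self.controllers[node_id]
        if key in local.local_store:          # served locally when possible
            return local.local_store[key]
        for c in self.controllers.values():   # otherwise fetch from a peer
            if key in c.local_store:
                return c.local_store[key]
        raise KeyError(key)

cluster = Cluster(["node-a", "node-b", "node-c"])
cluster.write("node-a", "vm-disk-1", b"block data")
print(cluster.read("node-c", "vm-disk-1"))  # any node can read the data
```

The real design adds replication factors, metadata services, and failure handling, but the core point survives even in this toy: storage intelligence lives in software on every node, not in a central hardware controller.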

With all of that in mind, why should the traditional storage infrastructure worry? Well, there are some real reasons as to why SDS is already making an impact on today’s storage ecosystem.

  • Software-defined storage is agnostic. SDS doesn’t really care what storage you have sitting on the back end. It could be DAS, FC, FCoE, or iSCSI. The virtual service or appliance simply asks you to point the storage repository at the SDS VM, and it’ll do the rest. The storage can be spinning disk or a flash array. Applications and data requests are then passed to the appropriate storage pool. The beauty is the intelligence, control and scale that can be achieved with SDS. Effectively, administrators gain unified control over a heterogeneous storage environment.
  • More logical control, fewer physical requirements. You’re basically offloading much of the controller functionality onto virtual appliances. This means that storage controllers can be a bit smaller, using fewer disks. Intelligent storage controls, routing and optimization can all happen at the virtual level while still interacting with a number of different underlying storage platforms. This logical layer can span multiple data centers and aggregate various storage environments under one roof. From there, administrators are able to better control storage resources and optimize utilization without having to buy additional disks.
  • Making storage smarter and more distributed. The ability to control all storage components from a virtual machine has quite a few benefits. One of these is the ability to create a directly extensible storage infrastructure. With a virtual storage controller layer utilizing SDS, you’re able to aggregate your storage environment and then distribute it from data center to cloud. Ultimately, SDS platforms won’t care which hypervisor you’re using or which physical controllers you have. They only need to be presented with the appropriate resources. From there, the VMs will be able to communicate with one another while still living on heterogeneous platforms.
  • Say hello to commodity storage. This is a big one. A major conversation point in the industry is the boom in commodity hardware usage. This is happening at the server level and at other layers of the data center as well. Big shops like Google are already building much of their own hardware as well as services. With the concepts behind software-defined storage, there isn’t anything stopping anyone from buying a few bare-metal servers and filling them with their own spinning disks and flash storage. From there, they can deploy an SDS solution to manage all of those disks and resources. In fact, you can replicate the same methodology across several data centers and cloud points. These distributed locations could all be running commodity storage and hardware while being controlled by a single SDS layer. That’s pretty powerful stuff, and certainly potentially disruptive to the traditional storage methodology.
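The points above can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor’s product: an SDS control layer that registers heterogeneous backends (DAS, FC, iSCSI, flash or spinning) and routes placement requests to the least-utilized backend of the requested tier. All names (`Backend`, `SDSController`, `place`) are invented for the example.

```python
# Illustrative sketch of an SDS control layer pooling mixed backends.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str        # e.g. "das-0", "iscsi-0"
    protocol: str    # "DAS", "FC", "FCoE", "iSCSI" -- SDS doesn't care
    tier: str        # "flash" or "spinning"
    capacity_gb: int
    used_gb: int = 0

class SDSController:
    """Logical controller: one unified view over heterogeneous storage."""
    def __init__(self):
        self.backends: list[Backend] = []

    def register(self, backend: Backend):
        # Point any storage repository at the controller; it does the rest.
        self.backends.append(backend)

    def place(self, size_gb: int, tier: str = "spinning") -> Backend:
        # Route the request to the least-utilized backend of the right tier.
        candidates = [b for b in self.backends
                      if b.tier == tier and b.capacity_gb - b.used_gb >= size_gb]
        if not candidates:
            raise RuntimeError("no capacity in tier " + tier)
        target = min(candidates, key=lambda b: b.used_gb / b.capacity_gb)
        target.used_gb += size_gb
        return target

ctrl = SDSController()
ctrl.register(Backend("das-0", "DAS", "spinning", 1000))
ctrl.register(Backend("iscsi-0", "iSCSI", "spinning", 1000, used_gb=500))
ctrl.register(Backend("flash-0", "FC", "flash", 200))

print(ctrl.place(100).name)              # routes to least-utilized spinning backend
print(ctrl.place(50, tier="flash").name)
```

Real SDS products layer replication, caching and QoS on top of this, but the routing decision shown here, made in software against whatever hardware happens to be registered, is what lets commodity disks behave like a managed array.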

Before anyone in the storage community gets nervous, we should remember that some of the big players are already shipping their own versions of a software-defined storage solution. NetApp delivers software-defined storage with clustered Data ONTAP, OnCommand management, and FlexArray software. Similarly, EMC’s ViPR technology probably comes the closest to SDS among the big storage vendors. Its model introduces a lightweight, software-only product that transforms existing storage into a simple, extensible, and open platform.

Despite these advancements, many IT shops are already re-evaluating their storage situation. Before they purchase a new controller or an extra set of shelves, administrators are looking at the software-defined option. Regardless of the chosen direction, one thing is for sure: storage is evolving, very rapidly, to meet the demands of the user and the cloud.

Data is becoming much more critical in an ever-connected world. Storage environments now need to be smarter, easier to manage, and highly efficient. In some cases, this might mean introducing a software-defined storage platform that manages the rest of your storage environment. Either way, examine your storage infrastructure carefully and make sure it aligns directly with your business vision.

About the Author

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. His architecture work includes virtualization and cloud deployments as well as business network design and implementation. Currently, Bill works as the National Director of Strategy and Innovation at MTM Technologies, a Stamford, CT based consulting firm.


3 Comments

  1. Definitely something to consider. As SDN makes its way further into the data center, it will undoubtedly leave its mark on all aspects of the environment. SDN is really a story of best leveraging investments, which brings about new efficiencies and capitalizes on the benefits of, for instance, the latest Intel processor capabilities. However, it also shines a bright light on areas of need. For example, as firms embrace SDN, it will become strikingly clear if the network is serving as a bottleneck that requires an upgrade to 10GbE.

  2. Patrick

    I am missing the most proven and truly software-defined vendor, Nexenta, in your story. It is one of a select company of vendors that can really use commodity hardware and has no horse in the hardware race.

  3. Bill Kleyman Post author

    @Patrick - I agree with you there. Nexenta does make a solid platform based on ZFS! It'll be interesting to see how competitive the SDS market gets out there.