Beating the Storage Odds in the Age of Big Data


Ambuj Goyal is general manager, IBM Systems, Storage and Networking.


Technology evolves at different rates and for different reasons. Unlike other areas of computing, storage for distributed systems has evolved through proliferation rather than through the more traditional drivers of price, performance, and technical advancement. In other words, when organizations have bought a particular storage technology, they’ve grown with it whether they planned to or not.

That’s largely because storage vendors have spent years creating products based on a variety of individual architectures and protocols. Once an organization commits to one of those architectures, it’s difficult to even consider adding or transitioning to another, even if the alternative offers cost, performance, or management benefits. Being painted into this proverbial corner, of course, leads directly to storage sprawl, underutilized storage systems, and complex management – all of which reduce productivity and add cost.

Storage Controllers at the Center

One area of repeated isolation has been the storage controller, or the brains of the storage system. For various reasons, the industry has had a propensity to create separate storage controllers for different protocols, such as block, file, or object. Even though the media on which these controllers store information is the same, each storage system supports only the protocol it was designed to serve. The software (or so-called microcode) simply interprets the protocol and stores the information.
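The controller’s role described above can be pictured with a small sketch: three protocol front ends (block, file, and object) that differ only in how they interpret requests, all resolving to the same underlying store-and-retrieve layer. This is a toy illustration of the concept, not IBM microcode or any real product API; every class and method name here is invented for the example.

```python
class Media:
    """The shared storage medium: a flat mapping of keys to bytes.

    In a real system this would be the disk or flash media; here it is
    just a dictionary, to show that store/retrieve is protocol-agnostic.
    """
    def __init__(self):
        self._data = {}

    def store(self, key, value):
        self._data[key] = value

    def retrieve(self, key):
        return self._data.get(key)


class BlockController:
    """Interprets block-protocol requests: fixed-size blocks by address."""
    def __init__(self, media, block_size=512):
        self.media, self.block_size = media, block_size

    def write_block(self, lba, data):
        assert len(data) == self.block_size, "blocks are fixed-size"
        self.media.store(("block", lba), data)

    def read_block(self, lba):
        return self.media.retrieve(("block", lba))


class FileController:
    """Interprets file-protocol requests: data addressed by path."""
    def __init__(self, media):
        self.media = media

    def write_file(self, path, data):
        self.media.store(("file", path), data)

    def read_file(self, path):
        return self.media.retrieve(("file", path))


class ObjectController:
    """Interprets object-protocol requests: data addressed by bucket and key."""
    def __init__(self, media):
        self.media = media

    def put_object(self, bucket, key, data):
        self.media.store(("object", bucket, key), data)

    def get_object(self, bucket, key):
        return self.media.retrieve(("object", bucket, key))
```

The point of the sketch is that nothing in `Media` knows which protocol is calling it – which is exactly why shipping a separate, siloed box per protocol is a product decision rather than a technical necessity.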

So the question becomes, why has the industry produced so many different controllers? One reason is that technology has a tendency to be “fast out of the gate.” The industry is rife with examples of technologies that have raced to production and market only to be reined in at a later point with standards or consortium-led initiatives that enable more competition, ease of use, or ease of management. And to be honest, it’s often in the vendor’s best interest to push the concept of “engineered” or “optimized” boxes for each protocol.

The Revolution is Here

The storage situation is not dissimilar to what the industry experienced with the original x86 ecosystem, where suppliers and vendors succeeded by creating a certain technology proliferation in the enterprise. Today, however, that ecosystem has been revolutionized: workload consolidation technologies, implemented in private and public clouds, have brought higher utilization and consistent management. And note that in the mainframe and Unix worlds, workload consolidation and the improved utilization it brings have been the norm for more than a decade.

The storage environment is ready for the same kind of revolution. It’s ready for solutions that abandon the proliferation strategy of days gone by and help organizations avoid lock-in through wide protocol support, and encourage scalability through openness. That’s what we’re working on at IBM. Our Storwize platform of high-capacity systems, for example, tackles these issues head on.

Do Your Research

But don’t take my word for it. Ask yourself: what if there were a way to abstract the protocols from the basic store and retrieve functions? What if you could use old storage and new storage simultaneously, maximizing the return on capital investments? What if an application provider could automatically manage the life cycle of storage without engaging a storage administrator?

That’s where the storage industry should be headed.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
