Commodity Data Center Storage: Building Your Own

Supermicro’s commodity servers (Photo: Supermicro)

New storage solutions allow any organization to deploy custom commodity storage architectures. But how do you go about it?

This is becoming a genuinely interesting topic. I recently had a conversation with an administrator friend who asked me whether it's a good idea to buy a commodity server chassis and fill it with flash drives to create their own data center storage system. They argued that they could use a hypervisor or third-party software to manage it, provide high availability, and even extend into the cloud.

Does it make sense for everyone? And what are the actual ingredients for building your own commodity system?

Before we dive in, there are a couple of points to understand. Data center storage has become a very hot topic for enterprises of every size. They're all looking for ways to better control their arrays, disks, and ever-expanding cloud environments. There will certainly still be many use cases for traditional storage systems, but new virtual layers are allowing for even greater cloud control and data abstraction.

With that in mind, let’s look at how you can deploy your own commodity storage platform.

The Hardware

Depending on your use case, you'll have a few different configuration considerations. In some cases you're designing for pure IOPS; there, you'll want an all-SSD array. In other cases, where you want more capacity along with some performance, you'll probably want a mix of SSD and HDD. The point is that you can populate an entire set of servers with the kind of disk you require so that it can later become your high-performance repository. Consider the following (a simple tier-selection sketch follows the list):

  1. All-SSD arrays are ideal for non-write-intensive applications that need lots of IOPS but little capacity.
  2. Hybrid or very fast HDD systems are ideal for most high-performance applications, including virtualization, transaction processing, business intelligence, data warehousing, and SLA-based cloud applications.
  3. Low-cost SATA HDD systems are ideal for backup, write-once/read-infrequently applications, and archive-based systems.
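
To make these tradeoffs concrete, here is a minimal Python sketch of how you might encode the three tiers as a selection rule. The IOPS and capacity thresholds, and the tier names themselves, are illustrative assumptions, not vendor guidance:

    # A rough tier-selection helper. Thresholds and tier names are
    # illustrative assumptions only; tune them to your own hardware.
    def pick_tier(iops_needed: int, capacity_tb: float, write_heavy: bool) -> str:
        """Map a rough workload profile to one of the three tiers above."""
        if iops_needed > 50_000 and capacity_tb < 10 and not write_heavy:
            return "all-ssd"        # 1. pure IOPS, small capacity
        if iops_needed > 5_000:
            return "hybrid"         # 2. mixed performance and capacity
        return "sata-archive"       # 3. backup and archive workloads

    print(pick_tier(iops_needed=80_000, capacity_tb=4, write_heavy=False))  # all-ssd
    print(pick_tier(iops_needed=500, capacity_tb=200, write_heavy=True))    # sata-archive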

You can go full-on commodity and purchase a set of low-cost servers, populating them with the disk you want. Alternatively, there are new storage solutions that strip out a lot of the software add-ons you might not need or want.

Something to keep in mind: new hyper-converged virtual controller solutions now focus on pure performance, maximum capacity, and new kinds of cloud capabilities. For organizations moving toward a more logically controlled data center storage platform, this is very exciting. Specifically, they are looking for solutions that offer “commodity-style” storage while still providing a warranty and manufacturer support.

What about all of those features? This brings us to the next point.

Software-Defined Storage

New architectures focused on hyperscale and hyper-convergence let you abstract all of your storage and manage it at the virtual layer. These virtual controllers can reside as a virtual machine on a number of different hypervisors. From there, the controller acts as an enterprise storage controller spanning your data center and the cloud.

This kind of hyper-converged virtual storage architecture delivers pretty much every enterprise-grade storage feature out there, including deduplication, caching, cloning, thin provisioning, file replication, encryption, high availability (HA), and disaster recovery/business continuity (DRBC). Furthermore, REST APIs can integrate directly with proprietary or open source cloud infrastructure management systems.
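
To illustrate, here is roughly what driving a virtual controller over REST might look like in Python. The endpoint, payload fields, and token below are hypothetical placeholders; any real controller will have its own API:

    import requests

    # Hypothetical controller endpoint and token; substitute your own.
    CONTROLLER = "https://storage-controller.example.local/api/v1"
    TOKEN = "replace-with-a-real-token"

    def create_volume(name: str, size_gb: int, tier: str) -> dict:
        """Provision a thin-provisioned volume through the virtual controller."""
        resp = requests.post(
            f"{CONTROLLER}/volumes",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"name": name, "size_gb": size_gb, "tier": tier, "thin": True},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # Example: carve a 500 GB volume out of the hybrid tier.
    # vol = create_volume("vdi-pool-01", 500, "hybrid")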

Now you can take your underlying commodity storage system and let it be managed entirely by the logical layer. The cool part is that you can also point a virtual storage controller at legacy storage gear to give it new life. You're almost ready to deploy your own commodity storage platform!

The Workloads

Actually, the workloads don’t matter as much as the policies you wrap around them. Here is your chance to create a truly efficient, SDS-managed platform. You can set very specific policies around high-performance workloads and around workloads that should go to cheaper, slower disk.

Furthermore, you can direct that very same workload to span into the cloud. This is the part of the recipe where you have to figure out your own ingredients: what kinds of workloads do you have? Big data, VDI, application delivery, and database workloads all have very different requirements. Most importantly, the right policies positively impact both business processes and the end-user experience. Storage economics can shift dramatically when software-defined storage is coupled with custom, or whitebox, storage architectures.
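
A policy layer like this can be expressed as simple declarative rules. The sketch below is one hypothetical way to shape placement policies; real SDS products expose this through their own policy engines:

    # Hypothetical placement policies; the names and fields are assumptions
    # meant only to show the shape of policy-driven workload placement.
    PLACEMENT_POLICIES = {
        "vdi":      {"tier": "all-ssd",      "replicas": 2, "cloud_burst": False},
        "database": {"tier": "hybrid",       "replicas": 3, "cloud_burst": False},
        "big-data": {"tier": "hybrid",       "replicas": 2, "cloud_burst": True},
        "backup":   {"tier": "sata-archive", "replicas": 1, "cloud_burst": True},
    }

    def place(workload: str) -> dict:
        """Resolve a workload to its storage policy, defaulting to the archive tier."""
        return PLACEMENT_POLICIES.get(workload, PLACEMENT_POLICIES["backup"])

    print(place("vdi"))       # high-performance, stays on premises
    print(place("big-data"))  # allowed to span into the cloud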

As you look at this list of steps, you might be asking whether it's really that easy. The reality is that it all comes down to your specific use case and business. These technologies are revolutionizing the way we control and manage data. Powerful new virtual machines are helping us abstract storage and let it live in the data center, the cloud, and beyond. But does it make sense for everyone to go out and do this? And how nervous are folks like EMC about the future?

Regardless, new ways to deploy storage are now making an impact across a larger number of organizations. And new capabilities around cloud are allowing data centers to create even more elastic storage architectures.

TAGS: Storage