Planning for a Cloud-Ready Distributed Storage Infrastructure

Many organizations are leveraging larger, more comprehensive storage arrays that help them distribute their environments. Here's the important part to consider: in many cases, the storage platform has become the heart of a cloud solution.

Bill Kleyman, CEO and Co-Founder

July 9, 2013


Storage array systems provide greater flexibility and business agility. Where direct-attached storage may fall short, SAN, NAS, and other types of shared storage environments can really help a company scale. Many organizations are leveraging larger, more comprehensive storage arrays that help them distribute their environments. Here's the important part to consider: in many cases, the storage platform has become the heart of a cloud solution. Intelligent replication and storage control mechanisms now allow cloud components to be distributed. This includes user information, workload replication, and, of course, big data.

IT managers are finding that intelligent storage platforms can help them stay agile and continue business operations should a site, or even a storage controller, fail. The idea is to create a resilient and distributed storage infrastructure that can support the user base, the workloads, and a growing business. In creating such an environment, engineers need to be aware of a few concepts central to building a successful storage solution.

Consider bandwidth.

A distributed storage environment requires thorough planning around bandwidth. How much you need will depend on the following:

  • Distance the data has to travel (number of hops).

  • Replication settings.

  • Failover requirements.

  • Amount of data being transmitted.

  • Number of users accessing the data concurrently.

There may be other requirements as well. In some cases, certain types of databases or applications being replicated between storage systems have their own resource needs. Make sure to identify where the information is going and create a solid replication policy. Bandwidth sizing cuts both ways: under-sizing creates serious performance issues, while over-sizing means the organization overpays for services. In some cases, WAN optimization is a good idea. A rough calculation, like the sketch below, is a sensible starting point.
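To put rough numbers on the sizing question, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (daily change rate, replication window, overhead factor, reduction from WAN optimization) is an illustrative assumption you would replace with your own measurements:

```python
# Rough replication-bandwidth sizing sketch.
# All figures are illustrative assumptions -- substitute measured values.

daily_change_gb = 500          # data changed per day that must replicate
replication_window_hours = 8   # off-peak window available for replication
protocol_overhead = 1.2        # ~20% extra for TCP/replication protocol overhead
post_reduction = 0.6           # fraction left after WAN optimization/compression

effective_gb = daily_change_gb * post_reduction * protocol_overhead
window_seconds = replication_window_hours * 3600

# 1 GB = 8,000 megabits (decimal units)
required_mbps = effective_gb * 8000 / window_seconds
print(f"Sustained bandwidth needed: {required_mbps:.0f} Mbps")
```

With these example inputs, the answer is roughly 100 Mbps of sustained bandwidth; tightening the replication window or skipping WAN optimization changes the result quickly, which is exactly why the sizing exercise matters.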

Pick the right storage platform.

Although this might seem like common sense, selecting the right storage platform for a distributed environment is very important. In some cases, organizations skip vital planning steps and select storage systems that suit them now, but only into the near future. In selecting the proper type of platform, consider the following:

  • Utilization – What is your utilization now, in three years, in five years, and at end of life? How well does the controller handle spikes in usage? Does it meet your IOPS requirements? (A rough sizing sketch follows this list.)

  • Migration – How easy will it be to migrate data once you outgrow the platform or need to upgrade?

  • Data Management – Does the system have granular data control mechanisms? Does it offer data deduplication, at the file or block level?

  • Policy Management – Ensure that the system you select integrates well with your internal systems and can support the storage policies your organization requires.
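As a simple illustration of the utilization question, the sketch below projects capacity over a platform's life and checks peak IOPS headroom. The growth rate, workload figures, spike margin, and controller rating are all assumed example values, not benchmarks:

```python
# Capacity and IOPS forecast sketch -- all figures are illustrative assumptions.

current_capacity_tb = 40        # capacity in use today (assumed)
annual_growth = 0.25            # 25% data growth per year (assumed)
for years in (0, 3, 5):
    projected = current_capacity_tb * (1 + annual_growth) ** years
    print(f"Year {years}: ~{projected:.0f} TB needed")

steady_state_iops = 20_000      # measured average workload (assumed)
spike_factor = 1.3              # 30% headroom for usage spikes (assumed)
controller_rated_iops = 30_000  # from the vendor spec sheet (assumed)

required = steady_state_iops * spike_factor
verdict = "OK" if controller_rated_iops >= required else "undersized"
print(f"Peak IOPS required: {required:.0f} -> {verdict}")
```

Even this crude projection shows why "now and only in the near future" sizing fails: 40 TB at 25% annual growth is roughly 122 TB by year five.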

For large deployments, many vendors will gladly offer a proof of concept (POC) or pilot program for their controllers. Although there may be some deployment costs associated with a pilot, it may be well worth it in the long run. By establishing which workloads, applications, and data will reside on a distributed storage system, administrators can better forecast their needs and spend less time (and money) fixing an undersized environment.

Control the data flow.

Distributed storage systems require special attention as information traverses the wide area network. As mentioned earlier, WAN optimization may be the right move to support a more robust data transfer methodology. Furthermore, controlling where the other storage controllers reside can really help narrow down bandwidth requirements. By setting up dedicated links between data centers and using QoS to allocate the right amount of bandwidth, administrators can control the data flow process and still leave plenty of room on the pipe for other functions. Basically, there needs to be consistent visibility into how storage traffic is flowing and how efficiently it's reaching its destination.
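To make the QoS idea concrete, here is a minimal sketch that partitions a dedicated inter-site link into traffic classes, capping storage replication so other functions keep guaranteed headroom. The link size and the class shares are illustrative assumptions; a real deployment would enforce these on the network gear, not in a script:

```python
# QoS partitioning sketch for a dedicated inter-data-center link.
# Link size and class shares are illustrative assumptions.

link_mbps = 1000  # dedicated 1 Gbps link between sites (assumed)

# Guaranteed share per traffic class; replication is capped so user
# and management traffic always keep their headroom on the pipe.
qos_classes = {
    "storage_replication": 0.50,
    "user_traffic":        0.35,
    "management_backup":   0.10,
    "burst_reserve":       0.05,
}

assert abs(sum(qos_classes.values()) - 1.0) < 1e-9, "shares must total 100%"

for name, share in qos_classes.items():
    print(f"{name:<20} {share * link_mbps:>5.0f} Mbps guaranteed")
```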

Use intelligent storage (thin provisioning/deduplication).

Today's enterprise storage solutions are built around direct efficiencies for the organization. Data control, storage sizing optimization, and intelligent deduplication all help control the data flow and management process. By reducing the number of duplicate storage items, administrators can quickly reclaim space on their systems. Furthermore, look for controllers that are virtualization-ready. Environments deploying technologies like VDI, application virtualization, or even simple server virtualization should look for systems that provision space intelligently, without creating unnecessary duplicates.
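To show why deduplication reclaims so much space in virtualized environments, here is a toy block-level dedup sketch: it splits data into fixed-size blocks, stores each unique block once by content hash, and reports the savings. The "VM images" are fabricated examples, and real arrays use far more sophisticated variable-length chunking:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real arrays often use variable-length chunks

def dedup_store(data: bytes, store: dict) -> list:
    """Add data to the shared block store; return the ordered hash 'recipe'."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only one copy per unique block
        recipe.append(digest)            # hashes in order reconstruct the file
    return recipe

# Two "VM images" that share most of their content, as VDI clones often do.
base = b"OS" * 100_000
vm1, vm2 = base + b"config-A", base + b"config-B"

store = {}
for vm in (vm1, vm2):
    dedup_store(vm, store)

logical = len(vm1) + len(vm2)
physical = sum(len(b) for b in store.values())
print(f"Logical: {logical} bytes, stored: {physical} bytes "
      f"({100 * (1 - physical / logical):.0f}% reclaimed)")
```

Because the two images differ only in their configuration tails, nearly all of their blocks hash identically and are stored once, which is the same effect intelligent provisioning exploits across hundreds of VDI desktops.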

Distributed storage as DR.

Storage infrastructures deployed within a distributed environment can serve a variety of purposes: data resiliency, better performance, or simply placing the storage closer to the user. In some instances, companies deploy a distributed architecture specifically for disaster recovery. In these cases, using storage for DR deserves special consideration. It's recommended that an organization first conduct a business impact analysis (BIA) to establish some very important metrics, including identifying the systems, platforms, and other data points deemed critical. Then the organization can define its recovery times and establish a scale of importance for its various workloads. Once that is done, it becomes much easier to select a distributed storage system capable of meeting those needs.
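A minimal sketch of how BIA output might drive the selection: rank workloads into criticality tiers, attach recovery point and recovery time targets to each tier, and read off how often each workload must replicate. The tiers, workload names, and targets below are hypothetical examples, not prescriptions:

```python
# BIA-driven DR tiering sketch -- tiers, workloads, and targets are
# illustrative assumptions from a hypothetical business impact analysis.

tiers = {
    # tier: RPO (max tolerable data loss) and RTO (max downtime), in minutes
    1: {"rpo_min": 5,    "rto_min": 60},    # mission-critical
    2: {"rpo_min": 60,   "rto_min": 240},   # important
    3: {"rpo_min": 1440, "rto_min": 2880},  # deferrable
}

workloads = {"order_database": 1, "email_platform": 2, "file_archive": 3}

for name, tier in sorted(workloads.items(), key=lambda kv: kv[1]):
    t = tiers[tier]
    # Replication must run at least as often as the RPO allows.
    print(f"{name:<15} tier {tier}: replicate every <= {t['rpo_min']} min, "
          f"recover within {t['rto_min']} min")
```

A storage platform that cannot replicate the tier-1 workloads within their RPO is disqualified up front, which is exactly the shortcut the BIA buys you.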

Designing a good storage platform can become very expensive, very quickly, especially when the planning and architecture processes are skipped or rushed. Although modern storage arrays may be expensive, they're built around efficiency. The ability to logically segment physical controllers, remove duplicate data, and archive information are all features that help control the storage environment. With a solid storage platform in place, organizations can see benefits in performance, agility, and, very importantly, uptime.

About the Author

Bill Kleyman

CEO and Co-Founder, Apolo

Bill Kleyman has more than 15 years of experience in enterprise technology. He also enjoys writing, blogging, and educating colleagues about tech. His published and referenced work can be found on Data Center Knowledge, AFCOM, ITPro Today, InformationWeek, Network Computing, TechTarget, Dark Reading, Forbes, CBS Interactive, Slashdot, and more.
