Posted By Bill Kleyman On July 9, 2013 @ 9:00 am In Cloud Computing,Storage | 1 Comment
Storage array systems give organizations greater flexibility and business agility. Where direct-attached storage falls short, SAN, NAS and other types of shared storage environments can help a company scale. Many organizations are leveraging larger, more comprehensive storage arrays to distribute their environments. Here's the important part to consider: in many cases, the storage platform has become the heart of a cloud solution. Intelligent replication and storage control mechanisms now allow cloud components to be distributed, including user information, workload replicas and, of course, big data.
IT managers are finding that intelligent storage platforms can help them stay agile and continue business operations should a site, or even a storage controller, fail. The idea is to create a resilient, distributed storage infrastructure that can support the user base, the workloads and a growing business. In creating such an environment, engineers need to be aware of a few concepts that underpin a successful storage solution.
A distributed storage environment will require thorough planning around bandwidth. The amount needed will depend on the following:
There may be other requirements as well. In some cases, certain databases or applications being replicated between storage systems have their own resource needs. Make sure to identify where the information is going and create a solid replication policy. Improper bandwidth sizing can create serious performance issues if the link is undersized, while over-provisioning means an organization pays for capacity it never uses. In some cases, WAN optimization is a good idea.
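As a rough illustration of the sizing math, the sketch below estimates the WAN bandwidth needed to replicate a day's worth of changed data within a fixed window. The change rate, window and 30 percent overhead figure are hypothetical examples for illustration, not recommendations; real numbers come from measuring the actual environment.

```python
def required_bandwidth_mbps(daily_change_gb, replication_window_hours,
                            overhead_factor=1.3):
    """Estimate WAN bandwidth (in megabits per second) needed to replicate
    a day's changed data within a given window.

    daily_change_gb          -- data changed per day that must be replicated
    replication_window_hours -- hours available to move that data
    overhead_factor          -- headroom for protocol overhead and bursts
                                (1.3 = 30% headroom, an assumed value)
    """
    megabits = daily_change_gb * 8 * 1000      # GB -> megabits (decimal units)
    seconds = replication_window_hours * 3600  # hours -> seconds
    return megabits / seconds * overhead_factor

# Example: 500 GB of daily change replicated within an 8-hour overnight window
print(round(required_bandwidth_mbps(500, 8), 1))  # -> 180.6
```

Running the numbers both ways, undersized and oversized, before signing a circuit contract is exactly the kind of planning the replication policy should capture.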
Although this might seem like common sense, the process of selecting the right storage platform for a distributed environment is very important. In some cases, organizations forget vital planning steps and select storage systems that suit them now but not beyond the near future. In selecting the proper type of platform, consider the following:
For large deployments, many vendors will gladly offer a proof of concept (POC) or pilot program for their controllers. Although there may be some deployment costs associated with a pilot, it may be well worth it in the long run. By establishing which workloads, which applications and what data will reside on a distributed storage system, administrators can better forecast their needs and spend less time (and money) fixing an undersized environment.
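One way to turn pilot measurements into a forecast is a simple compound-growth projection, sketched below. The 20 TB baseline and 5 percent monthly growth rate are assumed figures for illustration; the real inputs would come from the POC's monitoring data.

```python
def forecast_capacity_tb(pilot_used_tb, monthly_growth_rate, months):
    """Project future storage need from a pilot baseline using compound growth.

    pilot_used_tb       -- capacity measured during the pilot
    monthly_growth_rate -- e.g. 0.05 for 5% growth per month (assumed figure)
    months              -- planning horizon
    """
    return pilot_used_tb * (1 + monthly_growth_rate) ** months

# 20 TB measured in the pilot, growing 5% per month, over a 24-month horizon
print(round(forecast_capacity_tb(20, 0.05, 24), 1))  # -> 64.5
```

A projection like this is only as good as the growth rate behind it, which is why measuring real workloads during the pilot matters more than the arithmetic.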
Designing a good storage platform can become very expensive, very quickly. This is especially the case when the planning and architecture processes are either skipped or rushed. Although modern storage arrays may be expensive, they are built around efficiency: the ability to logically segment physical controllers, deduplicate data and archive information are all features that help control the storage environment. When a solid storage platform is in place, organizations can see benefits in performance, agility and, very importantly, uptime.
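Those efficiency features translate directly into usable capacity. The sketch below shows the basic arithmetic behind data-reduction claims; the 3:1 deduplication and 1.5:1 compression ratios are assumed example values, and real ratios vary widely by workload.

```python
def effective_capacity_tb(raw_tb, dedupe_ratio, compression_ratio=1.0):
    """Effective logical capacity after data reduction.

    dedupe_ratio of 3.0 means 3:1 deduplication (assumed example value;
    actual ratios depend heavily on the data set).
    """
    return raw_tb * dedupe_ratio * compression_ratio

# 100 TB raw with assumed 3:1 dedupe and 1.5:1 compression
print(effective_capacity_tb(100, 3.0, 1.5))  # -> 450.0
```

This is also why vendor reduction-ratio claims should be validated against your own data during a pilot rather than taken from the spec sheet.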
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2013/07/09/planning-for-a-cloud-ready-disturbed-storage-infrastructure/
Copyright © 2012 Data Center Knowledge. All rights reserved.