How Storage is Shaping The Cloud Data Center

Several converging trends, including IT consumerization, a growing number of users, more devices and far more data, have pushed the storage environment to a new level. These trends aren’t only driving the cloud; they’re pushing forward all of the technologies that support cloud computing. At the epicenter of the cloud sits the data center, the central point where information is gathered and then distributed to other data centers or to the end user.

Because of these new initiatives and new ways to deliver data, the data center has been forced to evolve to support more agile and scalable platforms. Part of the conversation revolves around unified computing systems, while the other part revolves around something even more specific: storage.

Today’s infrastructure is being tasked with supporting many more applications, users and workloads. Because of this, the storage infrastructure of a data center, especially one that’s cloud-facing, must be adaptable and capable of intelligent data management. In response, many storage vendors have evolved their solutions into more efficient systems that can support these new IT and business demands.

  • Solid State Drives (SSD) and Flash. There is an ongoing debate around this technology: will it take over all storage, or is it still a niche player? The truth is that SSD and flash are still somewhat pricey and are really designed to play a specific role within storage. For workloads that require very high IOPS, such as VDI or database processing, SSD or flash systems may be the right move. Organizations looking to offload heavy I/O from their primary spinning disks can also add flash or SSD as a caching tier to absorb that load. In many cases, a good array can offload 80 to 90 percent of the IOPS from the spinning disks behind the controller; a flash cache with a 90 percent hit rate, for example, leaves only 5,000 of a 50,000-IOPS workload on spinning media.
  • Unified Computing Systems. Since efficiency plays a big part in any data center environment, many vendors have been integrating storage into unified computing systems, and for good reason. Using FlexPod (Cisco, NetApp, VMware) or vBlock (Cisco, EMC, VMware) as validated designs, administrators can deploy entire systems that are robust and directly scalable. High network throughput, compute capable of high user density and a clean rack deployment all make unified computing systems very attractive. The idea is to integrate storage into systems capable of supporting massive amounts of resources. For storage, this type of deployment creates an easier-to-manage environment that is much more ready for business demands and growth.
  • Replication. A big part of cloud computing and storage is the process of data distribution and replication. New storage systems must not only manage data at the primary site; they must also be able to replicate that information efficiently to other locations. Why? There is a direct need to support branch offices, remote sites, other data centers and, of course, disaster recovery. Setting up the right replication infrastructure means managing bandwidth, scheduling and deciding which data actually gets pushed out (see the replication policy sketch after this list). Storage can be a powerful tool for both cloud computing and business continuity. The key is understanding the value of your data and identifying where that data fits within your organization. Some data sets are less vital than others, and knowing which ones matter most will help create a more powerful storage environment.
  • Multi-tenancy. Many storage vendors have now applied storage virtualization practices to their product strategy. The idea is simple: with one controller, split up services so that different groups get access to a locked-down instance of storage. Storage manufacturers are logically segmenting physical controllers and delivering private virtual arrays to sub-admins. Instead of having to purchase controllers and arrays for multiple corporate departments, storage administrators can carve up a single unit and still control the entire environment (a simple quota-based partitioning sketch follows this list). To the sub-admin, it looks like they have their own physical unit; the primary administrator, however, still only manages one controller. This type of storage deployment can help with data control, security and resource management.
  • Data deduplication. Controlling the actual data within the storage environment has always been a big task as well. Storage resources aren’t only finite; they’re expensive. Data deduplication can help manage the data that sits on the storage array as well as information being used by other systems. For example, instead of storing 100 copies of a 20 MB attachment, the storage array is intelligent enough to store the file only once and create 99 pointers to it. If a change is made to the file, the system logs the change and creates pointers to the modified copy (a minimal content-hash sketch follows this list). Having direct visibility into the data infrastructure not only controls storage sprawl, it also helps continuously maintain a healthy storage environment.
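
To make the replication point a bit more concrete, here is a rough Python sketch of the kind of policy logic involved: deciding how often each data set replicates, where it goes and whether the job even fits the available bandwidth. The tiers, site names and numbers are illustrative assumptions, not any particular vendor’s replication engine.

    # Hypothetical replication policy sketch: decide what gets replicated,
    # where, and how often, based on how vital each data set is.
    from dataclasses import dataclass

    @dataclass
    class DataSet:
        name: str
        size_gb: int
        tier: str  # "critical", "important" or "archive" (assumed labels)

    # Assumed policy: more vital data replicates more often and to more sites.
    POLICY = {
        "critical":  {"interval_min": 15,   "targets": ["dr-site", "branch-office"]},
        "important": {"interval_min": 240,  "targets": ["dr-site"]},
        "archive":   {"interval_min": 1440, "targets": ["dr-site"]},
    }

    def plan_replication(datasets, wan_mbps):
        """Return a simple schedule and flag jobs that may exceed the WAN link."""
        schedule = []
        for ds in datasets:
            rule = POLICY[ds.tier]
            # Rough full-transfer estimate: size in megabits / link speed, in minutes.
            transfer_min = (ds.size_gb * 8000) / wan_mbps / 60
            schedule.append({
                "dataset": ds.name,
                "every_minutes": rule["interval_min"],
                "targets": rule["targets"],
                "fits_window": transfer_min < rule["interval_min"],
            })
        return schedule

    if __name__ == "__main__":
        sets = [DataSet("sales-db", 500, "critical"),
                DataSet("file-shares", 2000, "important"),
                DataSet("old-projects", 8000, "archive")]
        for job in plan_replication(sets, wan_mbps=200):
            print(job)

A full copy of the critical data set would not fit its 15-minute window over this link, which is exactly the kind of finding that pushes teams toward incremental replication and careful scheduling.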
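
The multi-tenancy model, one physical controller logically carved into private virtual arrays, can be sketched with a simple quota-based partitioning class. The class name, department names and capacity figures below are assumptions made purely for illustration.

    # Hypothetical sketch of one physical controller carved into virtual arrays.
    class StorageController:
        def __init__(self, total_tb):
            self.total_tb = total_tb
            self.tenants = {}  # department -> allocated TB

        def create_virtual_array(self, department, quota_tb):
            """Carve out a private slice; fail if the physical pool is exhausted."""
            if sum(self.tenants.values()) + quota_tb > self.total_tb:
                raise ValueError("not enough free capacity on the controller")
            self.tenants[department] = quota_tb

        def view_for(self, department):
            """What a sub-admin sees: only their own 'array', not the whole box."""
            return {"array": department, "capacity_tb": self.tenants[department]}

    controller = StorageController(total_tb=100)   # one physical unit
    controller.create_virtual_array("finance", 30)
    controller.create_virtual_array("engineering", 50)
    print(controller.view_for("finance"))          # sub-admin view: a 30 TB "array"
    print(len(controller.tenants), "tenants on one physical controller")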
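
The attachment scenario in the deduplication item is essentially content-addressed storage: hash each incoming file, keep unique content once and turn duplicates into pointers. The following minimal sketch illustrates that idea in Python; it is not how any specific array implements it.

    # Minimal content-hash deduplication sketch: unique data is stored once,
    # duplicates become pointers to the existing block.
    import hashlib

    class DedupStore:
        def __init__(self):
            self.blocks = {}    # content hash -> actual bytes (stored once)
            self.pointers = {}  # file name -> content hash

        def write(self, name, data):
            digest = hashlib.sha256(data).hexdigest()
            if digest not in self.blocks:
                self.blocks[digest] = data   # first copy: store the content
            self.pointers[name] = digest     # every copy: just a pointer

        def read(self, name):
            return self.blocks[self.pointers[name]]

    store = DedupStore()
    attachment = b"x" * (20 * 1024 * 1024)           # a 20 MB attachment
    for i in range(100):                             # "sent" to 100 recipients
        store.write(f"user{i}/report.pdf", attachment)

    print("logical copies :", len(store.pointers))   # 100
    print("stored blocks  :", len(store.blocks))     # 1
    # A modified copy hashes differently, so it gets its own block and pointer.
    store.write("user0/report.pdf", attachment + b" edited")
    print("after an edit  :", len(store.blocks))     # 2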

Because cloud computing will only continue to advance, new demands will be placed on storage. Even now, conversations around big data and storage are heating up. Taking the conversation further, new types of big data file systems make the big data management process even easier. As part of the open-source Hadoop project, the Hadoop Distributed File System (HDFS) has taken distributed big data management to a whole new level. In working with big data, it’s important to understand that these platforms are new and have their limitations. For example, HDFS cannot be mounted directly by an existing operating system; administrators have to use a virtual file system (FUSE, for example) to get information out of HDFS.
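
Because HDFS cannot simply be mounted like a local disk, administrators either layer a virtual file system such as FUSE on top of it or talk to it through its own interfaces. As a hedged illustration of the second route, the sketch below uses the third-party Python hdfs (WebHDFS) client instead of a FUSE mount; the NameNode address, user and paths are placeholder assumptions.

    # Hedged sketch: reading from HDFS over WebHDFS instead of a FUSE mount.
    # Requires the third-party "hdfs" package (pip install hdfs); the NameNode
    # URL, user and paths below are placeholder assumptions for illustration.
    from hdfs import InsecureClient

    client = InsecureClient("http://namenode.example.com:9870", user="hdfs")

    # List a directory much like `ls` would on a mounted file system.
    for entry in client.list("/data/logs"):
        print(entry)

    # Stream a file out of HDFS without ever mounting it into the OS.
    with client.read("/data/logs/2017-01-01.log", encoding="utf-8") as reader:
        content = reader.read()
    print(content[:200])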

In developing future systems, administrators will look for platforms that are highly scalable and can manage large amounts of information. Whether it’s big data, a distributed file system, cloud computing or just the user environment, the storage infrastructure will always play an important role, and the goal will always revolve around ease of management and control over the data. In building a solid storage platform, always plan for the future, since data growth is an inevitable part of today’s cloud environment.

For more on storage news and trends, bookmark our Storage Channel.
