Building Blocks – Predictable and Simple Storage for Virtual Environments

Traditional storage systems designed before the advent of virtualization are difficult to administer and scale, writes Saradhi Sreegiriraju of Tintri. As virtualization storage needs change, organizations need better options for building and scaling virtualized environments.

Saradhi Sreegiriraju is the senior director of product management at Tintri, a provider of smart storage for cloud and virtualized environments.

Today, virtualization is commonplace in the enterprise data center and its benefits are indisputable. However, several challenges must be addressed to ensure the success of any virtualization initiative.

To accommodate rapid growth and fluctuations in demand, the infrastructure needs to easily support the scaling of individual VMs as well as the addition of new virtualized workloads. Although software-defined networking architectures are simplifying network scaling, scalability remains one of the top challenges when increasing the level of virtualization in the modern data center.

Virtualized environments are complex by nature. Today’s virtualization solutions, however, are designed to scale easily: administrators simply add virtualization hosts in a building block fashion. Even with this approach, traditional storage frequently becomes the bottleneck when scaling virtualized environments. Let’s review some of the challenges enterprises face when using general-purpose storage solutions for virtualization.

Scaling adds complexity

Traditional disk-bound storage architectures can be scaled for either capacity or performance by adding disks, but this doesn’t make for an efficient environment. IT organizations usually over-provision traditional storage to obtain the performance needed by their virtualized environments, resulting in excess, unused capacity and unnecessary operational complexity.
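As a rough, hypothetical illustration of the math behind this over-provisioning (the workload figures below are assumptions, not from any specific deployment), spindle count in a disk-bound array is dictated by performance rather than capacity:

```python
# Hypothetical illustration: in disk-bound arrays, performance (IOPS),
# not capacity, usually dictates how many spindles must be purchased.
target_iops = 20_000          # assumed aggregate workload requirement
iops_per_disk = 180           # rough figure for a 15K RPM spindle
capacity_per_disk_gb = 600    # assumed disk size
capacity_needed_gb = 10_000   # assumed capacity the workload actually uses

disks_for_performance = -(-target_iops // iops_per_disk)  # ceiling division
raw_capacity_gb = disks_for_performance * capacity_per_disk_gb

print(f"Disks needed for performance: {disks_for_performance}")   # 112
print(f"Raw capacity purchased: {raw_capacity_gb:,} GB")          # 67,200 GB
print(f"Capacity actually needed: {capacity_needed_gb:,} GB")
print(f"Stranded capacity: {raw_capacity_gb - capacity_needed_gb:,} GB")
```

In this hypothetical case, more than 80 percent of the purchased capacity sits unused; it was bought only to get enough spindles for the required IOPS.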

However, there are only so many disks traditional storage controllers can support. When the maximum number of disks is reached, enterprises have to either add another storage controller and manage it separately or perform a forklift upgrade of the entire storage system. In either scenario, scaling traditional storage to meet the growing needs of virtualized environments is an ongoing battle.

Lack of control and increased overhead

Although storage performance and the costs of scaling are top concerns for IT administrators, the increased management burden is the most significant issue for most organizations. From the complex operations needed for provisioning to the mapping of storage in virtual environments, administrators are forced to plan their storage environments far in advance to accommodate the different groups of VMs they will be creating.

This practice can overwhelm IT administrators as they manually track the assignments of virtual servers and VMs to storage arrays and volumes. Many organizations are still using extensive spreadsheets for storage LUN or volume-to-VM mapping, which is an incredibly inefficient and error-prone way to manage storage.
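As a minimal sketch of what such a spreadsheet could be replaced with, the script below walks a vCenter inventory and prints each VM alongside its datastores. It uses the open-source pyVmomi vSphere SDK; the vCenter address and credentials are placeholders:

```python
# Minimal sketch: generate the VM-to-datastore mapping instead of
# maintaining it by hand in a spreadsheet (pyVmomi vSphere SDK).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",     # placeholder address
                  user="readonly@vsphere.local",  # placeholder account
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        datastores = ", ".join(ds.name for ds in vm.datastore)
        print(f"{vm.name}\t{datastores}")
    view.Destroy()
finally:
    Disconnect(si)
```

Even a script like this only automates the bookkeeping; it does nothing to reduce the number of LUNs and volumes that have to be tracked in the first place.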

Additionally, traditional storage systems make it extremely difficult to predict how scaling will affect existing virtualized workloads. With traditional storage solutions, all VMs are thrown into one big pot of storage. There is little to no insight into the performance of individual VMs, and no way to easily apply or measure quality of service for specific VMs. The problem is further exacerbated when related VMs are forced to span multiple storage systems, which leads to even more performance issues.

The lack of insight into individual VMs across all storage makes troubleshooting time-consuming for administrators, who are forced to investigate all possible issues across multiple dimensions and workloads. With traditional storage, finding the source of any performance issue requires multiple layers of management software.

The lack of insight and control over storage leads to unpredictable behavior in the virtualized environment. This can have serious consequences for organizations, especially when combining different types of critical workloads with dissimilar performance requirements onto the same storage system. Predictable performance is simply out of reach for organizations using traditional storage platforms, since it is impossible to know if new VMs will fit on the existing storage or cause any performance issues with the applications or services.

Building blocks: A better approach for scalable virtualization storage

There is an entirely different approach to scaling storage in virtualized environments. Its premise is that storage should scale the same efficient way the compute layer does in virtualized environments: as building blocks. By understanding and operating at the VM level, this new class of storage can leverage the management constructs of the virtualization layer, making scaling extremely simple and predictable.

This simple building block approach to scaling storage in virtualized environments means that each building block appears as a single datastore in the virtualization layer. This is in contrast to traditional storage, where administrators must spend time creating separate LUNs or volumes for each new group of VMs, then create and export the corresponding datastores. With the building block approach, there is no longer a need to create and manage complex webs of LUNs and volumes.

As an added bonus, building blocks can provide administrators with a clear and comprehensive view of system performance and capacity utilization. IT no longer needs to interpret piles of complex capacity and performance data, or worry about how much spare performance they have to work with.

A building block can also ensure there is never any impact from “noisy neighbors” on the same system. With VM-level QoS functionality, storage performance is predictable. Furthermore, different types of VMs, such as IO-intensive OLTP workloads and latency-sensitive end-user desktops, can reside on the same building block.
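QoS at the VM level is often implemented as a per-VM IOPS limit; the toy Python sketch below shows the general token-bucket technique behind such a limit. It is an illustration of the concept only, not any vendor’s implementation, and the VM names and limits are made up:

```python
import time

class VmIopsBucket:
    """Toy token bucket illustrating a per-VM IOPS cap (concept only)."""

    def __init__(self, max_iops: int):
        self.rate = max_iops            # tokens (IOs) replenished per second
        self.tokens = float(max_iops)   # start with one second's allowance
        self.last = time.monotonic()

    def try_io(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never above the cap.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # admit this IO
        return False      # defer: this VM has hit its cap

# Each VM draws only from its own bucket, so a burst from the OLTP
# database cannot consume the desktop VM's allowance.
buckets = {"oltp-db-01": VmIopsBucket(10_000), "vdi-desktop-17": VmIopsBucket(500)}
```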

If administrators need to scale beyond the performance and capacity of a single building block, it is as simple as adding an additional system to the virtualization layer, a task that takes less than five minutes. This effectively adds another datastore that can be managed by the virtualization layer.
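As a hedged sketch of how small that step can be, the function below mounts a new building block’s single NFS export as one datastore on an ESXi host, again using pyVmomi; the array address, export path, and datastore name are assumptions for illustration:

```python
# Sketch: present a new building block to vSphere as one NFS datastore.
# The export path and naming here are illustrative assumptions.
from pyVmomi import vim

def mount_building_block(esxi_host, array_addr, export_path, ds_name):
    """Mount the appliance-wide NFS export as a single datastore."""
    spec = vim.host.NasVolume.Specification(
        remoteHost=array_addr,    # data IP of the new building block
        remotePath=export_path,   # e.g. "/export" (assumed export name)
        localPath=ds_name,        # datastore name shown in vSphere
        accessMode="readWrite")
    esxi_host.configManager.datastoreSystem.CreateNasDatastore(spec)

# Typical usage: repeat for every host in the cluster so the new
# datastore is visible everywhere VMs might run.
```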

The ability to monitor and control the virtual environment is important when scaling storage: building blocks need a unified, intuitive control platform that lets administrators manage multiple building blocks as one.

The best choice

Virtualization requires storage to scale, and the easiest way to scale is by using a building block approach to adding performance and capacity. Traditional storage systems designed before the advent of virtualization are difficult to administer and scale as an enterprise’s virtualization storage needs change.

IT needs storage that understands virtualization by design in order to step up to the demands of dynamically changing virtualized environments. IT teams also need simple, intuitive tools that provide comprehensive visibility into, and control over, the entire storage environment while empowering them to monitor and operate storage at the VM level.

A building block approach paired with centralized administration and control offers organizations the best choice for building and scaling virtualized environments.
