Understanding and Controlling Cloud Sprawl


Bill Kleyman is a virtualization and cloud solutions architect at MTM Technologies where he works extensively with data center, cloud, networking, and storage design projects. You can find him on LinkedIn.


Within the data center community, server sprawl was an issue many administrators had to face and address. Virtualization helped consolidate many of those systems to create a leaner infrastructure. After a little while, VM sprawl became the new challenge. With improved management and controls, managers were able to regain some of that virtualization real estate by tackling VM sprawl, although many will insist the issue still persists.

Now, with more WAN utilization, better underlying hardware components, and more organizations moving to some kind of cloud model, the IT industry is experiencing a new type of challenge: cloud sprawl.

Cloud computing has created numerous options for organizations and IT environments. New methods of delivering disaster recovery, improving backup, and even expanding the data center are all reasons to move to a cloud model. Unfortunately, as with many new technologies, control and management best practices aren’t always being applied. Some departments are deploying cloud instances without really understanding where the organization’s overall cloud platform is going. The other problem is that with too many uncontrolled cloud presences, security becomes a concern.

Regaining Control

Although cloud computing is a great technology, there needs to be a fundamental understanding of the platform. Cloud computing isn’t just one piece of software. Rather, it’s a collection of resources, hardware components and the Internet all working together to deliver a distributed data model. To regain control of your cloud environment, it’s important to look at the infrastructure as a whole, and the important components that make up cloud computing.

  • Use agnostic management tools. Since numerous components make up cloud computing, it’s important to have fewer tools with greater visibility. Where native management tools fall short, modern monitoring and management platforms offer plug-in or management pack solutions. For example, tools like SCCM 2012 or SCOM 2012 have management packs that tie into the product suite so that the administrator only needs one management platform. Underlying physical and virtual servers can all be controlled from a single pane of glass.
  • Have clear communication (and documentation). One of the biggest issues within larger organizations has been the departmental cloud deployment problem. More recently, departments have had cloud initiatives, sought budget and have deployed their infrastructure. In many cases, a different department may be doing something very similar or can utilize the same resources. Without good communication, an organization can have duplicate cloud efforts. In line with good communication, having a solid documentation practice spanning the IT environment is a must. Testing and deployment documentation can save a lot of time and money. Furthermore, it can streamline further deployment operations and help alleviate duplicate efforts.
  • Monitor your VMs. A large part of cloud computing is, of course, virtualization. An important way to control your cloud presence is to begin at the VM level. As mentioned earlier, having a good management platform will help control how and where your VMs are being deployed. Remember, it’s important to have visibility into a virtualization infrastructure not only at the local data center, but at the cloud level as well. Rogue, cloud-based VMs take up resources and can create security concerns.
  • Control your resources. Resources are finite. This means they have to be monitored and controlled more than ever. A rogue cloud model will see storage, physical server, and networking utilization all spike. This equals lost dollars and a less efficient infrastructure. Keeping a close eye on where resources are being distributed, which systems are using them most, and tracking those metrics will help with any cloud model.
  • Control POCs and pilots. One of the easiest ways to let a cloud deployment get out of control is to let numerous pilots and POCs (proofs of concept) run at the same time. All of the above control measures apply here, since communication and resource management can help curtail this issue. Many times an organization will roll out a POC and forget to end it. Furthermore, some administrators will continue to use that POC environment as a full-fledged testing infrastructure. Although this can work, there needs to be a clear separation between a testing/development cloud model and one designed for a POC. It’s important to have departmental controls so that no one department can run a POC or pilot without some IT management guidelines. This will help control rogue cloud deployments and help other departments gain knowledge from a given cloud initiative.
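The VM-monitoring and POC-control points above can be reduced to a simple audit: every instance should have a named owner, and every POC should have an agreed end date. As a minimal sketch, here is what such a check might look like against an inventory export from a management platform. The record fields (`owner`, `purpose`, `poc_end`) and the sample data are hypothetical, not a real management tool's API:

```python
from datetime import date

# Hypothetical VM inventory, as might be exported from a management
# platform. Field names and values are illustrative assumptions.
inventory = [
    {"name": "web-01", "owner": "it-ops", "purpose": "production", "poc_end": None},
    {"name": "test-db", "owner": None, "purpose": "poc", "poc_end": date(2013, 1, 15)},
    {"name": "dev-app", "owner": "marketing", "purpose": "poc", "poc_end": date(2014, 6, 30)},
]

def find_rogue_vms(vms, today):
    """Flag VMs with no recorded owner, or POCs past their agreed end date."""
    rogue = []
    for vm in vms:
        if vm["owner"] is None:
            rogue.append((vm["name"], "no owner recorded"))
        elif vm["purpose"] == "poc" and vm["poc_end"] and vm["poc_end"] < today:
            rogue.append((vm["name"], "POC past its end date"))
    return rogue

for name, reason in find_rogue_vms(inventory, date(2014, 12, 1)):
    print(f"{name}: {reason}")
```

In practice the inventory would come from the management platform itself rather than a hand-built list, but the policy is the same: anything without an owner or past its POC end date gets reviewed before it silently becomes permanent infrastructure.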

As more organizations move towards a cloud platform, there will need to be a shift in how this type of infrastructure is managed. A cloud presence is good, but uncontrolled elements within any environment can mean lost revenue, misused resources, and even security holes. Before any testing or deployments are done, ensure that the right people are involved and that there is clear visibility into the cloud environment.

Whether an organization is deploying a private, public, or hybrid cloud platform, controlling that infrastructure can have great environmental and organizational benefits. When a cloud model is properly aligned with business strategies, it can be a very powerful tool.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
