
Virtual Infrastructure Resource Monitoring Best Practices

Today, we’re going to take a look at a critical monitoring and resource management aspect of the modern data center: the logical layer, virtualization.

Pretty much all analysts agree that today’s data center is the driving engine behind major business initiatives. Most of all, today’s organizations rely heavily on their data centers to enable real-world strategies and capabilities. However, the big challenge is creating monitoring and alerting systems that can see into the most advanced functions of the data center.

We know that virtualization continues to revolutionize how we deliver applications, workloads, and critical data points. We also know that the data center has evolved to support greater levels of density and new business initiatives. But how do you keep an eye on it all? How do you proactively manage one of your most critical business components, the data center? Most of all, how do you optimize your entire virtualization ecosystem to ensure proper business and data center alignment?

The best way to do this is to look at new ways to monitor and manage data center and virtualization systems.

With that in mind, let’s start with the logical layer: virtualization.

The days of one application per server are coming to an end. With virtualization, IT shops can pack numerous virtual machines, each running a full operating system and workload, onto a single piece of hardware, something that was unheard of just a few years ago. The best part: you can run these workloads concurrently with negligible performance loss.

When it comes to working with a healthy virtualization ecosystem, there is a set list of metrics that should be monitored. This includes:

  • Memory
    • Host RAM utilization.
    • VM RAM usage.
  • Storage
    • Disk space on the SAN.
    • Space utilization on the VM.
  • CPU
    • Both vCPU and host CPU should be checked.
  • Network I/O
    • Check for heavy traffic patterns around VMs. Bottlenecks are not fun.
  • WAN
    • Ensure that remote links are operating properly.
    • Link saturation between sites must be monitored.
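The checklist above can be boiled down to a threshold sweep. As a rough sketch, here is what that might look like in Python; the metric names and threshold values are illustrative assumptions, not values from any particular monitoring product, and you would feed in samples from your own hypervisor or monitoring agent:

```python
# Hypothetical threshold values -- tune these for your own environment.
THRESHOLDS = {
    "host_ram_pct": 90.0,
    "vm_ram_pct": 85.0,
    "san_free_gb": 100.0,   # alert when SAN free space drops BELOW this
    "vm_disk_pct": 80.0,
    "host_cpu_pct": 85.0,
    "net_util_pct": 70.0,   # network I/O around VMs
    "wan_util_pct": 60.0,   # link saturation between sites
}

def check_metrics(sample: dict) -> list[str]:
    """Return an alert string for every metric that breaches its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = sample.get(name)
        if value is None:
            continue  # metric not collected in this sample
        # san_free_gb alerts when the value falls below the limit;
        # everything else alerts when it rises above it.
        breached = value < limit if name == "san_free_gb" else value > limit
        if breached:
            alerts.append(f"{name}: {value} breaches threshold {limit}")
    return alerts

sample = {"host_ram_pct": 93.5, "san_free_gb": 250.0, "host_cpu_pct": 40.0}
print(check_metrics(sample))  # → ['host_ram_pct: 93.5 breaches threshold 90.0']
```

A real deployment would pull these samples from the hypervisor's API on a schedule, but the shape of the check stays the same.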

Remember, there are many events that can cause a resource spike: a runaway programming loop can peg a CPU, and even a simple network error can saturate links. You must proactively plan for this to keep your systems up and running. That means forecasting infrastructure spikes and keeping enough capacity on hand to absorb them.
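Forecasting does not have to be fancy to be useful. As a minimal sketch, assuming you keep a history of daily peak CPU utilization per host, a simple least-squares trend can estimate how long you have before a host crosses a capacity ceiling:

```python
# Naive linear-trend forecast: given daily peak CPU samples (percent),
# estimate how many days until the host crosses a capacity ceiling.
def days_until_saturation(daily_peaks: list[float], ceiling: float = 90.0):
    if len(daily_peaks) < 2:
        return None  # not enough history to fit a trend
    n = len(daily_peaks)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_peaks) / n
    # least-squares slope over the sample window
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_peaks))
    den = sum((x - mean_x) ** 2 for x in xs)
    if den == 0 or num <= 0:
        return None  # flat or declining trend: no saturation forecast
    slope = num / den
    latest = daily_peaks[-1]
    if latest >= ceiling:
        return 0  # already saturated
    return int((ceiling - latest) / slope)

peaks = [50, 52, 55, 57, 60]  # rising roughly 2.5 percentage points per day
print(days_until_saturation(peaks))  # → 12
```

Real capacity-planning tools use far more sophisticated models, but even a trend line like this turns a surprise spike into a scheduled purchase.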

Consider this example:

You’re a travel agency with all of your systems virtualized. You know usage will spike during certain seasons, so during peak holiday or sales periods your servers may see a massive hit. To accommodate this, companies worried about overworked VMs use something called workflow automation and infrastructure orchestration. That is, if a host is pegged with resource requests and the currently running VMs can no longer handle the load, automation software kicks in and spins up additional VMs on separate hosts to share it. The great part here is that this process can be entirely automated, ensuring business continuity with minimal disruption.
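The orchestration logic described above can be sketched in a few lines. This is a toy illustration, not any vendor's API: `get_cpu` and `spin_up` are hypothetical stand-ins for whatever your platform (vSphere, Hyper-V, OpenStack, and so on) actually exposes:

```python
SCALE_OUT_AT = 85.0  # % CPU at which we add capacity on another host (assumed)

def rebalance(hosts: dict, get_cpu, spin_up):
    """If any host is pegged, start a helper VM on the least-loaded host.

    hosts maps host name -> anything; get_cpu(host) returns current % CPU;
    spin_up(host) asks the orchestrator to start a VM there.
    Returns a list of (pegged_host, target_host) actions taken.
    """
    loads = {h: get_cpu(h) for h in hosts}
    pegged = [h for h, pct in loads.items() if pct >= SCALE_OUT_AT]
    actions = []
    for h in pegged:
        target = min(loads, key=loads.get)  # least-loaded host
        if target != h:  # never "help" a pegged host with itself
            spin_up(target)
            actions.append((h, target))
    return actions

# Demo with fake data: host-a is pegged, so a VM is started on host-b.
loads = {"host-a": 95.0, "host-b": 40.0}
started = []
print(rebalance(loads, loads.get, started.append))  # → [('host-a', 'host-b')]
```

A production orchestrator layers in placement rules, cooldown timers, and scale-in logic, but the core decision loop looks much like this.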

So, when looking at purchasing any sort of resource monitoring software for your virtualization ecosystem, make sure it can answer the following questions:

  • How many VMs do I have, and which ones are over or under provisioned?
  • Where are the performance bottlenecks in my virtualized environment?
  • How are my VMs configured?
  • How many app servers will fit in my current environment, and when will I need more resources?
  • What departments are using which resources?
  • How is my server utilization being tracked over a period of time?
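The first question on that list, which VMs are over- or under-provisioned, lends itself to a simple first-pass answer from utilization history. A rough sketch, with the 20%/80% cutoffs as illustrative assumptions rather than industry standards:

```python
# Classify VMs as over- or under-provisioned from average utilization.
# The 20% and 80% cutoffs are illustrative assumptions -- tune per workload.
def classify(vms: dict) -> dict:
    """vms maps VM name -> list of utilization samples (0-100 percent)."""
    report = {"over_provisioned": [], "under_provisioned": [], "ok": []}
    for name, samples in vms.items():
        avg = sum(samples) / len(samples)
        if avg < 20:
            report["over_provisioned"].append(name)   # wasted capacity
        elif avg > 80:
            report["under_provisioned"].append(name)  # starved workload
        else:
            report["ok"].append(name)
    return report

fleet = {
    "web01": [85, 90, 88],   # consistently hot: needs more resources
    "db01":  [45, 50, 55],   # healthy middle ground
    "test7": [3, 5, 2],      # nearly idle: reclaim its allocation
}
print(classify(fleet))
# → {'over_provisioned': ['test7'], 'under_provisioned': ['web01'], 'ok': ['db01']}
```

Good monitoring products do this across CPU, RAM, and storage at once, but averages over a long-enough window are where the analysis starts.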

Furthermore, there are three major features that most IT managers will want in a management platform:

Capacity Management

  • Proactively monitor, predict, detect, and troubleshoot capacity bottlenecks with real-time dashboards and alerts
  • Determine optimal VM placement, explore what-if scenarios, identify capacity shortfalls, and determine application-specific capacity needs

VM Sprawl Control

  • Find idle/stale VMs, orphaned files, and over-allocated VMs
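Finding idle or stale VMs usually comes down to a sweep over inventory timestamps. A minimal sketch, assuming your hypervisor's inventory API can tell you when each VM was last active (the 30-day cutoff is an assumption, not a standard):

```python
from datetime import datetime, timedelta

# Sprawl sweep: flag VMs with no observed activity for N days.
# Last-activity timestamps would come from your hypervisor's inventory API.
STALE_AFTER = timedelta(days=30)  # assumed cutoff -- adjust to your policy

def find_stale(inventory: dict, now: datetime) -> list[str]:
    """inventory maps VM name -> datetime of last observed activity."""
    return sorted(name for name, last in inventory.items()
                  if now - last > STALE_AFTER)

now = datetime(2014, 6, 1)
inventory = {
    "build-agent": datetime(2014, 5, 28),  # recently active, keep it
    "old-demo":    datetime(2014, 1, 15),  # untouched for months
    "poc-env":     datetime(2014, 2, 2),
}
print(find_stale(inventory, now))  # → ['old-demo', 'poc-env']
```

Orphaned disk files get a similar treatment: compare files on the datastore against the VMs that actually reference them.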

Performance Monitoring

  • Proactively monitor virtualization-unique performance problems
  • Deeply analyze storage I/O problems unique to virtual and private cloud deployments
  • Troubleshoot application and workload issues
  • Quickly discover and act on performance issues using flexible alerts and integrated recommendations

Final Thoughts

Creating a VM has never been easier: with just a few mouse clicks you have a new virtual machine ready to go. That simplicity, however, calls for even more planning. Take the time to study your environment, understand its needs, and then deploy your VMs.

Too often, IT administrators get “click-happy” and deploy VMs at will, creating VM sprawl that is difficult to manage.

As you gather metrics and build an understanding of your unique environment, monitor your results over a span of time. This way, you’ll know peak usage times, which machines are most heavily utilized, and where bottlenecks or I/O issues are occurring.
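Identifying peak usage times from a span of samples is a small aggregation exercise. As a sketch, assuming you have tagged each utilization sample with its hour of day:

```python
from collections import defaultdict

# Roll hourly utilization samples up into per-hour-of-day averages
# to reveal the peak usage windows across a monitoring period.
def peak_hours(samples: list, top: int = 3) -> list:
    """samples is a list of (hour_of_day, utilization_pct) tuples.
    Returns the `top` hours of day with the highest average utilization."""
    buckets = defaultdict(list)
    for hour, pct in samples:
        buckets[hour].append(pct)
    averages = {h: sum(v) / len(v) for h, v in buckets.items()}
    return sorted(averages, key=averages.get, reverse=True)[:top]

samples = [(9, 70), (9, 75), (14, 90), (14, 95), (2, 10), (20, 60)]
print(peak_hours(samples, top=2))  # → [14, 9]
```

The same bucketing idea works per day of week or per VM, which is how you spot the heavily utilized machines and recurring bottleneck windows mentioned above.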

The more IT managers use their metrics, the better the decisions they can make about their virtual infrastructure, and the better their environments will utilize precious resources.


About the Author

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. His architecture work includes virtualization and cloud deployments as well as business network design and implementation. Currently, Bill works as the Vice President of Strategy and Innovation at MTM Technologies, a Stamford, CT based consulting firm.
