Power Planning Supports Virtualization & Cloud


Peter Panfil is Vice President & General Manager, Emerson Network Power’s Liebert AC Power. With more than 30 years of experience in embedded controls and power, he leads global market and product development for ENP’s Liebert AC Power business.


Data center managers are being challenged to maintain or improve availability in increasingly dense computing environments while reducing costs and augmenting efficiency. Some companies are looking to cloud computing and virtualization for help. Both strategies present certain advantages and opportunities, but supporting them requires a dedication to power—and the rest of the infrastructure, for that matter—so as not to compromise availability.

The Cloud and Downtime

Cloud computing has benefits that can’t be ignored: infrastructure delivered as a service, support for massive sharing, flexibility and a pay-as-you-go model. One of its downsides, however, shows up in the nearly weekly headlines about high-profile outages at the data centers that host cloud sites. These incidents illustrate a general problem—power remains an issue, but in the cloud, it’s an issue you can’t control. Cloud users are completely dependent on the provider’s infrastructure and the level of availability it delivers.

Challenges Presented by Virtualization

Virtualization is also valuable, making it possible to run multiple virtual machines on a single piece of physical equipment and share that computer’s resources across multiple environments. Virtualization dramatically improves the efficiency and availability of resources and applications in your organization.

Virtualization also pushes up server utilization rates, especially in blade server architectures, and the impact on the power delivery systems is notable. You could easily move from a low-density power application, say a single-phase circuit at 15-20 amperes, to a higher-density one. Virtualizing increases your utilization rate and, with it, the amount of power you may need to deliver to that application—potentially pushing you into a high-density power application.
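To see when a standard branch circuit becomes the limit, consider a rough back-of-the-envelope check. The 15-20 ampere single-phase circuit and the 500-watt server figure come from this article; the 208-volt supply and the 80 percent continuous-load derating are assumptions made here for illustration.

```python
# Sketch: how many servers fit on one branch circuit, before and
# after virtualization drives them toward full power draw.
# 208 V and the 80% continuous-load derating are illustrative assumptions.

VOLTS = 208
AMPS = 20
DERATING = 0.8  # continuous loads are commonly limited to 80% of breaker rating

usable_watts = VOLTS * AMPS * DERATING  # usable capacity per circuit

per_server_pre = 250   # ~200-300 W actual draw pre-virtualization (from the article)
per_server_post = 500  # near-nameplate draw post-virtualization (from the article)

servers_pre = int(usable_watts // per_server_pre)
servers_post = int(usable_watts // per_server_post)

print(f"Usable circuit capacity: {usable_watts:,.0f} W")
print(f"Servers per circuit pre-virtualization:  {servers_pre}")
print(f"Servers per circuit post-virtualization: {servers_post}")
```

Under these assumed numbers, the same circuit supports roughly half as many fully loaded servers as lightly loaded ones—which is the point at which a higher-density power distribution approach starts to look necessary.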

Pre-virtualization, servers typically operate at a 10-20 percent utilization rate. Post-virtualization, they run at 60, 70 and even 80 percent. An interesting thing happens when you push servers to those rates: they use all the compute power they’re capable of using, which is a good thing because you pay for that compute power, but there are infrastructure impacts. For example, if you have been drawing only 200-300 watts from a 500-watt server and then ramp it up to capacity, along with the rest of the 500-watt servers in the rack, the rack gets hotter as the power draw increases. It becomes necessary to review the cooling strategy as well.
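The thermal side of that shift can be sketched the same way. The 500-watt nameplate and the pre-virtualization draw come from this article; the rack density of 20 servers and the linear draw-versus-utilization model are simplifying assumptions (idle servers in practice still draw a significant base load).

```python
# Sketch: extra heat a rack must reject when virtualization pushes
# servers from partial to full power draw. Servers-per-rack and the
# linear power model are illustrative assumptions.

def rack_power_watts(servers, nameplate_w, draw_fraction):
    """Estimate rack power draw as a fraction of total nameplate rating."""
    return servers * nameplate_w * draw_fraction

SERVERS_PER_RACK = 20  # assumed rack density
NAMEPLATE_W = 500      # per-server rating from the article

pre = rack_power_watts(SERVERS_PER_RACK, NAMEPLATE_W, 0.5)   # ~250 W/server pre-virtualization
post = rack_power_watts(SERVERS_PER_RACK, NAMEPLATE_W, 1.0)  # near capacity post-virtualization

print(f"Rack draw pre-virtualization:  {pre:,.0f} W")
print(f"Rack draw post-virtualization: {post:,.0f} W")
print(f"Additional heat to remove:     {post - pre:,.0f} W")
```

Since essentially every watt delivered to the rack ends up as heat, doubling the draw roughly doubles the cooling load on that rack—hence the need to revisit the cooling strategy alongside the power plan.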

Availability Still A Major Concern

As IT pros assess cloud services and virtualization activities to push up their utilization rates and to garner more efficiency and availability, they routinely name availability as a major concern. They typically are at large companies and are used to having responsibility for all of the servers the business uses. The idea of putting parts of their business on rented, “black box”-style cloud services or virtualizing and reducing their number of assets makes them uneasy. They understand the risks and how much an outage can cost them.

A large percentage of outages are triggered by electrical issues that can be minimized or eliminated with adequate power solutions. The challenge is balancing the efficiency gains available from these power approaches against IT criticality and the need for availability.

So the developers in your IT department may love the fact that they can immediately dial up thousands of virtual servers from a cloud provider or code to an infinitely scalable platform like Google’s App Engine (the platform-as-a-service tier), but it’s up to IT management to strike the balance between rapidly developing applications at Internet scale and planning for the impact on the business that any cloud-related downtime will have. And the higher up in the cloud stack you go to rent your services, the more vulnerable you are to downtime because you’re more locked into that particular provider’s solution.

Users also need to understand and negotiate a Service Level Agreement with the providers so the proper level of availability and service is in place and user expectations are met. Before employing a cloud computing vendor or instituting a virtualization strategy, have a third-party provider conduct a data center audit and implement a complete availability, sustainability, maintainability, efficiency and growth plan.

Considering Power Helps Both Cloud and Virtual Environments

Ultimately, the cloud is here to stay, and virtualization is the first step many are taking toward greater energy savings within a data center. By conducting due diligence prior to implementation, power issues can be avoided and the threat of outages lessened.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
