Reducing Server Power Consumption


Clemens Pfeiffer is the CTO of Power Assure and is a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies.


Servers in data centers waste a substantial amount of energy. The reason is that servers are deployed and configured for peak capacity, performance and reliability, usually at the expense of efficiency. Such waste unnecessarily increases capital and operational expenditures, and can result in finite resources (particularly power and space) being exhausted, thereby creating a situation where the organization might outgrow its data center(s).

However, there are several steps IT managers can (and should) take to improve overall server efficiency, sometimes dramatically, without adversely impacting capacity, performance or reliability. Here are the four steps that afford the highest return on investment.

Consolidate and Virtualize as Many Servers as Possible

Poor server utilization is one of the biggest sources of waste in most data centers. Consolidating and/or virtualizing as many servers as possible can increase overall utilization from around 10 percent (typical of dedicated servers) to between 20 percent and 30 percent. The significant reductions in both capital and operational expenditures have motivated most organizations to virtualize at least some of their servers, and those with aggressive efforts have discovered another major benefit: the ability to reclaim a considerable amount of both rack space and stranded power.
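The arithmetic behind consolidation savings is straightforward: if total useful work is conserved, raising utilization from 10 percent to 30 percent means the same load fits on roughly a third of the machines. A minimal sketch, with illustrative (not measured) server counts and power figures:

```python
import math

def consolidation_estimate(servers, util_before, util_after, watts_per_server):
    """Estimate servers and nameplate power reclaimed by consolidation.

    Assumes total useful work is conserved: the aggregate load that kept
    `servers` machines at `util_before` utilization can instead run on
    fewer machines operating at `util_after` utilization.
    """
    servers_after = math.ceil(servers * util_before / util_after)
    watts_saved = (servers - servers_after) * watts_per_server
    return servers_after, watts_saved

# Hypothetical example: 300 dedicated servers at 10% utilization,
# drawing ~400 W each, consolidated onto hosts targeted at 30%:
after, saved = consolidation_estimate(300, 0.10, 0.30, 400)
# after == 100 hosts online, saved == 80000 W of draw reclaimed
```

The reclaimed watts are what the article calls "stranded power": capacity provisioned for racks that no longer need it.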

AOL, for example, recently reported annual savings of $5 million from “decommissioning” about one-fourth of its servers worldwide, including $2.2 million in OS licenses and $1.65 million in energy bills.

Continuously Match Server Capacity to the Actual Load

Even the best-virtualized and most recently refreshed server configurations waste power during periods of low application demand. Total server power consumption can be reduced by up to 50 percent by matching online capacity (measured in cluster size) to actual load in real time. Runbooks can be used to automate the steps involved in resizing clusters and/or de-/re-activating servers, whether on a predetermined schedule or dynamically in response to changing loads.
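The cluster-resizing logic described above can be sketched as follows. The capacity figures, headroom value, and step names are illustrative assumptions, not any specific runbook product's API:

```python
import math

def target_cluster_size(current_load, per_server_capacity,
                        headroom=0.25, min_servers=2):
    """How many servers should be online for the current load.

    `headroom` keeps spare capacity online to absorb sudden spikes;
    `min_servers` preserves redundancy even when load is near zero.
    (Both default values are illustrative assumptions.)
    """
    needed = current_load * (1 + headroom) / per_server_capacity
    return max(min_servers, math.ceil(needed))

def resize_steps(online, target):
    """Yield hypothetical runbook steps to move from `online` servers
    to `target`, so idle machines can be powered down or brought back."""
    if target > online:
        for _ in range(target - online):
            yield "power_on_and_join_cluster"
    else:
        for _ in range(online - target):
            yield "drain_connections_then_power_off"

# 1,000 requests/s against servers rated for 200 requests/s each:
# 1000 * 1.25 / 200 = 6.25, so 7 servers should stay online.
```

The same function serves both modes the article mentions: evaluated on a schedule it produces a predetermined resize plan, and evaluated against live metrics it resizes dynamically.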

The savings here are not trivial. Both the U.S. Department of Energy and Gartner have observed that the cost to power a typical server over its useful life can now exceed the original capital expenditure. Gartner also notes that it can cost over $50,000 annually to power a single rack of servers. So reducing the power consumed while servers are “idle” or clusters are lightly utilized holds the potential to deliver significant savings while continuing to satisfy application performance objectives. Furthermore, dynamic management can increase application capacity well beyond the original cluster allocation, supporting even unforeseen spikes in demand and thereby dramatically increasing the application’s reliability.
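To see how lifetime power cost can rival the purchase price, a back-of-the-envelope calculation helps. The wattage, electricity rate, and PUE overhead factor below are common rule-of-thumb assumptions, not figures from the DOE or Gartner reports cited above:

```python
def lifetime_energy_cost(avg_watts, years, usd_per_kwh, pue=1.8):
    """Energy cost of running one server for its useful life.

    `pue` accounts for cooling and power-distribution overhead on top
    of the IT load; 1.8 is a commonly cited facility average and is
    an assumption here, as are the example inputs below.
    """
    kwh = avg_watts / 1000 * 24 * 365 * years * pue
    return kwh * usd_per_kwh

# A 400 W server over a 4-year life at $0.10/kWh:
cost = lifetime_energy_cost(400, 4, 0.10)  # roughly $2,500
```

At higher utility rates, or for denser machines, the result quickly overtakes a commodity server's purchase price, which is the DOE/Gartner observation in the paragraph above.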

Determine Actual Power Consumption under Various Loads

Another obvious way to reduce power consumption is to utilize more energy-efficient equipment. Most IT departments are, therefore, starting to improve energy efficiency when adding capacity and/or during routine technology refresh cycles. To help IT managers make more fully-informed decisions, Underwriters Laboratories created a new performance standard (UL2640) based on the PAR4 Efficiency Rating. PAR4 provides an accurate method for determining both absolute and normalized (over time) energy efficiency for both new and existing equipment.

According to UL, “With the introduction of the new standard, IT professionals for the first time can make valid comparison between servers, better calculate total cost of server ownership, and make better decisions about the life and management of their servers.” To calculate server performance under the UL2640 standard, a series of standardized tests is performed, including a Power-On Spike Test, a Boot Cycle Test and a Benchmark. The Benchmark determines the server’s power consumption under various loads and measures transactions per second per watt, a particularly meaningful metric for comparing legacy servers with newer ones, and new models with one another, when making purchasing decisions. It also allows data center managers to use actual idle and peak power consumption figures when allocating space and power.
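The comparison metric reduces to a simple ratio of work delivered per watt. The actual benchmark workload and measurement procedure are defined by UL2640/PAR4; the sketch below only illustrates how the ratio is used, with hypothetical throughput and power numbers:

```python
def transactions_per_watt(transactions_per_sec, avg_watts):
    """Efficiency figure in the spirit of PAR4: useful work per watt.

    The measurement methodology (load levels, instrumentation) comes
    from the UL2640 standard; this function just computes the ratio.
    """
    return transactions_per_sec / avg_watts

# Hypothetical numbers: a legacy server vs a current-generation one.
legacy = transactions_per_watt(5_000, 400)   # 12.5 tx/s per watt
modern = transactions_per_watt(20_000, 500)  # 40.0 tx/s per watt
improvement = modern / legacy                # 3.2x more work per watt
```

Note that the newer server in this example draws more absolute power yet is far more efficient, which is exactly why a per-watt metric, rather than nameplate wattage, is the right basis for refresh decisions.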

Load-balance by “Following the Moon”

Although many organizations now operate redundant data centers to satisfy business continuity needs, very few currently take full advantage of this powerful configuration. Having multiple, strategically located data centers enables loads to be shifted to where power is currently the most stable and the least expensive. Because power is invariably the most abundant and least expensive at night (when outside air temperature is also at its lowest), such a “follow the moon” strategy can result in considerable savings. Integrating virtualized and load-balanced applications across multiple data centers allows data center managers to shift and shed capacity on demand to maximize application availability while minimizing power and operating costs. The same functionality can also be used during demand response requests to benefit from utility incentives supporting the stability of the power grid, ultimately increasing the reliability of the applications.
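The site-selection core of a "follow the moon" strategy can be sketched as below. The site names, UTC offsets, and tariffs are entirely illustrative; a real implementation would also weigh grid stability, capacity headroom, and data-locality constraints:

```python
def pick_site(sites, utc_hour):
    """Choose the data center where power is cheapest right now.

    `sites` maps a site name to (utc_offset_hours, day_price, night_price)
    in $/kWh; night is taken here as local 22:00-06:00. All names and
    tariffs below are illustrative assumptions, not real facilities.
    """
    def price(info):
        offset, day, night = info
        local = (utc_hour + offset) % 24
        return night if (local >= 22 or local < 6) else day

    return min(sites, key=lambda name: price(sites[name]))

sites = {
    "us_east":  (-5, 0.12, 0.07),
    "eu_west":  (+1, 0.15, 0.09),
    "ap_south": (+9, 0.11, 0.06),
}

# At 14:00 UTC it is 23:00 local in "ap_south", so its night
# tariff makes it the cheapest place to run the load.
```

Running this selection on a schedule, and shifting virtualized workloads accordingly, is the mechanism the paragraph above describes; the same hook can divert load away from a site during a utility demand-response event.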

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
