Nitin Mishra, VP, Product Management & Solutions Engineering, Netmagic Solutions Pvt. Ltd.
This column is part two of a two-part series on Power Usage Effectiveness (PUE). See Going Beyond PUE for Data Center Efficiency for part one.
In today’s data center, what else needs to be measured along with Power Usage Effectiveness (PUE)? PUE is best used for tracking the impact of changes made to the data center infrastructure. But there are other metrics and methods used to reduce power usage.
While it is important for an organization to reduce losses in the power system and the power drawn by the support infrastructure, the bulk of a data center's power consumption goes to the IT load itself. If the organization can reduce the IT load, it will reduce the overall power required for the data center.
Cascading Energy Savings
In fact, reducing the IT load has a compounding effect, because it also reduces the losses in the power system and the power required for the support infrastructure. This is known as the cascade effect.
If we assume that one watt of power can be saved at the IT load, it will reduce losses in the server power supply (AC to DC conversion), reduce losses in the power distribution (PDU transformers, losses in the wiring itself), reduce power losses in the UPS, reduce the amount of cooling required and, finally, reduce power losses in the building transformer and switchgear. The end result of the cascade effect is that saving one watt at the IT load may actually result in two or more watts of overall energy savings.
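The arithmetic behind the cascade can be sketched in a few lines. The stage efficiencies and cooling overhead below are illustrative assumptions, not measured values:

```python
# Cascade effect: one watt saved at the IT load avoids losses at every
# upstream stage of the power chain. Stage efficiencies and the cooling
# overhead are illustrative assumptions, not measured values.
STAGE_EFFICIENCIES = {
    "server power supply (AC-DC)": 0.90,
    "power distribution (PDU, wiring)": 0.95,
    "UPS": 0.92,
    "building transformer and switchgear": 0.97,
}
COOLING_OVERHEAD = 0.5  # assumed watts of cooling per watt of heat removed

def utility_watts(it_watts: float) -> float:
    """Watts drawn from the utility to deliver `it_watts` to the IT gear,
    plus the cooling needed to remove the resulting heat."""
    w = it_watts
    for eff in STAGE_EFFICIENCIES.values():
        w /= eff  # each upstream stage must carry its own losses
    return w * (1 + COOLING_OVERHEAD)

print(f"1 W saved at the IT load saves about {utility_watts(1.0):.2f} W overall")
# about 1.97 W, consistent with "two or more watts" of overall savings
```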
Powering the IT load forms a major chunk of the overall electricity cost in a data center; hence, for any energy-efficiency initiative to succeed, an organization should first look at reducing the IT load.
Reducing the IT Load
There are a number of ways to reduce the IT load in the data center. These include:
- Decommission or repurpose servers which are no longer in use
- Power down servers when not in use
- Enable power management
- Replace inefficient servers
- Virtualize or consolidate servers
Decommission or Repurpose Servers
Data center managers often struggle to identify unused, lightly used, or "ghost" servers. One approach is to use CPU utilization as a measure of whether a server is being actively used. However, this does not always hold true: a server may appear busy when it is actually performing only secondary or tertiary processing not directly related to its primary services.
For example, the primary service of an e-mail server is to provide e-mail. The same server may also provide monitoring, backup, and antivirus services, but those are secondary or tertiary. If the server stops being accessed for e-mail, those supporting services may no longer be necessary, yet the server may continue to run them. From a CPU-utilization standpoint, the unused server still appears busy, so CPU utilization alone is an ineffective measure.
Another way of determining whether a server is actively being used or not is Server Compute Efficiency (ScE). The ScE metric measures CPU usage, disk and network I/O, incoming session-based connection requests and interactive log-ins to determine if the server is providing primary services. The ScE metric can provide data center managers with the ability to determine which servers are being actively used for primary services vis-à-vis ‘ghost’ servers which may be good candidates for virtualization or consolidation.
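A minimal sketch of an ScE-style check follows. The sample fields and the classification rule are assumptions for illustration; the real metric's exact inputs and weighting may differ:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One observation window on a server (field names are illustrative)."""
    cpu_pct: float
    disk_io_ops: int
    net_io_ops: int
    inbound_sessions: int    # incoming session-based connection requests
    interactive_logins: int

def provides_primary_services(s: Sample) -> bool:
    """Count a window as primary-service activity only when the server is
    actually being reached from outside; background jobs alone (high CPU,
    no inbound traffic) do not qualify. This rule is an assumption."""
    return s.inbound_sessions > 0 or s.interactive_logins > 0

def sce(samples: list) -> float:
    """ScE as the percentage of windows showing primary-service activity."""
    if not samples:
        return 0.0
    active = sum(provides_primary_services(s) for s in samples)
    return 100.0 * active / len(samples)
```

A ghost server that looks busy, with high CPU and heavy disk I/O from backups and antivirus scans but no inbound sessions or logins, scores zero and is flagged as a consolidation candidate.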
Power Down Servers When Not in Use
While the majority of servers in data centers may be utilized around the clock, some are used only during certain parts of the day or week. These servers should be turned off when not in use to avoid the power they would otherwise draw.
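A back-of-the-envelope estimate shows what powering off such a server is worth. The wattage and schedule below are illustrative assumptions:

```python
# Savings from powering off a part-time server outside business hours.
# All figures are illustrative assumptions, not measurements.
IDLE_DRAW_W = 200        # assumed watts the server burns while idle
HOURS_ON_PER_WEEK = 50   # assumed 10 h/day, 5 days/week duty cycle
HOURS_OFF_PER_WEEK = 24 * 7 - HOURS_ON_PER_WEEK  # 118 h/week powered off

kwh_saved_per_year = IDLE_DRAW_W / 1000 * HOURS_OFF_PER_WEEK * 52
print(f"{kwh_saved_per_year:.0f} kWh/year avoided")  # ~1227 kWh
```

By the cascade effect described earlier, the facility-level savings would be roughly double this figure once power-chain losses and cooling are included.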
Enable Power Management
To reduce server power usage, data center managers should enable Demand-Based Switching (DBS), which lowers processor frequency and voltage during periods of low utilization, to attain significant savings in the data center.
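Why a frequency and voltage step-down saves so much follows from the rough rule that dynamic CPU power scales with capacitance times voltage squared times frequency. The operating points below are illustrative, not vendor specifications:

```python
# Demand-Based Switching lowers processor voltage and frequency when
# demand is low. Dynamic CPU power scales roughly with C * V^2 * f, so a
# modest step-down gives an outsized power reduction. Values illustrative.
def dynamic_power(cap: float, volts: float, freq_ghz: float) -> float:
    """Classic dynamic-power approximation (switching losses only)."""
    return cap * volts**2 * freq_ghz

full = dynamic_power(1.0, 1.30, 3.0)  # assumed full-speed operating point
low = dynamic_power(1.0, 1.10, 2.0)   # assumed stepped-down point
print(f"{100 * (1 - low / full):.0f}% less dynamic power at low demand")  # 52%
```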
Replace Inefficient Servers
Once a server is purchased, its price is often treated as a sunk cost, while the ongoing operational costs of running it (power, cooling, software licensing, and so on) are not taken into account. A new multi-core server may replace as many as 15 single-core servers, saving as much as 93 percent of the power usage. In addition to the power savings, software licensing and other maintenance costs can be considerably reduced, data center cooling costs fall, and valuable rack space can be reclaimed.
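The 93 percent figure follows directly from the 15-to-1 consolidation ratio, as a quick check shows. The per-server wattage is an assumption and cancels out of the percentage:

```python
# The 15-to-1 consolidation claim: 15 single-core servers replaced by one
# multi-core server of comparable draw. WATTS_EACH is an assumption and
# cancels out; the percentage depends only on the 15:1 ratio.
OLD_SERVERS = 15
WATTS_EACH = 300

before = OLD_SERVERS * WATTS_EACH   # 4500 W across the old fleet
after = 1 * WATTS_EACH              # 300 W for the replacement
savings_pct = 100 * (before - after) / before
print(f"{savings_pct:.1f}% power saved")  # 93.3%
```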
Virtualize or Consolidate Servers
There are various compelling reasons to virtualize servers. From a business continuity viewpoint, virtual machines can be isolated from the physical system in the event of failures, improving system availability, and parallel virtual environments allow an easier transition to a backup facility. From an energy efficiency viewpoint, virtualization provides numerous opportunities for energy savings.
Virtual machines provide granular control over workloads, which can be migrated among active servers as demand shifts. Overall, virtualization can increase server CPU utilization by 40 to 60 percent, and as CPU utilization rises, the energy efficiency of the server power supply rises as well.
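The power-supply effect can be illustrated with an assumed efficiency curve; the points below are roughly typical of commodity supplies, not measured data. Linear interpolation shows efficiency climbing as the host gets busier:

```python
# Power supplies are least efficient at light load, so consolidating VMs
# onto fewer, busier hosts also moves each PSU up its efficiency curve.
# The (load fraction, efficiency) points below are illustrative only.
CURVE = [(0.10, 0.70), (0.20, 0.82), (0.50, 0.90), (1.00, 0.87)]

def psu_efficiency(load_frac: float) -> float:
    """Linear interpolation over the assumed efficiency curve."""
    pts = sorted(CURVE)
    if load_frac <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if load_frac <= x1:
            return y0 + (y1 - y0) * (load_frac - x0) / (x1 - x0)
    return pts[-1][1]

print(f"{psu_efficiency(0.15):.2f}")  # 0.76 on a lightly loaded host
print(f"{psu_efficiency(0.50):.2f}")  # 0.90 after consolidation
```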
Comprehensive Energy Management Needed
The Power Usage Effectiveness metric provides valuable information for measuring data center energy efficiency. But it represents only one component in a comprehensive energy management program. While data center managers are under tremendous pressure to reduce the PUE, doing so without a full understanding of power usage in the data center might actually be detrimental.
Beyond PUE, data center managers must consider metrics such as energy usage at the IT-device level and server compute efficiency to effect sustained reductions in energy usage.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.