Energy Savings for Legacy Equipment – Realistic?


Jeff Klaus is the general manager of Data Center Manager (DCM) Solutions at Intel Corporation. Jeff leads a global team that is pioneering power- and thermal-management middleware, which is sold through an ecosystem of data center infrastructure management (DCIM) software companies and OEMs.

JEFF KLAUS
Intel

We should all be encouraged that energy conservation has been widely embraced by technology manufacturers across the spectrum. Server energy use, as measured by SPECpower, has dropped by 40 percent over the last five years, even as performance has increased nearly 10x over the same period.1 That is quite a testament to technology advances and IT design best practices. However, data center managers often ask us whether there is any way to cut the energy consumption of their legacy equipment. They don’t have the budget to replace inefficient hardware or to re-architect their solutions around virtualization or retrofits.

Our answer is simple: if you can’t afford to upgrade, you can’t afford NOT to introduce energy optimizations. Energy savings and legacy systems are not mutually exclusive, nor should they be examined in isolation. Efficient resource utilization comes from understanding how systems consume the shared resources of the data center, regardless of its size.

Getting Started: Gaining Visibility

Advanced energy management solutions provide real-time power and temperature data and automate the logging of historical performance in one place. Usually implemented as a middleware platform, they are generally non-invasive and support a broad range of interface protocols, allowing both legacy and current equipment to be monitored. The collected data then feeds real-time decision making and long-term planning.

The first step toward optimization is to understand your power use. At-a-glance thermal and power maps help identify the biggest power consumers and correlate their power and temperature with workloads. Even when you can’t upgrade or replace these systems, identifying the most inefficient infrastructure creates opportunities for affordable improvements that yield significant power savings:

  • “Ghost” servers and under-utilized servers can be identified and their workloads reassigned. Some servers can be put into low-power mode or even powered down during less busy periods. Before introducing energy management solutions, most data centers have approximately 15 percent of their servers sitting idle at any point in time, yet those servers are still drawing power.
  • Rows and racks can be rearranged to avoid hot spots that drive up cooling costs. Ongoing monitoring of temperature by row, rack, and individual server lets you spot changes before they escalate, while they can still be remedied proactively.
  • Airflow handlers can be positioned for maximum efficiency, potentially reducing the number of units required.
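The “ghost server” hunt in the first bullet can be sketched in a few lines. This is a hypothetical illustration that assumes utilization and power telemetry have already been collected; the server names, wattages, and the 5 percent idle threshold are invented for the example, not drawn from any particular DCIM product.

```python
# Hypothetical sketch: flag "ghost" candidates from monitoring samples.
# All names, wattages, and thresholds are illustrative assumptions.

IDLE_CPU_PCT = 5  # average CPU utilization below this counts as idle

servers = [
    # (name, avg_cpu_pct, avg_watts) from historical telemetry
    ("web-01", 42.0, 310.0),
    ("app-07",  2.1, 145.0),
    ("db-03",  61.5, 420.0),
    ("test-12", 0.4, 120.0),
]

def ghost_candidates(samples, idle_cpu=IDLE_CPU_PCT):
    """Return (name, watts) for servers that sit idle yet keep drawing power."""
    return [(name, watts) for name, cpu, watts in samples if cpu < idle_cpu]

candidates = ghost_candidates(servers)
wasted_watts = sum(w for _, w in candidates)
print(candidates)    # idle servers still on the power bill
print(wasted_watts)  # power that consolidation or power-down could reclaim
```

Even this toy pass shows why the visibility matters: the two idle machines above draw hundreds of watts between them while doing essentially nothing.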

Next Steps: More Control and More Savings

The same energy management solution that provides fine-grained visibility into real-time conditions should also let you introduce and enforce power policies that maintain optimal operating conditions. With a superior solution, power thresholds can be set, along with automated alerts and triggered responses that protect against equipment-damaging power spikes.
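A threshold policy of the kind described above can be sketched as a simple classifier over power readings. The cap, the 90 percent alert margin, and the function name are illustrative assumptions, not any vendor’s API.

```python
# Hypothetical sketch of a power-threshold policy with an early-warning
# margin. The cap and margin values are illustrative only.

RACK_POWER_CAP_WATTS = 5000  # threshold set by the operator
ALERT_MARGIN = 0.90          # warn at 90% of the cap

def evaluate(reading_watts, cap=RACK_POWER_CAP_WATTS, margin=ALERT_MARGIN):
    """Classify a power reading against the policy thresholds."""
    if reading_watts >= cap:
        return "cap-exceeded"  # trigger an automated response (e.g. power capping)
    if reading_watts >= cap * margin:
        return "alert"         # notify operators before a spike does damage
    return "ok"

print(evaluate(4200))
print(evaluate(4600))
print(evaluate(5100))
```

The margin band is the important design choice: it gives operators (or an automated response) time to act before the hard cap is breached.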

Maintaining a consistent temperature has a major impact on the reliability and lifespan of data center equipment. Armed with historical trending data, the IT and facilities teams can intelligently define and maintain the optimum operating temperature for their particular systems. Instead of over-cooling, the temperature can often be raised, because monitoring and alerts guard against hot spots.

Data center managers report that cooling systems account for almost 50 percent of the data center energy budget, and raising the data center temperature by as little as one degree can lower cooling energy costs by four percent.2 A compelling business case for energy management can be built on this fact alone.
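The arithmetic behind that business case is easy to check. The figures below assume a hypothetical $1M annual energy spend, with cooling at 50 percent of the budget and 4 percent cooling savings per degree raised, applied linearly:

```python
# Back-of-the-envelope math for the claim above: cooling ~= 50% of the
# energy budget, and each degree of set-point increase saves ~4% of
# cooling energy. The dollar figures are illustrative assumptions.

annual_energy_cost = 1_000_000.0  # hypothetical total energy spend ($/year)
cooling_share = 0.50              # cooling ~= half the energy budget
savings_per_degree = 0.04         # ~4% of cooling cost per degree raised

cooling_cost = annual_energy_cost * cooling_share
for degrees in (1, 2, 3):
    saved = cooling_cost * savings_per_degree * degrees
    print(degrees, saved)  # annual savings for each degree of set-point increase
```

On these assumptions, a single degree is worth $20,000 a year, which is why set-point changes are usually the first optimization a monitoring program pays for.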

Besides adjusting power and temperature thresholds, energy management solutions can help IT maximize rack densities. With protection from power spikes and elimination of hot spots in place, each rack can be loaded to its optimal density while still adhering to power and cooling requirements.
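Loading a rack against a power budget can be sketched as a first-fit pass over measured per-server draw. The budget, the wattages, and the greedy smallest-first strategy here are illustrative assumptions, not a description of how any particular product places equipment.

```python
# Hypothetical sketch: pack servers into a rack without exceeding its
# power budget, using measured per-server draw. Figures and the greedy
# smallest-first strategy are illustrative only.

RACK_BUDGET_WATTS = 8000

def load_rack(server_watts, budget=RACK_BUDGET_WATTS):
    """Greedily admit servers (smallest draw first) until the budget is hit."""
    loaded, total = [], 0.0
    for name, watts in sorted(server_watts, key=lambda s: s[1]):
        if total + watts <= budget:
            loaded.append(name)
            total += watts
    return loaded, total

servers = [("a", 450.0), ("b", 380.0), ("c", 900.0), ("d", 520.0)]
names, draw = load_rack(servers, budget=1500)
print(names, draw)  # which servers fit, and the rack's resulting draw
```

The point of the sketch is that density decisions become simple arithmetic once measured draw, rather than nameplate ratings, is available per server.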

Facing the Future With Energy Facts in Hand

Eventually, every legacy system becomes impractical, with operating costs that skyrocket once end-of-life support expires. By putting an energy management solution in place, the data center team gains the insight to make smart decisions about decommissioning systems during migrations and upgrades.

Implementing a collection point for energy management data is an excellent start toward a long-term power management strategy. Combining real-time and historical trending information with business processes and best practices keeps ongoing energy requirements to a minimum, and it paves the way for the longer-range decisions every data center eventually faces. In this “sooner or later” scenario, the “sooner” option provides a longer payback period and a faster time to savings.

When data center managers ask us about best practices for saving energy in a data center with legacy equipment, we answer by showing them how their systems currently consume shared resources in the data center. This data typically provides the necessary clarity and insight that enables them to identify areas with the potential for the biggest returns.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.


Endnotes:

1. Intel’s tests verify a 40 percent reduction in server energy consumption since 2008. See: http://www.datacenterknowledge.com/archives/2012/06/12/server-efficiency-aligning-energy-use-with-workloads/

2. From “DCM Overview 1212,” slide eleven: “Data center managers can save 4 percent in energy costs for every degree of upward change in the set point.” (Sun Microsystems) http://www.datacenterknowledge.com/archives/2008/10/14/google-raise-your-data-center-temperature


One Comment

  1. Great points on the importance of visibility in energy management! Data centers with the ability to truly understand their power usage at an individual server and application level are equipped with the information to easily measure, analyze, and fix those areas of inefficiency. Power-controlling servers can also eliminate unnecessary power and cooling costs for idle servers.