Aaron Rallo is the founder and CEO of TSO Logic. Aaron has spent the last 15 years building and managing large-scale transactional solutions for online retailers, with both hands-on and C-level responsibility in data centers around the world.
The word “holistic” comes up frequently in the data center industry, with industry leaders and trade publications urging operators to take a more comprehensive view in their journey to improve energy efficiency.
This thinking has led to significant advances in energy efficiency at the facility level: better cooling systems, power distribution units, uninterruptible power supplies, backup power generation, and so on. These advances have undoubtedly reduced energy waste in data centers. In fact, the industry's average Power Usage Effectiveness (PUE) dropped from 2.7 in 2007 to 1.65 in 2013.
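To see what those PUE figures mean, recall that PUE is simply total facility energy divided by the energy delivered to IT equipment. A quick sketch with illustrative numbers (the 1,000 kW facility below is a made-up example, not a figure from this article):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative: a 1,000 kW facility delivering 606 kW to IT gear
print(round(pue(1000, 606), 2))  # → 1.65
```

A PUE of 1.65 means that for every watt reaching the IT equipment, another 0.65 watts go to cooling, power conversion, and other overhead; a perfectly efficient facility would approach 1.0.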
Even so, this facilities-centric approach has created one very large blind spot, stopping short of a truly holistic view of the data center. There is an equally large opportunity to improve efficiency on the IT side itself, but it has been largely overlooked.
Perhaps you’re thinking that IT energy consumption is simply tied to the size of the data center, and is otherwise set in stone. If you can’t imagine that IT equipment actually consumes a lot of energy unnecessarily, take a look at some of these numbers:
- 52 percent or more of the power coming into a data center is used directly by IT equipment.
- Server utilization rates are typically very low, currently averaging in the 6–12 percent range.
- An idle server that is doing nothing at all can still draw 60 percent of its maximum power.
- One watt of power saved at the server level can generate as much as 2.84 watts of savings along the entire data center power chain.
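To make these figures concrete, here is a rough sketch of the facility-wide waste an idle fleet can represent. The fleet size and per-server maximum draw below are assumed for illustration; the 60 percent idle draw and 2.84x power-chain multiplier are the figures cited above:

```python
# Hypothetical fleet: estimate facility-wide waste attributable to idle servers.
SERVERS = 10_000            # assumed fleet size
MAX_DRAW_W = 500            # assumed per-server maximum draw, in watts
IDLE_FRACTION = 0.60        # an idle server can still draw ~60% of max power
CASCADE_MULTIPLIER = 2.84   # facility-wide watts saved per watt saved at the server

idle_draw_w = MAX_DRAW_W * IDLE_FRACTION              # 300 W per idle server
server_waste_kw = SERVERS * idle_draw_w / 1000        # 3,000 kW at the servers
facility_savings_kw = server_waste_kw * CASCADE_MULTIPLIER
print(f"{facility_savings_kw:,.0f} kW")  # → 8,520 kW
```

Under these assumptions, powering down genuinely idle machines would be worth megawatts across the whole power chain, which is why the per-server numbers matter at scale.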
Relatively speaking, a single server may not use much energy. But multiply that small amount by thousands or tens of thousands of servers, and it's clear that a considerable portion of the energy drawn by data centers is simply wasted powering and supporting idle servers. That is a very valuable opportunity to reduce energy consumption and operating costs.
Compounding this waste is the fact that individual servers are growing more power hungry. The International Data Corporation (IDC) reports that energy consumption per server is actually rising by an average of nine percent annually. (Of course, the computing power of each server is growing at the same time, but since data demand continues to explode, the absolute number of servers just keeps growing too.)
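Nine percent a year compounds quickly. A one-line sketch of what that growth rate implies over a five-year hardware refresh cycle (the five-year horizon is an assumption for illustration):

```python
# Per-server energy consumption growing ~9% annually, compounded over 5 years.
growth_rate = 1.09
years = 5
multiplier = growth_rate ** years
print(round(multiplier, 2))  # → 1.54, i.e. roughly 54% more draw per server
```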
With all of this in mind, it’s imperative that data centers look at both the facility and the IT infrastructure when they track their energy performance—that is, if they want to remain competitive.
On the Cusp of Change
The good news is that major change is already under way in the industry, due in large part to the introduction and growing acceptance of software tools that allow a deeper understanding of data center energy consumption. Although these solutions span a wide range of types and capabilities, the general emphasis is on detailed, real-time measurement and monitoring, opening the door to better understanding and finer-grained control.
Still, the majority of these software tools focus primarily on the facilities side of the equation. To finally start addressing the IT side, a new breed of software technology called application-aware power management (AAPM) is gaining traction. AAPM solutions monitor a data center's current and incoming workload at the server and application level, allowing dynamic control of the power state of individual servers in direct response to application demand. As a result, far less power is wasted on servers that aren't performing any real work, with no infrastructure changes or negative impact on performance.