Taking a Leap Forward in Efficiency with Real-Time PUE

Patrick Flynn is lead sustainability strategist at IO, equipping customers with the tools they need to improve data center performance.

PUE 101

Introduced by The Green Grid in 2007, Power Usage Effectiveness (PUE) has become the de facto standard metric for tracking the energy efficiency of data centers. While data center designs have evolved tremendously to manage data more efficiently, the approach to PUE has remained constant, even in the face of ongoing concerns about its limitations. These concerns make clear that the industry needs to establish a new methodology for measuring energy efficiency.

PUE is the ratio of the total energy that comes into a data center to the energy that reaches and is used by the Information Technology (IT) equipment. The energy that reaches the computing equipment is considered productive, while energy used for infrastructure (e.g., cooling, lighting, security, system inefficiency) is auxiliary and viewed as waste. This waste is the place to look for efficiency gains. Data centers strive for a PUE of 1.0, which represents a hypothetical, perfectly efficient data center where energy is used exclusively to power IT and there is no energy loss or overhead in the system.
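The ratio itself is simple to express. Here is a minimal sketch in Python; the energy figures are hypothetical, chosen only for illustration:

```python
def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.

    A value of 1.0 would mean every kilowatt-hour entering the facility
    reached the IT equipment; anything above 1.0 is infrastructure overhead.
    """
    if it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_energy_kwh / it_energy_kwh

# Example: 1,730 kWh enters the facility, 1,000 kWh reaches the IT gear.
print(pue(1730, 1000))  # 1.73
```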

Troubles with Today’s PUE Approach

Though widely used, PUE has industry-acknowledged shortcomings in terms of accuracy. For starters, PUE does not measure how well a data center performs its end purpose, which is to conduct digital work. In reality, the data center’s job is not to deliver energy to IT equipment or infrastructure, but to do useful and productive computing; the fact that power reaches the IT equipment does not mean the equipment is doing good work.

The second inaccuracy is that PUE rarely provides a reliable way to compare data centers against one another. Without controlling for variables like location, size, design, and data sets, PUE cannot tell you with accuracy which data center is performing better.

Accuracy aside, there are serious (but rarely discussed) questions about the utility of today’s PUE. Typically, PUE is measured for an entire facility. A mixed-use building may house any number of functions, such as data centers, labs, and offices. In these environments, determining the power usage of just the data center is difficult, especially when systems share power or cooling infrastructure.

PUE measurement is also a challenge in a colocation data center, which is a mixed-use facility in the sense that it hosts multiple customers. To Tenant A, the neighbors’ power, efficiency, equipment, and overhead are unknown, and do not contribute to the energy that reaches Tenant A’s IT gear. For PUE to be useful to Tenant A, it must be specific to their defined infrastructure and data systems, even (and perhaps especially) if they are a small fraction of a larger shared data center infrastructure.

Further reducing the utility of today’s PUE is the fact that it is typically calculated retroactively: energy consumption is tallied over a historical period and an implied average power usage is calculated for that period. Such an approach masks any volatility in PUE and gives operators no timely feedback.
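To see how a period-averaged figure can hide swings, consider this small sketch; all power readings are hypothetical:

```python
# Hypothetical hourly readings (kW): total facility draw and IT draw.
facility_kw = [500, 520, 610, 480]
it_kw       = [350, 300, 330, 345]

# Retroactive approach: one ratio over the whole period...
avg_pue = sum(facility_kw) / sum(it_kw)

# ...versus hour-by-hour ratios, which reveal the swings the average hides.
hourly_pue = [f / i for f, i in zip(facility_kw, it_kw)]

print(round(avg_pue, 2))                      # 1.59
print([round(p, 2) for p in hourly_pue])      # [1.43, 1.73, 1.85, 1.39]
```

The single averaged number looks steady at 1.59, while the interval values swing from 1.39 to 1.85, the kind of volatility an operator would want to investigate.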

The Solution: Real-Time PUE

With a data center industry mandate for greater energy efficiency, we need to evolve to newer, enhanced measurement models. Though a retroactive, building-averaged PUE may serve to confirm overall progress, it falls short of helping to pinpoint opportunities for improvement. Service providers must strive to provide customers information that allows them to improve performance and make better business decisions. This is why IO is increasing the usefulness of energy efficiency measurement through an evolved methodology called real-time PUE, as measured through the IO.OS operating system.

Real-time PUE measures power efficiency instantaneously and provides a level of granularity down to the individual server. This level of specificity is made possible by our software defined data centers, which are able to capture live data from across the infrastructure, enabling monitoring, measurement, benchmarking and continuous improvement.
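IO.OS itself is proprietary, but the underlying arithmetic of a module-level, instantaneous PUE can be sketched as follows; the server names and power figures are hypothetical:

```python
# Hypothetical live telemetry for one data module: per-server IT draw plus
# the module's own infrastructure overhead (cooling, fans, distribution
# losses), all in kW at the same instant.
server_kw = {"srv-01": 4.2, "srv-02": 3.8, "srv-03": 4.5}
module_overhead_kw = 5.0

it_total_kw = sum(server_kw.values())
module_pue = (it_total_kw + module_overhead_kw) / it_total_kw
print(round(module_pue, 2))  # 1.4
```

Because the inputs are live readings for one module rather than an energy total for the whole building, the ratio can be recomputed continuously and drilled down to a single server’s contribution.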

Additionally, our modular data center designs allow system administrators to pinpoint where power is being used and where improvements are possible. By taking the analytical lens down to the server level, we are one step closer to tying PUE to actual digital work.

For example, a company may want to assess the performance of just one data module, or how much power is being used to cool its systems. With real-time PUE, this is possible. By contrast, the traditional PUE calculation (summing all electrical input to the entire system and attempting to allocate a portion of that energy to the module of interest) may prove impossible to carry out, or may misrepresent PUE for that particular module. The result is misallocated costs and poorly informed operational decisions.

Initial conversations with companies are generating strong interest in implementing real-time PUE methodology. There is a hunger to understand data center performance all the way into the applications layer. When our industry achieves this level of measurement, we will have arrived at a truly comprehensive measure of efficiency, inclusive of the work output.

Working from Where We Are

We recognize that traditional PUE measurement has a place in current customer assessments of data center efficiency and environmental sustainability. To inspire the industry to rethink data center design, IO partnered with Arizona Public Service (APS) on a comparative, independent third party study. The goal of this study was to level-set PUE, by evaluating both construction-based data centers and modular data centers. IO runs its own data centers and operates both traditional and modular environments, so we were in a position to reliably evaluate the power efficiency differential between Data Center 1.0 and 2.0 designs.

This month, APS released its report, showing that IO’s manufactured, modular data center approach achieves 19 percent energy cost savings compared to the traditional construction-based environment. In its research, APS analyzed 12 months of data from both IO.Anywhere modules and the traditional build-out at the IO.Phoenix multi-tenant data center.

APS monitored PUE for calendar year 2012 and found the Data Center 1.0 environment had a PUE of 1.73, while the Data Center 2.0 modular environment had a PUE of 1.41. The portion of PUE above 1.0 denotes energy not going to IT equipment, and that’s where efficiencies can be found. We reduced this portion from 0.73 to 0.41 in our switch to Data Center 2.0 technology, a 44 percent reduction in energy spent on infrastructure versus IT equipment. Over the course of the year, IO achieved annual savings of $200,000 per MW of average IT power within the modular build-out.
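The 44 percent figure follows directly from the two reported PUE values, and the arithmetic can be checked in a few lines:

```python
pue_traditional, pue_modular = 1.73, 1.41

# Overhead is the portion of PUE above 1.0 (energy not reaching IT gear).
overhead_before = pue_traditional - 1.0   # 0.73
overhead_after  = pue_modular - 1.0       # 0.41
reduction = (overhead_before - overhead_after) / overhead_before
print(f"{reduction:.0%}")  # 44%
```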

Just as our industry pushes the boundaries with data center technology innovations, we must all continue to evaluate and challenge the status quo around measuring data center efficiency. Data center infrastructure technology has gone through a transformation over the last decade, yet there is a gap between the way the industry measures PUE and the more useful information needed to continue driving performance.

At IO, we believe real-time PUE is the logical next step for the industry, especially as modular data center designs become the standard. But it is just one critical step on the path to achieving breakthrough cost and efficiency gains. While providing data center users with more useful efficiency measurement enables better decision-making, it does not by itself recommend actions to take. This is where data center analytics come into play, a very interesting area of innovation for the industry. Evolving our category’s thinking on PUE is an instrumental step in improving data center efficiency and will serve CIOs, CFOs, facilities managers, end-users, and the planet.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
