
The Problem of Inefficient Cooling in Smaller Data Centers

While web-scale operators build the most efficient data centers, they account for only a tiny fraction of the industry’s overall energy consumption

Much of the conversation about data center inefficiency focuses on underutilized servers. A splashy New York Times article in 2012 highlighted poor server utilization rates and pollution from generator exhaust; we covered a Stanford study earlier this year that revealed just how widespread the problem of underutilized compute really is.

While those are important issues – the Stanford study found that the 10 million servers humming idly in data centers around the world are worth about $30 billion – there’s another big source of energy waste: data center cooling. It’s no secret that a cooling system can guzzle as much as half of a data center’s entire energy intake.

While web-scale data center operators like Google, Facebook, and Microsoft extol the virtues of their super-efficient designs and get a lot of press attention for it, an often-ignored fact is that these companies comprise only a small fraction of the world’s total data center footprint.

The data center on campus operated by a university IT department; the mid-size enterprise data center; the local government IT facility. These facilities, and others like them, are the data centers hardly anybody ever hears about. But they house the majority of the world’s IT equipment and consume the bulk of the energy used by the data center industry as a whole.

And they are usually the ones with inefficient cooling systems, either because the teams running them don’t have the resources for costly and lengthy infrastructure upgrades, or because those teams never see the energy bill and don’t feel the pressure to reduce energy consumption.

Data center engineering firm Future Resource Engineering identified efficiency measures in 40 data centers this year that would save more than 24 million kWh of energy in total. Most of those measures targeted cooling systems. Data center floor area in these facilities ranged from 5,000 square feet to 95,000 square feet.

The biggest culprit? Too much cooling. “The trend is still overcooling data centers,” Tim Hirschenhofer, director of data center engineering at FRE, said. And the fact that they’re overcooling is not lost on the operators. “A lot of customers definitely understand that they overcool. They know what they should be doing, but they don’t have the time or the resources to make the improvements.”

There are generally two reasons to overcool: redundancy and hot spots. Both are problems that can be addressed with proper air management systems. “You overcool because you don’t have good air management,” Magnus Herrlin, program manager for the High Tech Group at Lawrence Berkeley National Lab, said.

Because data center reliability always trumps energy efficiency, many data centers have redundant cooling systems that all blast full-time at full capacity. With proper controls and knowledge of the actual cooling needs of the IT load, you can keep redundant cooling units in standby mode and turn them back on automatically when they’re needed: when some primary capacity is lost or when the load increases.
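To make the idea concrete, here is a minimal sketch of that kind of standby logic. It assumes a hypothetical setup with named CRAC units, a fixed per-unit capacity, and a simple safety margin; none of the names, numbers, or interfaces come from the article or from any real building-management system.

```python
# Illustrative sketch only: a simplified standby-redundancy control loop.
# All unit names, thresholds, and capacities are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class CoolingUnit:
    name: str
    healthy: bool = True
    running: bool = False
    capacity_kw: float = 100.0  # assumed cooling capacity per unit

def required_capacity_kw(it_load_kw: float, safety_margin: float = 1.2) -> float:
    """Cooling needed to match the IT load, with an assumed 20% margin."""
    return it_load_kw * safety_margin

def control_step(primaries, standbys, it_load_kw):
    """Run healthy primary units first; bring standby units online only to
    cover failed primary capacity or load growth, instead of running everything."""
    demand = required_capacity_kw(it_load_kw)
    online = 0.0

    # Start with the primary units.
    for unit in primaries:
        unit.running = unit.healthy and online < demand
        if unit.running:
            online += unit.capacity_kw

    # Wake standby units only for the remaining shortfall.
    for unit in standbys:
        unit.running = unit.healthy and online < demand
        if unit.running:
            online += unit.capacity_kw

    return online

# Example: one primary fails, so exactly one standby is brought online.
primaries = [CoolingUnit("CRAC-1"), CoolingUnit("CRAC-2", healthy=False)]
standbys = [CoolingUnit("CRAC-3"), CoolingUnit("CRAC-4")]
online_kw = control_step(primaries, standbys, it_load_kw=150.0)
print(online_kw, [u.name for u in primaries + standbys if u.running])
```

In this toy example, one failed primary triggers exactly one standby unit rather than leaving all four running at full capacity, which is the behavior Hirschenhofer describes as the goal of proper controls.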

But most smaller data centers don’t have control systems in place that can do that. “Air management is not a technology that has been widely implemented in smaller data centers,” Herrlin said. Recognizing the problem of widespread inefficiency in smaller data centers, LBNL, one of the US Department of Energy’s many labs around the country, is focusing more and more on this segment of the industry. “We need to understand and provide solutions for the smaller data centers,” he said.

Overcooling is also a common but extremely inefficient way to fight hot spots. That’s when some servers run hotter than others, and the operator floods the room with enough cold air to make sure the handful of offending machines are happy. “That means the rest of the data center is ice-cold,” Herrlin said.

Another common problem is poor separation between hot and cold air. Without proper containment or with poorly directed airflow, hot exhaust air gets mixed with cold supply air, resulting in the need to pump more cold air to bring the overall temperature to the right level. It goes the other way too: cold air ends up getting sucked into the cooling system together with hot air instead of being directed to the air intake of the IT equipment, where it’s needed.
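A back-of-the-envelope calculation shows why that mixing forces overcooling. The numbers below are made up for illustration and assume the temperature at the server inlet is simply a flow-weighted average of supply and recirculated exhaust air:

```python
# Illustration with made-up numbers: how recirculated exhaust raises inlet temperature.
# Assumes equal air density and specific heat, so mixing is a flow-weighted average.

def inlet_temp_c(supply_c: float, exhaust_c: float, recirculation_fraction: float) -> float:
    """Server inlet temperature when a fraction of the inlet airflow is recirculated exhaust."""
    return (1 - recirculation_fraction) * supply_c + recirculation_fraction * exhaust_c

supply_c = 18.0    # CRAC supply air (hypothetical)
exhaust_c = 35.0   # server exhaust air (hypothetical)

for recirc in (0.0, 0.1, 0.2, 0.3):
    print(f"{recirc:.0%} recirculation -> inlet {inlet_temp_c(supply_c, exhaust_c, recirc):.1f} C")
```

With 30 percent recirculation in this example, server inlets sit about 5°C above the supply temperature, so without better containment the operator’s only recourse is to push the supply air even colder.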

While Google uses artificial intelligence techniques to squeeze every watt out of its data center infrastructure, many smaller data centers don’t have even basic air management capabilities. The large Facebook or Microsoft data centers have built extremely efficient facilities to power their applications, Herrlin said, “but they don’t represent the bulk of the energy consumed in data centers. That is done in much smaller data centers.”

Leonard Marx, manager of business development at CLEAResult, another engineering firm focused on energy efficiency, said hardly anybody has a perfectly efficient data center, and because the staff managing the data center are seldom responsible for the electric bill, the philosophy of “if it ain’t broke, don’t fix it” prevails.

Understandably, a data center manager’s first priority is reliability, and building more reliable systems through redundancy creates inefficiency. With a system that’s reliable but inefficient, and with a data center manager who is not responsible for energy costs, there’s little incentive to improve. Without changes that divert more attention in the organization to data center energy consumption, the problem of energy waste in the industry overall will persist, regardless of how efficient the next Facebook data center is.
