(Photo by Michael Bocchieri/Getty Images)

The Problem of Inefficient Cooling in Smaller Data Centers

A lot of the conversation about data center inefficiency focuses on underutilized servers. The splashy New York Times article of 2012 called out poor server utilization rates and pollution from generator exhaust; a Stanford study we covered earlier this year revealed just how widespread the problem of underutilized compute really is.

While those are important issues – the Stanford study found that the 10 million servers humming idly in data centers around the world are worth about $30 billion – there’s another big source of energy waste: data center cooling. It’s no secret that a cooling system can guzzle as much as half of a data center’s entire energy intake.
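
To put that figure in perspective, here is a quick back-of-the-envelope sketch (with hypothetical numbers, not data from the article) showing how a cooling system that consumes half the facility's intake drives up Power Usage Effectiveness (PUE), the ratio of total facility energy to IT energy:

```python
# Hypothetical numbers, not figures from the article: how a cooling share of
# roughly half the facility's intake translates into Power Usage Effectiveness.
total_facility_kw = 1000   # assumed total power draw of a mid-size facility
cooling_kw = 500           # cooling consuming half of the total intake
other_overhead_kw = 100    # assumed UPS, lighting, and distribution losses
it_load_kw = total_facility_kw - cooling_kw - other_overhead_kw   # 400 kW left for IT

pue = total_facility_kw / it_load_kw   # PUE = total facility energy / IT energy
print(f"PUE: {pue:.2f}")               # 2.50 -- 1.5 W of overhead per watt of compute
```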

While web-scale data center operators like Google, Facebook, and Microsoft extol the virtues of their super-efficient designs and get a lot of press attention for it, what's often ignored is that these companies account for only a small fraction of the world's total data center footprint.

The campus data center operated by a university IT department; the mid-size enterprise data center; the local government IT facility. These facilities, and others like them, are the data centers hardly anybody ever hears about. But they house the majority of the world's IT equipment and consume the bulk of the energy used by the data center industry as a whole.

And they are usually the ones with inefficient cooling systems, either because the teams running them don’t have the resources for costly and lengthy infrastructure upgrades, or because those teams never see the energy bill and don’t feel the pressure to reduce energy consumption.

Data center engineering firm Future Resource Engineering found ways to improve efficiency in 40 data centers this year that would save a combined total of more than 24 million kWh. Most of those improvements targeted cooling systems. Data center floor area in these facilities ranged from 5,000 square feet to 95,000 square feet.

The biggest culprit? Too much cooling. “The trend is still overcooling data centers,” Tim Hirschenhofer, director of data center engineering at FRE, said. And the fact that they’re overcooling is not lost on the operators. “A lot of customers definitely understand that they overcool. They know what they should be doing, but they don’t have the time or the resources to make the improvements.”

There are generally two reasons to overcool: redundancy and hot spots. Both are problems that can be addressed with proper air management systems. “You overcool because you don’t have good air management,” Magnus Herrlin, program manager for the High Tech Group at Lawrence Berkeley National Lab, said.

Because data center reliability always trumps energy efficiency, many data centers have redundant cooling systems that all run full-time at full capacity. With proper controls and knowledge of the actual cooling needs of the IT load, you can keep redundant cooling units in standby mode and bring them back online automatically when they're needed: when some primary capacity is lost or when the load increases.
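
As a rough illustration of the control logic involved (a minimal sketch with made-up setpoints and unit counts, not any particular vendor's system), the idea is to leave redundant units idle until a primary unit fails or the supply temperature drifts past a deadband:

```python
# Minimal sketch of the control idea, with made-up thresholds and unit counts;
# real facilities use building-management controllers, not a script like this.
SETPOINT_C = 24.0   # target supply-air temperature
DEADBAND_C = 2.0    # drift tolerated before standby capacity is activated

def standby_units_to_start(supply_temp_c, primaries_online, primaries_total):
    """Return how many standby cooling units should be brought online."""
    needed = primaries_total - primaries_online   # cover any failed primary units
    if supply_temp_c > SETPOINT_C + DEADBAND_C:   # load grew beyond primary capacity
        needed += 1
    return needed

# One primary unit down and the room running warm: start two standby units.
print(standby_units_to_start(supply_temp_c=26.5, primaries_online=2, primaries_total=3))  # 2
```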

But most smaller data centers don’t have control systems in place that can do that. “Air management is not a technology that has been widely implemented in smaller data centers,” Herrlin said. Recognizing the problem of widespread inefficiency in smaller data centers, LBNL, one of the US Department of Energy’s many labs around the country, is focusing more and more on this segment of the industry. “We need to understand and provide solutions for the smaller data centers,” he said.

Overcooling is also a common but extremely inefficient way to fight hot spots. That’s when some servers run hotter than others, and the operator floods the room with enough cold air to make sure the handful of offending machines are happy. “That means the rest of the data center is ice-cold,” Herrlin said.

Another common problem is poor separation between hot and cold air. Without proper containment or with poorly directed airflow, hot exhaust air gets mixed with cold supply air, resulting in the need to pump more cold air to bring the overall temperature to the right level. It goes the other way too: cold air ends up getting sucked into the cooling system together with hot air instead of being directed to the air intake of the IT equipment, where it’s needed.
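
A simple mixing calculation (hypothetical numbers, assuming mass-weighted blending of the two streams) shows why that recirculation is so costly: once hot exhaust bleeds into the supply stream, the cooling plant has to deliver much colder air to keep server inlets at the same temperature.

```python
# Hypothetical illustration of recirculation, assuming simple mass-weighted mixing.
supply_temp_c = 18.0    # air delivered by the cooling units
exhaust_temp_c = 35.0   # hot air leaving the back of the racks
recirculation = 0.3     # assumed fraction of exhaust air leaking into the cold aisle

# Temperature the servers actually see at their intakes after mixing.
inlet_temp_c = (1 - recirculation) * supply_temp_c + recirculation * exhaust_temp_c
print(f"Effective server inlet temperature: {inlet_temp_c:.1f} C")   # 23.1 C, not 18.0 C
```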

While Google uses artificial intelligence techniques to squeeze every watt out of its data center infrastructure, many smaller data centers don’t have even basic air management capabilities. The large Facebook or Microsoft data centers have built extremely efficient facilities to power their applications, Herrlin said, “but they don’t represent the bulk of the energy consumed in data centers. That is done in much smaller data centers.”

Leonard Marx, manager of business development at Clearesult, another engineering firm focused on energy efficiency, said hardly anybody has a perfectly efficient data center, and because the staff managing the data center are seldom responsible for the electric bill, the philosophy of “if it ain’t broke, don’t fix it” prevails.

Understandably, a data center manager’s first priority is reliability, and building more reliable systems through redundancy creates inefficiency. With a system that’s reliable but inefficient, and with a data center manager who is not responsible for energy costs, there’s little incentive to improve. Without changes that divert more attention in the organization to data center energy consumption, the problem of energy waste in the industry overall will persist, regardless of how efficient the next Facebook data center is.


About the Author

San Francisco-based business and technology journalist. Editor in chief at Data Center Knowledge, covering the global data center industry.

Comments

  1. Can anyone help me understand why there is so little talk about the role that software can play in power utilisation? It is feasible to raise efficiency whilst lowering power consumption. It is 100% conceivable that compute platforms can be right-sized for VM workloads using a rules- and constraints-based decision engine, and that disparate servers and hypervisor estates can be homogenised across all compute assets, combining these into a single resource pool accessed by a unified, scalable interface (up to and beyond 100 000 VMs!).

  2. Mike

    Cold aisle containment may be a solution.

  3. The author is right: overcooling, air pressure control, high air speeds... it's all a waste of energy. The solution is available and very simple: keep air speeds low, then air pressure control isn't necessary and you can steer the climate by simply resupplying the cold air used by the servers. Devices that control the climate based on this cold-air handling principle are available, and an article about it was published before: http://www.datacenterknowledge.com/archives/2014/11/27/air-circulation-in-data-centers-rethinking-your-design/

  4. Wilsom

    Good article. After rightsizing the cooling system of smaller data centers (<4 or 5 tons), the challenge is equipment selection (split vs. portable vs. roof/ceiling vs. local rack/self-contained vs. central-shared vs. window/wall, etc.); space use, cost, airflow, and efficiency, especially on colder days (in NYC, in my case), are decisive.