Focus on Fans Delivers Cost Savings on Cooling

Cooling experts say minor adjustments to fan speed in air handling units can yield major savings, and in many cases can be more cost-effective than installing containment systems to control airflow.

Orlando Castro of Kaiser Permanente (left) watches RagingWire's Jim Kennedy make a point during the 2011 Data Center Efficiency Summit Friday in San Jose, Calif.

SAN JOSE, Calif. - Data center managers who are under pressure to cut costs may find that pressure is their ally - air pressure, that is.

Experts in data center cooling said Friday that minor adjustments to fan speed in air handling units can yield major savings, and in many cases can be more cost-effective than installing containment systems to control airflow.

New approaches could make it cheaper and easier for older data centers to improve the efficiency of their cooling systems, according to panelists at Friday's Data Center Efficiency Summit sponsored by the Silicon Valley Leadership Group. That includes the ability to implement variable frequency drives (VFDs) in cooling systems where they previously have been seen as problematic.

Is the 'Era of Containment' Over?

Containment has been one of the major success stories in the effort to make data centers more energy efficient. By separating cool supply air and warm exhaust air, containment strategies allow users to slash the amount of airflow - and energy - needed to keep servers running.

In Friday's session, a key pioneer in containment said it may no longer be the right solution for many data centers.

"Has the era of containment come to an end?" asked Mukesh Khattar, the Energy Director at Oracle. "I have thought about this very deeply. I was an early adopter of containment in 2004 and 2005. I'm beginning to think containment may not be the ideal solution, because there may be better ways."

Refined management of airflow and fan power may provide a more compelling approach, Khattar said. A key strategy is the use of VFDs, which allow data center managers to adjust the speed of fans in the air handlers and air conditioners providing air to the data center.
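The leverage behind VFDs comes from the fan affinity laws: airflow scales linearly with fan speed, while fan power scales with the cube of speed, so a modest speed reduction produces an outsized power savings. A minimal illustrative sketch (the fan ratings below are hypothetical, not figures from the panel):

```python
# Fan affinity laws: airflow ~ speed, power ~ speed^3.
# Hypothetical base ratings, for illustration only.

def affinity_scaled(speed_ratio: float, base_cfm: float, base_kw: float):
    """Return (airflow_cfm, power_kw) at a given fraction of rated fan speed."""
    airflow = base_cfm * speed_ratio        # flow scales linearly with speed
    power = base_kw * speed_ratio ** 3      # power scales with the cube of speed
    return airflow, power

# Slowing a fan to 80% of rated speed keeps 80% of the airflow
# but cuts power to about half (0.8^3 = 0.512).
airflow, power = affinity_scaled(0.8, base_cfm=10_000, base_kw=5.0)
print(f"Airflow: {airflow:.0f} CFM, Power: {power:.2f} kW")
```

This cube relationship is why the panelists could trim speed "a little bit at a time" across many air handlers and still see meaningful savings on the power bill.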

In Friday's panel, experienced end users shared case studies in which they discussed the benefits of managing air pressure within the data center.

  • Fortune Data Centers, which provides wholesale data center space, was able to cut cooling costs by 6 percent by adjusting the air pressure in its server rooms in its San Jose facility. "It all wound up with fan power," said Dan Jenkins, Director of Operations and Engineering at Fortune. "We found out that a lot of the rows were overpressurized and had too much CFM (cubic feet per minute, a key measure of airflow). We slightly reduced the fan speed over time. All rows remained cool and had positive pressure. If you do this across 70 air handlers, a little bit at a time, it adds up."
  • Healthcare provider Kaiser Permanente conducted a detailed review of rack temperatures in its data centers, searching for "cold spots" where too much cooling was being applied, according to Orlando Castro, Program Manager for Data Center Facilities Services at Kaiser. The analysis allowed the company to reduce airflow to these areas, saving $450,000 a year in cooling costs.
  • Sacramento data center provider RagingWire uses infrared imaging to identify "hot spots," which is an ongoing challenge because colocation customers make frequent changes that can impact thermal conditions in the data center. Jim Kennedy, Director of Critical Facility Engineering at RagingWire, emphasized the importance of real-time monitoring in managing conditions in the data center and adjusting airflow and fan speed to remove heat.
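The adjust-and-verify approach these operators describe, lowering fan speed gradually while confirming rows stay cool and positively pressurized, amounts to a simple feedback loop. A hypothetical sketch (the setpoint, gain, and speed limits are illustrative assumptions, not values from any of these data centers):

```python
def adjust_fan_speed(speed: float, pressure_pa: float,
                     setpoint_pa: float = 12.0, gain: float = 0.01,
                     min_speed: float = 0.5, max_speed: float = 1.0) -> float:
    """One step of a proportional controller on supply-plenum pressure.

    If the measured pressure is above the setpoint (over-pressurized,
    too much CFM), nudge the fan speed down; if below, nudge it up.
    Speed is clamped so the fan never stops or overspeeds.
    """
    error = pressure_pa - setpoint_pa
    new_speed = speed - gain * error  # over-pressurized -> slow the fan
    return max(min_speed, min(max_speed, new_speed))

# An over-pressurized row (20 Pa vs. a 12 Pa setpoint) gets a small trim:
print(adjust_fan_speed(0.9, pressure_pa=20.0))
```

Applied a little at a time across dozens of air handlers, as in the Fortune example, small per-step trims like this are how the savings accumulate while positive pressure is preserved.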

Perhaps the most intriguing case study involved the use of VFDs in direct expansion (DX) cooling systems, in which air passes over the cooling coil of an air conditioning unit (as opposed to the air being cooled by a chilled water loop). Some vendors have cautioned against the use of variable speed drives in mission-critical systems using DX cooling, according to Dennis Symanski, Senior Project Manager for the Electric Power Research Institute (EPRI). The concern, Symanski said, is that reducing the airflow across DX units could cause condensation and icing.

Big Savings in EPRI Data Center

Symanski thinks there's plenty of data to suggest otherwise, and used his own organization's data center as the testbed in a proof-of-concept using VFDs in a DX cooling system. The result? The EPRI team was able to reduce its fan power use by 77 percent, from 0.17 kW to 0.04 kW.
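A back-of-envelope check (my arithmetic, not a figure EPRI reported): under the cube-law relationship between fan speed and power, a drop from 0.17 kW to 0.04 kW implies the fans were slowed to roughly 62 percent of full speed.

```python
# Infer the implied speed reduction from EPRI's reported fan power numbers,
# assuming power scales with the cube of fan speed (fan affinity law).
base_kw, reduced_kw = 0.17, 0.04
power_ratio = reduced_kw / base_kw        # ~0.235, i.e. a 77% power reduction
speed_ratio = power_ratio ** (1 / 3)      # ~0.62 of full speed
print(f"Power ratio: {power_ratio:.2f}, implied speed: {speed_ratio:.2f}")
```

In other words, a roughly 38 percent cut in fan speed is enough to account for the 77 percent power savings, which is consistent with Symanski's description of progressively dropping fan speeds rather than making drastic changes.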

"We took our own data center and put VFDs on the fans, and progressively dropped the fan speeds," Symanski said. "We tested a range of different speeds. The only thing the IT guys noticed was that it was quieter in the data center.

"We're continuing to do analysis, but so far this looks outstanding," said Symanski. "Every time we make a transition (in fan speed), we check in the data center. There hasn't been any problem whatsoever. This is an easy retrofit. We put in VFDs with a bypass on them so they can revert if they need to."

Symanski says this strategy could be particularly useful for older, smaller data centers that may not have the budget for containment retrofits.

"These CRACs (computer room air conditioners) are at least 10 years old," he said. "There's a lot of legacy data centers out there that can do this. It pays for itself in weeks and months."

EPRI will soon publish a case study, co-sponsored by the California Energy Commission. Symanski hopes the data from the case study will make it easier to justify using VFDs in DX units.

"I had to put my job on the line," said Symanski. "This is the data center for our headquarters. We did enough paper analysis, and had a dialogue with the IT guys. It takes a lot of convincing. But the payback is really quick and there have been no issues."
