Focus on Fans Delivers Cost Savings on Cooling


Orlando Castro of Kaiser Permanente (left) watches RagingWire's Jim Kennedy make a point during the 2011 Data Center Efficiency Summit Friday in San Jose, Calif.

SAN JOSE, Calif. – Data center managers who are under pressure to cut costs may find that pressure is their ally – air pressure, that is.

Experts in data center cooling said Friday that minor adjustments to fan speed in air handling units can yield major savings, and in many cases can be more cost-effective than installing containment systems to control airflow.

New approaches could make it cheaper and easier for older data centers to improve the efficiency of their cooling systems, according to panelists at Friday’s Data Center Efficiency Summit sponsored by the Silicon Valley Leadership Group. That includes the ability to implement variable frequency drives (VFDs) in cooling systems where they previously have been seen as problematic.

Is the ‘Era of Containment’ Over?

Containment has been one of the major success stories in the effort to make data centers more energy efficient. By separating cool supply air and warm exhaust air, containment strategies allow users to slash the amount of airflow – and energy – needed to keep servers running.

In Friday’s session, a key pioneer in containment said it may no longer be the right solution for many data centers.

“Has the era of containment come to an end?” asked Mukesh Khattar, the Energy Director at Oracle. “I have thought about this very deeply. I was an early adopter of containment in 2004 and 2005. I’m beginning to think containment may not be the ideal solution, because there may be better ways.”

Refined management of airflow and fan power may provide a more compelling approach, Khattar said. A key strategy is the use of VFDs, which allow data center managers to adjust the speed of fans in the air handlers and air conditioners providing air to the data center.

In Friday’s panel, experienced end users shared case studies in which they discussed the benefits of managing air pressure within the data center.

  • Fortune Data Centers, which provides wholesale data center space, was able to cut cooling costs by 6 percent by adjusting the air pressure in server rooms at its San Jose facility. “It all wound up with fan power,” said Dan Jenkins, Director of Operations and Engineering at Fortune. “We found out that a lot of the rows were overpressurized and had too much CFM (cubic feet per minute, a key measure of airflow). We slightly reduced the fan speed over time. All rows remained cool and had positive pressure. If you do this across 70 air handlers, a little bit at a time, it adds up.”
  • Healthcare provider Kaiser Permanente conducted a detailed review of rack temperatures in its data centers, searching for “cold spots” where too much cooling was being applied, according to Orlando Castro, Program Manager for Data Center Facilities Services at Kaiser. The analysis allowed the company to reduce airflow to these areas, saving $450,000 a year in cooling costs.
  • Sacramento data center provider RagingWire uses infrared imaging to identify “hot spots,” which is an ongoing challenge because colocation customers make frequent changes that can impact thermal conditions in the data center. Jim Kennedy, Director of Critical Facility Engineering at RagingWire, emphasized the importance of real-time monitoring in managing conditions in the data center and adjusting airflow and fan speed to remove heat.

Perhaps the most intriguing case study involved the use of VFDs in direct expansion (DX) cooling systems, in which air passes over the cooling coil of an air conditioning unit (as opposed to the air being cooled by a chilled water loop). Some vendors have cautioned against the use of variable speed drives in mission-critical systems using DX cooling, according to Dennis Symanski, Senior Project Manager for the Electric Power Research Institute (EPRI). The concern, Symanski said, is that reducing the airflow across DX units could cause condensation and icing.

Big Savings in EPRI Data Center

Symanski thinks there’s plenty of data to suggest otherwise, and used his own organization’s data center as the testbed in a proof-of-concept using VFDs in a DX cooling system. The result? The EPRI team was able to reduce its fan power use by 77 percent, from 0.17 kW to 0.04 kW.
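The scale of that reduction tracks the fan affinity laws, under which fan power varies roughly with the cube of fan speed, which is why modest speed cuts produce outsized savings. A quick back-of-the-envelope sketch (the speed ratio below is back-calculated from the article's before/after power figures; EPRI did not report it):

```python
# Fan affinity law: power scales roughly with the cube of fan speed.
# Illustrative only; the speed ratio is inferred from the article's
# 0.17 kW -> 0.04 kW figures, not a number EPRI published.

def fan_power_at_speed(base_power_kw: float, speed_ratio: float) -> float:
    """Estimated fan power after slowing the fan to `speed_ratio` of full speed."""
    return base_power_kw * speed_ratio ** 3

base_kw = 0.17   # fan power at full speed (from the article)
after_kw = 0.04  # fan power after the VFD retrofit (from the article)

# Back-calculate the implied speed: cube root of the power ratio.
implied_speed = (after_kw / base_kw) ** (1 / 3)
print(f"Implied fan speed: {implied_speed:.0%} of full speed")  # roughly 62%
print(f"Power reduction: {1 - after_kw / base_kw:.0%}")         # roughly 76%
```

In other words, under the cube law a fan slowed to a bit over 60 percent of full speed uses only about a quarter of its full-speed power, which is consistent with the savings Symanski describes.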

“We took our own data center and put VFDs on the fans, and progressively dropped the fan speeds,” Symanski said. “We tested a range of different speeds. The only thing the IT guys noticed was that it was quieter in the data center.

“We’re continuing to do analysis, but so far this looks outstanding,” said Symanski. “Every time we make a transition (in fan speed), we check in the data center. There hasn’t been any problem whatsoever. This is an easy retrofit. We put in VFDs with a bypass on them so they can revert if they need to.”

Symanski says this strategy could be particularly useful for older, smaller data centers that may not have the budget for containment retrofits.

“These CRACs (computer room air conditioners) are at least 10 years old,” he said. “There’s a lot of legacy data centers out there that can do this. It pays for itself in weeks and months.”

EPRI will soon publish a case study, co-sponsored by the California Energy Commission. Symanski hopes the data from the case study will make it easier to justify using VFDs in DX units.

“I had to put my job on the line,” said Symanski. “This is the data center for our headquarters. We did enough paper analysis, and had a dialogue with the IT guys. It takes a lot of convincing. But the payback is really quick and there have been no issues.”

About the Author

Rich Miller is the founder and editor-in-chief of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


8 Comments

  1. This is an interesting story regarding VFDs in the data center. The basic premise is sound: VFDs are the best approach on data center air handlers. The problem is that it's been true for years; at the SuperNAP it's been standard since about '05. Also, VFDs have no serious relation to containment. Containment separates hot air from cold air, making the introduction of new air more efficient. Regardless of whether you're using VFDs, you still want your hot and cold air isolated.

  2. Not many people are aware of the impact of airflow management on efficiency, let alone on overall data center performance and longevity. This session was a good first step for the SVLG. It was good to hear firsthand accounts, but I can see that the industry is in desperate need of education on data center physics. The approaches that were discussed are not universal and could even do damage under the right conditions. Only a physics-based approach to airflow management is universally effective.

  3. I could not agree more with Mark's comment. Whilst VFDs can make a huge difference to running costs, they should not be installed in isolation. The greatest savings are to be had when combining VFDs with other strategies, including but not limited to cold aisle/hot aisle containment, blanking panels, brush strips, and sensible inlet temperatures. There is an excellent document written by The Green Grid which explains this thoroughly: http://www.thegreengrid.org/events/TheROIofCoolingSystemEnergyEfficiencyUpgrades.aspx

  4. The largest challenge in DC energy efficiency is educating operators about the complexity of airflows in data centers. My blog, DC Huddle, was set up specifically to provide that education. Here is a post on the pros and cons of underfloor air distribution: http://www.dchuddle.com/2011/underfloor-air-distribution/#more-207 And here is the first part of a series about airflow cooling control: http://www.dchuddle.com/2011/non-existent-data-center-cooling-control/

  5. Eric Swanson

    Agree with the other commenters. Fan speed and containment measures are both part of the greater airflow management domain, with containment most likely being an easier and safer first step in a retrofit. For smaller data centers that are facing cooling crises (not able to reduce fan speeds even if they could), it should also be the primary step. That was my situation a few years ago; I was able to grow IT load 60 percent over three years by doing the things that Martin mentioned. This is detailed (summary, presentation and case brief) at the Uptime Institute's Symposium website: http://symposium.uptimeinstitute.com/advanced-search/1224-intelligent-containment-beyond-hot-and-cold

  6. I agree with Mark Thiele. Minimizing mixing of warm and cold air has the highest priority in the cooling efficiency game. As Martin Patrick cited, it can be achieved by any number of methodologies, starting with the basics: blanking plates (always), brushed collars for floor cable openings (or better still, put the cables overhead), partial containment (a roof over the cold aisle, flexible curtains, etc.), chimney cabinets, and up to and including full containment solutions. The mechanical energy saving from the increased Delta-T, which results in warmer return air to the CRAC/CRAH, is substantial.

     However, as Symanski noted, reducing the airflow across DX units could cause condensation and icing. Simply adding VFDs to existing DX units without monitoring the evaporator coil temperatures and condition should be considered very carefully. CRAHs, on the other hand, are the ideal candidates for VSD solutions (by VSD I am referring to all types of variable speed drives, including VFDs and EC fans, which are much more efficient, especially at lower speeds, and can be retrofitted to many existing CRAHs). CRAHs, unlike CRACs, can vary the cooling proportionately and continuously, from 0-100%, via the chilled water valve, while CRACs can only cycle the internal compressor(s) on or off to meet the heat load. Lowering the airflow also increases the Delta-T across the cooling coil, further improving heat transfer and again improving mechanical (chiller) performance and energy efficiency.

     There is no question that VFDs offer energy savings and should be included if there is an appropriate retrofit opportunity (and should always be the first choice for new equipment), but to minimize the importance of good airflow management practices, such as containment, is not a good suggestion.

  7. So long as the variable speed/frequency drives aren't used to minimize airflow, I see no reason why they shouldn't be used.

  8. This is a very interesting take on airflow issues in the data center. It was always my understanding that VFD solutions were a large initial investment but also offered a good ROI; am I wrong in that thinking? Additionally, there are some innovative approaches to slowing down velocity in subfloor applications to increase static pressure and more evenly distribute the cold air throughout the raised floor, without the addition of VFDs and their inherent risks. Unfortunately, most companies are not aware of these methods, so they are limited in their thinking of how to solve their given inefficiencies. At the end of the day, no two data centers are alike, and although some of the root issues may be the same, you cannot take a cookie-cutter approach to solving today's data center issues. Each data center must be handled with a custom approach and innovative methodologies.