Letting Your Servers Adjust the Temperature

What if your servers could talk to your building management system and tell it when to adjust the temperature in the data center? They can, according to researchers at Lawrence Berkeley Labs (LBL) and Intel, who have just completed a proof-of-concept test at an Intel data center in Santa Clara, Calif.

Cooling systems are a major focus of efforts to improve the energy efficiency of data centers. A key challenge is supplying just enough cooling to keep servers at optimal working temperatures within racks. Many operators waste energy by overcooling the entire room to compensate for a lack of granular control. A successful solution needs to collect temperature data from key areas of the data center and provide a way to adjust the cooling system as conditions change.

Where to Measure?

The "where" of the temperature readings is important. Many data center operators adjust their cooling based on the temperature of air as it returns to the computer room air handler (CRAH). Recent approaches to cooling monitoring have placed sensors throughout the data center, which report back to management software using wireless signals.

The Intel/LBL team set out to capture energy savings by keeping the data center slightly warmer, a strategy affirmed by the cooling industry group ASHRAE, which recently raised its recommended upper limit from 77 to 80.6 degrees Fahrenheit. That recommendation is based on temperatures at the server inlet, not the CRAH.

Cross-Protocol Conversation

Most new servers, however, include sensors that can detect inlet temperature. The Intel/LBL team set out to integrate this sensor data into the building management system. The IT monitoring system uses the Simple Network Management Protocol (SNMP), while the facilities systems use the BACnet (Building Automation and Control Networks) protocol.

"It turns out it's not too difficult to tie into a building management system," said Bill Tschudi of LBL. "All you need is a conversion from one protocol to another. There are at least three vendors that are connecting these protocols to talk to one another."

The next step was to decide how best to fine-tune the temperature settings. One of the challenges in data center air conditioning is addressing temperature fluctuations between the top and bottom of seven-foot-tall racks. A long-standing problem in raised-floor data centers - where cold air from a plenum under the floor is pumped into the equipment area - is the tendency for "hot spots" near the top of racks.

Sensors at Top and Bottom

The Intel/LBL study design sought to address this problem using temperature readings at the top and bottom of the rack. "What we were doing was using the servers at the bottom of the rows to control the chilled water (temperature), while the servers at the top of the rack control the fan speed," said Tschudi.
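A rough sketch of that split control scheme appears below. The setpoints, proportional gain and clamping ranges are illustrative assumptions, not values from the study; a production loop would average readings across many servers and use proper PID tuning.

```python
# A rough proportional-control sketch of the scheme Tschudi describes.
# All setpoints, gains and ranges below are illustrative assumptions.

BOTTOM_SETPOINT_F = 75.0  # target inlet temp at the bottom of the rack
TOP_SETPOINT_F = 80.0     # target inlet temp at the top of the rack
GAIN = 0.5                # proportional gain (illustrative only)


def control_step(bottom_inlet_f: float, top_inlet_f: float,
                 chw_setpoint_f: float, fan_speed_pct: float):
    """One control step: bottom-of-rack readings trim the chilled-water
    setpoint, top-of-rack readings trim the CRAH fan speed."""
    # Bottom servers running warm -> ask for colder chilled water.
    chw_setpoint_f -= GAIN * (bottom_inlet_f - BOTTOM_SETPOINT_F)
    # Top servers running warm -> push more air up the rack face.
    fan_speed_pct += GAIN * (top_inlet_f - TOP_SETPOINT_F)
    # Clamp to plausible operating ranges (assumed, not from the study).
    chw_setpoint_f = max(42.0, min(60.0, chw_setpoint_f))
    fan_speed_pct = max(20.0, min(100.0, fan_speed_pct))
    return chw_setpoint_f, fan_speed_pct


# Example: bottom of the rack slightly warm, top of the rack too warm.
print(control_step(76.2, 82.5, chw_setpoint_f=50.0, fan_speed_pct=60.0))
# -> (49.4, 61.25): chilled water drops a bit, fan speed rises
```

The division of labor matches Tschudi's description: bottom-of-rack sensors steer the chilled water temperature, while top-of-rack sensors steer the airflow.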

The CRAH airflow setting is critical. If the airflow isn't strong enough, the tops of the racks don't get enough cool air. If the airflow is too strong, air is driven past the top of the rack and can cause recirculation that mixes hot and cold air in the upper section of the cold aisle.

CFD Modeling, Then Field Test

The group tested its concept using computational fluid dynamics (CFD) modeling (see video demo) and published its findings in a white paper earlier this summer. The next step was to set up a testbed using three racks of servers in Intel's Santa Clara facility, which validated the group's methodology.

"I think in the long run the goal is going to be to integrate IT with the building," said Tschudi. "You'll have a lower cost involved if you can eliminate one of the control systems."

That strategy would require some adjustments by IT staff, who would need to work more closely with the facilities staff and yield key controls to the building management system. In addition, many data center operators are wary of giving automated systems too much latitude to adjust data center environmental conditions.

But Tschudi said the data center industry needs to continue to seek new ways to make data centers easier to manage, cheaper to run and more energy efficient.

"We're frequently faced with the challenge of getting people to try new things," said Tschudi. "But it's what we do."
