Posted by Rich Miller on February 25, 2011, at 8:00 am in Cooling
Why can’t servers run in the desert with no air conditioning? And why can’t data center managers automatically ramp processor power usage up and down to match their workloads? Those questions were debated by some of the world’s leading data center experts Wednesday at the Technology Convergence Conference in Santa Clara, Calif. The surprising answer: some of these scenarios are closer to reality than you think.
Take the data center in the desert. Subodh Bapat, the former VP of Energy Efficiency at Sun Microsystems, shared an anecdote about a data center operator in the Middle East that wanted to test server failure rates if it ran its facility at 45 degrees Celsius – that’s 113 degrees Fahrenheit.
Testing projected an annual equipment failure rate of 2.45 percent at 25 degrees C (77 degrees F), with an increase of 0.36 percentage points for every additional degree. Thus, 45C would likely result in an annual failure rate of 11.45 percent. “Even if they replaced 11 percent of their servers each year, they would save so much on air conditioning that they decided to go ahead with the project,” said Bapat. “They’ll go up to 45C using full air economization in the Middle East.”
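The projection above is a simple linear extrapolation, which a short sketch can make concrete. Note that the quoted figures don’t quite reconcile: 2.45 percent plus 0.36 points per degree over the 20-degree rise works out to 9.65 percent, not the 11.45 percent cited, so one of the article’s numbers is likely rounded or transcribed differently. The model below uses the per-degree figure as quoted.

```python
# Linear failure-rate model built from the figures quoted above.
# Caveat: with these inputs the 45C result is 9.65%, not the 11.45%
# the article cites -- the quoted per-degree increment may differ.

BASE_TEMP_C = 25.0   # baseline intake temperature (deg C)
BASE_RATE = 2.45     # annual failure rate (%) at the baseline
PER_DEGREE = 0.36    # added percentage points per additional degree C

def annual_failure_rate(temp_c: float) -> float:
    """Projected annual equipment failure rate (%) at temp_c."""
    return BASE_RATE + PER_DEGREE * (temp_c - BASE_TEMP_C)

print(round(annual_failure_rate(45.0), 2))
```

The economics in Bapat’s anecdote then reduce to comparing the cost of replacing that extra slice of servers each year against the chiller energy avoided.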
One of the largest Internet e-commerce operations is pursuing a similar strategy in the U.S. eBay will use fresh air cooling in its new modular data center in Phoenix, where the average high temperature exceeds 100 degrees in the summer. Dean Nelson, Senior Director of Global Data Center Services at eBay, says the servers can handle it if the facility is designed correctly.
“The reality is that the manufacturers baby the servers,” said Nelson. “That’s the truth.”
Raising the baseline temperature inside the data center can save energy used to operate chillers (air conditioning systems) by enabling more extensive use of “free cooling,” the use of fresh air from outside the data center. Free cooling is typically implemented in cool climates, but eBay isn’t alone in hoping to extend the areas where it can be implemented.
“If we can get the manufacturers to design for higher temperatures, you could operate IT equipment anywhere without a chiller,” said Bill Tschudi, program manager at Lawrence Berkeley National Laboratory.
But nudging the thermostat higher is only appropriate for companies with a strong understanding of the cooling conditions in their facility. Just about all of the panelists at the Technology Convergence Conference were eager for more tools to provide granular management of server performance and power usage.
Nelson noted recent research from Data Center Pulse that identified potentially significant power savings from dynamically adjusting the clock speed of processors to match IT workloads. The group’s testing suggests that overclocking and underclocking processors as workloads fluctuate can reduce a server’s energy use by as much as 18 percent.
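Why underclocking saves energy at all is worth a sketch. Dynamic CPU power scales roughly with capacitance times voltage squared times frequency; for a fixed amount of work, runtime stretches as frequency drops, so the real savings come from the lower voltage that a lower clock permits. The model below is an illustration of that physics under assumed figures, not Data Center Pulse’s measured methodology.

```python
# Illustrative dynamic voltage and frequency scaling (DVFS) model.
# Dynamic power is roughly P = C * V^2 * f. For a fixed job of N
# cycles, runtime is N / f, so dynamic energy per job is C * V^2 * N:
# frequency alone cancels out, and savings come from the voltage
# reduction that underclocking allows. The capacitance, cycle count,
# and voltages below are assumptions for illustration only.

def dynamic_energy(capacitance: float, voltage: float, cycles: float) -> float:
    """Dynamic switching energy (joules) to execute `cycles` cycles."""
    return capacitance * voltage ** 2 * cycles

C = 1e-9        # effective switched capacitance (farads), assumed
CYCLES = 1e9    # fixed amount of work, in CPU cycles, assumed

full = dynamic_energy(C, 1.0, CYCLES)    # nominal operation at 1.0 V
scaled = dynamic_energy(C, 0.9, CYCLES)  # underclocked, dropped to 0.9 V

savings = 1 - scaled / full
print(f"dynamic-energy savings: {savings:.0%}")  # 19%
```

Under these assumed numbers a 10 percent voltage reduction yields roughly 19 percent dynamic-energy savings, in the same neighborhood as the 18 percent figure the group reported.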
“There are huge potential reductions (in energy usage) available,” Nelson said. “Why can’t we have control over that chip? Why can’t we have the controls to give us a gas pedal, so that we can throttle up and throttle back?”
“You should be able to scale down your energy use,” agreed Mukesh Khattar, Energy Director at Oracle Corp. “That will give you more savings than anything else. If your servers are doing zero percent work, they should be using zero percent power. But they’re not. (Power usage at idle) is closer to 80 percent.”
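Khattar’s complaint is the classic energy-proportionality problem: if idle draw is 80 percent of peak and power rises roughly linearly with load, a lightly loaded server still burns most of its peak power. The 80 percent idle figure is from his quote above; the linear model and the sample utilization are assumptions for illustration.

```python
# Illustrative energy-proportionality model for the point above:
# with idle power at 80% of peak (quoted) and a linear rise to peak
# at full load (assumed), light loads still draw near-peak power.

IDLE_FRACTION = 0.80  # idle power as a fraction of peak (from the article)

def power_fraction(utilization: float) -> float:
    """Power draw as a fraction of peak at a given utilization (0.0-1.0)."""
    return IDLE_FRACTION + (1 - IDLE_FRACTION) * utilization

print(power_fraction(0.0))  # 0.8 -- doing zero work, drawing 80% of peak
print(round(power_fraction(0.2), 2))  # a server at 20% load (assumed)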
It’s possible to tweak servers to match processor function to workloads, as Data Center Pulse proved in its testing. “There’s lots of knobs inside a server design you can tweak to enhance its performance and manage its power,” said Bapat.
The processor perspective was shared by Henry Wong, Senior Staff Technologist at Intel, who was sympathetic to data center executives’ yearnings for advanced power management tools. But that doesn’t mean it’s a simple problem to solve.
“Having many knobs makes it really difficult for IT managers,” said Wong. “To provide this level of control, you’ve got to automate it.”
Wong favors an approach that uses policies that can be translated into granular server settings. “That’s one of the technologies we’re trying to build in,” said Wong. “To get to that nirvana requires a lot of effort. And no one policy is going to fit everyone. We’re trying to build in these heuristics and artificial intelligence. But it’s still a few years away.”
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2011/02/25/whats-next-hotter-servers-with-gas-pedals/
URLs in this post:
 modular data center in Phoenix: http://www.datacenterknowledge.com/archives/2010/12/15/edi-wins-ebay-modular-design-contest/
 dynamically adjusting the clock speed: http://www.datacenterknowledge.com/archives/2010/10/27/the-next-efficiency-frontier-underclocking/
 Rich Miller: http://www.datacenterknowledge.com/archives/author/richm/
Copyright © 2012 Data Center Knowledge. All rights reserved.