What’s Next? Hotter Servers With ‘Gas Pedals’


Subodh Bapat, the former VP of Energy Efficiency at Sun Microsystems, participates in a panel on server hardware at the Technology Convergence Conference.

Why can’t servers run in the desert with no air conditioning? And why can’t data center managers automatically ramp processor power usage up and down to match their workloads? Those questions were debated by some of the world’s leading data center experts Wednesday at the Technology Convergence Conference in Santa Clara, Calif. The surprising answer: some of these scenarios are closer to reality than you think.

Take the data center in the desert. Subodh Bapat, the former VP of Energy Efficiency at Sun Microsystems, shared an anecdote about a data center user in the Middle East that wanted to test server failure rates if it operated its data center at 45 degrees Celsius – that’s 113 degrees Fahrenheit.

Testing projected an annual equipment failure rate of 2.45 percent at 25 degrees C (77 degrees F), and then an increase of 0.36 percent for every additional degree. Thus, 45C would likely result in an annual failure rate of 11.45 percent. “Even if they replaced 11 percent of their servers each year, they would save so much on air conditioning that they decided to go ahead with the project,” said Bapat. “They’ll go up to 45C using full air economization in the Middle East.”
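The figures above describe a simple temperature-to-failure-rate extrapolation. A minimal sketch in Python (illustrative only, using the baseline and per-degree increment quoted from the testing):

```python
def projected_failure_rate(temp_c, base_rate=2.45, base_temp=25.0, per_degree=0.36):
    """Annual equipment failure rate (percent) extrapolated linearly from a
    baseline of 2.45% at 25C, rising 0.36 points per additional degree,
    per the figures quoted from the Middle East testing."""
    return base_rate + per_degree * (temp_c - base_temp)

# A strictly linear extrapolation to 45C gives 2.45 + 20 * 0.36 = 9.65 percent;
# the 11.45 percent figure reported above implies the measured failure curve
# steepens at higher temperatures rather than staying linear.
print(round(projected_failure_rate(45.0), 2))
```

Either way, the business logic holds: if replacing roughly a tenth of the fleet each year costs less than running chillers year-round, the hot data center wins.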

eBay: Free Cooling in Phoenix

One of the largest Internet e-commerce operations is pursuing a similar strategy in the U.S.: eBay will use fresh air cooling in its new modular data center in Phoenix, where average summer temperatures exceed 100 degrees. Dean Nelson, Senior Director of Global Data Center Services at eBay, says the servers can handle it if the facility is designed correctly.

“The reality is that the manufacturers baby the servers,” said Nelson. “That’s the truth.”

Raising the baseline temperature inside the data center saves the energy used to run chillers (air conditioning systems) by enabling more extensive use of “free cooling” – the use of fresh air from outside the data center. Free cooling is typically implemented in cool climates, but eBay isn’t alone in hoping to extend the range of places where it can work.

“If we can get the manufacturers to design for higher temperatures, you could operate IT equipment anywhere without a chiller,” said Bill Tschudi, program manager at Lawrence Berkeley National Laboratory.

More Granular Server Management

But nudging the thermostat higher is only appropriate for companies with a strong understanding of the cooling conditions in their facility. Just about all of the panelists at the Technology Convergence Conference were eager for more tools to provide granular management of server performance and power usage.

Nelson noted recent research from Data Center Pulse that identified potentially significant power savings from dynamically adjusting the clock speed of server processors to match IT workloads. The group’s testing suggests that overclocking and underclocking processors as workloads fluctuate can reduce a server’s energy use by as much as 18 percent.
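A rough way to see where savings like that come from is the classic dynamic-power rule of thumb: power scales with frequency times voltage squared, and voltage scales roughly with frequency, so dynamic power goes as roughly the cube of clock speed. The sketch below (Python, with an invented load trace – these are not Data Center Pulse’s numbers) compares running flat-out against scaling the clock to just meet demand:

```python
def dynamic_power(freq_ratio):
    """Dynamic CPU power relative to peak, using the P ~ f * V^2 rule of
    thumb with voltage scaling ~ frequency, i.e. power ~ frequency cubed."""
    return freq_ratio ** 3

def energy(workload, scale_to_fit):
    """Relative energy over a trace of per-interval utilization (0..1).
    With scale_to_fit, the clock is lowered to just meet each interval's
    demand; otherwise the CPU runs at full frequency throughout."""
    total = 0.0
    for util in workload:
        freq = util if scale_to_fit else 1.0
        total += dynamic_power(freq)
    return total

# Hypothetical daily trace: busy peaks, quiet troughs.
trace = [0.9, 0.7, 0.5, 0.3, 0.4, 0.8]
fixed = energy(trace, scale_to_fit=False)
scaled = energy(trace, scale_to_fit=True)
print(f"savings: {100 * (1 - scaled / fixed):.0f}%")  # prints "savings: 70%"
```

This toy model counts only dynamic power, which is why it overshoots so dramatically; real servers carry a large fixed idle load (the point Khattar makes below), which pulls measured savings down toward figures like the 18 percent Data Center Pulse reported.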

“There are huge potential reductions (in energy usage) available,” Nelson said. “Why can’t we have control over that chip? Why can’t we have the controls to give us a gas pedal, so that we can throttle up and throttle back?”

“You should be able to scale down your energy use,” agreed Mukesh Khattar, Energy Director at Oracle Corp. “That will give you more savings than anything else. If your servers are doing zero percent work, they should be using zero percent power. But they’re not. (Power usage at idle) is closer to 80 percent.”
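Khattar’s complaint is often framed as a lack of “energy proportionality.” A minimal sketch (Python, with a hypothetical 400 W peak; the 0.8 idle fraction echoes the figure quoted on the panel) of what that costs a lightly loaded server, assuming power rises linearly from idle to peak:

```python
def power_draw(utilization, idle_fraction=0.8, peak_watts=400.0):
    """Server power at a given utilization (0..1), assuming draw rises
    linearly from idle_fraction * peak at 0% load to peak at 100% load.
    The 0.8 idle fraction is the panel's figure; 400 W peak is a
    hypothetical round number for illustration."""
    return peak_watts * (idle_fraction + (1.0 - idle_fraction) * utilization)

actual = power_draw(0.20)        # ~336 W at 20% utilization
proportional = 400.0 * 0.20      # a truly energy-proportional server: 80 W
print(actual, proportional)
```

At 20 percent utilization such a server delivers a fifth of its peak work for more than four times the energy an ideally proportional machine would use – that gap is exactly what the “gas pedal” controls would close.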

The Processor Perspective

It’s possible to tweak servers to match processor function to workloads, as Data Center Pulse demonstrated in its testing. “There’s lots of knobs inside a server design you can tweak to enhance its performance and manage its power,” said Bapat.

The processor perspective was shared by Henry Wong, Senior Staff Technologist at Intel, who was sympathetic to data center executives’ yearnings for advanced power management tools. But that doesn’t mean it’s a simple problem to solve.

“Having many knobs makes it really difficult for IT managers,” said Wong. “To provide this level of control, you’ve got to automate it.”

Wong favors an approach that uses policies that can be translated into granular server settings. “That’s one of the technologies we’re trying to build in,” said Wong. “To get to that nirvana requires a lot of effort. And no one policy is going to fit everyone. We’re trying to build in these heuristics and artificial intelligence. But it’s still a few years away.”

Henry Wong, Senior Staff Technologist at Intel, speaks on a panel Wednesday at the Technology Convergence Conference.

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


Comments

  1. Ebay has been doing some interesting things in this area. I hope it spills over into meaningful partnerships being established with operators of data centers and server manufacturers so that there is real world data being shared from real world environments...

  2. ASHRAE 9.9 is scheduling the release of the 3rd edition. It looks like the environmental envelope will be expanded again to allow for more "free cooling". See the ASHRAE Data Center Weather Report... http://www.ctoedge.com/content/ashrae-data-center-weather-report

  3. But what about the embedded carbon in replacing servers so frequently? This is a money saving exercise, let's not dress it up as anything else - certainly not "sustainable"!