
The Next Efficiency Frontier: Underclocking


A slide from the Data Center Pulse presentation at the recent SVLG Data Center Efficiency Summit, showing potential efficiency gains from dynamically matching clock speeds to workloads.

What's the next frontier in making data centers more energy efficient? New research from Data Center Pulse has identified potentially significant power savings from dynamically adjusting the clock speed of CPU processors to match IT workloads.

Early results of testing suggest that overclocking and underclocking processors as workloads fluctuate can reduce a server's energy use by as much as 18 percent, according to Dean Nelson, a co-founder of Data Center Pulse.

Overclocking a computer's processor or memory causes it to run faster than its factory-rated speed. The extra speed results in more work being done by the processor, boosting the performance of the machine. Overclocking is popular among users seeking peak performance, particularly gamers. Underclocking, which means lowering the clock speed below its rated rate, is typically used to save power or reduce the heat generated by the processor.

Matching Clock Speeds to Workloads
As data center professionals continue to seek new methods of improving energy efficiency, the Data Center Pulse team decided to look at the potential gains from matching clock speeds to workloads.

"We said 'let's overclock all of these and see what happens,'" said Nelson, who presented preliminary results of the research at the Data Center Efficiency Summit  on Oct. 14 in San Jose, Calif. "Why aren't we doing this stuff? Intel already has all the hooks in there to adjust the frequency of the CPUs. There is software available to do it.  The point is, we were able to go in there and figure out how to do it."

Nelson noted that the overclocking he's discussing differs from the overclocking common among video gamers, which typically aims to boost performance over lengthy stretches of consistently high CPU activity. In this case, the goal is to capture potential gains from machines where the workload varies over time.

Combination of Performance Management Techniques
Nelson says the efficiency gains are captured through a combination of overclocking and underclocking. As the workload diminishes, the CPU slows. As it rises, the CPU speeds up to accommodate.
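
The control loop behind that behavior can be sketched in a few lines: sample how busy the CPU is, then raise or lower the frequency cap to match. The example below is a rough illustration of the idea, reading utilization from /proc/stat and writing the cpufreq scaling_max_freq file on a Linux host; it is not the method used in the tests Nelson described.

# Rough sketch of a feedback loop that matches clock speed to workload:
# sample CPU utilization, then move the frequency cap up or down.
# Assumes the Linux cpufreq sysfs files and /proc/stat; run as root.
import time

FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/"

def busy_fraction(interval=1.0):
    """Fraction of time the CPUs were busy over `interval` seconds."""
    def snapshot():
        with open("/proc/stat") as fh:
            fields = [int(x) for x in fh.readline().split()[1:]]
        return fields[3] + fields[4], sum(fields)   # idle + iowait, total
    idle0, total0 = snapshot()
    time.sleep(interval)
    idle1, total1 = snapshot()
    total = (total1 - total0) or 1
    return 1.0 - (idle1 - idle0) / total

def khz(name):
    with open(FREQ + name) as fh:
        return int(fh.read())

low, high = khz("cpuinfo_min_freq"), khz("cpuinfo_max_freq")

while True:
    load = busy_fraction()
    # Scale the frequency cap linearly between the chip's minimum and
    # maximum as utilization rises and falls (cpu0 only, for brevity).
    target = int(low + load * (high - low))
    with open(FREQ + "scaling_max_freq", "w") as fh:
        fh.write(str(target))
    print(f"load {load:.0%} -> cap {target} kHz")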

"The net savings is real. said Ray Pfeifer, Senior Vice President of Business Development at SynapSense.  "We're basically telling the CPU to lower its voltage and lower its performance."

The Data Center Pulse tests aren't the first to explore energy savings through "adaptive" management of clock speed. Researchers from the University of Rhode Island have demonstrated an approach to dynamically manage CPU frequency and voltage on a PC, while also monitoring temperature (PDF report).

More recently, Intel has added a Turbo Boost feature in its Nehalem chips. With Turbo Boost, the processor can detect when it's running below its capacity and below the limits on temperature and power usage, and can then increase its clock frequency to handle an increased workload. When the workload decreases, the processor slows back down to its normal frequency.
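
One simple way to observe this behavior is to compare a core's current clock against its rated base clock. The check below assumes a Linux system using the intel_pstate driver, which exposes a base_frequency file under cpufreq on recent kernels; other drivers may not provide that file.

# Check whether a core is clocked above its rated (base) frequency,
# i.e. whether turbo is currently engaged. Assumes the intel_pstate
# driver, which exposes base_frequency; other drivers may not.
def khz(path):
    with open(path) as fh:
        return int(fh.read())

cpu0 = "/sys/devices/system/cpu/cpu0/cpufreq/"
base = khz(cpu0 + "base_frequency")    # rated clock
cur = khz(cpu0 + "scaling_cur_freq")   # clock the core is running at now

if cur > base:
    print(f"Turbo active: {cur} kHz vs. base {base} kHz")
else:
    print(f"At or below base clock: {cur} kHz vs. {base} kHz")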
