
Real Innovation is in the Applications

With 80 percent of data center power now consumed by IT equipment and only 20 percent by the power and cooling infrastructure, why are we still talking about cooling innovations while overlooking hardware and software trends?

Dr. Joe Polastre is co-founder and chief technology officer at Sentilla, a company that provides enterprise software for managing power and performance in the data center. Joe is an energy efficiency evangelist and defines the company’s technology and product strategy.

JOE POLASTRE
Sentilla

Over the past few years, the data center industry has gotten smarter about power and cooling. Operators are adopting cold and hot aisle containment, fresh air cooling, water and air economizers, and bypass UPS systems. All of these are common-sense techniques aimed at lowering the overhead of data center operations, and while most data centers had a PUE above 2.0 a few years ago, new and modernized data centers are now routinely in the 1.2 to 1.4 range.

What this means is that roughly 80% of power is now consumed by IT equipment, with the power and cooling infrastructure consuming the other 20%. Yet we keep talking about cooling innovations and continue to overlook some disturbing hardware and software trends. This is a classic case of the 80/20 rule: why spend your time optimizing the 20% when the 80% is where the power goes? That 80% is the IT load, the part responsible for doing the useful work for the business.
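
To see how that split falls out of the PUE figures above, here is a quick back-of-the-envelope calculation (a minimal Python sketch; the PUE values are the ones quoted in this article, not measurements from any particular facility):

```python
# PUE = total facility power / IT equipment power,
# so the fraction of power that actually reaches IT gear is 1 / PUE.
for pue in (2.0, 1.4, 1.25, 1.2):
    it_share = 1.0 / pue
    print(f"PUE {pue:.2f}: {it_share:.0%} to IT equipment, {1 - it_share:.0%} to power and cooling")
```

A PUE of 1.25 is exactly the 80/20 split; at a PUE of 2.0, only half the power drawn from the utility ever reaches a server.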

Why Cooling is No Longer Needed

There's an important trend going on among server, storage, and networking vendors: they are routinely exceeding ASHRAE's recommended limits. Dell, HP, and Cisco produce servers warrantied to 95F, and SGI to 104F. Ask Intel or the server vendors and they will even, begrudgingly, hand over new fan-control code that keeps the fans at lower RPMs at higher inlet temperatures. Add to that the fact that racks are now grounded, so there is little worry of static discharge and humidity is no longer much of an issue. Data centers can therefore run at up to 104F with minimal humidity control, cutting cooling expenses significantly.

Let's explore this idea with The Green Grid's free online cooling calculator. Set the drybulb threshold at 40C/104F and enter the zip code with the warmest sustained temperatures in North America: 92328, Death Valley, California. The calculator reports 8,584 free-air cooling hours in Death Valley. That's 357 days per year! Instead of building a cooling plant, move applications elsewhere for the remaining eight days or so each year.
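
For a sense of what such a calculation does under the hood, here is an illustrative sketch. This is not The Green Grid's actual calculator or its weather data; the function name and temperature series are hypothetical.

```python
def free_cooling_hours(hourly_drybulb_f, threshold_f=104.0):
    """Count the hours in a year of hourly dry-bulb readings at or below the threshold."""
    return sum(1 for temp in hourly_drybulb_f if temp <= threshold_f)

# Made-up example: a site that never exceeds 100F is free-coolable all 8,760 hours.
hours = free_cooling_hours([100.0] * 8760)
print(hours, "hours, about", round(hours / 24), "days per year")
```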

Innovative Software Drives Data Center Efficiency

Now it's time to stop building applications the way we did in the mainframe days and instead build modern, modular services. A shift is under way in how applications are written, deployed, and managed. Enterprise applications have typically been built with each piece of the application residing on a different system: the database, the web server, the business intelligence platform, and so on. That necessitated 2N redundancy, with a full copy of each major component running to support failover if anything went wrong. That's the old way of building applications.

In the last 10 years, there has been tremendous innovation in software, enabled by the emergence of inexpensive commodity servers. Enormous mechanical and electrical innovation has delivered high-quality, high-performance, low-cost computing systems. And so software developers started to take a different approach to building applications: instead of worrying about a very expensive computing resource, treat servers as disposable and abundant. Expect that the hardware will fail, and re-architect to embrace the vast resources at your disposal. This philosophy is the core of what I consider to be “Cloud Computing”.

Google is the leading innovator when it comes to software built in this manner. MapReduce and its open-source follow-on, Hadoop, have dramatically changed the way modern applications and services are developed and deployed. Instead of a "componentized" system, the work is spread across many identical nodes that process the data in parallel. If any single system fails, the performance of the application degrades, but the service keeps running. In this model we only need N+1 redundancy, not 2N. Don't worry about fixing the failed system; throw it out. Bring up a new instance of the application, the performance recovers, and most users aren't even aware that anything has happened.
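
To make the pattern concrete, here is a toy MapReduce-style word count in plain Python. It is a sketch of the idea only, not Google's or Hadoop's API: the input is split into shards, any shard whose worker fails is simply rerun, and the job still finishes, just a little more slowly.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_shard(shard):
    """Map step: count the words in one shard of the input."""
    return Counter(shard.split())

def reduce_counts(partials):
    """Reduce step: merge the per-shard counts into one result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

def run_job(shards, workers=4):
    """Map the shards in parallel; rerun any shard whose worker fails."""
    partials = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(map_shard, shard): shard for shard in shards}
        for future, shard in futures.items():
            try:
                partials.append(future.result())
            except Exception:
                # A failed worker degrades throughput, not correctness:
                # just process that shard again somewhere else.
                partials.append(map_shard(shard))
    return reduce_counts(partials)

shards = ["the data center is the computer",
          "the application is the data center"]
print(run_job(shards))
```

The same arithmetic applies to capacity: losing one of N identical workers costs roughly 1/N of the throughput, versus keeping an entire duplicate stack idling in the 2N model.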

With innovative application architectures like this, services are now truly independent of where they run (financial trading, with its strict latency requirements, is a notable exception). Workloads can run where power is cheapest, cooling isn't needed, and resources are available. Applications can quickly re-provision without the user even knowing, and enterprises can deliver monumental new online services at a fraction of the cost.
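
As a toy illustration of that kind of placement decision (the site names, power prices, and free-cooling hours below are invented for the example, not real data):

```python
# Hypothetical site catalog: ($/kWh, free-cooling hours per year, capacity available?)
sites = {
    "site-a": (0.11, 6200, True),
    "site-b": (0.07, 8400, True),
    "site-c": (0.05, 8584, False),  # cheapest, but no spare capacity right now
}

def pick_site(sites):
    """Prefer sites with capacity, then the cheapest power, then the most free cooling."""
    candidates = [(price, -free_hours, name)
                  for name, (price, free_hours, has_capacity) in sites.items()
                  if has_capacity]
    return min(candidates)[2] if candidates else None

print(pick_site(sites))  # -> "site-b"
```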

If the data center's sole purpose is to deliver services to its users or business, then why do we keep rehashing age-old, common-sense cooling strategies? We should be talking about the applications, because that is where the real innovation and efficiency lie. Make the applications more efficient, and the rest of the infrastructure will reap the rewards.

Sidenote

MapReduce and Hadoop aren't the only innovative new approaches to building efficient applications. The Google File System, Google App Engine, the suite of Amazon Web Services (including SimpleDB), and Facebook's HipHop are just a few of the technologies revolutionizing the way applications are built. You can even trace the roots of this new paradigm back to the Network of Workstations project at Berkeley. For a peek at the cutting edge of the future, check out the RAD Lab, sponsored by nearly every major software company, including Google, Microsoft, Oracle, SAP, Amazon, Facebook, eBay, and VMware.

