Microsoft’s Seven Tenets of Data Center Efficiency
An aerial view of the 470,000 square foot Microsoft data center in San Antonio, Texas (Photo: Microsoft)


While they don't have the scale and buying power of Microsoft, smaller enterprise data center operators can still benefit from some key concepts used by the giant's infrastructure team

Paul Slater thinks robots replacing some of the humans working in data centers today is not only a real possibility but something that’s likely to happen within the next decade. And if you’re designing a data center today that you’re planning to use for longer than 10 years, you should probably think about what that means for your design.

As the field of robotics shifts away from the static “dumb” robots that have made manufacturing facilities inflexible and toward more versatile machines, and as the design of data centers, and especially of data center hardware, moves toward standardized commodity equipment whose individual components can be easily replaced (see the Open Compute Project), “we’d expect to see robots much more inside the data center,” Slater said.

Slater is director of the Applied Incubation Team at Microsoft, where he is deeply involved with the company’s data center strategy. He presented on that strategy at the Data Center World conference in Las Vegas Monday.

While robots in data centers are a thing of the not-too-distant future, Microsoft already has some of the most efficient data centers in the world. Slater has started an initiative within the company to share the ways it achieves data center efficiency with the world and find areas that can apply to smaller enterprise data centers, whose challenges may be very different from those of homogeneous hyperscale facilities.

Microsoft’s data center efficiency strategy has seven key tenets:

1. Design the Data Center for Its Environment

There’s no one answer to the question of what makes a great data center. That’s why there are so many vendors selling such diverse solutions into the space, with none of them decisively winning out over the others.

A big reason for that is location. You can be in a place where space and power are cheap, in which case you can build a sprawling data center that’s efficient and reliable because it has relatively low density per rack. If you want a data center next to the New York Stock Exchange in Manhattan, you’re playing a very different game, where every square foot and every watt matter a lot.

This is why site selection always precedes data center design at Microsoft. The process takes into account the environment, cost and availability of power, proximity to the grid and the grid’s reliability, political implications of locating in a certain place, as well as tax implications.

“Only when the site selection is done are we looking to complete the design for that environment,” Slater said.

2. Design a Data Center Full of Standard Stuff

The most efficient data center is one that is full, so Microsoft always fills its facilities up as soon as possible. It’s also important to fill the data center with as much standard equipment as possible. This makes management of the assets with software tools, such as Data Center Infrastructure Management (DCIM), more effective.

DCIM is great, but it’s great only if you know the behavior of all the pieces of gear inside your data center, Slater said. Because everything is the same in Microsoft data centers, DCIM is “extremely powerful” for the company.
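To make that concrete, here is a minimal sketch in Python of why homogeneity makes a DCIM-style check so powerful: with a single standard rack SKU, one power and thermal envelope describes the entire fleet. The SKU name, limits, and telemetry below are hypothetical illustrations, not Microsoft’s actual tooling or figures.

```python
from dataclasses import dataclass

# Hypothetical single rack SKU; values are illustrative, not Microsoft's.
@dataclass
class RackSKU:
    name: str
    max_power_kw: float      # rated power draw per rack
    max_inlet_temp_c: float  # allowed inlet air temperature

STANDARD_RACK = RackSKU(name="std-compute-v1", max_power_kw=16.0, max_inlet_temp_c=27.0)

def within_envelope(measured_power_kw: float, inlet_temp_c: float,
                    sku: RackSKU = STANDARD_RACK) -> bool:
    """One model of 'normal' applies to every rack when the fleet is homogeneous."""
    return measured_power_kw <= sku.max_power_kw and inlet_temp_c <= sku.max_inlet_temp_c

# Example: flag racks operating outside the standard envelope.
telemetry = {"rack-017": (14.2, 24.5), "rack-042": (17.9, 26.1)}
for rack_id, (power, temp) in telemetry.items():
    if not within_envelope(power, temp):
        print(f"{rack_id} outside standard operating envelope")
```

In a mixed-vendor facility, each rack type would need its own model, and the same check fragments into dozens of special cases, which is why DCIM is only as good as your knowledge of the gear it watches.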

3. Design for Flexibility

You have to build into your design the ability to adapt to changes. Because technology changes so quickly, you have to assume equipment inside your data center is going to change and that you will not know exactly how it is going to change.

The move to SSDs in storage, for example, has implications for thermodynamics in the data center, Slater said. Also, the robots are coming, remember?

4. Automate

“We ruthlessly standardize, and we ruthlessly automate,” Slater said. The two go hand in hand, because it is easier to automate management of a homogeneous environment than one with a wide variety of different systems.
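As a sketch of why the two go hand in hand, consider an idempotent convergence loop over identical nodes: because every node is the same build, one routine covers the whole fleet. The desired-state keys and management calls below are hypothetical stand-ins, not a real API.

```python
# Hypothetical illustration: with one standard server build, a single
# idempotent routine can configure every node in the fleet.
DESIRED_STATE = {"bios_version": "2.4", "power_profile": "perf-per-watt"}

def current_state(node: str) -> dict:
    # Stand-in for querying a node's management controller (assumed API).
    return {"bios_version": "2.3", "power_profile": "perf-per-watt"}

def apply_setting(node: str, key: str, value: str) -> None:
    # Stand-in for pushing one setting to the node (assumed API).
    print(f"{node}: set {key} = {value}")

def converge(node: str) -> None:
    state = current_state(node)
    for key, value in DESIRED_STATE.items():
        if state.get(key) != value:   # only touch what drifted
            apply_setting(node, key, value)

# One loop covers the whole fleet because every node is the same SKU.
for node in ("node-0001", "node-0002", "node-0003"):
    converge(node)
```

With heterogeneous hardware, each device family would need its own query and apply logic, which is exactly the variety Microsoft standardizes away.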

One of the biggest standardization efforts at Microsoft happened only about two years ago, when the company switched from supporting every cloud service with servers best suited to that particular service to a hardware strategy that consists of just three SKUs. The company donated the designs of its new servers to the Open Compute Project last year.

5. Design the Data Center as an Integrated System

If you want data center efficiency, design “from the top down,” Slater said. You have to start with assessing the applications or services the data center is going to support, and make design decisions based on that knowledge.

If successful, you end up with a highly integrated system, where every moving part works to support the application in the optimal way.

6. Rely on Resilient, Flexible Software

Instead of ensuring things don’t go down by doubling up on power and cooling gear, build resiliency into software. Software generally gets better over time, while hardware just gets older, so a software investment looks better three years down the line.
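A minimal sketch of that principle, assuming a service with several replicas (the names and failure rate below are illustrative, not Microsoft’s): availability comes from the software failing over to another replica, not from redundant power and cooling keeping a single box alive.

```python
import random

# Hypothetical replicas of a service; names are illustrative.
REPLICAS = ["replica-a", "replica-b", "replica-c"]

class ReplicaDown(Exception):
    pass

def call_replica(replica: str) -> str:
    """Stand-in for a network call; fails randomly to simulate hardware loss."""
    if random.random() < 0.3:
        raise ReplicaDown(replica)
    return f"response from {replica}"

def resilient_call() -> str:
    """Try replicas in random order: availability comes from software
    failover, not from doubling up hardware behind one instance."""
    for replica in random.sample(REPLICAS, len(REPLICAS)):
        try:
            return call_replica(replica)
        except ReplicaDown:
            continue  # fail over to the next replica
    raise RuntimeError("all replicas unavailable")

print(resilient_call())
```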

Since Microsoft has transitioned to being a cloud service provider rather than a company focused on selling software licenses, it has been running its own software at much larger scale than its customers run it, so its engineers have learned a lot about designing resilient software. The single biggest implementation of Exchange, for example, is Microsoft’s Office 365 service.

7. Design a Data Center That Will Be Ready to Operate Quickly

The faster you bring online a data center that will support a particular service, the better. It will mean the software is written for the best available hardware, and the data center will be designed to support that hardware. Slater calls it “riding Moore’s law.”

Not everything here will apply to enterprise data centers or colocation facilities. Standardizing on a single hardware platform, for example, can be ruled out right away. But maximizing the degree of standardization in a facility can help a lot, as we learned in another Data Center World presentation Monday – one by Oak Ridge National Laboratory’s computer facility manager Scott Milliken.

Most enterprise data center operators also don’t have the scale and the buying power the likes of Microsoft have. Despite those differences, however, Slater believes there are still lessons in the way hyperscale operators set up their infrastructure that can be valuable for smaller facilities.
