Microsoft Joins Open Compute Project, Shares its Server Designs


Microsoft’s contributions to OCP will include hardware specifications, design collateral (CAD and Gerber files), and system management source code for Microsoft’s cloud server designs, which will also be posted to the open source code repository GitHub.

This effectively creates a second hardware track within Open Compute, as Microsoft’s designs have followed a slightly different path than the initial OCP servers.

12U Chassis, Server and Storage Blades

The primary building block for Microsoft’s infrastructure is a 12U chassis that houses up to 24 half-width blades, which can be either server or storage blades. As with the OCP’s Open Rack, the power supplies and fans have been shifted from the server to the chassis level.

Microsoft also uses a shared signal backplane, and has shifted all of the cable connections to the back of the chassis, allowing system admins to quickly swap servers and storage by plugging the blades into the trays within the chassis. Up to four of these chassis can fit into an extra-tall 52U rack, allowing Microsoft to deploy up to 96 servers in a single rack. For more details, see Closer Look: Microsoft’s Cloud Server Hardware.
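The density figures above follow directly from the chassis dimensions. As a rough back-of-the-envelope sketch (using only the numbers quoted in this article, not any Microsoft specification), the math works out like this:

```python
# Rack density sketch based on the figures in the article:
# 12U chassis, 24 half-width blades per chassis, up to 4 chassis per 52U rack.
CHASSIS_HEIGHT_U = 12
BLADES_PER_CHASSIS = 24
CHASSIS_PER_RACK = 4
RACK_HEIGHT_U = 52

blades_per_rack = BLADES_PER_CHASSIS * CHASSIS_PER_RACK   # 96 blades
rack_units_used = CHASSIS_HEIGHT_U * CHASSIS_PER_RACK     # 48U of the 52U rack

print(f"{blades_per_rack} blades in {rack_units_used}U of a {RACK_HEIGHT_U}U rack")
# -> 96 blades in 48U of a 52U rack, leaving 4U for networking and other gear
```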

Microsoft’s cloud servers could expand interest in OCP among enterprises that have thus far seen it as an exercise for hyperscale cloud builders. While Facebook’s hardware is designed to power the Facebook social networking application, Vaid says Microsoft’s cloud servers are optimized to support more than 200 online services, ranging from gaming to Office 365.

Built for Diverse Workloads

“One of our goals in creating this hardware was to accommodate different requirements for all these workloads,” Vaid said. “It is a hyperscale design, but still provides a balance of driving towards hyperscale and balancing a wide range of workload requirements. It is our hope that the community will be able to adapt it in a much more meaningful way.”

“These servers are optimized for Windows Server software and built to handle the enormous availability, scalability and efficiency requirements of Windows Azure, our cloud platform,” said Laing. “They offer dramatic improvements over traditional enterprise server designs: up to 40 percent server cost savings, 15 percent power efficiency gains, and 50 percent reduction in deployment and service times.”

Microsoft may also benefit from OCP’s development of open network switches, which hold the potential to dramatically slash the cost of networking hardware. Microsoft moves enormous volumes of data around the globe, operating a huge edge network that manages content delivery for Xbox Live and other online services.

Microsoft’s entrance into OCP has implications for its hardware partners. Microsoft has historically worked closely with Dell and HP, which in recent years have been losing ground in the hyperscale server market to OCP-centric original design manufacturers like Quanta, WiWynn and Synnex/Hyve.


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


4 Comments

  1. Glad to see MS taking steps in the right direction, IMHO: in the long run, companies that best leverage the network effects of open source and open communities will have a bigger competitive advantage. And in MS's case in particular, it's nice to get more information on their system design and architecture. Regards

  2. One key motivation is likely Google. Data centers represent one of Google's largest competitive advantages. Commoditization of your competitors' products and infrastructure is just good business: It's a strategy that works and usually benefits both consumers and upstart businesses.

  3. One of my clients is there, and the big question they are trying to answer is how much of OCP is a race to the bottom and a commoditization of infrastructure led by the big companies who already expect, and receive, tighter margins from their vendors given their scale. If that is the goal, then the companies with the tightest supply chains will win. It also supports our notion of computing as a utility. It bodes well for containerized data centers in that I can now use a common form factor, fill it with standard hardware, and deploy a set number of resources measured in kW, cores, circuits, or containers. It also opens up collaborative innovation opportunities with companies to integrate what used to be silos, like bus bars tightly integrated with cabinets that incorporate the same hardware connections, or preconfigured containers that can ship with servers and storage in racks, so you can order a container a quarter or half full to service the first chunk of kit required with a form factor that is finite, controlled, constant, and flexible. It'll be fun to watch, that's for sure...

  4. zborn

    Up to a certain point, it's the services that count. What would be interesting is the amount of bloat in usage in the data centre. Cruft.