Microsoft Builds Own Linux-Based Data Center Network OS for Azure
Microsoft’s cloud server blades on display at the Open Compute Summit in San Jose, California, in March 2015 (Photo: Yevgeniy Sverdlik)


Claims faster debugging, leaner software stack than with vendor-supplied software

Microsoft has built a Linux-based data center network operating system for its global Azure cloud infrastructure, giving it more control over network management software than vendor-supplied switch software allows.

Like other companies that provide services over the internet globally, such as Google, Facebook, and its main cloud-services rival Amazon, Microsoft designs its own data center hardware and much of the software that runs on that hardware. These companies have a lot to gain from a custom technology stack that does exactly what they need, nothing less and nothing more.

“What the cloud and enterprise networks find challenging is integrating the radically different software running on each different type of switch [sold by vendors] into a cloud-wide network management platform,” Kamala Subramaniam, principal architect for Azure networking at Microsoft, wrote in a blog post. “Ideally, we would like all the benefits of the features we have implemented and the bugs we have fixed to stay with us, even as we ride the tide of newer switch hardware innovation.”

Google reportedly designs its own data center networking hardware, and Facebook recently started designing its own data center switches; the social networking giant has talked publicly about its Wedge and Six Pack switches. Microsoft, it appears, does not make its own networking hardware, relying instead on vendor-supplied switches.

Its network OS, called Azure Cloud Switch (ACS), enables the company to use the same software stack “across hardware from multiple switch vendors,” Subramaniam wrote.

Microsoft does design its own servers for Azure, basing them on specs open sourced through the Open Compute Project (OCP), the Facebook-led open source hardware and data center design initiative. Microsoft joined OCP last year.

ACS has enabled Microsoft to identify, fix, and test software bugs much faster. It also allows the company to run a lean software stack, free of features it doesn’t need for its data center network. Vendors design traditional switch software for a variety of customers with different needs, which means any individual customer ends up with features it never uses.

ACS also lets Microsoft try new hardware faster and makes it easier to integrate the networking stack with the company’s monitoring and diagnostics system. In addition, network switches can be managed the same way servers are, “with weekly software rollouts and roll-backs, thus ensuring a mature configuration and deployment model.”

What enables the company to run ACS across different suppliers’ hardware is the Switch Abstraction Interface (SAI) spec, an open API for programming switch ASICs. The SAI effort is part of the Open Compute Project, and Microsoft was a founding member, along with Facebook, Dell, Broadcom, Intel, and Mellanox. OCP officially accepted SAI in July.

SAI abstracts the underlying data center networking hardware, making it easier for users or vendors to write network management software without tailoring it to specific products. SAI was an “instrumental piece to make the ACS a success,” Subramaniam wrote.
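To make the idea concrete, the following is a minimal sketch in C of how a vendor-neutral switch-ASIC abstraction might be structured: the network OS programs against a generic function table, and each vendor ships a driver that fills in that table for its own silicon. The type and function names below are invented for illustration and are not the actual SAI API.

/*
 * Illustrative sketch only: a hypothetical vendor-neutral abstraction for
 * programming switch ASICs, in the spirit of SAI. All names are invented
 * for this example and are not the real SAI symbols.
 */
#include <stdint.h>
#include <stdio.h>

/* Opaque handle to a switch ASIC, whichever vendor built it. */
typedef uint64_t asic_handle_t;

/* Function table each switch vendor implements for its own silicon. */
typedef struct {
    int (*init)(asic_handle_t *out_handle);
    int (*create_vlan)(asic_handle_t handle, uint16_t vlan_id);
    int (*add_route)(asic_handle_t handle, uint32_t prefix,
                     uint8_t prefix_len, uint32_t next_hop);
} switch_api_t;

/*
 * The network OS is written against switch_api_t only, so the same
 * management code can run on any ASIC that ships a conforming driver.
 */
static int provision_switch(const switch_api_t *api)
{
    asic_handle_t sw;
    if (api->init(&sw) != 0)
        return -1;
    if (api->create_vlan(sw, 100) != 0)
        return -1;
    /* 10.0.0.0/8 via 10.0.0.1, encoded as host-order integers for brevity. */
    return api->add_route(sw, 0x0A000000u, 8, 0x0A000001u);
}

/* Stub "vendor A" driver so the sketch runs end to end. */
static int a_init(asic_handle_t *h) { *h = 1; puts("ASIC initialized"); return 0; }
static int a_vlan(asic_handle_t h, uint16_t id)
{ (void)h; printf("VLAN %u created\n", (unsigned)id); return 0; }
static int a_route(asic_handle_t h, uint32_t p, uint8_t len, uint32_t nh)
{ (void)h; printf("route %#x/%u via %#x\n", (unsigned)p, (unsigned)len, (unsigned)nh); return 0; }

int main(void)
{
    const switch_api_t vendor_a = { a_init, a_vlan, a_route };
    return provision_switch(&vendor_a) == 0 ? 0 : 1;
}

The real SAI spec covers far more of an ASIC’s functionality than this toy table, but the principle is the same: the switch OS never has to be rewritten for a particular vendor’s chip.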
