Microsoft: Azure Stack Will Be Sold Separately, Eventually

Scott Fulton III, Contributor

July 26, 2016


During Microsoft’s Worldwide Partner Conference last week, the company published several blog posts on the status of Azure Stack, its forthcoming hybrid cloud-based extension of Azure into customer data centers, which is currently being tested in preview. At least two of these posts — one from Corporate Vice President Mike Neil, the other from CVP Takeshi Numoto — used the term “prioritize” to describe how Microsoft will introduce Azure Stack as an integrated turnkey platform, through server partners Dell, HPE, and Lenovo.

That immediately led to press reports stating that the company had decided to tie Azure Stack directly to these three server makers, and would not enable the final release version to be installed on existing customer hardware. In an exclusive interview with Datacenter Knowledge Monday, Mark Jewett, the company’s director of product marketing for Cloud Platform, expressly denied those reports.

Jewett explained that the company’s plan, at least for now, is to begin the general release of Azure Stack through integrated systems — which more than once, he called “the starting point” — then learn how those systems are being utilized on customer premises. From there, he said, Microsoft can work out a plan for rolling out a general release of the infrastructure software on its own.

Lessons Being Learned

“The first learning is an operational learning about what it takes not just to deploy, but continue to operate and update, what turns into a relatively complex system,” explained Jewett. In previous experiences with its Cloud Platform System, he said, Microsoft learned several lessons about how to work with joint teams of engineers.

That work will be critical, he went on to say, as the company determines how best to implement rolling firmware updates to individual servers in Azure Stack clusters. Such updates take place behind Microsoft’s own firewall every day, but on server hardware that it already knows, and that has passed its testing.

In customer environments where Azure Stack will need to co-exist with other infrastructure platforms — particularly with VMware’s vSphere, and with OpenStack deployments from firms such as Red Hat and Mirantis — Jewett said, “I think it’s fair to say that those solutions face some challenges, in terms of getting deployed and being operational.”

Mesosphere would appear to provide one mechanism for rolling out software deployments for distributed systems. Microsoft and HPE have both been partnering with Mesosphere in the deployment of Azure Container Service, the public cloud’s system for deploying Docker containers and microservices. But as Jewett told Datacenter Knowledge, such a system probably would not be feasible for deploying low-level software and server firmware.

Rather, he explained, in order for Azure Stack to maintain seamless compatibility with Azure — among other reasons — Microsoft will need to engineer a kind of synchronous rollout system that produces updates to its hybrid cloud platform on the same schedule as updates to its public platform.

“We believe that part of the value proposition of Azure Stack is its extension of Azure,” said Jewett. “Part of that, fundamentally, is pace of innovation.

“The promise of Azure Stack says it will operate with the same level, or pace of innovation, that Azure has. And I think the eye-opener for us is the extent to which customers and service providers embraced that.”

Maintaining Alignment

A typical server deployment philosophy in a data center, as he described it, is to optimize a server image for maximum performance and then freeze it so it cannot be touched. The Azure public platform does not work that way; it evolves through incremental updates that improve performance without customers experiencing downtime.

Evidently, prospective Azure Stack customers were sold on the idea of seeing that same, dynamic pace implemented in their own data centers, with Microsoft managing the agenda. Enacting that promise, said Jewett, “is where we maybe learned a little differently than what we had gone in anticipating.”

Microsoft introduced its Patch and Update Framework (P&U) with the standard edition of its Cloud Platform System (CPS), and has been maintaining it since last October through partners such as Dell. CPS was designed to run Microsoft’s previous Azure Pack software, and has been re-engineered to be an on-ramp of sorts for Azure Stack.

Microsoft had been planning to open that on-ramp this year, although this critical phase of Azure Stack's development agenda appears to be responsible for delaying the rollout until "mid-2017," according to the company.

“One of the challenges that people are having with existing solutions today is, those updates can come from a variety of different sources,” Microsoft’s Mark Jewett told Datacenter Knowledge. “Part of what we deliver with [CPS], and we will deliver with the Azure Stack integrated system, is a coordinated patch and update process that takes care of the updates, from the firmware all the way through to the software and services; covers not just troubleshooting issues, but also adding new services; and does that in a way that recognizes the system needs to continue to run... while that updating is done, in a smart way.”
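The coordinated process Jewett describes — updating firmware through software node by node, validating each step, while the system as a whole keeps running — follows the general pattern of a rolling update. The sketch below illustrates that pattern only; the class and function names are hypothetical and are not Microsoft's actual Patch and Update Framework API.

```python
# Illustrative sketch of a rolling, coordinated update across cluster nodes:
# drain each node, apply the firmware update and then the software update,
# verify health, and return the node to service before touching the next one.
# All names here are hypothetical, not Microsoft's P&U interfaces.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    firmware: str = "1.0"
    software: str = "1.0"
    in_service: bool = True
    history: list = field(default_factory=list)

def health_check(node: Node) -> bool:
    # Stand-in for real post-update validation (smoke tests, telemetry checks).
    return bool(node.firmware) and bool(node.software)

def rolling_update(nodes: list, firmware_ver: str, software_ver: str) -> None:
    """Update one node at a time so the cluster keeps serving traffic."""
    for node in nodes:
        node.in_service = False        # drain: stop scheduling work here
        node.firmware = firmware_ver   # firmware first, then the stack above it
        node.software = software_ver
        if not health_check(node):
            raise RuntimeError(f"{node.name} failed post-update validation")
        node.in_service = True         # rejoin the pool before the next node
        node.history.append((firmware_ver, software_ver))

cluster = [Node(f"node{i}") for i in range(4)]
rolling_update(cluster, "2.1", "2.1")
print(all(n.in_service and n.software == "2.1" for n in cluster))  # True
```

The key property Jewett emphasizes is preserved here: at no point is more than one node out of service, so the cluster as a whole continues to run while the update proceeds.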

Update packages will go through a rigorous validation process before they’re used by customers. But as Jewett described, that validation will need to be aligned with the same process for Azure running in Microsoft’s own data centers.

In order to achieve that alignment, he said, Microsoft will need to work more tightly with system vendors and service providers. We asked whether these working relationships would include Intel or ARM, which would obviously be responsible for producing firmware for vendors, though Jewett declined to go into that level of specifics.

Jewett said Microsoft should have more specifics to reveal when its Ignite conference kicks off in Atlanta on September 26.

About the Author(s)

Scott Fulton III


Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
