Is a Retreat from Private Cloud Also Under Way? Cisco Weighs In

A less-than-stellar earnings report this week from Cisco is blamed, in part, on a technology that its CEO admits may be too complex for its customers. But is OpenStack to blame for everything going wrong?

Scott Fulton III, Contributor

November 18, 2016


This last fiscal quarter was certainly less than wonderful for Cisco, and the reasons why could fill an entire storage volume.  But a UBS analyst focused on one: a possible lull in the adoption of, or at least the excitement around, private cloud — specifically, the retooling of internal data center infrastructure so that compute, storage, networking, and other resources may be pooled together.

UBS Managing Director Steven Milunovich, during Wednesday’s conference call, gave the world a peek at what went on in between the various CFO speeches at the banking giant’s three-day technology conference this past week.  One panel, filling some of the blank space between often blank speeches, evidently turned to the status of private cloud.

“We had a panel on folks who helped companies move to the cloud,” said Milunovich [our thanks to Seeking Alpha for the transcript], “and the general consensus was that private cloud implementations generally are not working, and many companies that begin on a private cloud path end up going down a public cloud path.”

The veteran hardware analyst was framing a question with the premise that Cisco is devoting its energies to a business that may be — at least for the present time — declining.  Most importantly, Cisco CEO Chuck Robbins declined to disagree.  He cited “a lot of the complexity in building out private infrastructure,” and laid the blame for that complexity squarely upon the shoulders of OpenStack, the open source, internal cloud platform.

The focus of the entire industry at this point, said Robbins, is how best to automate operational processes — especially applying security policies to resources.  As you might expect, he believes those solutions will come, and even drew up a timeframe of between one and two years for customers’ pains to be — in his words — “alleviated.”  Then, having clearly framed his idea of the present state and the future state of the data center, he proceeded to shove something very big into the realm of the former.

“I think your observations are probably valid particularly if you look at like the lot of early OpenStack implementations,” said the CEO.  “But I do think that customers are going to want to have that capability, and I think we as an industry will continue to work on simplifying how that operational capability shows up within our customer base.”


“‘Private cloud’ is kind of a squishy term,” explained Marko Insights Principal Analyst Kurt Marko, speaking with Data Center Knowledge.  “People are not using virtualized, shared environments the same way they use a public cloud service or an IaaS service.  Part of the problem is the way enterprise users are consuming on-premises resources.”

During the earnings call, Cisco CEO Robbins did not relent on his company’s push for Application Centric Infrastructure (ACI), its policy-driven framework that incorporates software-defined networking (SDN), letting workloads themselves factor into decisions about deployments.  But Cisco has been unable to wean many customers away from their existing workloads, which is why the company has had to maintain its older infrastructure system, NX-OS, simultaneously.

Last month, Cisco not only unveiled new capabilities for its Nexus 9000 switches, but outlined a kind of NX-to-ACI migration plan that Nexus would facilitate.

When the public cloud first became a marketplace a decade ago, many major vendors staked out competitive positions in the space.  HPE has since had to back off, but Oracle has doubled down on its bid.  As late as 2014, Cisco finally assembled a public cloud strategy that relied on partner service providers, and which involved the company’s entire, existing sales channel — as opposed to constructing a public cloud platform to rival Amazon.

Still, the customer side of that first value proposition resembled an à la carte menu, from which customers would always want some of both.  Perhaps spurred on by those same customers’ reluctance to dive all-in on ACI, Cisco’s strategy was to enable a de facto hybridization, accepting the reality that certain workloads would be better suited to public cloud deployment than others.

Pick and Choose

That strategy continued to represent Cisco’s point of view as recently as late October, when at the OpenStack Summit in Barcelona, Cisco’s senior solutions marketing manager, Enrico Fuiano, prefaced his session by citing IDC survey data.

“Is it private cloud or public cloud?  We believe the debate is over,” Fuiano told attendees.  “Organizations want both, and there is no doubt about it.  You can see from the projections that private cloud, at least in the next couple of years, will continue to enjoy growth upwards of 40 percent [over two years].”

Fuiano was framing the introduction of a consultation service, jointly produced by Cisco and IDC, helping enterprises to determine a cloud strategy for themselves, and then to execute on that strategy without falling into traps.  He reminded attendees that one of the key reasons why enterprises choose OpenStack as an internal cloud platform is to avoid vendor lock-in.

Next, he pointed to IDC survey data indicating that enterprises lack the tools to effectively monitor, measure, or manage hybrid cloud environments.  Fuiano believes that perception of lacking tools comes from a deficiency of skills needed to manage the tools that enterprises have on hand.

That was three weeks ago.  Now, Cisco CEO Robbins’ assessment of the state of private cloud points not to a skills deficiency as the culprit, but a tools deficiency.

Lessons in Fence Straddling

Kurt Marko perceives a gap in an under-appreciated part of the spectrum: cloud-native software development, both inside enterprises and among ISVs.

Marko cites VMware as a case study in a company dealing with present and future platforms — not unlike Cisco’s NX-OS and ACI.  It has a more versatile, software-defined infrastructure to which it would like to move enterprises.  Doing so could open up plenty of new market opportunities, not just for VMware, but for a broader ecosystem around its platform.  And it could make a stronger case for private cloud as a whole.

“VMware is almost being held back by its customers,” Marko remarked, “because they’re using the VMware stack as a legacy virtualization stack.  It’s still client/server — it’s a bunch of servers, just like in the client/server era, except now we’re running 10 or 20 or 30 on a big piece of digital hardware.  But what we run on them and how we operate them, is no different than what it was 20 years ago.”

VMware’s platform runs at a higher level than Cisco’s, at least theoretically.  Both depend to a large degree upon software-defined networking, though each manages SDN at its own respective level.  Still, it’s this realization that workloads aren’t changing as fast as they could be, or perhaps should, that leads Marko to another realization:

Private cloud isn’t in a lull, as UBS’ Milunovich implied, Marko believes.  “It hasn’t ever really taken off.”

Specifically, he argued that the strict NIST definition of a private cloud — where resources are pooled together and services are set up for full automation and self-provisioning — is not what enterprises think they have when asked whether they’ve adopted private cloud.

“It’s kind of a misnomer,” he said.  “Most people will say they have private cloud if they have a virtualization stack, even though they’re not running it like a cloud — they’re not allowing users to self-provision resources.  They’re not dynamically auto-scaling and moving resources around; they’re not providing database and application-level services out in the cloud.  They’re just giving people a virtual machine and a logical volume, period, and they call that a private cloud.”
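For contrast, the self-service automation Marko describes looks less like filing a ticket for a VM and more like a templated request a user can submit on their own.  In OpenStack terms, that might be a Heat orchestration template along these lines — a minimal sketch in which the flavor, image, and network names are purely illustrative:

```yaml
heat_template_version: 2016-10-14
description: >
  Hypothetical self-service stack: a server plus an attached volume.
  All resource names and sizes below are illustrative assumptions.

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      flavor: m1.small        # hypothetical flavor name
      image: ubuntu-16.04     # hypothetical image name
      networks:
        - network: private    # hypothetical tenant network

  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10                # size in GB

  volume_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: app_server }
      volume_id: { get_resource: data_volume }
```

A user launches this themselves (e.g., `openstack stack create -t app.yaml my-stack`), and the platform — not an operator — provisions and wires up the resources.  By the NIST definition, it is that self-provisioning loop, not the hypervisor underneath, that makes the environment a cloud.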

Companies with the budget to build hyperscale data centers, including all-internal or mostly-internal designs, are avoiding the legacy vendors’ networking equipment, said Marko, in favor of Open Compute Project (OCP) or OCP-like bare-metal deployments plus SDN, open virtual switches, and containerization.  They may constitute the true private cloud market.  But it’s leaving Cisco, and other vendors in this perceived legacy space, behind, compelling them to adopt broader, looser definitions of private cloud and hybridization — and then to blame technologies like OpenStack when it doesn’t all stack up right.

Cisco’s strategy, he argues, appears to have been to encompass its entire networking ecosystem in this ACI space, and then incrementally shift its customers from one all-Cisco space to the new all-Cisco space.  That doesn’t work well in an environment where customers no longer want everything from one vendor.

“In Cisco’s defense, networking is a little bit different than server infrastructure,” said Marko, “in that you’re always going to have devices that have to connect to the cloud.  Even if you’re a company with no data center, you’re still going to have a sizable network investment, to connect all your clients, to connect your WAN, and Cisco wants to be there to provide that.”

In the meantime, however, Cisco finds itself in a strange position.  As Credit Suisse analyst Kulbinder Garcha put it Friday, “The switching business faces increasing pressures, as Cisco continues to lose market share in the data center switching segment.”  Its bedrock business is a technology that may not be sinking, but its returns are less than rewarding.  Its future business may lie with a technology that ensures no single vendor can own it.  It would like its future to be a matter of choosing a bit from both plates — a little from this column, some from that column.

But in that event, it will be customers who make the choices.  That’s when all the guarantees fall apart.

About the Author(s)

Scott Fulton III


Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
