One Year In, Has DC/OS Changed the Data Center?

A product that the tech press predicted would change history is healthy and growing today. But maybe history has other plans for it.

It was being called “democratization,” with the help of some partly democratic marketing.  Condé Nast’s Wired, which aimed to do for technology what Vogue did for yesterday’s clothes and Vanity Fair for yesterday’s celebrities, applied the superlative, “an enormous revolution sweeping information technology.”  The proliferation of polysyllabic poetry suggested a new chapter in history was being opened.

April 19, 2016:  On this day Mesosphere officially released to the open source community what it called Data Center Operating System, although it had been selling the software for several months already.  Built around the Apache Mesos workload scheduler, its basic operating principle was this:  Pool the compute resources of multiple data center clusters into a single virtual infrastructure.  Then distribute scalable workloads throughout that pool and continually manage them for efficiency.
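In practice, that principle means an operator declares what to run, and the scheduler decides where in the pool it lands.  Below is a minimal sketch of posting a Marathon-style application definition to a DC/OS cluster; the cluster address, token, and application values are illustrative assumptions, not drawn from Mesosphere's documentation.

```python
import requests

# A Marathon-style app definition: the operator declares *what* to run;
# the scheduler decides *where* in the pooled infrastructure it runs.
app = {
    "id": "/demo/web",                 # hypothetical application path
    "cmd": "python3 -m http.server 8080",
    "cpus": 0.1,                       # fraction of a pooled CPU per instance
    "mem": 64,                         # MB per instance
    "instances": 3,                    # spread across the pool by the scheduler
}

DCOS_URL = "https://dcos.example.com"  # assumed cluster address
ACS_TOKEN = "<auth token>"             # assumed; DC/OS auth varies by setup

resp = requests.post(
    f"{DCOS_URL}/service/marathon/v2/apps",
    json=app,
    headers={"Authorization": f"token={ACS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("deployed", resp.json().get("id"))
```

In this model, scaling is a matter of submitting a new `instances` count to the same endpoint; the pool, not the operator, works out placement.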

Mesosphere marketing manager Andrew Llavore wrote in a company blog post that day:

“DC/OS helps companies adopt the advanced operational and architectural practices of organizations such as Apple, Yelp, Netflix, and Twitter (and, at a broader level, Facebook and Google) without having to reinvent the wheel or hire scores of distributed systems engineers. DC/OS brings this type of advanced data center environment and application platform to anyone, anywhere they wish to run it.”

The system bridges resources from owned and operated servers, leased servers, and cloud service providers.  The result is an infrastructure pool that pays less attention to whether a resource is owned or rented than to how well it is performing at the moment.  It lengthened the eligibility list of organizations able to run modernized applications as microservices — functions that scale themselves up or down with customer demand.  More importantly, though, DC/OS eliminated the mandate that an organization build or lease a colossal data center complex in order to deploy services that can be considered competitive in today’s marketplace.

DC/OS was not hyperconvergence.  But if it did everything Mesosphere said it would — and in the way it said it would — the question deserved to be asked:  Who needs hyperconvergence when you have DC/OS?

A Crowd Gathers at the Front Lines

To drain some of the romanticism from the subject, DC/OS was — and is — not alone.  Docker is the absolute champion of this new model of compartmentalized applications — the arbiter of the divorce between programs and the VMware / Xen / KVM style of virtual machine.  Without Docker containers, or the OCI containers based on Docker, there would be no DC/OS.  Kubernetes is the current leader in orchestration — in the concurrent management of containerized programs and distributed services.  Docker has an alternative, called Swarm; and Mesosphere bundles its own orchestrator, originally called Marathon, with DC/OS.  But organizations can, and do, run Kubernetes or Swarm on DC/OS instead; while the orchestrator manages the applications, the underlying component — which Mesosphere calls an operating system — manages the virtual infrastructure.

At its premiere, DC/OS was dazzling.  Its live resource allocation chart, composed of colorful donut graphs whose wedges represent the distribution of active workloads, impressed Microsoft engineers so deeply that they rebuilt Azure Container Service to support it, and began using it themselves.  There was one report that Microsoft had tried to purchase Mesosphere in the summer of 2015, but ended up instead joining HPE in a Series C funding round some months later.  That round coincided with Mesosphere releasing version 1.0 of Marathon, just weeks before DC/OS made its way to open source.  And while AT&T was still wrestling with how to couple NFV with its internal applications, Verizon was already moving to DC/OS.
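The data behind those donut graphs is simply the cluster's aggregate ledger of offered versus used resources.  As a rough sketch — assuming a DC/OS-style proxy in front of the Mesos master's state-summary endpoint, unauthenticated access for brevity, and field names as the Mesos API reported them in that era — a script could total the pool like this:

```python
import requests

DCOS_URL = "https://dcos.example.com"  # assumed cluster address

# Each agent node reports the resources it offers and the share in use --
# the raw material for the allocation donuts in the DC/OS UI.
summary = requests.get(f"{DCOS_URL}/mesos/master/state-summary",
                       timeout=30).json()

total_cpus = used_cpus = total_mem = used_mem = 0.0
for agent in summary.get("slaves", []):
    total_cpus += agent["resources"]["cpus"]
    used_cpus += agent["used_resources"]["cpus"]
    total_mem += agent["resources"]["mem"]
    used_mem += agent["used_resources"]["mem"]

print(f"CPU: {used_cpus:.1f} of {total_cpus:.1f} allocated")
print(f"Memory: {used_mem:.0f} MB of {total_mem:.0f} MB allocated")
```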

If DC/OS is achieving its goals, then the current activity in the data center market should be providing us with evidence.  Infrastructure virtualization (perhaps we should start calling it that, assuming we don’t mind the abbreviation) should be making a dent in how Mesosphere’s high-end customers are architecting and provisioning their physical infrastructure, and changing the way other companies perceive their plans for data center construction.

Put another way:  If the revolution is for real, and the sweeping is being swept, the upsetting of modern data center topology should be in progress.  Right?

“The feedback that I’m hearing from customers is that they’re thinking less and less about the infrastructure itself, knowing now that they can just aggregate compute,” Peter Guagenti, Mesosphere’s CMO, said in an interview with Data Center Knowledge. (“Compute” is a malleable substance here, in the manner of “compote” or “compost.”)

Guagenti acknowledged that his firm’s customers are moving their lightweight Web apps onto platforms running on pools of commoditized, less specialized hardware.  At the heavier end of the scale, he said, customers’ data services are gaining an improved management model, and the applications behind them are running more efficiently.  All of which may be interesting, though not the sort of material that a Condé Nast publication would go all “1812” about.

If a revolution is indeed taking place, or even if the first shift of sweepers has only just now entered the building, then either the old order is being swept out or the new one is being swept under the rug.  Even if all this revolution has accomplished thus far is the mass hybridization of data center resources, then where’s the proof in how data centers are being designed?  If fewer tenants are building big, centralized complexes in competitive markets, then why did the largest providers kick off 2017 by doubling down on capacity?

Revolutions, I put it to Mesosphere, tend to revolve something.

“That’s an interesting hypothesis,” responded Mesosphere’s Guagenti.  “I don’t know if I hundred-percent see that as a pattern emerging.  But we’re moving to a world where compute is absolutely moving back to the edge.”  Centralized data centers, he added, are not ideal for the demands of “certain new types of applications.”

Indeed, this is a trend we’ve observed from two distinct vantage points: the edge of the customer’s network and the edge of the service provider’s domain.  As the clients of computing power become more distributed geographically, one way to guarantee minimum latency is to move that power closer to them.  Guagenti’s larger point is that DC/OS is enabling this wider distribution, while at the same time pooling that distributed power so that it may be managed more easily as a single resource.

But edge computing scenarios are typically reserved for particular classes of applications — for example, mobile logistics, or content delivery networks, or the Internet of Things.  Collectively, these classes represent a small subset of the general-purpose workloads managed by data center tenants today.  Though the micro data center form factor is more than just a novelty, it is certainly less than a ubiquity.

Communications networks, content delivery networks, and service providers all have particular requirements for service levels.  Mesosphere’s continuing relationship with Verizon testifies that DC/OS is making headway there.  But for financial services, healthcare, transportation, and the world’s other major industries, the distinctions among their service level requirements are all qualitative.  And qualitative variables may be virtualized.

Perhaps that’s the whole point.

A Place for Specialization

In the past "we expected application infrastructure to be something that we built and tended, and that the infrastructure itself was something that we built specifically towards a targeted use,” Eric Hanselman, chief analyst at 451 Research, said in an interview with Data Center Knowledge.

Before the turn of the century, an SAP customer would build its computing infrastructure and service delivery model around the distribution of SAP services, he explained.  The hardware compatibility list of the applications an organization ran determined the buildout of its “data centers,” or what passed for them at the time.  And during this period organizations made significant investments in ensuring these delivery models kept working — which typically meant guaranteeing that they never changed.

“That was driven out of a day and age when the requirements for managing them were really very specialized,” said Hanselman.  “You had to have people who knew enough about the environment ... to get to the point where you could handle it generically — where you had high-level abstractions for things like building and imaging a server.

“What’s happened is, we’ve now gotten good enough that we’ve been able to build abstractions that allow us to say, ‘Create me a server with this particular operating system on it,’ then invoke automation or templates or builds in one form or fashion, and do that relatively generically.”
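Hanselman’s “create me a server” abstraction is easy to caricature in code.  The sketch below is purely illustrative — none of these names belong to a real provisioning library — but it shows the shape of the idea: the request stays generic, and the specialized knowledge lives behind the interface.

```python
from dataclasses import dataclass

# Hypothetical abstraction layer -- illustrative names, not a real API.
@dataclass
class ServerSpec:
    os_image: str   # "this particular operating system"
    cpus: int
    mem_mb: int

class BareMetalBackend:
    def provision(self, spec: ServerSpec) -> str:
        # A real backend would boot and image a physical machine here.
        return f"imaged physical host with {spec.os_image}"

class CloudBackend:
    def provision(self, spec: ServerSpec) -> str:
        # A real backend would call a cloud provider's instance API here.
        return f"launched cloud instance with {spec.os_image}"

def create_server(spec: ServerSpec, backend) -> str:
    """The caller declares *what* it needs; the backend decides *how*."""
    return backend.provision(spec)

spec = ServerSpec(os_image="ubuntu-16.04", cpus=2, mem_mb=4096)
for backend in (BareMetalBackend(), CloudBackend()):
    print(create_server(spec, backend))
```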

Specialization at this level is almost entirely qualitative.  The fact that specialized service levels and service models for almost any customer may be carved out of the same malleable substance is the foundation for DC/OS — and for Docker, Kubernetes, Spark, Prometheus, Jenkins, and other Marvel Comics-sounding names.

From one perspective, the idea that data center infrastructure will eventually (if it hasn’t already) be abstracted entirely away from the platform that runs the software is giving rise to the notion that “hybrid cloud” is a false concept.  The most vocal proponent of this point of view is the celebrated Google software engineer Kelsey Hightower, who at a conference last March amplified his claim: “There’s no such thing as hybrid cloud.”

It’s this point of view that gives rise to the idea that the architecture of data centers is becoming immaterial.  Granted, it’s software developers who uphold that idea — they’re not the ones who are in charge of operations, regardless of how closely you shove the “Dev” prefix together with the “Ops” suffix.  But it’s a point of view that you might think would benefit Mesosphere most of all — a company whose core product seeks (or at least sought one year ago) to make all data centers into liquid.

“Maybe if you’re Google ... I can’t even see how, if you’re Google, there’s no need,” responded Somik Behera, Mesosphere’s DC/OS product lead.  “There’s a lot of physics that software can’t take on.  That’s where hybrid comes in.  There’s low latency and high latency.  That’s ‘This is Earth,’ versus, ‘This is Mars.’  This application spans North America; this other spans the entire planet Earth.  There will be this notion of hybrid.  What metric we will use to define hybridity may change.”

Because use cases will differ, Behera admits, the data centers whose structure and definition are being abstracted from the software platform will continue to mix and match capacities, availabilities, latencies, and locations.  On the one hand, it’s this abstraction layer that will forever separate applications from infrastructure.  On the other, it’s the services that provide this separation — DC/OS among them — that share the responsibility, along with some of the glory.  And the evidence of their success will be that the data center market goes on much as it has, pretty much unimpeded.
