Meg Whitman, CEO of Hewlett Packard Enterprise, in New York City in November 2015. Andrew Burton/Getty Images

Why HPE Chose to Ship Docker in All Its Servers

The reasons involve how the decision plays into the needs and desires of HPE's service provider-style customer base

One of the big headlines that emerged from Hewlett Packard Enterprise’s Discover conference in Las Vegas last week was CEO Meg Whitman’s statement that she would be willing to consider public cloud partnerships with Google and Amazon similar to the deal the company struck with Microsoft last December for Azure services. Another was the announcement that HPE would soon begin shipping Docker – by all accounts the world’s leading application container platform – with all its servers.

There was a time not long ago when, if a manufacturer of HPE’s stature shipped commercially branded software produced by a vendor of Google’s or Microsoft’s stature, it would immediately trigger skepticism and a truckload of negative comments. Docker is not a brand at that level, at least not today. What the move does accomplish is the inclusion of a component critical to modern data centers – especially of the hyperscale variety – in servers from a company responsible for about one-quarter of all global server sales by revenue, according to IDC.

Yes, it’s an open source component; yes, it’s not booted automatically; yes, it’s not part of some classic co-branding scheme where server cases are adorned by blazing blue logos of certification. We live and work in a very different world of data centers, where what we call our “hardware” is actually more of a liquid commodity, and it’s the software that firmly bonds it all together.

“While Docker is a disruptive technology,” said Docker Inc. CEO, Ben Golub, on stage with Whitman during the opening-day keynotes at HPE Discover, “we don’t want the adoption to be disruptive. We want it to be as easy and evolutionary as possible. The fact that you can get servers from HPE, hyper-converged systems from HPE that out-of-the-box have Docker’s commercial engine and commercial support involved, bundled – and you can get that from one company, working in concert – is really remarkable.”

Golub went on to praise his company’s and HPE’s collective efforts at integrating their tools – for instance, Docker’s orchestration tool Swarm in conjunction with HPE’s classic OneView. All of that makes for a nice footnote at the end of a convention news rundown story. But historically, back when giants made bundling deals with giants, both partners would argue that something about the joining of their two services or products would be greater than the sum of their parts.

That surplus element was not discussed at length on stage, perhaps because Docker is not perceived as an A-list market player. Analysts asked about the benefits of the deal chalked it up as a great boost for Docker and probably a blanket endorsement of containerization as a viable platform for workload orchestration. But to understand the elements at play here, you have to look more deeply into the state of the technology orchestrating workloads in today’s data centers.

Global corporations – and, to a greater extent than ever before, mid-level enterprises – want to provide IT services to their employees at all levels, as well as to their customers, using a service provider model. They’ve seen the Amazon model and the Google model succeed, and they want their own pieces of them. OpenStack and Docker have both made tremendous headway toward enabling organizations to adopt service provider-style IT models, with self-provisioning, variable distribution at scale, and continuous integration.

So the three big reasons why HPE’s inclusion of Docker is, at the very least, not a small deal all involve how the decision plays into the needs and desires of this SP-style customer base.

1. Docker helps HPE support more cross-platform workloads. One of the least appreciated aspects of Docker in recent months has been its assimilation into the universe of Windows Server, which remains so very relevant to the established HPE customer base. Indeed, some HPE engineers present at Discover this week were not quite aware that, for well over a year, Docker has been a staging platform for more than just Linux containers.
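
To make that concrete, here is a minimal sketch using the Docker SDK for Python: the same client code drives either kind of host, since the daemon reports whether it serves Linux or Windows containers. The image names are illustrative, not anything HPE or Docker ships.

```python
# Minimal sketch, using the Docker SDK for Python (docker-py): the same client
# API drives Linux and Windows Server container hosts; only the images differ.
# Image names below are illustrative.
import docker

client = docker.from_env()
host_os = client.info().get("OSType")  # reported by the daemon: "linux" or "windows"

if host_os == "windows":
    image = "microsoft/nanoserver"  # illustrative Windows base image
    command = ["cmd", "/c", "echo hello from a Windows container"]
else:
    image = "alpine:latest"         # illustrative Linux base image
    command = ["echo", "hello from a Linux container"]

# Run the container, capture its output, and remove it when done.
output = client.containers.run(image, command, remove=True)
print(output.decode().strip())
```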

Helion Cloud Suite is the umbrella brand for HPE’s line of software for building data centers into private and hybrid cloud platforms. Helion OpenStack is the most obvious member of that suite, and Docker effectively joins the line-up, but without sacrificing its native branding or product identity. HPE’s CloudSystem 10 is the latest version of its server, storage, networking, and software bundle pre-configured for provisioning cloud services. The delivery of these services to any client or consumer, even internally, is a process HPE describes as “vending.” In this model, variety of product is key.

As HPE executives described it to Data Center Knowledge this week, this system has three primary customer use cases: 1) virtual machine vending, which is the first generation of virtualization; 2) private cloud for application vending; 3) hyperscale multi-cloud deployment, involving the distribution of applications across clouds. Docker plays a role in use cases #2 and #3.

“Our view is, we want an open ecosystem with different alternatives to deliver those use cases,” Ric Lewis, HPE’s senior VP and general manager for data center infrastructure, said in an interview. “We want to enable the VMwares; we want to enable the Dockers; we want to enable the Hyper-Vs; we want to enable the Helion Cloud Suites for our own stuff. Just like some of the software stacks that enable multiple hardware, we want to do the same thing for them, because we know customers want it. But we also know they’ll want things integrated.”

Lewis went on to remind us that HPE has bundled VMware with servers for years, so bundling Docker is not in any way a departure, from a market perspective. But as HPE engineers reminded us this week, that point Lewis made about customers wanting things integrated is critical. As data centers adopt service provider models, they cannot set themselves up to deliver only one flavor of virtualization, or only another. And as prospective customers told HPE engineers during several demos we witnessed, it’s absolutely vital that the SP model they adopt enable the staging of containerized workloads alongside support for virtual machines, by way of hypervisors such as VMware’s ESXi, Xen, KVM, and, still bringing up the rear, Hyper-V.

So the cross-platform nature of what customers are demanding goes far beyond the border between Windows and Linux. They need to be able to deploy networks that can connect applications staged in containers with virtual machines managed by hypervisors, and in turn with SaaS applications served by public cloud providers.

Docker utilizes its own form of network overlays to enable communication between containers. Last year, it acquired a team of developers called SocketPlane to extend its capabilities in software-defined networking (SDN), and the growing ecosystem of Docker support products includes alternative overlays – for instance, from Weaveworks. While these options open up possibilities for microservices architecture, they do not (at least by themselves) enable networking across container and hypervisor workloads.
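
As a rough illustration of what an overlay looks like in practice, here is a minimal sketch using the Docker SDK for Python. It assumes a daemon running in Swarm mode (which Docker's overlay driver requires); the network, container, and image names are illustrative.

```python
# Minimal sketch: two containers joined to a Docker overlay network so they can
# reach each other by name, regardless of which Swarm node they land on.
# Assumes a daemon in Swarm mode; names below are illustrative.
import docker

client = docker.from_env()

# Create an overlay network; "attachable" lets standalone containers join it.
net = client.networks.create("demo-overlay", driver="overlay", attachable=True)

# A long-running service container on the overlay...
web = client.containers.run("nginx:alpine", detach=True,
                            name="demo-web", network="demo-overlay")

# ...and a second container that reaches it by DNS name over the overlay.
ping_output = client.containers.run("alpine", ["ping", "-c", "3", "demo-web"],
                                    name="demo-ping", network="demo-overlay",
                                    remove=True)
print(ping_output.decode())
```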

At Discover, HPE engineers demonstrated a networking scheme they’re currently building called Distributed Cloud Networking (DCN). Its basic purpose is to enable policy-driven networking across containers, virtual machines (all flavors except Hyper-V for now), and bare metal servers. One of the inhibitors to the adoption of Docker in data centers, beyond developer sandboxes, has been its seclusion from the rest of the network. Bringing Docker into the Helion fold lets HPE work to integrate workloads and implement multi-tenant distribution of VMs with containers and with bare metal servers at the full scale of the data center.

Simply letting customers choose Docker at their leisure and provision it themselves would not have risen to this level. Still, expanding Docker’s use cases beyond its own ecosystem is a very new subject, and Docker declined to comment on it.

2. Integrating Docker lets HPE build security in. One of the goals of the DCN project is to enable microsegmentation. There are many competing definitions of this term from various vendors – including HPE – all of whom want to put their own stamp on this emerging technology. But what these definitions have in common is this: DevOps professionals and security engineers can define access and usage policies for all workload classes in a data center network without them having to be segregated into separate subnets.
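
To illustrate the idea only – this is not HPE’s DCN API or any vendor’s actual product – here is a sketch in Python of a policy keyed to workload class rather than subnet, so the same rule covers a VM and a container sitting on the same network:

```python
# Purely illustrative sketch of microsegmentation as a concept: rules are keyed
# to workload classes and follow the workload, rather than being tied to the
# subnet it happens to occupy. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str            # "vm", "container", or "bare-metal"
    workload_class: str  # e.g. "web-frontend", "payments-db"

@dataclass
class PolicyRule:
    source_class: str
    dest_class: str
    port: int
    action: str          # "allow" or "deny"

POLICY = [
    PolicyRule("web-frontend", "payments-db", 5432, "allow"),
    PolicyRule("web-frontend", "payments-db", 22, "deny"),
]

def is_allowed(src: Workload, dst: Workload, port: int) -> bool:
    """Evaluate traffic against class-based rules; default-deny otherwise."""
    for rule in POLICY:
        if (rule.source_class == src.workload_class
                and rule.dest_class == dst.workload_class
                and rule.port == port):
            return rule.action == "allow"
    return False

# The same rule applies whether the frontend is a VM and the database is a
# Docker container, or vice versa -- no separate subnets required.
vm = Workload("app-vm-01", "vm", "web-frontend")
ctr = Workload("pg-container-7", "container", "payments-db")
print(is_allowed(vm, ctr, 5432))  # True
print(is_allowed(vm, ctr, 22))    # False
```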

Ethan Melloul, a CSA with HPE, demonstrated DCN with microsegmentation to attendees. “If you don’t have DCN, you can’t do microsegmentation with a container,” Melloul told us as he implemented a security policy that was applicable to a VM and a Docker container in the same network. With a similar method, he continued, an operator could perform a service insertion – effectively re-mapping multiple virtual firewalls, or other security services, to an appliance.

During a demo session, HPE cloud architect Daryl Wan told attendees it’s a relatively trivial task for a security engineer to devise access control policies that apply to whatever’s in a subnet. But when two classes of workload are routed to the same subnet, it’s next to impossible without microsegmentation. A side benefit of this approach is that security policy follows workloads as they migrate to different hosts.

So as DCN matures, HPE will also be filling a security gap that many say has existed in the Docker ecosystem up to now – and which has been another historically touchy subject for Docker. During the keynotes, Docker’s Golub addressed Whitman’s question about Docker container security by assuring the audience that all assets in Docker’s repository are digitally signed. It will be wonderful when digital signatures can be leveraged for purposes beyond identity: for example, workload class identification for the purpose of defining security policy. This is something else that HPE’s direct participation brings to the table.
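
What that signing looks like in practice today is Docker’s content trust feature: when enabled, the client accepts only image tags carrying valid digital signatures. Here is a minimal sketch, driven from Python purely for illustration, with an illustrative image name.

```python
# Minimal sketch: pulling an image with Docker Content Trust enabled, so the
# client only accepts image tags that carry valid digital signatures.
# Driven via subprocess purely for illustration; the image name is illustrative,
# and pulling an unsigned tag will fail with content trust turned on.
import os
import subprocess

env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
subprocess.run(["docker", "pull", "nginx:latest"], env=env, check=True)
```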

3. A single point of support. This has been a problem with respect to nearly every commercially available open source project this decade, including OpenStack and Hadoop, as well as Docker: when multiple providers coalesce to provide service under one vendor, how well can that vendor provide support? This is a significant problem, especially when you consider that support is how these vendors earn their revenue.

During his segment of the keynotes, Golub was careful to mention Docker’s continuing role in providing support for Docker as a component of Helion Cloud Suite. However, multiple HPE product support personnel made it clear to prospective customers this week that, while Docker would provide expertise, HPE would serve as customers’ primary points of contact. And more than once we were reminded that HPE was effectively founded prior to America’s involvement in World War II, while Docker is barely six years old, and its product just going on four.

Perhaps the key lasting takeaway from Microsoft’s partnership agreement with Red Hat, announced last November, was a sharing of support resources, to the extent that the two companies would exchange personnel between each other’s offices. Certainly attendees of HPE Discover this week would like to see a similar arrangement between Docker and HPE. At any rate, what they clearly do not want to see is HPE fielding Docker-related support questions, posting them to Stack Overflow, and waiting for responses from the community. HPE cloud architects and product managers told Data Center Knowledge this week that HPE would be providing Docker expertise for Helion customers, to which Docker may contribute.

We asked Paul Miller, HPE’s VP of marketing for data center infrastructure, whether HPE’s integration of Docker with Helion (a process that began last year with enabling Docker visibility from its OneView management tool) was done more because HPE needed to weld Helion Cloud Suite’s components into a cohesive product, or because customers came to HPE directly and asked for it.

“Customers are seeing [Docker] as an alternative to virtualization, to simplify the delivery of applications, like Meg [Whitman] talked about on the stage,” Miller responded. “Since we’ve done integration of Docker with OneView... I can tell you, I’ve had more customers call us up and bring up OneView because of Docker integration than almost any other integration that we’ve done.” Yes, customers are having issues with this ongoing integration, and they’re also having successes. But the message is, they’re coming to HPE because they perceive OneView as the supplier of record.

The common theme here is this: making Docker available by way of Helion has compelled HPE to apply engineering and support expertise to the problem of integrating Docker with its existing product lines and services. That integration can only serve to improve how Docker works with Helion servers, and it can certainly open up new avenues for Docker containers cohabiting with other virtualized workloads. VMs, we are frequently reminded, are not going away soon and may never disappear. If Docker doesn’t engineer a peaceful co-existence, perhaps a company like HPE should.
