Kubernetes 1.5 Could Bring Pre-configured, Containerized Data to Bare Metal
Supermicro’s commodity servers (Photo: Supermicro)

An out-of-the-box solution for staging huge, low-latency database operations may be in the works, thanks to a new version of the leading open source orchestrator and an expanded partnership around it.

The real meaning of containerization in the data center — the trend that caught fire three years ago with the arrival of Docker — is the ability to deploy parts of workloads across a variety of servers, and scale those deployments up or down as necessary.  This is different from firing up more virtual machines, with their built-in server operating systems and self-absorbed management consoles, and artificially balancing traffic loads among them.

As the trend becomes more widespread among data centers in production, the focus is shifting from the containers themselves (e.g., Docker, CoreOS rkt) to the orchestration systems that maintain workload deployments across server clusters, and even between clouds.  After a series of minor delays that have become commonplace in open source development, Kubernetes — the orchestration platform sponsored by Google, and based on Google’s own internal architecture — is declaring general release for version 1.5 this week.

With it, the reasons for data center operators to maintain artificial boundaries between conventional, virtual machine-based workloads and newer, hyperscale, microservices-based workloads may diminish yet again.  At an industry conference sponsored by CoreOS on Monday, a Google product manager made it official that intent-based configuration firm Datera has partnered with CoreOS, Google, and bare-metal server and switch maker Supermicro.

Their partnership, at least as Google and Datera have put it, is intended to produce a standard around persistent data containers — the ability to deploy huge databases that maintain state (“stateful”), amid the seemingly chaotic environment of microservices, all within the same cloud.
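
Kubernetes 1.5 gives that idea concrete shape: the release promotes StatefulSets (formerly PetSets) to beta, granting each database replica a stable network identity and its own persistent volume.  As a rough sketch of the kind of deployment the partners describe (the postgres image and the “datera-iscsi” storage class name are hypothetical stand-ins, not anything the partnership has published):

    # Sketch of a StatefulSet, beta as of Kubernetes 1.5.  Each replica keeps
    # a stable identity and a dedicated volume; the image and storage class
    # below are hypothetical stand-ins.
    kubectl create -f - <<EOF
    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db
      replicas: 3
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
          - name: db
            image: postgres:9.6
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:
      - metadata:
          name: data
          annotations:
            volume.beta.kubernetes.io/storage-class: datera-iscsi
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 50Gi
    EOF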

But Supermicro’s involvement implies a goal that looks a lot more like a physical product: namely, pre-configured hyperscale servers capable of hosting huge workloads at tremendous speeds.  The management of those workloads would become significantly easier with Kubernetes’ new cluster setup tool, called kubeadm (pronounced “koob-adam”), which debuted in alpha with version 1.4 and continues to mature in version 1.5.
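
What that looks like in practice, under kubeadm’s published workflow, is a two-step bootstrap: one command on the master, one on each node.  A minimal sketch (the token and address are placeholders):

    # Minimal kubeadm bootstrap sketch; <token> and <master-ip> are
    # placeholders, not values from any real cluster.
    kubeadm init                               # on the master; prints a join command
    kubeadm join --token <token> <master-ip>   # on each bare-metal node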

A Datera company blog post published Tuesday explains the problem posed by typical microservices architectures, where small bits of data are replicated among thousands of containers simultaneously.  It’s not an efficient system, and many organizations (for example, ClusterHQ) have been working to deliver alternatives.

But Datera’s solution will have the backing of CoreOS, which produces the Tectonic platform — a commercial implementation of Kubernetes.

“Datera has developed a new data architecture from the ground up with the core tenet to decouple application provisioning from physical infrastructure management,” the company writes.  “Application data should have zero knowledge of the underlying physical resources.  It is built for highly distributed environments, can be deployed and managed as software storage and is tightly integrated with modern container orchestrators … through volume plug-ins.”
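
In Kubernetes terms, that decoupling maps onto the persistent volume claim: an application requests storage by size and class, and a volume plug-in resolves the claim against whatever physical backend the operator has attached.  A minimal sketch, assuming a hypothetical “datera-iscsi” storage class (storage classes were still beta in 1.5, hence the annotation):

    # The application asks for storage by claim, with zero knowledge of the
    # physical resources behind it; the "datera-iscsi" class is hypothetical.
    kubectl create -f - <<EOF
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: orders-db
      annotations:
        volume.beta.kubernetes.io/storage-class: datera-iscsi
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
    EOF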

Tectonic is already part of select Supermicro servers, delivered since the server maker first partnered with CoreOS in the spring of 2015.  And Tectonic has the blessing of Google to back it up, since Google has been instrumental in the development of the plug-in architecture discussed in the blog post.

All of this makes a company very familiar to Data Center Knowledge readers very happy indeed: Digital Realty.

“I am a huge believer in containers,” announced Chris Sharp, Digital Realty’s CTO and senior vice president for service innovation, in an interview with Data Center Knowledge.

“It really frees up the application lock-in that a lot of companies have been stifled by in the past.  The admin of these containers, and the portability of those applications, is amazing.  But one of the elements that’s overlooked is the interconnection between the two destinations.  So even though your workload is containerized, and it can be ported between Amazon or Microsoft and your private cloud, the interconnections that a lot of people rely on are very prohibitive.”

By Sharp’s measurement, the average lag time between the state of the art in microservices development and its actual deployment in data centers is still about two years.  But he says he’s already faced customers with colossal data containers, verging on tens of terabytes in size, for whom connectivity has posed a major roadblock.

Granted, Digital Realty’s Service Exchange, announced last September, is Sharp’s suggested remedy for the connectivity issue.  But he’s looking forward to the possibility that a solution such as the one made feasible by Kubernetes 1.5 and this expanded partnership could create more opportunities for customers to bring their workloads closer to Digital Realty’s points of presence.

“If you’re moving a 50-terabyte workload that’s in a container across the Internet, that’s a month,” said Sharp.  “That’s not very elastic.  When you look at it as an aggregate, and you need to evolve and move those things around in the aggregate, is where you need high-throughput, low-latency connectivity.”
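
Sharp’s month figure roughly checks out: moving 50 TB in 30 days means sustaining about 150 Mbps around the clock, a rate few Internet paths between clouds hold steadily for weeks on end.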
