Dell Joins AT&T to Move Edge Data Centers Wherever They Should Be

An extended partnership for Airship, which combines Kubernetes and OpenStack, could bring edge computing closer to the enterprise after all.

Scott Fulton III, Contributor

August 15, 2019

6 Min Read
AT&T switching facility (Photo: John W. Adkisson/Getty Images)

What could have become an all-out tug-of-war between the enterprise and service providers over the land rights for smaller, nimbler data centers may be closer today to ending in a truce. Two big names that you’d normally think were on opposite sides of this issue, AT&T and Dell Technologies, announced an agreement this morning to collaborate on Airship, the open source platform that integrates the Kubernetes container orchestrator with the OpenStack cloud platform.

Their goal: to enable telcos’ existing virtual network functions (VNFs) to co-exist with a new breed of containerized network functions (CNFs) on the same platform, in the same network space, opening the old architecture to new means of deployment and management. As a result, enterprise customers may end up with more of a say in where “the edge” should eventually be located.

“Today, the current use case for AT&T is the Network Cloud, which is a near-edge play that supports the 5G packet core,” explained Ryan Van Wyk, AT&T’s associate VP for Network Cloud. “That said, with Airship 2.0, one of the goals there is to support much smaller footprints and have this notion of an ephemeral under-cloud control plane. So, you can now deploy very small footprints of compute and have the control plane beneath to orchestrate that compute to appear when it’s needed, and then remove itself when it’s not.”


Dell is highly invested in those smaller footprints.

“If you look at where our Virtual Edge Platform is today — which is actually a customer edge platform — it’s already started in mass deployments today,” said Erik Vallone, Dell EMC’s director of service provider solutions.

Where Does the Edge Go Now?

Trials of edge computing deployments are already taking place on customers’ premises today, Vallone added. By that, he means the capabilities to perform core tasks and run primary applications — things for which an enterprise used to rely upon its own data center — are already being realized in branch offices and remote facilities.

Next year, those trials will move into the proof-of-concept phase, he told Data Center Knowledge. That transition may require re-deploying existing edge applications and functions on new grades of hardware. That task may be greatly expedited by the upcoming version 2.0 of the Airship platform, which will include new components that both standardize and automate the deployment of IT workloads into Kubernetes.

“By 2020, I would expect that we would see this playing out in at least some customer locations,” said Van Wyk, referring to where AT&T perceives Airship-enabled edge deployments being located over the next two years.

Both network edge compute (where telcos plan their edge operations to take place) and mobile edge compute (where the 3GPP organization that defines 5G expects edge operations to be) will be enabled by a single distributed network brought together under the Airship platform, Van Wyk said. That’s important for a number of reasons, one of which isn’t too obvious: whether edge data centers and the servers deployed there count as service provider equipment or customer premises equipment (CPE) makes all the difference as to where they can physically — and even legally — be located.

In an earlier interview with Data Center Knowledge, a different AT&T associate VP, Jeff Shafer, representing the Edge Solutions division, explained in a variety of ways why locating edge data centers adjacent to telco base stations and cellular transmitter towers was, as he put it, unworkable. Shafer’s explanation could be interpreted as implying that deploying edge workloads incorporating wireless network data in any location other than inside the wireless provider’s network itself would be equally unworkable.

But we clearly heard the reverse message from both AT&T’s Van Wyk and Dell’s Vallone. Indeed, part of Airship’s purpose is to standardize the way edge workloads are deployed in a highly distributed network — to avoid situations where a single software deployment model is only workable for a single, unalterable hardware configuration.

“Airship itself is essentially the under-cloud for the infrastructure,” explained Van Wyk. “It manages the lifecycle of all the software you need to run a cloud. So from that perspective, it is not a virtual network function, nor does it have any [VNFs] that are a part of it.”

“As we move forward,” he continued, “there will be use cases where we run containerized network functions side-by-side with virtual network functions. We will deploy at the edge an OpenStack cloud that is supporting the [VNF], and that OpenStack cloud infrastructure is deployed and managed by Airship. Then in that same environment we can have Airship manage CNFs side-by-side with the VNFs. It’s going to allow us to evolve the platform in-place, as those things become firmed up over the next couple of years.”

Change of Mind

The drive to convince telcos to implement OpenStack at the base layer of their distributed networks began over three years ago at the OpenStack Summit in Austin. There, it was Red Hat (then an independent company, now part of IBM) that spearheaded the movement to orchestrate the deployment and management of network functions virtualization (NFV) operations, using a platform originally created for x86 and x64 enterprise networks.

For about two years thereafter, there was active opposition to the idea, including from telco engineers who argued that Kubernetes and Linux containers represented an altogether different type of function from what telcos required from their VNFs.

Those objections were largely laid to rest by AT&T’s own introduction of Airship 1.0 this past April, chiefly through its inclusion of Helm, an open source package manager that automates the deployment of Kubernetes workloads across a variety of hardware types.
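To illustrate how Helm abstracts a deployment away from any one hardware configuration (the image, values, and node label below are hypothetical, not taken from Airship’s actual charts), a chart’s defaults live in a values.yaml file that operators can override per site or hardware class:

```yaml
# values.yaml — hypothetical defaults for a Helm chart. Helm substitutes
# these values into Kubernetes manifest templates at deploy time, so the
# same chart can target different classes of hardware.
replicaCount: 3
image:
  repository: registry.example.com/openstack/keystone
  tag: "15.0"
nodeSelector:
  hardware-profile: edge-small   # hypothetical node label for a hardware class
```

Re-targeting the same chart at a different class of hardware then becomes a matter of overriding values at install time (for example, with Helm’s `--set` flag) rather than maintaining a separate deployment model for each configuration.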

The new version, to which AT&T and Dell will jointly contribute, will finalize a means of specifying the resources a network function may require using YAML, the declarative language Kubernetes has used since its inception. That, along with Argo, a Kubernetes-native component for running workloads in parallel, could make it feasible for small, Kubernetes-based workload deployments to be spun up and spun back down on demand.
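As a sketch of what such a YAML resource declaration looks like in practice (the pod name, container name, and image are hypothetical, not drawn from Airship), Kubernetes lets a workload state its CPU and memory needs declaratively:

```yaml
# Hypothetical pod spec for a containerized network function, declaring
# the resources it requests and the limits it must stay under.
apiVersion: v1
kind: Pod
metadata:
  name: example-cnf            # hypothetical name
spec:
  containers:
  - name: packet-gateway       # hypothetical CNF container
    image: registry.example.com/cnf/packet-gateway:1.0
    resources:
      requests:                # minimum the scheduler must guarantee
        cpu: "2"
        memory: 4Gi
      limits:                  # hard ceiling enforced at runtime
        cpu: "4"
        memory: 8Gi
```

Because the declaration is data rather than code, an orchestrator can create and destroy such workloads on demand, which is what makes the spin-up, spin-down model described above feasible.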

Such a deployment option could, if realized, substitute for network slicing, a system of dividing server resources into slices or stripes that are permanently delegated for internal or commercial customer use. The problem with that approach is that it doesn’t scale well, either up or down.

Vallone told us that Airship does not, and will not, give any special advantages to any brand or manufacturer of hardware, including Dell. The two companies’ partnership will not extend, we were told, to any physical form factors for edge hardware, or for server racks deployed in edge locations.

About the Author

Scott Fulton III


Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
