The most incredible, and still under-appreciated, aspect of cloud platforms is that they gather an astounding variety of systems, with wildly varying performance, into a collective cluster that can share processing power and storage. An enterprise data center, logically speaking, has become almost by definition a cloud unto itself.
So is the edge, in this construct, a component of this cloud or an appendage of it?
It’s not a trivial question. It has everything to do with how your enterprise’s edge devices are physically serviced and administered. Yes, edge assets and edge data centers are designed to be centrally managed from a remote location, but if edge processors will be running tasks that are exclusive to edge computing anyway, then is a single management console in one location such a good idea?
The Edge as a Connected Cloud
One set of answers to these questions comes from a company that, just a few short years ago, would have seemed an unlikely candidate for inclusion in any discussion of edge computing: cloud file storage provider Ctera.
“Ctera Edge X Series combines the roles of hyperconverged solutions with file services for the edge in one single solution that’s fully managed by an enterprise management solution,” says Oded Nagel, Ctera’s chief strategy officer.
Ctera Edge X server with HPE SimpliVity 380 HCI. (Image: Ctera)
It looks like a server because that’s what Ctera Edge X is — not a service or a software-as-a-service (SaaS) solution, but a component built by Hewlett Packard Enterprise and repurposed. It’s designed to be deployed in edge locations or branch offices, where the hyperconverged infrastructure functionality would be supplied by HPE’s SimpliVity.
It only makes sense that Ctera would seek to extend its existing object file management system to new and different classes of server deployments. However, any system whose sole purpose is extending the file system would probably be rendered redundant by an existing virtualization platform, such as VMware’s, that already performs that function. So Ctera is trying something bold: bundling a hyperconvergence platform with its shared file system so that remote admins can manage both physical and virtual edge systems.
If an enterprise is operating a few dozen branches and distributing computing power to those offices or sites, Nagel tells Data Center Knowledge, they tend to use their own choices of file servers, storage, and often applications. A scan of their legacy NAS systems often reveals that each branch has its own island of data, none of it visible from any one location. But in nine out of ten cases, none of the unstructured portion of that data is accessed on a daily basis, he says.
So Ctera’s value proposition starts by suggesting that these branches move their island data out of the NAS and into Edge X’s flash memory cache. From there, the system will move that data to Ctera’s object storage, all without altering the existing access control lists restricting access to that data. Any existing storage devices in the branch may now be linked through Edge X’s built-in hyperconverged infrastructure.
Ctera’s management portal is run in the central data center, though Nagel admits that it would not replace existing VMware services for managing hypervisors.
“We believe that the edge today is one of the main pain points in a large enterprise organization,” he says. “It doesn’t make any sense to buy, all the time, more local storage for the remote office. Why not use caching and tiering technology, and only keep the most recent data on the local storage? The rest can be tiered to a cloud solution, which can be on-premises or public, still allowing the users to access data on-demand, case-by-case.”
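The caching-and-tiering pattern Nagel describes can be sketched as a simple recency policy: files touched within some window stay on local flash, and everything else is tiered to object storage. The sketch below is a hypothetical illustration only, not Ctera's actual algorithm; the 30-day window, file paths, and function name are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical recency window: files accessed within it stay in the
# local flash cache; older files are candidates for cloud tiering.
TIER_WINDOW = timedelta(days=30)

def plan_tiering(files, now):
    """files: list of (path, last_access: datetime) tuples.
    Returns (keep_local, tier_to_cloud) lists of paths."""
    keep_local, tier_to_cloud = [], []
    for path, last_access in files:
        if now - last_access <= TIER_WINDOW:
            keep_local.append(path)       # hot: keep on branch flash
        else:
            tier_to_cloud.append(path)    # cold: move to object storage
    return keep_local, tier_to_cloud

now = datetime(2019, 6, 1)
files = [
    ("/branch/reports/q2.xlsx", datetime(2019, 5, 28)),  # recently used
    ("/branch/archive/2015.zip", datetime(2016, 1, 4)),  # untouched for years
]
local, cloud = plan_tiering(files, now)
```

In a real product the policy would also honor the existing access control lists and move data asynchronously, but the core economic argument is this simple: only the hot fraction needs to occupy branch-local storage.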
The Edge as the Same Cloud
“Why would a business want data center servers at the edge in the first place?” asks Kit Colbert, VP and CTO of VMware’s Cloud Platform Business Unit. “Because once you do that, you’re going to introduce all sorts of problems. You’ve got to manage the servers, you’ve got to secure them, upgrade them, blah-blah-blah — lots of stuff there.”
First and foremost in organizations’ minds is the problem of latency. They’ve been convinced of the value of edge computing that takes place close to the generation point of data, so it doesn’t make much sense, Colbert argues, for a management portal to put that edge data into a separate trough, re-introducing all the latency that an organization took the time and trouble to purge.
A food processing plant, for example, has time-sensitive controls in place, tracking the temperatures of both the environment and the mechanisms that come into contact with food as it’s being packaged and prepared for shipping. This was a real problem for one food services customer of VMware, Colbert says. Yogurt, in the process of being packaged, would explode, spreading strawberry, lime, and banana yumminess all over multimillion-dollar equipment.
For this customer, keeping yogurt from exploding was job one, so it wouldn’t make sense for the system that facilitates its packaging to be treated by its management console as an appendage of the main system.
“Given the sorts of business impacts they can have for not handling that properly, it’s very well worth those customers’ efforts to put servers there,” Colbert argues.
Low latency, enhanced physical security, and the sovereignty of data collected across countries’ borders (especially in Europe) are what Colbert sees as edge computing customers’ three most prominent concerns. Many organizations end up dedicating a team exclusively to these concerns, he says, whose sole job is to administer all the physical devices with processing power throughout their branches. For retail operations, simply making one round to see to the needs of every node in a circuit can take three years to complete.
“Then by the time they’re done, they’ve got to go back to the first store, because now that store is three years out of date,” he says.
Independent processing units — be they servers or embedded devices — scattered across thousands of square miles of territory present both scalability and manageability bottlenecks for large enterprises.
Last August, Colbert, along with partners Dell EMC and Lenovo, led the team that launched VMware’s Project Dimension. At VMware, services that have not been fully productized are dubbed a “project.” The goal of Dimension has been to build a public cloud-like provisioning system that pools together server processor power in edge deployments and branch offices. But unlike a stand-alone management portal, Dimension would be surfaced through VMware’s existing VMware Cloud portal, which customers are already using for deploying services across their on-premises and Amazon Web Services infrastructure. This April, Dell announced the first productization of Project Dimension: VMware Cloud on Dell EMC.
The architectural chart for Project Dimension. (Image: VMware)
“Dimension is not focused on the core vSphere aspects; we have a whole vSphere team to go do that,” Colbert says. “What we’re focused on is that next level of abstraction: How do you manage at scale across all these different locations, both edge and data center?”
While an organization typically sends its infrastructure team to administer and repair edge deployments, its IT and workload teams remain headquartered on-premises. It’s the latter team that can spin up a new virtual machine for an application and update it several times daily. But the IT team typically has no visibility into edge facilities. This is VMware’s motivation for extending its cloud portal’s visibility to edge servers: people managing workloads gain access to functionality throughout the entire global infrastructure without having to wait for the next scheduled site visit.
The Edge as Its Own Cloud
The counter-argument to VMware’s proposition goes like this: The applications and functions that run at the edge, for many organizations, will not be the same ones that run in their central or colocation facilities. Your yogurt explosion-proof mechanical temperature sensors, for example, will be running as close to the affected machines as possible, and the processors running those sensor functions may or may not be full-scale Intel Xeon CPUs or their equivalents.
So grafting their management platforms onto a full-scale cloud may be neither necessary nor warranted. It may tie everything into one console (as VMware historically has done), but it won’t by itself create business value.
NodeWeaver is a virtualization platform designed for a small number of potentially very small devices, with the objective being to pool their resources together to run applications as a single cluster. It produces what can be described as the edge functioning as a cloud.
“Whereas in a core data center we’re going to be running on data center-class servers, at the edge we’re running on essentially really small devices that wouldn’t even be considered servers,” says Tom Mays, co-founder of ScaleWize, the exclusive distributor for NodeWeaver in North America. “In the data center, we’re dealing with very high-quality networking equipment with a lot of features and capabilities, and at the edge, we’re probably dealing with a very inexpensive, small switch with very limited features.”
The dream of Project Dimension would be wonderful if everyone at the edge always had Dell EMC PowerEdge or Lenovo ThinkSystem servers. Mays tells us we’re dealing with a reality where these processing devices are expected to perform for between five and 15 years. When it comes time to add a new device to the cluster, there will be no way to even come close to matching the old device’s performance profile.
“We need to either add or replace an existing system with completely dissimilar hardware — not just different generations, but completely different chip families,” Mays says.
A typical NodeWeaver installation at work managing a handful of small edge-class devices. (Photo: ScaleWize)
According to him, IT teams won’t be the ones touching edge-based workloads. It’s not their purview, not because they lack visibility, but because it’s essentially a different system. So rather than expose the edge cluster’s management functions through a portal, NodeWeaver takes a different tack: It exposes nothing at all except status data, automatically managing the clustering of edge-based nodes.
“We need to build a system that’s installable by a tech with a screwdriver and is self-managing, so we don’t have to touch the infrastructure,” he says.
It’s not entirely walled off, though. By means of API calls, NodeWeaver’s virtual resource functions may be exposed to infrastructure automation systems such as Red Hat’s Ansible and HashiCorp’s Terraform.
“Which means that it’s treated like you would treat Amazon, Azure, or Google Cloud,” says Carlo Daffara, CEO of Pordenone, Italy-based NodeWeaver. “You ask for something to be deployed, and the platform will try to satisfy your requirements, if it’s possible, with the resources that are available. But the basic idea is that you see it as a single endpoint. You don’t manage the individual components inside.”
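Daffara’s single-endpoint model can be sketched in miniature: the caller states its requirements, and the cluster, not the caller, decides which node (if any) can satisfy them. Everything below is a hypothetical illustration of that placement idea, not NodeWeaver’s actual API; the node names, capacities, and function are invented.

```python
# Two dissimilar edge nodes, pooled as one cluster. The caller never
# addresses them by name; it only submits a resource request.
nodes = [
    {"name": "node-a", "free_cpus": 2, "free_ram_gb": 4},
    {"name": "node-b", "free_cpus": 4, "free_ram_gb": 16},
]

def deploy(request, nodes):
    """Place a workload on the first node meeting the request.
    Returns the chosen node's name, or None if no node qualifies."""
    for node in nodes:
        if (node["free_cpus"] >= request["cpus"]
                and node["free_ram_gb"] >= request["ram_gb"]):
            node["free_cpus"] -= request["cpus"]      # reserve resources
            node["free_ram_gb"] -= request["ram_gb"]
            return node["name"]
    return None  # requirements cannot be satisfied right now

# The caller asks for 3 CPUs and 8 GB; the platform picks a node.
placement = deploy({"cpus": 3, "ram_gb": 8}, nodes)
```

The point of the abstraction is that heterogeneous hardware, Mays’ “completely different chip families,” hides behind the same request interface, which is also what lets tools like Terraform or Ansible drive it through API calls.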
NodeWeaver can handle traditional servers with as much as a half-terabyte of RAM, Daffara says. But he believes such a component is only advisable in environments that have the cooling and power to support them — and in the real world today, where “out in the field” means in a field, that’s rarely the case.
“The issue, in my opinion, is that you need to have a look at the economicity of it all,” Daffara says. “If it’s necessary for you to prepare even a small room with cooling and stabilized power to make sure that your server will not fry in one month, that’s a cost that may not be justified. We are considering situations where it’s feasible to deploy a very small edge cluster, like a two-node system, that runs reasonably big VMs and applications for a cost of around $5,000.”
There are many edge equipment vendors that would argue that these three use cases all represent different edges rather than various aspects of one edge. Those arguments are reminiscent of the various types of cloud that cropped up around 2007, when a multitude of new names set forth to challenge what was then perceived as the unbeatable Rackspace proposition. It’s only normal that vendors promote their visions of a new collective context, such as “the cloud” or “the edge,” that most resembles their existing product line. (Remember when “the net” was a fabric of devices all running Windows?)
While enterprises set about experimenting with how best to deploy and manage their edge computing assets, telecommunications companies will invest tremendous sums and immeasurable resources to establish services they hope will be competitive with AWS and other cloud providers at the edges of their wireless networks. Because telcos will hold great influence in this area, how they choose to manage their edge networks may determine how enterprises manage theirs, whether or not they have a preference for one of these three approaches.
So keep your eyes focused on all three options, because one of them may hold the key to the future of your distributed IT. Guessing which one that ends up being may be a shell game.