
5 Key Elements of an Enterprise Edge Computing Infrastructure

Three experts tell us what an enterprise needs to make edge computing practical and profitable.

You often hear that there’s more than one edge to a data center network. That’s not much help when you’re an enterprise trying to decide whether it makes sense, in terms of cost or quality of service, to build out processing, storage and networking assets closer to your customers or communications providers.

So, we spoke to three leaders in three different product spaces related to edge equipment: Dell EMC for edge servers, Vapor IO for micro data center chassis, and Schneider Electric for power and service resilience. From their perspectives on the various edges cropping up in modern data center configurations, we distilled the elements common to all three, isolating five things they perceive as critical to the composition of effective data centers at the edge of enterprise networks.

1. Fast, resilient storage

It wouldn’t make sense to build out archival storage at the edge. But the types of applications for which edge computing makes the most sense require more addressable data space than even the largest tiers of DRAM allow: multimedia caching for Web applications geared around a rich customer experience (outbound traffic), for example, or machine learning applications spotting suspicious activity in video (inbound traffic).
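To make the caching case concrete, here is a minimal sketch of a size-bounded, least-recently-used object cache of the kind an edge node might keep for media assets. The class name, capacity parameter and eviction policy are illustrative assumptions, not anything the vendors described; a production deployment would use a purpose-built cache or CDN node rather than hand-rolled logic.

```python
from collections import OrderedDict

class EdgeObjectCache:
    """Illustrative size-bounded LRU cache for media objects at an edge node."""

    def __init__(self, capacity_bytes: int):
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self._store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str) -> bytes | None:
        if key not in self._store:
            return None                  # cache miss: caller fetches from core/cloud
        self._store.move_to_end(key)     # mark as most recently used
        return self._store[key]

    def put(self, key: str, blob: bytes) -> None:
        if key in self._store:
            self.used_bytes -= len(self._store.pop(key))
        self._store[key] = blob
        self.used_bytes += len(blob)
        while self.used_bytes > self.capacity_bytes:
            # Evict least recently used objects until we fit again.
            _, evicted = self._store.popitem(last=False)
            self.used_bytes -= len(evicted)
```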

Edge computing is, by design, autonomous. It’s not an extension of an application running at the core or subservient to the public cloud, so its data storage needs to be ample enough for the job at hand, including supporting any virtual machines involved. It also needs to be local and readily accessible. The latest annual State of the Edge report, whose collaborative production was co-chaired by Vapor IO Chief Marketing Officer Matt Trifiro, suggests that certain experience-intensive application categories such as augmented reality are only feasible with edge computing components, and the applications in such categories must allow for high volumes of local data storage.

“You may want to process the data in real-time in that [edge] data center and then ship some of it back to the cloud,” Trifiro says in an interview with Data Center Knowledge. “That solves for two problems: It allows you to do edge compute in your factory, for example, and then do responsive data analysis in real-time: ‘We’ve got to turn that device off because it’s about to grind up some gears and cost us a million dollars.’ But also, you can do local processing on that data and reduce the amount of data you need to transport, because transport’s very expensive, especially with the massive amounts of data that a thousand sensors can produce.”
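Trifiro’s factory example boils down to two steps: react locally in real time, then ship a compact summary upstream instead of the raw sample stream. The sketch below illustrates that pattern; the vibration threshold, summary fields and shut_down_device() stub are hypothetical stand-ins for site-specific logic, not anything Vapor IO ships.

```python
import statistics
import time

def shut_down_device() -> None:
    """Stand-in for a site-specific actuator call (hypothetical)."""
    print("device halted")

def summarize_window(readings: list[float]) -> dict:
    """Reduce a window of raw sensor samples to a compact summary."""
    return {
        "ts": time.time(),
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
    }

def process_locally(readings: list[float], vibration_limit: float) -> dict:
    # React in real time at the edge: trip the machine before it grinds gears.
    if max(readings) > vibration_limit:
        shut_down_device()
    # Ship only the summary upstream -- a few hundred bytes instead of the
    # full sample stream, which is where the transport savings come from.
    return summarize_window(readings)
```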

2. Remote workload automation

An edge data center is not an extension of your core data center. In other words, it’s not some kind of reserve processing system for overflow or critical-needs processing — that’s what the cloud is good for.

But you can’t run an edge data center with autonomy — which is what edge computing is good for — without automation. This does not mean tying edge and core compute clusters together into a single network and orchestrating workloads on that network using Kubernetes, as has been suggested elsewhere. Indeed, from the perspectives of the engineers with whom we spoke, the edge is not an extension of the cloud, or of anything else.

Yet it can be automated and allowed to operate autonomously through a centrally managed automation system. In such a system, containerization may still prove valuable.

“Applications have an expectation of a level of resiliency and reliability in the way that they’re designed from a three-tier perspective,” says Kevin Shatzkamer, Dell EMC’s VP for enterprise and service provider strategy and solutions. “Moving from virtual machines to containers serves a very key role in terms of being able to spin up and spin down applications at a more rapid rate, and also more efficiently boot up, start up and use compute resources for applications.”
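As a toy illustration of what centrally managed, container-based automation might look like, the sketch below pushes one container image to a fleet of edge hosts from a single control point using the Docker SDK for Python. The site inventory, endpoints and image name are invented, and a real deployment would add TLS and pull its inventory from a CMDB; nothing here represents Dell EMC’s actual tooling.

```python
import docker  # Docker SDK for Python (pip install docker)

# Hypothetical inventory of edge sites; endpoints would be secured with TLS
# and discovered from an asset database in practice.
EDGE_SITES = {
    "plant-a": "tcp://edge-a.example.com:2375",
    "plant-b": "tcp://edge-b.example.com:2375",
}

def deploy_everywhere(image: str) -> None:
    """Spin up the same containerized workload at every edge site."""
    for site, endpoint in EDGE_SITES.items():
        client = docker.DockerClient(base_url=endpoint)
        client.images.pull(image)
        client.containers.run(image, detach=True,
                              restart_policy={"Name": "always"})
        print(f"{site}: {image} running")

deploy_everywhere("registry.example.com/anomaly-detector:1.4")
```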

3. Live, real-time performance analytics

Independent and autonomous operation does not imply being unmanaged or unmonitored. As Steven Carlini, Schneider Electric’s VP for innovation and data center, tells us, the cloud can serve a role here as a kind of neutral platform on which to build a holistic data center network management system.

Such a system, as Carlini describes it, would account for power systems maintenance at all the enterprise’s facilities, incorporating both core and edge assets. Extending further, it should also monitor battery and power levels, atmospheric conditions, and software security and integrity for remote sensors and embedded devices. For Schneider’s micro data centers, some of which are cabinets the size of a filing drawer, all this data is aggregated through a hub and streamed to a cloud-based, real-time management system. That system provides a geographic picture of which assets are deployed in what locations.

“Then we want to have very, very accurate analytics for what’s gone wrong and recommending what to do,” Carlini says. “If possible – because there’s no IT staff at most of these locations – you’re looking at ways to remotely unlock the micro data center and guide somebody to do a reboot or a reset.”
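A rough sketch of the hub-to-cloud telemetry stream Carlini describes might look like the following, using only the Python standard library. The ingestion URL, sensor fields and polling interval are invented for illustration; Schneider’s actual system would use its own cloud service and authenticated transport.

```python
import json
import time
import urllib.request

# Hypothetical cloud ingestion endpoint for the management system.
INGEST_URL = "https://dcim.example.com/api/telemetry"

def read_sensors() -> dict:
    """Stand-in for the hub's local sensor reads (values are illustrative)."""
    return {
        "site": "branch-042",
        "ts": time.time(),
        "ups_battery_pct": 97.5,
        "inlet_temp_c": 24.1,
        "humidity_pct": 41.0,
        "door_open": False,
    }

def stream_forever(interval_s: float = 30.0) -> None:
    """Periodically push the hub's aggregated readings to the cloud."""
    while True:
        payload = json.dumps(read_sensors()).encode()
        req = urllib.request.Request(
            INGEST_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=10)  # fire-and-forget upload
        time.sleep(interval_s)
```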

4. SD-WAN

From a networking perspective, the data center core (what we used to call “the data center”) is a separate entity from the cloud, from any compute resources at the edge and certainly from any network platforms hosted by telcos. For most enterprises, it takes fairly sophisticated bridges to make cloud and core assets addressable through a single networking plane. A VMware vSphere environment sharing space with Amazon Web Services in a hybrid cloud, for instance, requires VMware’s NSX to make both worlds appear seamless to an application that uses IP addresses to communicate.

In recent years, SD-WAN has emerged as a means of bridging the myriad transport protocols that define the various classes of networks — MPLS for the core, TCP/IP for the Internet and the public cloud, LTE for public access wireless, and whatever emerges from 5G RAN in the near future. As Dell EMC’s Shatzkamer tells us, the distinctions that many vendors and engineers draw today between the “different edges” that jointly constitute “the edge” might be erased once SD-WAN resets all their various addressability planes to a single level.
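At its core, what SD-WAN adds is policy-based path selection across dissimilar transports. The toy function below captures the idea: among the links that meet an application’s latency and loss targets, pick the cheapest. The Path fields, thresholds and link figures are invented for illustration; real SD-WAN controllers apply such policies per flow in the data plane.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str           # e.g., "mpls", "broadband", "lte"
    latency_ms: float   # measured by active probes
    loss_pct: float
    cost_per_gb: float

def pick_path(paths: list[Path], max_latency_ms: float,
              max_loss_pct: float) -> Path:
    """Among transports meeting the app's SLA, prefer the cheapest."""
    eligible = [p for p in paths
                if p.latency_ms <= max_latency_ms and p.loss_pct <= max_loss_pct]
    if not eligible:
        # Fail open to the best-performing path rather than dropping traffic.
        return min(paths, key=lambda p: (p.loss_pct, p.latency_ms))
    return min(eligible, key=lambda p: p.cost_per_gb)

links = [
    Path("mpls", 18.0, 0.01, 0.40),
    Path("broadband", 35.0, 0.30, 0.02),
    Path("lte", 60.0, 1.20, 0.10),
]
print(pick_path(links, max_latency_ms=50.0, max_loss_pct=0.5).name)  # broadband
```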

“I have an edge at the access network, where we’re starting to see materialization in the movement away from MPLS/VPN towards SD-WAN,” Shatzkamer says. “But I also see the materialization of hybrid cloud and public cloud providers moving things back on-prem. I see the transition at the telco network edge and the central office of modular data centers being deployed. It’s just standard compute inside and the modular data center gives me the ability to replicate the model of what the cloud providers do.”

The picture Shatzkamer paints for the near-term evolution of edge computing involves cloud service providers (AWS, Google Cloud Platform, Microsoft Azure and IBM Cloud) moving more of their own assets into customers’ premises. Their edges will replace some, though not all, of enterprises’ clouds. At the same time, edge computing will expedite the throughput and delivery of services to and from the enterprise core. These processes will require, if not holistic orchestration, then at least hands-on automation, which Shatzkamer asserts will need software-defined networking beyond what an SDN overlay can provide — a task that only SD-WAN, for now, appears capable of fulfilling.

5. Infrastructure reliability

Even though the optimum model of edge computing in data center networks may involve autonomy in data collection and processing, edge assets must still be perceived as active components — rather than far-flung remote correspondents — in a data center network. Schneider’s Carlini believes, for this reason, the same level of power and availability assurances that apply to the data center core must equally apply to edge assets.

“You should look at your data center not only as a stand-alone, but as an architecture of data centers,” Carlini remarks. “We believe that people are not giving the correct amount of attention and criticality to their edge deployments.”

For a single-UPS power system at the edge, an engineer would assume a tolerance level of about 30 hours of downtime, he says. That’s far too much for an entire data center network, which can be left waiting, even while fully powered, whenever an edge system can’t fulfill its function.
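Read as an annual figure (an assumption on our part), 30 hours of downtime works out to roughly 99.66 percent availability, close to the commonly cited Uptime Institute Tier I target and far below what enterprises expect of a core facility. The arithmetic:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def availability(downtime_hours: float) -> float:
    """Fraction of the year a system is up, given its annual downtime."""
    return 1 - downtime_hours / HOURS_PER_YEAR

# A single-UPS edge site tolerating ~30 hours of downtime a year:
print(f"{availability(30):.3%}")    # -> 99.658%

# Commonly cited Uptime Institute tier targets, for comparison:
print(f"{availability(28.8):.3%}")  # Tier I   ~ 99.671%
print(f"{availability(1.6):.3%}")   # Tier III ~ 99.982%
```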

“We believe that whatever criticality you’re using for your main data centers should be used for the edge data centers,” Carlini says. This applies to both power redundancy and cooling. Some allowances may be made for cooling requirements at the edge, he says, if the applications being run there are not data intensive, though such cases are rare and may soon become extinct.
