Five Edge Data Center Myths

Uptime Institute CTO Chris Brown shares what businesses often get wrong about edge computing.

Mary Branscombe

December 6, 2017


With mobile and last-mile bandwidth coming at a premium and modern applications needing low-latency connections, compute is moving from centralized data centers to the edge of the network. But there are a lot of myths about edge data centers. Here’s what organizations typically get wrong, according to Uptime Institute’s CTO Chris Brown:

Myth 1: Edge computing is a way to make cheap servers good enough

The old branch office model of local servers won’t work for the edge; an edge data center isn’t just a local data center. “An edge data center is a collection of IT assets that has been moved closer to the end user that is ultimately served from a large data center somewhere,” Brown says.

That makes the edge more like a distributed computing architecture. You already see that in manufacturing, where centralized compute systems handle jobs that don’t need direct input, with day-to-day operations running directly in the manufacturing facility. But if the central computers fail, or connectivity is lost, the factory systems won’t be able to operate on their own for long; Brown mentions auto manufacturing facilities that would fall idle after 24 hours because the systems wouldn’t know what parts to order or what vehicles to build.

Edge data centers might be small, but they won’t be cheap servers on DSL connections. As Brown puts it, “Edge data centers are not going to be the reduction in cost some people thought. You’re not able to just use cheap, unreliable hardware; you still need rock-solid infrastructure and rock-solid networks. They need to be designed, built, run, and operated in the … high-availability manner of a large regional data center.”


Edge computing is good for offloading the more latency-dependent services from large, distant data centers and bringing them closer to the user. In a way, edge data centers are like a content distribution network, only for your specific applications and services.

Myth 2: Networks don’t matter

It can be harder to deliver a good user experience on the edge than in a central data center, where you have high-availability connectivity and power systems.

“Network connectivity will become more important, and an edge network is more than just cabling. In the data center design world, people sometimes look at the network as a second thought, but you need a rock-solid network for edge data centers,” Brown points out.

Building out an edge network means changing the way you manage and run your data centers. Your systems are no longer in large, easy-to-access buildings with on-site operations teams. You’re building something that’s more like a cellular network, with hardware deployed in modular housings on remote sites that take time to get to.


“You have to think what it means to use multiple network providers and multiple connectivity points, each one capable of supporting the full load required for the business needs out of that edge data center, so that even with a failure or the loss of a single network provider you can still deliver the same high-quality service.” That may mean mixing wired and wireless connectivity to ensure access even when one route is down.
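The redundancy Brown describes can be sketched in a few lines: each link must be able to carry the site’s full load on its own, so a failed primary can be swapped for a backup with no loss of service. This is a minimal illustration, not a real network controller; the link names, capacities, and health flags below are all hypothetical.

```python
# Hypothetical sketch: choosing among redundant network paths for an
# edge site. Each link must support the site's full load by itself,
# so losing any one provider still leaves a viable path.
from dataclasses import dataclass

@dataclass
class Link:
    name: str           # e.g. a wired ISP or a wireless carrier
    healthy: bool       # result of a reachability probe
    capacity_mbps: int  # must carry the site's full load alone

def pick_link(links, required_mbps):
    """Return the first healthy link that can carry the full load alone."""
    for link in links:
        if link.healthy and link.capacity_mbps >= required_mbps:
            return link
    return None  # no viable path: degrade or fail back to the core

links = [
    Link("fiber-primary", healthy=False, capacity_mbps=1000),
    Link("lte-backup", healthy=True, capacity_mbps=300),
]
active = pick_link(links, required_mbps=250)
print(active.name if active else "no path")  # lte-backup
```

Mixing a wired primary with a wireless backup, as in the example, is one way to avoid both links failing for the same reason.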

One new option with edge is that the compute could even run in cellular base stations or close to metropolitan networks, if that’s the best way to deliver your service to your users.

Myth 3: Managing edge computing is easy

Edge data centers aren’t one-size-fits-all; an installation could be anything from a single self-contained rack to twenty or thirty of them. Whatever the size, they need the right equipment.

“Instead of thinking of an edge data center as an inexpensive, small, insignificant piece, you have to think of each individual node as a data center. It has to be designed, built, tested, and constructed using commercial-quality equipment. It needs to be fully tested and put into the network to support the business needs,” Brown notes.

Hyperconverged and hybrid cloud systems like Azure Stack are a common solution because they address the other big issue – managing the edge. “We’re seeing more and more companies starting to go that route. As you get more of these smaller edge data centers scattered around, if you're not good at remotely monitoring your equipment and having onboard analytics and automation alert you to a problem -- as well as possibly hand off load from one IT asset to another based on its health -- then it’s going to get very difficult to manage this large amount of IT assets scattered around a country or the world.”

Most operations models are built around having on-site staff working in shifts who can maintain equipment as needed. That’s not possible with edge computing, where you’re now managing many small data centers in a variety of locations alongside your central data center assets.

The answer is effective remote monitoring and a significant amount of automation. Redundant hardware might also be necessary if access is likely to be a problem. Applications will also need to offer some form of self-healing or failing over to nearby nodes or back to central data centers, ensuring users keep service access even if it’s degraded by added latency. Hybrid cloud application architectures can help here too, allowing functionality to migrate to easier-to-manage core systems in the event of a failure.
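The monitoring-and-failover pattern described above can be sketched simply: probe each edge node, and once enough consecutive probes fail, route its users back to the central data center so they keep service, just at higher latency. This is an illustrative sketch only; the node name, threshold, and probe sequence are all hypothetical.

```python
# Hypothetical sketch of health-driven failover: if an edge node's
# probes keep failing, send its users back to the central data center
# (degraded by added latency) rather than dropping service entirely.

FAIL_THRESHOLD = 3  # consecutive failed probes before failing over

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.failures = 0  # consecutive failed probes so far

    def record_probe(self, ok):
        # A successful probe resets the counter; a failure increments it.
        self.failures = 0 if ok else self.failures + 1

def route_for(node, central="central-dc"):
    """Prefer the edge node; fall back to the core when it looks unhealthy."""
    return node.name if node.failures < FAIL_THRESHOLD else central

node = EdgeNode("edge-chicago-01")
for ok in (True, False, False, False):  # probes from the monitoring system
    node.record_probe(ok)
print(route_for(node))  # central-dc
```

A real system would also hand load between healthy edge nodes and migrate functionality into hybrid cloud core systems, as the article notes, but the reset-on-success counter above is the basic shape of the automation.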

Myth 4: You only need to worry about network security

When it comes to edge computing, most organizations worry about hackers and network security but often forget about physical security.

One of the key ideas with edge data centers is that they are deployed rapidly. “Often they’re self-contained racks, and all you need is to provide reliable power and AC and put them in some sort of structure that keeps the weather off them,” Brown points out. “So, they may not be in a standard data center; they can be deployed in a warehouse.” That means you don’t have the restricted access that usually protects your systems; it’s easy enough for a thief to back up a truck and lift out a pallet of servers.

That makes implementing physical security a significant task and a significant expense; again, look at how cellular base stations are secured as a model for how to secure edge servers.

Myth 5: The edge is not a data center

Edge computing is not a branch-office solution. It’s part of your central data center, just on the end of a network connection, and it’s not a shortcut to deploying cheaper hardware, although if you do it right, automation will reduce your operating costs. If your edge deployment is going to succeed, you need to manage it alongside your existing data center, with the same processes, and take the same care planning how to deploy it.

“Companies need to realize they can't take half-measures in their approach to data centers just because they’re going to have a bunch of them. Edge requires a disciplined approach to design, build, and implementation testing,” Brown warns. What it all boils down to is this: don’t think of it as rolling out a handful of servers; you’re designing, building, and managing a customer-facing data center – just in miniature.

