Issues for 2017: Is Compute Power Truly Moving to ‘the Edge?’
Photograph by David Hawgood, licensed under Creative Commons 2.0.


The Internet of Things, we’re being told, is driving enterprises to move computing power closer to the data, in smaller facilities. Oh, really? Then what are all these big, new data centers about?

From the point of view of people who run data centers, “the edge” is the area of the network that most directly faces the customer.  And from the perspective of people who manage the Internet and IP communications at a very low level, “the edge” is the area of their networks that most directly faces their users.  They’re two different vantage points, but a handful of lucrative new classes of applications, especially the Internet of Things, is compelling people to look toward the edges of their networks once again, to determine whether it now makes more sense, both in terms of efficiency and profitability, to move computing power away from the center.

It would seem to be the antithesis of the whole “movement to the cloud” phenomenon that used to generate all the traffic on tech news sites.  Cloud dynamics are about the centralization of resources, and hyperconvergence is perhaps the most extreme example.

The Edge Gets Closer to Us

Last year at this time, hyperconvergence seemed to be the hottest topic in the data center space.  Our indicators tell us that interest in this topic has not waned.  Assuming that observation is correct, how can hyperconvergence and “the edge” phenomenon be happening at the same time?  Put another way, how can this reportedly relentless spike in the demand for data be causing data centers to converge their resources and to spread them out, simultaneously?

“I remember when ‘the cloud’ first came out, and we used to talk about, what was the cloud?  And what was going to happen to all the data centers?” remarked Steven Carlini, Schneider Electric’s senior director of global solutions, in a discussion with Data Center Knowledge.

Carlini pointed to the dire predictions from 2014 and 2015 that the cloud trend would dissolve enterprise data centers as they moved their information assets into the public cloud.  The “hyperscale” data centers, we were often told at the time, would become larger in size but fewer in number, swallowing enterprise facilities and leaving behind these smaller sets of components that faced the edge.

But through 2016, while those hyperscale complexes did grow larger, they refused to diminish in number.  As Data Center Knowledge continues to cover on a day-by-day basis, huge facility projects are still being launched or completed worldwide: for example, just in the past few days, in Hong Kong, in Quebec, and near Washington, D.C.

The challenges that builders of these “mega-centers” face, Carlini notes, have less to do with marketing trends and much more to do with geography: specifically, whether the sites being considered provide ample electricity and water.  So for years, they avoided urban areas, building instead in what he called “the outskirts of society.”

“What started happening was, as more and more applications went to cloud-based, people started to be more frustrated with these centralized data centers,” he continued.  “So we saw a huge migration out of the enterprise data centers.  All of the applications from small and medium companies, especially, that could be moved to the cloud, were moved to the cloud — the classic ones like payroll, ERP, e-mail, and the ones that weren’t integrated into the operation of manufacturing.”

Bog-down

As more users began trusting SaaS applications, especially Microsoft’s Office 365, Carlini believes that performance once again became a noticeable factor in users’ computing experience.  Files weren’t saving the way they used to with on-premises NAS arrays, or even with local hard drives.

This was one of the critical factors, he asserted, behind the recent trend of major firms siting their regional data center projects closer to urban areas and central business districts: Digital Realty’s big move in Chicago, for example.

That’s the force precipitating the wave Carlini points to: the move to an edge where the computing power sits closer to the customer.  In a way, it’s a backlash against centralization, not so much because of its structure as because of its geography.  For organizations that did move their general-purpose business and productivity applications into the public cloud, centralization introduced too much latency into the experience of everyday work.

Perhaps it’s a bit too simplistic to assert that, because enough people twiddled too many of their thumbs waiting for documents to save, a revolution triggered a perfect storm that moved mountains of data back toward downtown Chicago.

But it may be one symptom of a much larger phenomenon: the introduction of latency into the work process which, multiplied by the total number of transactions, results in unsustainable intervals of wasted time.  Last summer at the HPE Discover conference in Las Vegas, engineers made the case that sensitive instrumentation used in geological and astronomical surveys is too deterministic in its behavior, and too finely granular in its work detail, to afford the latencies introduced by billions of concurrent transactions with a remote, virtual processor over an asynchronous network.
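The arithmetic behind that argument is straightforward.  As a back-of-the-envelope illustration (every figure below is a hypothetical assumption, not a number from Carlini or HPE), here is how a fraction of a second of added latency per transaction compounds across an enterprise:

    # Hypothetical estimate of time lost to added round-trip latency.
    # Every constant here is an illustrative assumption, not a measurement
    # from this article.
    extra_latency_s = 0.25        # assumed extra delay per cloud save vs. a local NAS
    saves_per_user_per_day = 120  # assumed document saves/autosaves per worker
    users = 10_000                # assumed headcount of a large enterprise
    workdays_per_year = 250

    wasted_seconds_per_day = extra_latency_s * saves_per_user_per_day * users
    wasted_hours_per_year = wasted_seconds_per_day * workdays_per_year / 3600

    print(f"~{wasted_seconds_per_day / 3600:.0f} person-hours lost per day")
    print(f"~{wasted_hours_per_year:,.0f} person-hours lost per year")
    # Under these assumptions: roughly 83 person-hours a day, ~20,833 a year.

A quarter of a second is imperceptible to any one user; summed across ten thousand of them, it becomes a budget line.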

Is the Edge in the Wrong Location?

Content delivery networks (CDNs), which operate some of the largest and most sophisticated data centers anywhere in the world, came to the same conclusion.  Their job has always been to store large blocks of data in caches that reside closer to the consumer, on behalf of customers whose business viability depends on data delivery.  So yes, CDNs have always been on the edge.

But it’s where this edge is physically located that may be changing.  In a recent discussion, Ersin Galioglu, a vice president at Limelight Networks (by many analysts’ accounts, among the world’s top five CDNs, behind Akamai), told Data Center Knowledge his firm has been testing the resilience of its current network by situating extra “edge servers” in the field, generating surplus traffic, and conducting stress experiments.

“The big distinction to us is the distribution of small objects and large objects [of data],” Galioglu explained.  “With large objects, I can perform a lot more efficiently; small objects are a lot harder from a server perspective.”

Internet of Things applications are producing these small objects: minute signals, as opposed to large blocks of video and multimedia.  The strategy for routing small objects is significantly different from that for large objects, so different that it’s making CDNs such as Limelight rethink their approach to design.
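A first-order model makes the difference concrete.  In the sketch below (our illustration, not Limelight’s arithmetic; the overhead and bandwidth figures are assumptions), each request pays a fixed per-request overhead before any bytes flow, and that overhead dominates when the object is small:

    # First-order model of fetching one object from an edge cache:
    # total time = fixed per-request overhead + transfer time.
    # The constants are illustrative assumptions, not Limelight data.
    PER_REQUEST_OVERHEAD_S = 0.03   # assumed round trip plus server handling
    BANDWIDTH_BPS = 100e6           # assumed 100 Mbit/s per connection

    def effective_throughput_bps(object_bytes: int) -> float:
        """Achieved bits/s once the fixed overhead is amortized over the object."""
        bits = object_bytes * 8
        return bits / (PER_REQUEST_OVERHEAD_S + bits / BANDWIDTH_BPS)

    # An IoT reading, a thumbnail, and a video segment, respectively:
    for size in (2_000, 100_000, 100_000_000):
        mbps = effective_throughput_bps(size) / 1e6
        print(f"{size:>11,} bytes -> {mbps:6.1f} Mbit/s effective")

Under these assumptions, the 2 KB object achieves roughly half a megabit per second of effective throughput, while the video segment nearly saturates the link.  That is one way to read Galioglu’s point that small objects “are a lot harder from a server perspective.”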

Limelight’s lab tests, Galioglu told us, begin with real-world sampling that yields representative pictures of global traffic patterns.  Those patterns are reduced to a handful for purposes of comparison.  Then Limelight dispatches edge servers into various field locations to generate specific, albeit artificial, transactions that are then monitored for performance.
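Galioglu didn’t describe how that reduction is performed, but one conceivable approach is ordinary clustering: collapse thousands of sampled traffic traces into a few centroids and replay those.  A minimal sketch on synthetic data, assuming NumPy and scikit-learn’s KMeans:

    # Sketch of one possible way to reduce many sampled traffic traces to a
    # handful of representative patterns. Purely illustrative; the article
    # does not describe Limelight's actual method.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Synthetic stand-in data: 1,000 traces, each a day's request rate
    # sampled hourly (24 features), with varying base intensities.
    base_rates = rng.uniform(50, 500, size=(1000, 1))
    traces = rng.poisson(lam=base_rates, size=(1000, 24))

    # Collapse the 1,000 traces to 5 representative patterns.
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(traces)
    representatives = kmeans.cluster_centers_
    print(representatives.shape)  # (5, 24): five patterns an edge server could replay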

“One of the challenges that the CDN industry is having,” the VP remarked, “is that there is limited capacity.  As much capacity as we have been adding, it gets consumed as soon as we add it.  And the challenge there is [when] there’s a very ‘spike-y’ traffic pattern on the Internet, and a lot of times, the customers themselves cannot anticipate what the demand will be.”

Existing networks may be too sensitive to sudden changes in customer demand in either direction.  And those changes may be having bigger impacts as the mix shifts toward greater quantities of smaller data objects, the products of IoT applications.

When Edges Collide

So both enterprises and commercial access providers are rethinking their strategies about the locations of their respective edges.  As a result, a possibility that would have been dismissed sight unseen just a few years earlier suddenly becomes viable: enterprises’ edges and service providers’ edges may be merging into the same physical locations.

It’s not really a new concept, having been the subject of speculation at IDC at least as early as 2009.  And it’s a possibility that some folks, including at HPE, are at least willing to entertain.  Consider something that’s more like a “Micro Datacenter” than an Ashburn, Virginia, megaplex.  Here, a handful of enterprise tenants share space with a few cross-connect points to major content providers.  The “service provider edge,” to borrow IDC’s phrase, would overlap the enterprise edge at that point.

“I think there is the opportunity here to kind of rename it all,” admitted Schneider Electric’s Steven Carlini.  “There’s definitely a lot of confusion, a bit like ‘the cloud’ 10 years ago, when everyone was all like, ‘Oh, what’s the cloud, what does it mean?’  It’s the same thing right now with ‘the edge.’”

 