Edge Computing is Not Killing the Cloud

Edge computing is the next phase in the evolution of cloud-native applications and will mature to become a critical building block for all applications delivered over the Internet.

Haseeb Budhani is CEO of Rafay Systems. 

Application owners and operations teams clearly recognize that some workloads are best run closer to endpoints, i.e., at the edge. Examples include interactive workloads and workloads tasked with aggregating and summarizing massive amounts of data from endpoints.

Consider an interactive, voice-driven service that end users can use for a variety of tasks, from checking the weather to ordering groceries based on past preferences. At a minimum, such an application will have the following components:

  • A database to store user preferences, endpoint configurations, etc.
  • A request parser to comprehend the question being asked
  • A response repository
  • A machine learning cluster to constantly improve the responses that are provided to endpoints
  • A secure interface to which endpoints can connect and ask questions
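
To make this concrete, here is a minimal sketch, in TypeScript, of how these components might relate. Every interface and function name is hypothetical, invented for illustration rather than drawn from any real product:

```typescript
// Hypothetical component interfaces for the voice-driven service.
// These names only illustrate how the building blocks relate; they
// are not taken from any real product or API.

interface UserPreferenceStore {        // the database cluster
  getPreferences(userId: string): Promise<Record<string, string>>;
}

interface RequestParser {              // turns a raw query into an intent
  parse(rawQuery: string): Promise<{ intent: string }>;
}

interface ResponseRepository {         // answers precomputed and refined by the ML cluster
  lookup(intent: string, prefs: Record<string, string>): Promise<string | undefined>;
}

// The secure interface: the one component endpoints talk to directly.
async function handleQuery(
  userId: string,
  rawQuery: string,
  parser: RequestParser,
  responses: ResponseRepository,
  store: UserPreferenceStore,
): Promise<string> {
  const { intent } = await parser.parse(rawQuery);
  const prefs = await store.getPreferences(userId); // e.g. past grocery orders
  const answer = await responses.lookup(intent, prefs);
  return answer ?? "Sorry, I can't help with that yet.";
}
```

The useful part of the sketch is the separation of concerns: the parser and the response repository are cheap to replicate widely, while the preference store behind getPreferences can remain in the core.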

Because user preferences and configuration may not change frequently, it is reasonable to deploy the database cluster in at most two cloud locations; the second location provides protection against disaster and ensures availability. Even the machine learning (ML) clusters, used for continuously improving responses, could arguably reside in one or a few core locations, since the ML processing for this use case probably isn't time sensitive.

However, if we want end users to have a great experience interacting with the service, with responses flowing back as quickly as possible, it makes sense to run the secure interface and the request parser closer to endpoints. So how close to end users should these services really run?

It's reasonable to place the secure interface, the request parser and the response repository, which collectively make up the edge stack for this application, within a few hundred miles of every major metro where we expect the service to be utilized. This model requires the service provider to manage a large number of locations (50 to 100) where application components may be deployed, but it results in optimal performance: a voice-based user query is received and processed quickly, and a response can be sent back to the endpoint to deliver a truly interactive experience.
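
To illustrate what "within a few hundred miles" implies, here is a hedged sketch of steering a user to the nearest of many edge locations by great-circle distance. The location list is invented, and a production service would more likely rely on DNS-based or anycast routing than on application-level selection:

```typescript
// Pick the nearest edge location for a user with the haversine formula.
// The locations below are illustrative placeholders only.

interface EdgeLocation { name: string; lat: number; lon: number; }

const EDGE_LOCATIONS: EdgeLocation[] = [
  { name: "us-east-nyc", lat: 40.71, lon: -74.01 },
  { name: "us-central-chi", lat: 41.88, lon: -87.63 },
  { name: "us-west-sfo", lat: 37.77, lon: -122.42 },
  // ...in this model, 50 to 100 such entries
];

function haversineMiles(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 3959 * 2 * Math.asin(Math.sqrt(h)); // Earth radius is roughly 3959 miles
}

function nearestEdge(userLat: number, userLon: number): EdgeLocation {
  return EDGE_LOCATIONS.reduce((best, loc) =>
    haversineMiles(userLat, userLon, loc.lat, loc.lon) <
    haversineMiles(userLat, userLon, best.lat, best.lon)
      ? loc
      : best,
  );
}
```

The exact threshold matters less than the trend: each additional location shrinks the worst-case distance to users, at the cost of one more deployment target to operate.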

Each edge component stack (consisting of the secure interface, the request parser and the response repository) will independently interact with the core component stack (the database and the ML cluster) to maintain an up-to-date response library and to share any local learnings (e.g., heretofore unasked questions) with the core as needed.
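
One way to picture that interaction is a periodic sync loop: each edge stack pushes locally observed, unanswered questions up to the core and pulls down whatever response-library updates the ML cluster has produced. The sketch below is hypothetical; the method names, payload shapes and 60-second cadence are assumptions, not a reference design:

```typescript
// Hypothetical edge-to-core synchronization loop. The client methods
// and payload shapes are invented for illustration only.

interface ResponseUpdate { intent: string; answer: string; version: number; }

interface CoreClient {
  // Pull response-library entries newer than the given version.
  fetchResponseUpdates(sinceVersion: number): Promise<ResponseUpdate[]>;
  // Push questions this edge location could not answer locally.
  reportUnansweredQueries(queries: string[]): Promise<void>;
}

interface LocalResponseCache {
  apply(updates: ResponseUpdate[]): void;
  currentVersion(): number;
  drainUnanswered(): string[]; // queries collected since the last sync
}

async function syncWithCore(core: CoreClient, cache: LocalResponseCache): Promise<void> {
  // Share local learnings (e.g. previously unasked questions) with the core...
  const unanswered = cache.drainUnanswered();
  if (unanswered.length > 0) {
    await core.reportUnansweredQueries(unanswered);
  }
  // ...and pull down whatever the ML cluster has improved since last time.
  const updates = await core.fetchResponseUpdates(cache.currentVersion());
  cache.apply(updates);
}

// Run the sync on a fixed cadence; the interval is an assumption.
function startSyncLoop(core: CoreClient, cache: LocalResponseCache): void {
  setInterval(() => void syncWithCore(core, cache), 60_000); // every 60 seconds
}
```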

This model will deliver a massive improvement in end user experience, which will directly lead to higher engagement and a better topline for the business. However, maintaining such an expansive application footprint on the Internet is a significant undertaking: it requires solving application deployment, security and monitoring challenges, to name a few.

The example above illustrates my meta point: Edge computing is NOT killing the cloud. Edge computing is the next phase in the evolution of cloud-native applications and will mature to become a critical building block - alongside core cloud computing - for all applications delivered over the Internet.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.
