Avinash Lakshman is CEO of Hedvig.
The journey to the cloud is well underway. Market efficiencies, economics, and technology have advanced sufficiently that it is now inevitable virtually all organizational functions and technology infrastructure will leverage public clouds in some capacity. In fact, a recent Gartner forecast expects that by 2020 more than $1 trillion in IT spending will be either directly or indirectly affected by the shift to the cloud. Gartner notes that, “this will make cloud computing one of the most disruptive forces of IT spending since the early days of the digital age.”
I don’t disagree, but there are consequences, and here’s one of the biggest: moving all your data and applications to a single public cloud provider creates massive vendor lock-in. Even moving just a subset of your data and applications introduces significant financial and supplier risk. The obvious solution is to leverage multiple public cloud providers.
That leads to a different challenge: How do you overcome the inherent portability, locality, and availability constraints of moving data among clouds?
Reaping the Business Benefits of Multiple Clouds Requires Cross-Cloud Replication
As organizations move to multiple public clouds, they in turn will need a way to synchronize data seamlessly across multiple providers — cross-cloud replication makes this possible. Cross-cloud replication enables organizations to move applications easily among different cloud sites and, as importantly, cloud providers. It’s the missing piece that makes the multi-cloud world we hear and read so much about a reality today. It ensures that no matter where you run your app, it will have local access to its data.
Why is this important? The promise of a multi-cloud future is one in which you’re able to move your application dynamically based on business requirements. If you can replicate your data across all of the public cloud services, then you can eliminate cloud vendor lock-in by employing cloud arbitrage, reverse auction, and follow-the-sun scenarios. The bottom line is you run your application in the cloud that provides the best performance, economics, availability, or some combination of these.
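To make the arbitrage idea concrete, here is a minimal sketch in Python of the decision itself: score each candidate cloud on current cost, latency, and availability, then run the app wherever the score is best. All provider names, metrics, and weights below are invented for illustration.

```python
# Hypothetical sketch of cloud arbitrage: run the app on the provider with
# the best weighted score for cost, latency, and availability.
# Provider names and all numbers are invented for illustration.

def score(metrics, weights):
    """Lower is better: weighted cost plus weighted latency, with a large
    penalty when availability falls below a 99.9% target."""
    cost_per_hour, latency_ms, availability = metrics
    penalty = 0.0 if availability >= 0.999 else 10.0
    return weights["cost"] * cost_per_hour + weights["latency"] * latency_ms + penalty

def pick_cloud(candidates, weights):
    """Return the name of the best-scoring (lowest-score) candidate cloud."""
    return min(candidates, key=lambda name: score(candidates[name], weights))

# (cost in $/hr, p99 latency in ms, trailing availability)
clouds = {
    "aws-us-east": (0.096, 12.0, 0.9995),
    "azure-eu":    (0.091, 48.0, 0.9999),
    "gcp-us-west": (0.089, 20.0, 0.9990),
}

# A latency-sensitive app weighs latency heavily...
print(pick_cloud(clouds, {"cost": 1.0, "latency": 1.0}))     # aws-us-east
# ...while a cost-driven batch job chases the cheapest price.
print(pick_cloud(clouds, {"cost": 100.0, "latency": 0.01}))  # gcp-us-west
```

In a follow-the-sun scenario the same scoring loop simply re-runs as latency profiles shift through the day; cross-cloud replication is what makes acting on the result cheap, because the data is already local at the destination.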
Four Trends Will Make It a Reality Within Two Years
We’ve reached a point in the sophistication and evolution of IT where cross-cloud replication is necessary to realize a multi-cloud environment. In fact, I predict cross-cloud replication will be commonplace among medium to larger organizations within two years. This accelerated adoption is fueled by four converging trends:
- Infrastructure evolutions in private cloud. First, let’s start with a simple definition of a private cloud: virtualization of some form (be it VM- or container-based) combined with automation and self-service. Advances in microprocessor, memory, and storage architectures (whether HDD- or SSD-based) make the virtualization side of private cloud more cost effective. Couple these with advances in cloud orchestration tools from Docker, Kubernetes, Mesos, and OpenStack and you have the automation and self-service. Building a private cloud with these elements creates an “AWS-like” foundation, and cross-cloud replication then allows companies to move apps and services across private cloud data centers with ease.
- Broad usage of multiple public cloud providers. Amazon Web Services (AWS) is the 800-pound gorilla in this space so far, running close to $10 billion a year in revenue. But Microsoft Azure and Google Cloud Platform (GCP) have made impressive strides in the last several years. Wanting to avoid vendor lock-in, organizations will augment private clouds with two or more public cloud providers. In fact, a recent survey shows the average enterprise using six clouds (three public, three private). These organizations will either need cross-cloud replication to keep data synchronized or face the onerous task of lifting and shifting infrastructure silos to a multitude of public clouds.
- The emergence of DevOps talent and processes. While still a relatively scarce skill set, DevOps is no longer the unicorn it used to be. Even mainstream, so-called legacy organizations now have DevOps teams and culture, not just the cloud- and digital-native companies. Another recent survey found that DevOps adoption climbed to 74 percent in 2016 from 66 percent in 2015, and to 81 percent among enterprises (organizations with 1,000 or more employees). DevOps talent ensures companies have the know-how to build, ship, and run applications across these evolving private and public clouds. Cross-cloud replication gives these apps access to data regardless of where they run.
- The commercialization of AI and machine learning technologies. We’re now seeing an explosion of interest in and development of artificial intelligence and machine learning for commercial, rather than research, purposes. In fact, large organizations like Facebook, Google, Microsoft, IBM, and Intel are releasing machine learning code as open source so organizations can better apply that intelligence in their own businesses. Machine learning expands DevOps’ value by automating decisions such as where applications and services should run. Need to move an app for performance or cost reasons? Machine learning can detect the need and make the decision, while cross-cloud replication ensures data portability.
Because of the above trends, cross-cloud replication is not a question of “if” so much as “when.” Whether this replication arrives in full form in four or 24 months is difficult to say, but the true software-defined storage (SDS) we now see available in the marketplace is a good start.
A Universal Data Plane
By design, SDS is already decoupled from the underlying hardware, its intelligence lives in programmable software, and the necessary data protection mechanisms are built in. More recent SDS solutions even give each application its own policy, such as which cloud or clouds it should run on. But deploying a software version of your old storage array is not the right approach. You need to deploy SDS in an architecture that spans traditional storage tiers, runs in public clouds, and integrates with any of the virtualization and workload infrastructures powering your cloud. Deploying SDS in this architecture is a fundamentally different approach from how enterprises have handled storage for the last 40 years.
So what is the right architecture? I call it a Universal Data Plane.
A Universal Data Plane is a single, programmable data management layer spanning storage tiers, workloads and clouds. It replaces the need for disparate SAN, NAS, object, cloud, backup, replication, and data protection technologies. As true software-defined storage, it can be run on commodity servers in private clouds and as instances in public clouds. A Universal Data Plane also dramatically simplifies operations by plugging into modern orchestration and automation frameworks like Docker, Kubernetes, Mesos, Microsoft, OpenStack and VMware.
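To illustrate the per-application policy idea, here is a hypothetical sketch of what such a declarative record might look like, written in Python. The field names and placement rule are invented for illustration; real SDS products define their own policy schemas.

```python
# Hypothetical per-application storage policy for a universal data plane.
# Field names and the placement rule are invented for illustration;
# real SDS products define their own policy schemas.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    app: str
    replication_factor: int   # total copies kept across all sites
    clouds: list              # clouds allowed to hold replicas
    tier: str                 # e.g. "flash" or "capacity"
    compression: bool = True

    def placements(self):
        """Spread replicas round-robin across the allowed clouds."""
        return [self.clouds[i % len(self.clouds)]
                for i in range(self.replication_factor)]

# A transactional app keeps three synchronized copies across two providers,
# so it has local access to its data no matter which cloud it runs in.
orders_db = StoragePolicy(
    app="orders-db",
    replication_factor=3,
    clouds=["aws-us-east", "gcp-us-west"],
    tier="flash",
)
print(orders_db.placements())  # ['aws-us-east', 'gcp-us-west', 'aws-us-east']
```

Because the policy, not the hardware, dictates where replicas live, moving the app to the other provider needs no data migration: a copy is already there.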
Perhaps most importantly, a Universal Data Plane veers away from hyperconvergence. By definition it is a decoupling, or disaggregation, of distinct tiers in your IT stack. Rather than collapsing the application and orchestration layers into the same physical solution, a Universal Data Plane remains its own unique, software-defined storage layer. It provides APIs into the VM, container, cloud, and orchestration technologies. As such, it’s the right layer to provide cross-cloud replication. Tight coupling, as found in hyperconverged solutions, cannot provide this multi-cloud foundation.
If you’re looking to go multi-cloud, then the good news is that the necessary software-defined storage and cross-cloud replication technologies make this a reality today. The concept of a Universal Data Plane is not science fiction. It’s the next logical step in your cloud journey.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.