
What Multicloud Connectivity Requires of the Data Center

The rise of multicloud is partly due to changes in distributed application architecture. But those same changes have already had an impact on how your data center runs today.

Maybe you can remember the dawn of bell-bottom blue jeans, the first time you saw someone dance “The Hustle,” and the first day you heard the phrase, “next-generation data center” uttered by someone with a straight face.

Since the turn of the century, the phrase has been taken to mean a number of different things: for instance, the advent of blade server substrates, and the establishment of the first “virtual services” prior to the idea of public cloud. In 2001, analysts began using it to refer to what VMware was accomplishing: driving up server consolidation by packing multiple virtualized workloads onto a single physical platform.

Then the phrase saw life again in 2006, this time when engineers at Cisco and elsewhere realized that the barriers separating compartmentalized data centers from one another in the enterprise were now completely artificial and self-serving. Why not host everybody’s applications on the same infrastructure?

This was the idea that gave rise to the public cloud.  But it also spawned the colocation market as we know it today, which was founded on the tenets of utilization, centralization, and simplification.

But here’s what’s happening in software architecture that is shuffling the components of the data center landscape yet again: modern applications are becoming distributed, with scalable functions delivered on a new kind of virtualization that sits at least one layer deeper than the one that supported the last wave of “NG data centers.”

Public cloud providers now offer individual services that play into these distributed application architectures: for example, analytics, streaming, and message queueing. Customers are choosing each service on its own merits, and as a result, they’re demanding multicloud connectivity.

Redistribution

The distributed architecture promoted by Cisco in 2006 divided data center services into three tiers: front-end or Web content, back-end application content, and data. In such an environment, all three categories are replicated, and DNS may serve as a switch directing requests and queries to primary or backup units. DCIM tools ensure that these replicas remain active and maintained, in order to make disaster recovery feasible.
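
To make the DNS-as-switch idea concrete, here is a minimal Python sketch of the failover decision such a setup automates: prefer the primary replica while it answers a basic health check, and fall back to the backup otherwise. The host names and the TCP-connect check are illustrative assumptions, not any particular vendor’s API.

import socket

# Hypothetical primary and backup replicas of the same front-end tier.
REPLICAS = [
    {"name": "primary", "host": "www1.example.com", "port": 443},
    {"name": "backup",  "host": "www2.example.com", "port": 443},
]

def is_healthy(host, port, timeout=2.0):
    """Crude health check: can we open a TCP connection to the replica?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def resolve():
    """Return the address clients should use, preferring the primary replica."""
    for replica in REPLICAS:
        if is_healthy(replica["host"], replica["port"]):
            return replica["host"]
    raise RuntimeError("no healthy replica available")

if __name__ == "__main__":
    try:
        print("Directing traffic to:", resolve())
    except RuntimeError as err:
        print("Failover exhausted:", err)

In practice this decision usually lives in DNS records with short TTLs or in a global load balancer, but the choice being made is the same one sketched above.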

But container platforms such as Docker, along with orchestration platforms such as Kubernetes, Mesos, or Rancher, manage and distribute services across all three of these tiers. And big data platforms such as Spark are configured with fault tolerance and data redundancy of their own. Resilience is now maintained at a much lower level of the stack.
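
That lower level is essentially a control loop: observe what is running, compare it with what is desired, and converge. The Python below is a toy model of that loop, not the actual controller code of Kubernetes or any other orchestrator; the replica count and the simulated failures are assumptions for illustration only.

import random
import time

DESIRED_REPLICAS = 3   # assumed target; a real orchestrator reads this from a spec
running = set()        # identifiers of replicas we believe are running

def observe():
    """Pretend some replicas occasionally die (a node failure, say)."""
    return {r for r in running if random.random() > 0.2}

def reconcile():
    """Compare observed state with desired state and converge."""
    global running
    running = observe()
    while len(running) < DESIRED_REPLICAS:
        new_id = "replica-%d" % random.randrange(10_000)
        running.add(new_id)            # schedule a replacement
        print("started", new_id)
    while len(running) > DESIRED_REPLICAS:
        victim = running.pop()         # scale down excess capacity
        print("stopped", victim)

if __name__ == "__main__":
    for _ in range(5):
        reconcile()
        print("running:", sorted(running))
        time.sleep(0.1)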

Multiverse

A multicloud application architecture recognizes that services and resources rarely share the same location anymore. Applications can now safely reside in the public cloud, where replication is already managed by the provider. Large volumes of data may then reside either on premises or in leased locations.

Traditional server failover capacity will still be needed for the foreseeable future, at least as long as applications from the client/server era continue to be maintained.  But from now on, any SLA with a data center provider will also need to account for:

·      Next-generation server connectivity fabric that facilitates the software-defined networking (SDN) necessary for new orchestrators to provision redundant and failover resources quickly

·      Direct Layer 2 connectivity with cloud service providers, perhaps by way of a content delivery network (CDN), or a cloud exchange where multiple providers may be accessed through a single hub

·      Support for smaller, scalable deployments that no longer have to be over-provisioned by replicating entire servers to handle occasional traffic spikes, or entire data warehouses when data lakes have resilience built in

Detour

Now that data centers have greater numbers of moving parts, the speed and capacity of the connections between those parts matter more than ever. This is where the Internet actually works against itself: by design, the Internet sacrificed quality of connection in favor of a reasonable assurance of delivery. Direct connections bring components closer together, bypassing the public Internet and reducing the number of “hops,” or stops between source and destination, often to just one.
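
A back-of-the-envelope model shows why trimming hops matters. The per-hop and propagation delays below are assumed, illustrative figures, not measurements of any real path; the point is simply that handling time accumulates with every intermediate stop.

# Rough model of one-way latency; all numbers are illustrative assumptions.
PER_HOP_DELAY_MS = 1.5    # queueing/processing at each intermediate router
PROPAGATION_MS = 12.0     # delay over the physical path itself

def one_way_latency_ms(hops):
    """One-way latency: propagation plus per-hop handling."""
    return PROPAGATION_MS + hops * PER_HOP_DELAY_MS

public_internet = one_way_latency_ms(hops=14)   # a typical multi-network path
direct_connect = one_way_latency_ms(hops=1)     # cross-connect or cloud exchange

print("Public Internet path (~14 hops): %.1f ms one way" % public_internet)
print("Direct connection (1 hop):       %.1f ms one way" % direct_connect)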

Maybe IT assets are supposed to be flung across the globe. But smart, selective connectivity choices will help customers minimize the effects of distance on those assets, while strategically placing them closer to where business transactions take place. Maybe this makes our planet as a whole the next, next-generation data center.

 
