Cisco Systems is becoming much more than a network technology company, and its Application Centric Infrastructure software is becoming more than a way to automate the network. This is evident in both the announcement last week of a hybrid cloud partnership with Google and the latest release of ACI.
Cisco wants you to use ACI for linking network management with application management, building on its Nexus 9000 spine-and-leaf switches and Application Policy Infrastructure Controller, or APIC, along with a virtual network switch that supports ACI policies. With support for VMware's and Microsoft's hypervisors and management tooling, as well as OpenStack, it's now compatible with most private, public, and hybrid cloud platforms. The latest release, ACI 3.0, adds support for multiple data center sites and for the Kubernetes container orchestration and management platform – which is at the center of the Google partnership, aimed at integrating on-premises enterprise IT with Google Cloud Platform.
This is a way for Cisco to stay relevant in a rapidly changing market, Marty Puranik, CEO of cloud hosting provider Atlantic.Net, told Data Center Knowledge. "It's [bringing] support for more and more platforms under the Cisco umbrella, where they have customers who need servers on-demand" or support for specific geographies for compliance purposes. Cisco's move beyond its traditional markets has been ongoing, evident in acquisitions like that of the application performance management company AppDynamics earlier this year and of the cloud communications firm BroadSoft just this month, and in adding support for hybrid cloud platforms like GCP's Kubernetes stack and Microsoft's Azure Stack (Cisco had to drop its own network stack to deliver the latter). Puranik suggested the aim for Cisco is "a single pane of glass to manage your networks, bringing it all together."
ACI 3.0 is what makes Cisco's new tie-up with Google work, based on Kubernetes support and the open source Istio microservice management platform. While the focus is on a new generation of hybrid cloud applications that use containers to host microservices, the underlying policy-based framework also makes it simpler to move existing workloads to and from the public cloud, as well as to handle scaling across on-premises infrastructure and GCP.
ACI 3.0's Kubernetes tooling treats a container host much like a virtual machine manager and works with Kubernetes' own network policies as well as ACI policies and controls. While this approach lets Kubernetes admins work with ACI and ACI admins control Kubernetes, in practice you'll want those roles to work together, using ACI as the networking layer of a DevOps stack. Sites can also host multiple ACI fabrics, so you can deploy and manage them on a per-application basis.
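To make that dual-admin model concrete, here's a toy sketch of the idea that a Kubernetes network policy and an ACI contract describe the same thing from two sides. The data structures below are made up for illustration — they are neither Kubernetes' real NetworkPolicy schema nor Cisco's actual ACI object model:

```python
# Illustrative only: a toy translation of a simplified Kubernetes
# network policy into an ACI-style "contract" between endpoint groups.
# Both dict shapes are hypothetical, not real APIs.

def k8s_policy_to_aci_contract(policy):
    """Map a simplified network policy onto an ACI-style contract dict."""
    contract = {
        "name": f"k8s-{policy['name']}",
        # The pods the policy selects play the role of the provider EPG.
        "provider_epg": policy["pod_selector"],
        "filters": [],
    }
    for rule in policy.get("ingress", []):
        for port in rule.get("ports", []):
            contract["filters"].append({
                "consumer_epg": rule["from_selector"],
                "protocol": port["protocol"].lower(),
                "dest_port": port["port"],
            })
    return contract

web_policy = {
    "name": "allow-frontend",
    "pod_selector": {"app": "web"},
    "ingress": [{
        "from_selector": {"app": "frontend"},
        "ports": [{"protocol": "TCP", "port": 8080}],
    }],
}

contract = k8s_policy_to_aci_contract(web_policy)
print(contract["name"])                      # k8s-allow-frontend
print(contract["filters"][0]["dest_port"])   # 8080
```

The point of the sketch is the division of labor: the Kubernetes admin expresses intent in application terms (selectors and ports), while the ACI admin sees the same intent as fabric-level contracts between endpoint groups.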
Adding support for multiple data centers should simplify complex network management tasks, with a single pane of glass to handle policy and monitor operations. By using policy-driven network fabrics, an approach Cisco calls "intent-based networking," you may be able to automate many of your day-to-day operations across software-defined data centers. Being able to define cross-data center services (to connect resources in one site to applications in another) could help you comply with data residency regulations: for example, a front end that directs users from a specific IP range to a database hosted in their own region.
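The data residency example above boils down to matching a client's source address against regional ranges. A minimal sketch of that logic, using Python's standard `ipaddress` module — the ranges and database endpoints are invented for illustration:

```python
import ipaddress

# Hypothetical mapping of client source ranges to regional databases.
REGION_RANGES = {
    "eu": [ipaddress.ip_network("10.10.0.0/16")],
    "us": [ipaddress.ip_network("10.20.0.0/16")],
}
REGION_DB = {
    "eu": "db.eu.example.internal",
    "us": "db.us.example.internal",
}

def database_for(client_ip, default_region="us"):
    """Return the regional database endpoint for a client address."""
    addr = ipaddress.ip_address(client_ip)
    for region, networks in REGION_RANGES.items():
        if any(addr in net for net in networks):
            return REGION_DB[region]
    return REGION_DB[default_region]

print(database_for("10.10.4.7"))     # db.eu.example.internal
print(database_for("192.168.1.1"))   # falls back to db.us.example.internal
```

In an intent-based setup, you'd declare this mapping once as policy and let the fabric enforce it, rather than coding it into every application.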
Policies can be applied to a single site before being rolled out across all your managed networks, so you can test changes on one site or in one application fabric before rolling them out across multiple sites. That fits well with modern application development models: enabled by Kubernetes, a build can be tested in production on one site before it replicates across the rest of your network, both on premises and in the public cloud.
Multi-site connections use VXLAN tunnels, treating your sites as a single network fabric. Dedicated WAN connections make sense for high-bandwidth inter-site traffic, but you can use the public internet for low-bandwidth connections, especially if you're using ACI to manage disaster recovery locations.
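One practical consequence of VXLAN tunneling is encapsulation overhead: the underlay between sites needs extra MTU headroom to carry full-size inner frames without fragmentation. The arithmetic, assuming an IPv4 underlay and untagged inner Ethernet frames:

```python
# Back-of-the-envelope MTU math for VXLAN tunnels between sites
# (IPv4 underlay, untagged inner frames assumed).
OUTER_IPV4 = 20      # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
VXLAN_HEADER = 8     # VXLAN header carrying the VNI
INNER_ETHERNET = 14  # encapsulated inner Ethernet header

VXLAN_OVERHEAD = OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET  # 50 bytes

def required_underlay_mtu(inner_mtu=1500):
    """Minimum underlay MTU to carry inner packets of the given size."""
    return inner_mtu + VXLAN_OVERHEAD

print(required_underlay_mtu())      # 1550 for standard 1500-byte packets
print(required_underlay_mtu(9000))  # 9050 with jumbo frames inside
```

This is one reason dedicated inter-site links are attractive: over the public internet you often can't raise the path MTU, so the inner MTU has to shrink instead.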
Built on top of virtual network appliance controllers and Cisco's Nexus switches, ACI 3.0 uses MP-BGP (Multiprotocol BGP) to distribute routes for both IPv4 and IPv6 addresses. MP-BGP support should also simplify working with VLAN interconnects and dedicated MPLS VPNs.
As the big three hyperscale public clouds work on their hybrid cloud options, we're going to see more partnerships like Cisco and Google's. Kubernetes has become the dominant orchestration and scheduling tool for Google, Azure, and, more recently, Amazon Web Services, and it is seeing significant investment. If you already have a big Cisco deployment, ACI 3.0 lets you bridge your on-premises software-defined data center with cloud-hosted Kubernetes, assuming you're already migrating your existing applications to a containerized microservices model.