Got Microservices? Consider East-West Traffic Management Needs


Ranga Rajagopalan is CTO at Avi Networks.

Many network and IT operations teams today grapple with the same challenge: How do you build a network that can handle disaggregated applications and still deliver the best application performance possible?

This question is a hot topic for operations teams as enterprises move toward more agile development methodologies and ditch the monolithic applications of days past.

The Rise of Microservices

Microservices on containers are the new dev kid in town, offering a brilliant solution for teams that want continuous integration and continuous deployment (CI/CD) when pushing out code.

Because microservices break apps up into multiple parts (services) that work together to make up the full application, developers can update one part of the app without touching anything else. This breakthrough allows apps to be light and manageable instead of static and immovable, which is a win for everyone.

Well, almost everyone. This advancement for development teams brings immediate pain to infrastructure and operations groups that run a traditional data center.

Typically, when monolithic applications are built on virtualized infrastructure, network performance is judged on the speed of the connection between end users and the data center over the WAN link, also known as north-south traffic. At a basic level, this process is linear between one user and one app server, with load balancers determining how to distribute traffic across a pool of app servers.
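
As a concrete (and much simplified) illustration of that north-south model, the Go sketch below stands a round-robin load balancer in front of a pool of app servers; the backend addresses are hypothetical.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical pool of app servers sitting behind the load balancer.
	backends := []*url.URL{
		mustParse("http://10.0.0.11:8080"),
		mustParse("http://10.0.0.12:8080"),
		mustParse("http://10.0.0.13:8080"),
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		// Director picks the next backend in round-robin order and
		// rewrites each incoming request to point at it.
		Director: func(req *http.Request) {
			target := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	// All north-south traffic from end users enters here.
	http.ListenAndServe(":80", proxy)
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}
```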

The use of containers, which adds speed and agility to development, leaves ops teams with the challenge of building a fast, scalable, and elastic architecture that can manage these microservices and apps, especially when it comes to service discovery, traffic management, and security. The DevOps model was born to bridge the divide and create a tighter connection between the two groups (development and operations) in support of this automation of resource management, but the pressure is still on the operations team to figure out delivery.

Application Architecture Evolution to Microservices

With microservices, individual containers that deliver the different services need to talk to each other (see figure above). In keeping with the directional metaphor, this process is known as east-west traffic. In contrast to the clean north-south highways of traditional end user and data center connections, the visual for east-west traffic is more like the backroads in a cluster of suburban communities where multiple paths can be taken.

As such, a service proxy is necessary to offer load balancing between the services. Add in the complicating factor that microservice applications can be distributed across servers within a data center or across multiple locations, including the cloud (which doesn't play nicely with hardware sitting in a separate data center), and you are in a real pickle. How do network managers and application architects ensure that traffic is sent to the right place and reaches the correct containers without overloading servers?

An Architecture for all Directions

To build a modern application architecture that answers these challenges, teams need to weigh several considerations around load balancing and how to handle east-west traffic with microservices and containers.

The best architectural approach accounts for proxy and application services within a flexible network services framework. An elastic service fabric lets distributed software load balancers be managed as one entity, with real-time information about applications, security, and end users delivered back to the controller so that administrators can easily troubleshoot issues.
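
As a rough illustration of that "managed as one entity" idea (not any vendor's actual API), the Go sketch below has distributed proxies push telemetry records to a central controller that keeps a single consolidated view; every type and field name here is hypothetical.

```go
package main

import "time"

// ProxyReport is a hypothetical telemetry record that each distributed
// proxy sends back to the central controller.
type ProxyReport struct {
	ProxyID     string
	Service     string
	Connections int
	ErrorRate   float64
	Timestamp   time.Time
}

// Controller keeps one consolidated view of every proxy in the fabric,
// so administrators can troubleshoot from a single place.
type Controller struct {
	reports chan ProxyReport
	latest  map[string]ProxyReport // most recent report per proxy
}

func NewController() *Controller {
	c := &Controller{
		reports: make(chan ProxyReport, 1024),
		latest:  make(map[string]ProxyReport),
	}
	go c.run()
	return c
}

// run aggregates telemetry as it arrives from the distributed proxies.
func (c *Controller) run() {
	for r := range c.reports {
		c.latest[r.ProxyID] = r
	}
}

// Report is what each proxy calls to push real-time application,
// security, and end-user data back to the controller.
func (c *Controller) Report(r ProxyReport) {
	c.reports <- r
}

func main() {
	c := NewController()
	c.Report(ProxyReport{ProxyID: "proxy-1", Service: "orders", Connections: 42, Timestamp: time.Now()})
}
```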

Proxies can serve as a gateway for each interaction, both between containers within a server and between containers running across multiple servers. These proxies can resolve DNS lookup requests, map a target service name to its virtual IP address, and spread the traffic load across instances of the target microservice.
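
The Go sketch below shows one minimal way such a proxy could handle the name-to-instance step: an in-memory map stands in for DNS/virtual-IP-based discovery, and requests are spread across instances in round-robin order. The service names and addresses are made up for illustration.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// registry stands in for DNS/VIP-based service discovery: it maps a
// target service name to the addresses of its running instances.
var registry = map[string][]string{
	"orders":    {"10.1.0.4:8080", "10.1.0.5:8080"},
	"inventory": {"10.1.1.7:8080", "10.1.1.8:8080", "10.1.1.9:8080"},
}

var counters = map[string]*uint64{
	"orders":    new(uint64),
	"inventory": new(uint64),
}

// resolve picks the next instance of the target microservice in
// round-robin order, spreading east-west traffic across instances.
func resolve(service string) (string, error) {
	instances, ok := registry[service]
	if !ok || len(instances) == 0 {
		return "", fmt.Errorf("no instances registered for %q", service)
	}
	n := atomic.AddUint64(counters[service], 1)
	return instances[(n-1)%uint64(len(instances))], nil
}

func main() {
	// A container calling the "orders" service asks its local proxy,
	// which resolves the name and forwards to the chosen instance.
	for i := 0; i < 4; i++ {
		addr, _ := resolve("orders")
		fmt.Println("forwarding to", addr)
	}
}
```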

With the many service instances of a microservice application, service discovery is critical for connecting to and consuming information from specific microservices, as is load balancing for north-south and, in particular, heavy east-west traffic. Not only do these connections need to happen, but tools that capture the interactions are critical to ensure you can monitor application traffic and performance and troubleshoot problems. Visibility comes in the form of metrics such as the number of connections, user types, transactions per second, and user behavior.
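
Here is a minimal sketch of that kind of visibility, again in Go and again purely illustrative: a wrapper counts open connections and requests at the proxy and logs transactions per second once a second, the sort of raw data an operator would feed into dashboards or a central controller.

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

// metrics holds basic visibility counters: open connections and total
// requests. This is an illustration, not a production telemetry pipeline.
type metrics struct {
	activeConns int64
	requests    uint64
}

// wrap instruments a handler so every request through the proxy is counted.
func (m *metrics) wrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		atomic.AddInt64(&m.activeConns, 1)
		atomic.AddUint64(&m.requests, 1)
		defer atomic.AddInt64(&m.activeConns, -1)
		next.ServeHTTP(w, r)
	})
}

// report logs transactions per second and open connections once a second.
func (m *metrics) report() {
	var last uint64
	for range time.Tick(time.Second) {
		total := atomic.LoadUint64(&m.requests)
		log.Printf("tps=%d active_connections=%d", total-last, atomic.LoadInt64(&m.activeConns))
		last = total
	}
}

func main() {
	m := &metrics{}
	go m.report()
	http.Handle("/", m.wrap(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```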

At the end of the day, network engineers, network architects, and load balancing administrators must equip themselves with the knowledge and tools needed to ride the coming wave of microservices. By understanding the needs of east-west traffic with container-based applications and assessing traffic management options, ops teams can build the best architecture to enable their Dev teams and provide end users with seamless and secure services, no matter what direction traffic flows.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
