Marco Di Benedetto is a co-founder and Chief Technology Officer of Embrane, creators of the industry’s first distributed software platform for virtualizing layer 4-7 network services.
If you’ve ever had the opportunity to deploy a L4-7 network services layer in a data center, you’ve probably dealt with two simple dimensions: you picked the form factor of the appliances you wanted to deploy and you selected the type of tenancy for the appliances. At a high level, you picked between physical or virtual appliances, and you picked dedicated or shared tenancy. You probably spent a lot more time sizing the actual appliances (how big of a physical/virtual appliance and how many tenants) than you did selecting physical/virtual and dedicated/shared.
Plotting the two dimensions on a graph, you end up with four quadrants. These quadrants roughly map to variations in a number of metrics: price per tenant, maximum performance per tenant, service level agreements (SLAs) across tenants, etc. Typical users want the experience of the first quadrant with the prices of the fourth. You can name the metrics you care about, but for the sake of this conversation, we’ll look at two of them: price and SLAs.
Graphic courtesy of Embrane.
Historically, enterprise IT users have chosen to sit comfortably in the first quadrant, at least for network services that are tightly coupled with the applications they belong to. After all, how many application design best practices have been built around shared appliances? We live in an IT world that has traditionally associated “sharing” with “trouble.” Sharing is something we instinctively reject, something that can be considered only for use cases at the very low end. Can the cloud change that?
A Servant of Many Masters
Service providers and enterprises alike have to be prepared to serve customers/users whose needs range from very modest to very high performance. The only trait all customers/users share is that over time, situations and needs will evolve and change.
It’s tempting to try and share the same infrastructure among all classes of tenants, regardless of their needs. After all, isn’t “cloud” about sharing an underlying physical infrastructure among a large number of users to take advantage of the efficiency gained by maximizing utilization of each single piece of hardware in a data center? If you want to build an undifferentiated, best-effort commodity cloud, the answer is “yes”. But if your goal is to offer multiple service levels tailored to different classes of use cases, then the answer is probably more complex.
Sharing appliances is hard. Best practices use knowledge of usage patterns to identify the optimal appliance characteristics for each specific use case. Unfortunately, if you are offering cloud services, your ability to predict usage patterns is challenged by your inability to predict who’s going to use your services (let alone for what).
If you’re dealing with a L2/3 network, you use statistical analysis to define appropriate input metrics for packet size mixes and inter-arrival times, or rely on things like IMIX and cross your fingers. The underlying assumption is that L2/3 processing of individual packets is relatively simple, and basic switching overhead can be defined as just a function of those two parameters. “Usage patterns” are synonymous with “traffic patterns”: if you get a bigger share of large packets your throughput goes one way, if you get more small packets your throughput goes another way; the longer the bursts, the heavier the impact on throughput and latency. Mastering predictions of packet size mixes and inter-arrival times enables you to perfect how efficiently your individual switches and routers are used.
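To see how strongly the packet size mix alone drives L2/3 throughput, here’s a back-of-the-envelope sketch. It assumes a hypothetical device limited by packets-per-second (the 10M pps figure is made up for illustration) and uses the commonly cited “simple IMIX” weights of 7× 40-byte, 4× 576-byte, and 1× 1500-byte packets:

```python
# Back-of-the-envelope: how packet size mix affects throughput for a
# hypothetical L2/3 device whose bottleneck is packets-per-second.

PPS_CAPACITY = 10_000_000  # assumed forwarding capacity: 10M packets/sec

# "Simple IMIX" mix: (packet size in bytes, relative weight)
IMIX = [(40, 7), (576, 4), (1500, 1)]

def throughput_gbps(mix, pps):
    """Throughput in Gbit/s for a given packet-size mix at a fixed pps."""
    total_weight = sum(w for _, w in mix)
    avg_bytes = sum(size * w for size, w in mix) / total_weight
    return pps * avg_bytes * 8 / 1e9

print(f"IMIX throughput:   {throughput_gbps(IMIX, PPS_CAPACITY):.1f} Gbit/s")
print(f"All 1500B packets: {throughput_gbps([(1500, 1)], PPS_CAPACITY):.1f} Gbit/s")
print(f"All 40B packets:   {throughput_gbps([(40, 1)], PPS_CAPACITY):.1f} Gbit/s")
```

The same box delivers anywhere from a few Gbit/s (all small packets) to well over a hundred (all large packets), which is exactly why getting the mix prediction right matters for sizing.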
When it comes to L4-7, “usage patterns” are not defined by traffic patterns alone. Feature configuration has a dramatic impact on the performance of your device. In other words, processing overhead for a packet (or a flow) is heavily influenced by how you’ve configured the device. While this isn’t earth-shattering news, people tend to forget what it implies for the exercise of sharing L4-7 network services across multiple tenants. Role-based access control is great, but it’s the data path that will cause the headaches. Each tenant won’t only impact performance of the box by injecting irregular, unpredictable traffic patterns; they will also impact performance by imposing widely different processing overheads on their flows. The idea that you can use L2/3 principles to share your L4-7 devices is flawed because it ignores this latter fact. The workaround is, don’t let tenants configure the “expensive” features.
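The point can be made concrete with a toy capacity model. The feature cost factors and the device capacity below are entirely made up (not vendor numbers); the point is only that a tenant’s configuration, not just its traffic volume, determines how much of a shared box it consumes:

```python
# Illustrative only: why L4-7 sharing is harder than L2/3 sharing.
# Per-flow processing cost depends on which features a tenant enables.
# All cost factors here are hypothetical, chosen for illustration.

FEATURE_COST = {          # relative CPU cost per flow (made-up numbers)
    "basic_lb": 1.0,      # plain L4 load balancing (baseline)
    "ssl_offload": 8.0,   # TLS termination
    "waf": 5.0,           # deep inspection / web application firewall
}

DEVICE_CAPACITY = 100_000  # assumed capacity of the shared box, flow-units/sec

def tenant_load(features, flows_per_sec):
    """Flow-units/sec a tenant consumes given its enabled features."""
    return flows_per_sec * sum(FEATURE_COST[f] for f in features)

tenants = {
    "A": (["basic_lb"], 20_000),                       # high volume, cheap config
    "B": (["basic_lb", "ssl_offload", "waf"], 5_000),  # low volume, expensive config
}

total = sum(tenant_load(f, rate) for f, rate in tenants.values())
print(f"Total load: {total:,.0f} of {DEVICE_CAPACITY:,} flow-units/sec")
```

In this sketch, tenant B pushes a quarter of tenant A’s flows yet consumes several times more capacity, so any sharing scheme that only looks at traffic volume will mis-size the box.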
Long story short, if you’re not content with the risks of best effort, you should demand use of dedicated devices for your L4-7 needs. But, of course, there are catches.
If you take a “static” view of the infrastructure, dedicated physical appliances (the first quadrant) are not economical for most cloud use cases, while dedicated virtual appliances (the second quadrant) have limited applicability due to their inherent CPU resource constraints. The gap between the cost to tenants of a physical appliance and the performance ceiling of a virtual appliance is wide, and that’s one way to look at the discontinuity between the first and the second quadrant.
If you take the “dynamic” view of the infrastructure, supporting customers’ evolving needs is not pain-free with either physical or virtual appliances. While we got used to the idea that to scale an application you just need to add more servers (and forget that the application must have been designed that way for this to work), scaling a L4-7 network service for a specific tenant is not as simple as throwing more appliances at the problem.
Where Do You Go From Here?
Whether you’re a service provider or an enterprise, avoid constraining your network services to a particular quadrant.
Push your vendors for flexible L4-7 network services deployments that remove the discontinuities across quadrants. The same physical infrastructure can host L4-7 network services for different classes of use cases, and adjust as business needs evolve. If you do, you’ll be able to offer your customers/users the performance they need today, and continue to follow their requirements as they change over time.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.