Gary Holland is a Marketing Campaign Strategist for Nokia Enterprise.
As the cloud grows and matures, the focus needs to shift away from cloud data centers to the networks that interconnect them and connect them to customers. The variety and growing sophistication of cloud-based applications and services put demands on today's networks that they weren't designed to handle. And whereas the early cloud emphasized bandwidth and throughput, today's and tomorrow's applications – UHD 4K video streaming, massively multiplayer online gaming, IoT and industrial automation – put the focus on many different aspects of network performance to ensure the best customer experiences.
The biggest cloud providers, the webscale giants such as Amazon, Google, and Tencent, have overcome data center interconnect issues by running their own private backbones between their global facilities. But today, all kinds of businesses are using colocation data centers to deploy cloud services and almost any conceivable application. This creates much more diverse requirements and the need to adopt webscale networking practices more generally, particularly within and between colocation data center facilities.
Gaming companies are the first and most obvious example of this trend because of their emphasis on low latency – 'ping time,' as most gamers know it. In the highly competitive world of online gaming, where new titles are released weekly, customers put enormous pressure on providers to optimize reaction time. The biggest gaming companies have responded by moving their servers closer to their customers and cutting down on network lag and latency.
This concern about lag and latency is not just a peculiarity of gaming, however. As competition heats up between video streaming services, they too are embracing webscale performance practices. Financial trading networks put just as much emphasis on latency, in a business where milliseconds translate directly into dollars. New IoT applications such as self-driving cars, to the extent that they interact with roadside sensors and information, will also need very fast reaction times, as will many Industry 4.0 and smart city applications.
Webscale architectures were born in the data center, but as they move to the wide area network (WAN) and the network edge, they must confront some challenging but solvable problems. Inside the data center, software-defined networking (SDN) can dynamically respond to network loads in an automated, policy-driven way, and bandwidth is almost unlimited. In the WAN, by contrast, programmability is not built in and bandwidth is always a scarce commodity.
Consequently, new techniques and accommodations are needed to make the WAN equally programmable and dynamic. For example, if an application needs very low latency, routing must be optimized and, possibly, compute resources must be assigned instantly at the network edge to achieve it. At the virtualization level, this means decomposing and delayering network software to make it fully cloud-native, using microservices and containers – again, very much techniques born of the data center.
At the next level down, routing protocols need to be more intelligent about overall network traffic flows, particularly regarding latency and congestion. BGP, the routing protocol used for internet peering, still does not account for network lag and latency, or for link capacity and congestion. The internet, by design a collection of best-effort interconnected networks, lacks these analytic and automation capabilities.
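To see why BGP is blind to latency, consider a highly simplified sketch (illustrative only, not a real BGP implementation) of its best-path decision: candidate routes are ranked by attributes such as local preference and AS-path length, and measured latency never enters the comparison.

```python
# Illustrative sketch of (part of) BGP best-path selection. Routes are
# compared on local preference, then AS-path length; the latency field
# exists only to show that BGP never looks at it.
from dataclasses import dataclass, field

@dataclass
class Route:
    prefix: str
    as_path: list                      # sequence of AS numbers
    local_pref: int = 100
    measured_latency_ms: float = 0.0   # known to operators, invisible to BGP

def bgp_best_path(routes):
    # Higher local_pref wins; ties are broken by the shorter AS path.
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path)))

routes = [
    Route("203.0.113.0/24", as_path=[64500, 64501], measured_latency_ms=180.0),
    Route("203.0.113.0/24", as_path=[64500, 64502, 64503], measured_latency_ms=12.0),
]

best = bgp_best_path(routes)
# BGP prefers the shorter AS path even though it is 15x slower.
```

In this sketch, BGP chooses the two-hop path with 180 ms of latency over a three-hop path with 12 ms, which is exactly the kind of decision latency-sensitive applications cannot tolerate.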
What’s needed is the capability to monitor end-to-end service delivery paths and analyze application-level traffic flows across multiple networks and peering points. These analytic insights can then be used by SDN controllers to automate the network by mapping service requests to new routes or re-directing application traffic flows to avoid congested paths. Known as automated peering engineering, this approach helps to better balance traffic flows, ensure application performance and deliver better experiences for customers.
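A hypothetical sketch of this idea, assuming an SDN controller that receives per-path telemetry (the `peer`, `latency_ms`, and `utilization` fields are invented for illustration): the controller maps a service's latency budget to the best peering point with headroom, falling back to the least-congested path if nothing meets the budget.

```python
# Hypothetical sketch of insight-driven peering selection: given telemetry
# for candidate paths, pick the peering point that meets the application's
# latency budget and has capacity headroom; otherwise degrade gracefully
# to the least-loaded path.

def select_peering(paths, latency_budget_ms):
    """paths: list of dicts with 'peer', 'latency_ms', 'utilization' (0-1)."""
    candidates = [p for p in paths
                  if p["latency_ms"] <= latency_budget_ms
                  and p["utilization"] < 0.8]
    if candidates:
        return min(candidates, key=lambda p: p["latency_ms"])["peer"]
    # No path meets the budget: fall back to the least-congested option.
    return min(paths, key=lambda p: p["utilization"])["peer"]

telemetry = [
    {"peer": "IX-A", "latency_ms": 9.0,  "utilization": 0.92},  # fast, but congested
    {"peer": "IX-B", "latency_ms": 14.0, "utilization": 0.40},
    {"peer": "IX-C", "latency_ms": 35.0, "utilization": 0.10},
]

chosen = select_peering(telemetry, latency_budget_ms=20.0)
```

Here the fastest peering point is skipped because it is nearly saturated, and traffic is steered to the next-best path that still meets the 20 ms budget – the essence of balancing flows rather than simply picking the shortest route.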
The ability to monitor and analyze end-to-end traffic is also extremely useful for dealing with security issues, especially distributed denial of service (DDoS) attacks. Volumetric, botnet-driven DDoS attacks tend to overwhelm traditional defenses, such as cloud-based scrubbers, by coming at them from thousands of different IP addresses across the internet. Having an end-to-end picture of internet traffic flow gives better visibility of potential threats and enables routers to be programmed at the network edge to block malicious traffic in real time, significantly reducing costs compared to using cloud-based scrubbers.
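As a minimal sketch of the analytics side of this approach (the threshold and flow-record format are assumptions, not any vendor's actual API): count packets per source over a sampling window and flag sources that exceed a volumetric threshold, producing a blocklist that a controller could push to edge routers as filter rules.

```python
# Hypothetical sketch of volumetric DDoS detection at the edge: aggregate
# per-source packet counts from flow records and emit the sources that
# exceed a threshold. An SDN controller could translate this blocklist
# into filter rules on edge routers.
from collections import Counter

def build_blocklist(flow_records, packet_threshold):
    """flow_records: (src_ip, packet_count) tuples for one sampling window."""
    per_source = Counter()
    for src_ip, packets in flow_records:
        per_source[src_ip] += packets
    return sorted(ip for ip, pkts in per_source.items() if pkts > packet_threshold)

window = [
    ("198.51.100.7", 500_000),   # suspected botnet member flooding
    ("198.51.100.7", 450_000),
    ("192.0.2.10", 1_200),       # normal player traffic
    ("192.0.2.44", 900),
]

blocked = build_blocklist(window, packet_threshold=100_000)
```

The point of the sketch is the placement, not the arithmetic: because the decision is made from an end-to-end traffic picture and enforced at the edge, attack traffic is dropped before it consumes backbone capacity or scrubbing fees.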
Programming routers on the fly to route traffic optimally or, in the case of DDoS, to drop malicious traffic entirely requires changes to routers' network processors. To be capable of this level of enhanced packet intelligence, filtering and steering at line rate, network processors must be extremely powerful and designed to respond precisely to the SDN controller.
Putting this all together, let's return to the gaming example above. Players are no longer simply connected to gaming servers in the cloud across a best-effort network. Using insight-driven automation and peering engineering, gaming traffic is steered via the best peering point to take the shortest possible route with the lowest latency. The network can also "pin" that traffic so it returns along the same path, ensuring player traffic takes the best route to and from online game servers every time. The system can also protect players from DDoS attacks, which are becoming larger, more sophisticated and more frequent in the gaming world.
Implementing these webscale network capabilities opens the cloud up to more companies than the Amazons and Googles of the world with their massive private backbones. It extends the advantages of webscale networking to companies in gaming and streaming, as well as in financial services, cloud services, colocation and hosting. It also enables smart cities, intelligent transport systems and a host of different Industry 4.0 applications that require very diverse but specific network performance capabilities. As they embrace and integrate the cloud into their operations, webscale networking enables these companies to deliver endlessly varied experiences for their customers.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.