
Why You Still Have a Bandwidth Problem

While more capacity marginally aids bandwidth problems, building out infrastructure is time-consuming and cost-intensive – especially when the goal is for content to reach many regions with no change in user experience.

Industry Perspectives

December 2, 2015


Dave Ginsburg is CMO at Teridion.

Innovation is moving at electric speed, bringing us new and exciting technology on a daily basis. We’ve witnessed many things – the inception of self-driving cars, smart watches, fitness trackers and home automation. These innovations hit the market over the past few years and have fundamentally changed how we interact with technology. We can stream videos on our devices whether we are on a bus or on an airplane, and cloud-based music services know our tastes better than we do. But even with all this innovation, videoconferences still lag, files upload slowly and ads on websites take forever to load. So why do these bandwidth issues still exist in a time of such great innovation, and what can we do to move past them?

Applications Growing Up Fast

The Internet has been unable to keep pace with advancements in personalized, user-driven applications and services, such as unified communications, social media or even news sites riddled with multimedia advertisements. These technologies put a major strain on infrastructure because they demand speed, always-on reliability and a high-quality end-user experience. The sheer volume of Internet traffic flooding our networks only exacerbates the issue. The result is that we never get the full benefit of what an application has to offer, or what it is capable of achieving.

For cloud-based services, latency and packet loss are killers. A major reason this remains an issue is that distributed applications are growing in popularity. Applications and content now need to be served to users in remote parts of the world without sluggish response times or, worse, downtime. When users are far from servers, performance issues are much more likely to ensue. For example, a low-loss path that supports 1.8 Gbps between London and Frankfurt drops to just 2.2 Mbps between London and Singapore at only 0.1 percent packet loss, a rate that is typical between regions. That is close to a thousand-fold decrease.
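The reason a fraction of a percent of loss is so devastating is that a single TCP flow’s throughput falls with round-trip time and with the square root of the loss rate. The sketch below uses the well-known Mathis model to reproduce numbers of roughly that magnitude; the MSS, RTT and low-loss values are illustrative assumptions, not measurements from this article (only the 0.1 percent inter-region loss figure comes from the text above).

```python
# Back-of-the-envelope single-flow TCP throughput, using the Mathis model:
#     throughput <= (MSS / RTT) * (1.22 / sqrt(loss))
# All parameter values below are illustrative assumptions.
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Upper bound on steady-state TCP throughput, in bits per second."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

MSS = 1460  # typical maximum segment size, in bytes

# Short, clean path (e.g. London-Frankfurt): ~10 ms RTT, near-zero loss (assumed).
short_path = mathis_throughput_bps(MSS, rtt_s=0.010, loss_rate=5e-7)

# Long inter-region path (e.g. London-Singapore): ~180 ms RTT (assumed), 0.1% loss.
long_path = mathis_throughput_bps(MSS, rtt_s=0.180, loss_rate=0.001)

print(f"Short path: {short_path / 1e9:.2f} Gbps")  # roughly 2 Gbps
print(f"Long path:  {long_path / 1e6:.2f} Mbps")   # roughly 2.5 Mbps
```

Adding raw capacity does little to change that ceiling; only lowering loss or shortening the effective round trip does.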

Limitations of Content Delivery

The Internet’s routing and transport protocols are no longer sufficient. In an attempt to remedy this ongoing problem, companies have built CDNs and WAN acceleration solutions, but there’s a limit to their positive impact on user experience. Geography, the need for pre-provisioned PoPs, and cloud provider limitations all play a role in end-user experience, or lack thereof.

Content delivery networks exist, of course, to improve how content is delivered. But even these technologies are limited by poor Internet performance across regions when the CDN operator doesn’t control the underlying path. Attempting to keep pace with the many types, sizes and complexities of new content hitting the market, content delivery providers are turning to traditional methods, such as adding new data centers. While more capacity marginally aids bandwidth problems, building out infrastructure is time-consuming and cost-intensive – especially when the goal is for content to reach many regions with no change in user experience.

Published CDN statistics can also be misleading. Admittedly, the majority of bits will be carried by CDNs in the coming years, a result of the mass adoption of streaming media. However, the number of discrete applications they can serve will move in the opposite direction. Distributed gaming, video conferencing, ad serving and even social networks are all increasingly personalized and non-cacheable. Some providers instead resort to refreshing edge caches more quickly, which only adds cost and complexity to the problem – a Band-Aid for a bullet hole.
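The distinction comes down to whether an edge cache can ever reuse a response. A minimal sketch, using a hypothetical helper and standard HTTP Cache-Control semantics rather than any particular CDN’s API:

```python
# Why personalized content defeats edge caching: a minimal, hypothetical sketch.
def cache_headers(is_personalized: bool, edge_ttl_seconds: int = 60) -> dict:
    if is_personalized:
        # Unique per user (feeds, game state, ad decisions): an edge cache can
        # never reuse this object, so the CDN hop adds latency without benefit.
        return {"Cache-Control": "private, no-store"}
    # Shared static content (video segments, images) is cacheable at the edge.
    # Shrinking the TTL ("quicker refresh") keeps it fresher, but pushes more
    # requests back to the origin, adding cost rather than removing it.
    return {"Cache-Control": f"public, max-age={edge_ttl_seconds}"}

print(cache_headers(is_personalized=True))
print(cache_headers(is_personalized=False, edge_ttl_seconds=30))
```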

Networking Optimized in the Cloud

Network architectures are advancing, but not fast enough. The biggest promises of the Internet are within reach, but service providers are holding back because they cannot guarantee quality, stability and speed. Networks need to address the vast number of applications and devices while removing previous geographical and device constraints. Since the volume of Internet traffic feeds the problem, it is also important to measure congestion in as close to real time as possible, so that traffic can be routed in accordance with the current state of the network to better serve users.
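In practice, that means continuously probing candidate paths and steering traffic toward whichever is performing best right now. The sketch below assumes the network exposes near-real-time latency and loss measurements per path; the data structure and scoring weights are illustrative, not any vendor’s implementation.

```python
# Congestion-aware path selection: pick the best-performing path from
# near-real-time measurements. Names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    rtt_ms: float    # measured round-trip time, milliseconds
    loss_pct: float  # measured packet loss, percent

def score(p: PathStats) -> float:
    # Lower is better. Loss is weighted heavily because even a small loss
    # rate collapses TCP throughput far more than a modest RTT increase.
    return p.rtt_ms + 400.0 * p.loss_pct

def pick_path(candidates: list[PathStats]) -> PathStats:
    return min(candidates, key=score)

measurements = [
    PathStats("direct",          rtt_ms=180.0, loss_pct=0.10),
    PathStats("via-cloud-relay", rtt_ms=195.0, loss_pct=0.01),
]
print(pick_path(measurements).name)  # "via-cloud-relay"
```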

The value of the cloud has always been rooted in its flexibility – capacity on demand and consumption-based pricing, among other things. This elasticity, however, has yet to translate to networking to the extent it should. Compute and storage in the cloud are nothing new, but networking has yet to reap the full benefits of the cloud. If networks can take advantage of the cloud’s flexibility and become less tied to physical infrastructure, businesses will gain the same on-demand approach to connectivity.

A Proactive Internet

Rather than taking a reactive posture, as is common with legacy solutions, networks need to be proactive. Otherwise, businesses will continue to be stunted by the abundance of traffic coming from users all over the world. We need to know what’s happening at a granular level in our networks in order to enable greater flexibility, and act on that knowledge through innovative cloud-based routing architectures. That way, we are better positioned to solve problems, support these new applications and ultimately bring users an experience they can enjoy.

 
