How To Reduce Data and Network Latency Where Others Fail

David Trossell is CEO and CTO of Bridgeworks.

Data is the lifeblood of business, so a slow data transfer rate makes it harder to analyze, back up, and restore that data. Many organizations battle data latency on a daily basis, hampering their ability to deliver new digital products and services, remain profitable, manage customer relationships, and maintain operational efficiency. Data latency is a serious business issue that needs to be addressed. Network latency, in contrast, is a technical issue, but the two are closely intertwined.

Tackling Latency

There is very little you can do to reduce network latency itself. The only way to reduce it is to move data centers or disaster recovery sites closer to each other. This has often been the traditional approach, but from a disaster recovery perspective it can be disastrous, because your data centers could end up sitting in the same circle of disruption. Ideally, data centers and disaster recovery sites should be placed further apart than is traditionally practiced, to insulate your data from the impact of any man-made or natural disaster.

Yet companies are having to move data further and further away at ever-increasing network speeds. Latency within the data center is very small these days, so latency has its greatest impact on data transfer rates when data moves outside the data center, whether to the cloud or across the internet to customers.
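
To put that in context, the floor on network latency is set by physics. The minimal sketch below (in Python, assuming signals travel at roughly two-thirds of the speed of light in fibre and ignoring routing and queuing overhead) shows how round-trip time grows with distance alone:

    # Rough round-trip time (RTT) contributed by distance alone.
    # Assumes ~2/3 the speed of light in fibre; ignores routing,
    # queuing and protocol overhead.
    SPEED_IN_FIBRE_KM_PER_MS = 200  # roughly 200,000 km/s

    def min_rtt_ms(distance_km: float) -> float:
        """Minimum RTT in milliseconds for a given one-way distance."""
        return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

    for km in (100, 1_000, 5_000):
        print(f"{km:>5} km  ->  at least {min_rtt_ms(km):.0f} ms RTT")
    # 100 km -> 1 ms, 1,000 km -> 10 ms, 5,000 km -> 50 ms

Every request and acknowledgement pays that round trip, which is why distance, not equipment, sets the lower bound.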

The response of many organizations is to address latency issues by implementing traditional WAN optimization tools, which in fact have little impact on latency or data acceleration. Another strategy is to increase the organization’s bandwidth with a high-capacity pipe, but again, this won’t necessarily accelerate the data or reduce latency and packet loss to the required levels. When data is being moved over long distances, the only realistic option is to mitigate the effects of latency in order to accelerate data across the WAN.
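
The reason a bigger pipe alone often disappoints is that a single TCP stream can only keep one window’s worth of data in flight per round trip, so its throughput is capped by window size divided by RTT regardless of link capacity. The figures below are illustrative, describing standard TCP behaviour rather than any particular product:

    # Upper bound on a single TCP stream: window size / RTT,
    # independent of how big the pipe is. Figures are illustrative.
    def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        """Ceiling on one stream's throughput in megabits per second."""
        return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

    WINDOW = 64 * 1024  # a common default window of 64 KB
    for rtt in (1, 20, 80):
        print(f"RTT {rtt:>2} ms -> at most {max_throughput_mbps(WINDOW, rtt):,.1f} Mbps")
    # ~524 Mbps at 1 ms, ~26 Mbps at 20 ms, ~6.5 Mbps at 80 ms,
    # whether the link is 1 Gbps or 10 Gbps.

Larger windows and tuning help, but the window/RTT ceiling is why latency, rather than raw bandwidth, is so often the real bottleneck.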

Traditional Response

Traditional WAN optimization vendors give the impression of reduced latency by keeping a copy of the data locally, so the perception is that latency has been reduced because you aren’t going outside the data center. In reality, the latency still exists, because the two sites are still separated by the WAN. More to the point, things have changed over the last 10 years. Eighty percent of data used to be generated and consumed internally and 20 percent externally. Disasters tended to be localized, and organizations had to cope with low bandwidth, low network availability, and a high cost per megabit. The data types were also highly compressible and the data sets small. WAN optimization was therefore the solution, because it used a local cache, compression and deduplication, and locally optimized protocols.
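
For illustration, the local cache and deduplication idea works roughly like this: chunks the far end has already seen are replaced by short fingerprints, so repeated data never re-crosses the WAN. The sketch below is a simplified illustration of the principle, not any vendor’s actual implementation:

    import hashlib

    # Simplified deduplication sketch: chunks the remote cache already
    # holds are sent as 32-byte hashes instead of full payload.
    # Chunk size and wire format are illustrative only.
    CHUNK = 4096
    seen_at_remote = set()  # hashes the remote cache already holds

    def dedupe_stream(data: bytes):
        """Yield ('ref', digest) for cached chunks, ('raw', chunk) otherwise."""
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).digest()
            if digest in seen_at_remote:
                yield ("ref", digest)   # only the fingerprint crosses the WAN
            else:
                seen_at_remote.add(digest)
                yield ("raw", chunk)    # the full chunk crosses the WAN once

    payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
    print([kind for kind, _ in dedupe_stream(payload)])
    # ['raw', 'ref', 'raw', 'ref'] -- repeated chunks are sent as references

This works well for small, repetitive, compressible data sets; it does far less for the large, already compressed or encrypted data sets that dominate today.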

Changing Trends

Today, everyone has moved from slow, low-bandwidth connections where everything had to be compressed to big pipes, now that the price has come down. In the past, data was produced faster than the pipe could carry it; now it is possible to accelerate large volumes of data without local caches. This is why many companies are transitioning from WAN optimization to WAN acceleration.
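
One common way to fill a long, high-capacity link without caching is to run many transfers in parallel, so the pipe stays busy while each individual stream waits on its round trips. The sketch below is a generic illustration of that idea (send_chunk is a hypothetical placeholder), not a description of any particular product:

    import concurrent.futures

    # Generic parallel-transfer sketch: with N streams in flight, the
    # per-stream window/RTT ceiling is multiplied by roughly N, so a
    # high-latency link can still be kept full.
    def send_chunk(chunk_id: int) -> int:
        """Hypothetical placeholder for pushing one chunk over its own connection."""
        return chunk_id

    def transfer(num_chunks: int, parallel_streams: int = 16) -> None:
        with concurrent.futures.ThreadPoolExecutor(max_workers=parallel_streams) as pool:
            results = list(pool.map(send_chunk, range(num_chunks)))
        print(f"sent {len(results)} chunks over {parallel_streams} parallel streams")

    transfer(num_chunks=256)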

It should also be noted that the data scenario of 10 years ago has been reversed: only 20 percent of data is now generated and consumed internally, while 80 percent comes from external sources. Disasters often have a wider impact today, and organizations have to cope with ever-larger data sets created by the growing volumes of big data. Another recent trend is the increased use of video for videoconferencing, marketing, and advertising.

Firms now also enjoy higher bandwidth, greater availability, and a lower cost per megabit. Files tend to be compressed, deduplicated, and encrypted across globally dispersed sites. This means a new approach is needed: one that requires no local cache, leaves the data unchanged, and works with any protocol to accelerate data and mitigate both data and network latency.

Strong Encryption

Converged systems can nevertheless help to address these issues, but strong encryption is needed to protect data flows from interception, and shortening the transfer window reduces a hacker’s opportunity. If you are under attack, you can quickly move data offsite. This can be achieved with tools that use machine intelligence to accelerate data across long distances without changing it.
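
As a minimal illustration of encrypting data before it leaves the site (assuming the third-party Python cryptography package; key management is deliberately glossed over), an intercepted transfer then carries only ciphertext:

    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    # Minimal sketch: encrypt before the data leaves the site, so an
    # intercepted stream is unreadable. Real deployments need proper
    # key management; this is illustrative only.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    payload = b"replication block destined for the DR site"
    ciphertext = cipher.encrypt(payload)          # what actually crosses the WAN
    assert cipher.decrypt(ciphertext) == payload  # recovered at the far end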

With data acceleration and mitigated latency, it becomes possible to situate data centers and disaster recovery sites far from each other and outside each other’s circles of disruption. This approach also offers a higher degree of security, because large volumes of data can be transferred within minutes or even seconds, denying a hacker the chance to do serious harm.

Organizations should therefore look beyond the traditional WAN optimization players to smaller, more innovative ones that can mitigate data and network latency whenever data is transmitted, received, and analyzed over long distances. Their future competitiveness, profitability, customer relationships, and efficiency may depend on it. So it’s time to look anew at latency and accelerate your data no matter where it resides, whether in the data center or outside its walls.
