Digital Realty Study: Direct Connects to Cloud Bring 50X Less Latency

Christine Hall

June 21, 2017


For a company thinking about leveraging the public cloud as part of its data center solution, latency can be a big concern. For some, unfortunately, latency isn't considered until after they've already committed to the public cloud, at which point it quickly becomes a costly issue. Others see the red flag ahead of time and perhaps only use public clouds in situations where latency won't cause a problem.

The problem is the internet itself. It's fast, but not instantaneous. Even under the best of conditions, data traveling to and from a server, whether located on-premises or sitting in a carrier hotel, takes enough measurable time to make some processes sluggish or inoperable. If a bottleneck arises somewhere along the way, which depending on location can happen often, the entire system might become all but unusable.

Security can also be a concern as organizations might be wary of sending sensitive data across the public internet, whether that data is encrypted or not.

The big cloud providers recognized the problem early on and offered enterprise users direct wired access -- for a fee, of course -- under plans like AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect. Data center providers were also quick to understand the issue, with all the major ones offering direct-connect services -- with the added bonus of direct connections to multiple cloud providers. Customers like choice, it seems.

While it's pretty obvious that sending data back and forth through a direct connection will be more efficient than having it transit the internet, there have been no metrics -- other than anecdotal evidence -- measuring the difference.

That is, until today. A study conducted for data center operator Digital Realty by Krystallize Technologies shows significant improvements when customers use IBM Direct Link Colocation to connect with IBM Cloud Bluemix servers located in Digital Realty's data centers, compared with connections made via the internet.

"We really wanted to get some metrics into the industry," Digital Realty's CTO Chris Sharp told Data Center Knowledge, "because I don't believe many companies have really boiled it down to Use Case A and Use Case B and then represent the delta between the two. That's why we think this is drastically different, by trying to educate the market with actual metrics."

The study focused on IBM's Bluemix because Digital Realty has at least five data centers where it can connect customers directly to Bluemix servers located on-premises. Although the study's main comparison was between on-premises direct connections and connections made via the internet, it also included metrics for direct connections made through a metropolitan area network (MAN).

Three issues were examined, starting with "file-read latency," or the time it takes for a requested packet to be transmitted and arrive back at the file system. The second was "file-read throughput," a measure of kilobytes of storage transmitted per second. Finally, the study looked at "application performance," described as "how applications actually perform in the tested configurations."
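To make the first two metrics concrete, here is a minimal sketch of how file-read latency (time to first byte) and file-read throughput (kB/s) can be timed in Python. It reads a throwaway local file rather than a networked Bluemix volume, so it illustrates the measurements only, not the study's actual test setup:

```python
import os
import tempfile
import time

def measure_file_read(path, block_size=64 * 1024):
    """Time a full sequential read of `path`, returning
    (time-to-first-byte in seconds, throughput in kB/s)."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        first_block = f.read(block_size)          # latency: time until first data arrives
        first_byte = time.perf_counter() - start
        total += len(first_block)
        while chunk := f.read(block_size):        # throughput: drain the rest of the file
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return first_byte, (total / 1024) / elapsed

# Demo against a throwaway 5.5 MB file -- roughly the study's test-page size.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(5632 * 1024))
latency, throughput = measure_file_read(tmp.name)
print(f"first-byte latency: {latency:.6f} s, throughput: {throughput:.2f} kB/s")
os.remove(tmp.name)
```

Pointed at storage reached over the internet, a MAN, or a cross connect, the same timing loop would surface the kinds of differences the study reports.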

The differences were somewhat akin to comparing driving on a gravel road to driving on a NASCAR speedway.

The "file-read latency" test, with a time of 0.3 seconds for the internet connection and 0.0044 seconds for the direct connect, showed that the direct link cross connect delivers on average 1/50 the latency of the internet. The time for a direct connection utilizing a MAN was 0.088 seconds, still a marked improvement.


The "file-read throughput" test showed the direct connection delivering 55.4 times the internet's throughput: 413.76 kB/s over the internet, 6,739.10 kB/s through a MAN, and 22,904.26 kB/s through a direct connection to an on-premises Bluemix server.


Boiled down into "application performance," these numbers mean a 5.5 MB unoptimized page renders in 0.3 seconds over the direct connection versus 25.8 seconds when transiting the internet. In the best-case scenario, using full caching and parallel processing, the direct connection's rendering time drops to 0.2 seconds, compared with 13.3 seconds over the internet.
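As a back-of-the-envelope check (not part of the study itself), the throughput ratio can be reproduced directly from the reported speeds. Note that raw transfer time alone understates the internet's 25.8-second render figure, which also reflects per-request latency:

```python
# Speeds reported in the study, in kB/s.
internet = 413.76
man = 6_739.10
direct = 22_904.26

# Direct connect vs. internet: the study's reported 55.4x advantage.
print(f"direct vs internet: {direct / internet:.1f}x")  # → 55.4x
print(f"MAN vs internet:    {man / internet:.1f}x")

# Raw transfer time for the 5.5 MB (5,632 kB) test page, ignoring latency.
page_kb = 5.5 * 1024
print(f"internet transfer: {page_kb / internet:.1f} s")
print(f"direct transfer:   {page_kb / direct:.2f} s")
```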

Although the test didn't attempt to measure security in any way, direct connections carry an inherent security advantage as well, Sharp noted.

"If you do a private interconnection, you no longer have to have any public facing infrastructure to achieve your desired end state," he said. "And so you're not open to any DDoS attacks or any other malicious prying or hacking into your public infrastructure."

The increased security afforded by directly connecting to cloud providers should be especially important to industries, such as health care and financial services, in which security is mandated by law.

About the Author(s)

Christine Hall

Freelance author

Christine Hall has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she's published and edited the website FOSS Force. Follow her on Twitter: @BrideOfLinux.
