
High & Low: Performance & Latency Matter

With higher-bandwidth, lower-latency data centre interconnects, organisations can resolve the apparently contradictory demands of driving down latency, boosting performance and reducing costs: faster links open up a wider range of colocation options, making more space and power available for growth.

Craig Denton is the CEO of data centre Virtual Network Operator Next Connex.

CRAIG DENTON
Next Connex

In networking, speed matters. Bandwidth-hungry apps and mission-critical systems demand ever more throughput and capacity. While the trail has been blazed in data-intensive sectors such as stockbroking, commodities trading, and the film and media industries, it’s hard to think of any business area that would not benefit from improved wide-area network performance.

This is especially true when the wide-area network links multiple sites, or connects a main office to a data centre. In these situations, fast, reliable network links are key to ensuring overall system performance, not to mention rapid recovery in a contingency that may require failover to other sites.

Getting closer links

One of the first industries to truly embrace ultra-high speed networking was the financial sector, specifically in brokerages and trading exchanges. Here, a performance advantage of a few milliseconds over a rival broker can be worth millions per year.

So over the past decade, brokerages have invested heavily in faster networks, faster servers, and optimised Gigabit Ethernet infrastructure and applications to try to gain that vital edge over rivals.

Financial consultancy The TABB Group recently surveyed CTOs at several financial firms, and found that their network and data centre priorities included reducing latency and expanding network capacity. Other wish-list items included making better use of data centre space and colocation.

TABB also found that the main drivers behind a brokerage’s choice of a data centre were cost (ranked as "important" by 57% of respondents), proximity to the local exchange (48%), room in the data centre to expand (33%) and reliability of the power supply (29%).

These wishes map closely to those of CTOs in other industry sectors: they want the network to be as transparent as possible and to deliver near real-time performance, together with lower-cost, reliable data centre hosting.

Colocation contradiction

Traditionally, organisations have improved network performance between sites by simply reducing the distance between those sites. This is why so many data centres are sited close to financial centres, and in some cases, inside stock exchanges themselves.

However, this has created competition to colocate in facilities close to metropolitan centres, putting the squeeze on space, driving up prices, and of course increasing strain on power reserves as demand grows. So rather than exploiting data centre facilities to lower costs, companies have flocked to more expensive facilities in order to try to boost network performance.

Newer, faster networks can resolve this issue, by using the performance to achieve a different end. After all, high speeds can either get you from point A to point B faster than before; or enable you to cover a greater distance, from point A to point C, in the same amount of time (or less) that it used to take to get from A to B. This in turn gives organisations more choice in data centre location, without compromising performance.
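The A-to-C argument can be made concrete with a rough latency budget (the figures below are hypothetical, chosen only for illustration): total transfer time is propagation delay, which is bound by distance, plus serialisation delay, which is bound by bandwidth. For bulk transfers, a faster link to a more distant site can beat a slower link to a nearby one.

```python
# Rough latency-budget sketch (hypothetical figures): total transfer time is
# propagation delay (distance-bound) plus serialisation delay (bandwidth-bound).
SPEED_IN_FIBRE_KM_S = 200_000  # light travels at roughly 2/3 of c in optical fibre

def transfer_time_s(payload_bits, link_bps, distance_km):
    propagation = distance_km / SPEED_IN_FIBRE_KM_S   # one-way propagation delay
    serialisation = payload_bits / link_bps           # time to clock the bits onto the wire
    return propagation + serialisation

payload = 8e9  # a 1 GB transfer, in bits

near_slow = transfer_time_s(payload, 10e9, 30)    # 10 Gbit/s link to a nearby site
far_fast  = transfer_time_s(payload, 100e9, 160)  # 100 Gbit/s link ~100 miles away

print(f"near site, 10G link : {near_slow:.4f} s")
print(f"far site, 100G link : {far_fast:.4f} s")
```

In this sketch the bulk transfer completes roughly ten times faster to the distant site, because serialisation time dwarfs the extra propagation delay.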

From A to B and beyond

Mid-2010 saw the ratification of the new 40/100 Gigabit Ethernet standard (IEEE 802.3ba), with the first rollouts of 40/100GbE infrastructure happening now. This represents an obvious performance advance over conventional Gigabit or even 10 Gigabit Ethernet links, cutting network latency to less than 2ms, even over long network hops of 100 miles or more.
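That sub-2ms figure is consistent with simple propagation arithmetic. The back-of-envelope sketch below assumes light travels through optical fibre at roughly two-thirds of its vacuum speed, which puts the physical floor for a 100-mile round trip comfortably under 2ms:

```python
# Back-of-envelope propagation delay over a long-haul fibre hop.
SPEED_IN_FIBRE_KM_S = 200_000  # roughly 2/3 the speed of light in vacuum

hop_km = 160  # roughly 100 miles
one_way_ms = hop_km / SPEED_IN_FIBRE_KM_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way:    {one_way_ms:.2f} ms")
print(f"round trip: {round_trip_ms:.2f} ms")  # still under the 2 ms figure
```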

In turn, these speeds mean that latency in the network connection between head offices and data centres becomes negligible compared with other factors in application or data processing.

So using higher-speed 40/100GbE infrastructure has three key benefits. First, cutting latency means faster, more reliable data processing, which, as we’ve seen, is critical in the financial sector.

Secondly, high network speeds better accommodate the frequent, temporary bursts generated by data-intensive applications. If the data rate of these bursts exceeds the capacity of a network link, data is forced to queue, introducing unwanted delays and even risking packet loss. Higher-speed 40/100GbE links help ensure that the available network capacity exceeds the highest burst data rate, so the link is far less likely to become congested.
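A toy calculation makes the queueing effect concrete (the burst figures are hypothetical): if an application emits a burst faster than the link can serialise it, the excess sits in a buffer, and the last bit of the burst is delayed by the difference between the link's drain time and the burst's duration.

```python
# Toy model of burst queueing on a network link (hypothetical figures).
# A burst arrives at `arrival_bps` for `burst_s` seconds and the link drains
# it at `link_bps`. When the arrival rate exceeds the link rate, a queue
# builds up and the last bit is delayed by the extra drain time.

def extra_delay_s(arrival_bps, link_bps, burst_s):
    burst_bits = arrival_bps * burst_s
    drain_s = burst_bits / link_bps        # time the link needs to send the burst
    return max(0.0, drain_s - burst_s)     # queueing delay added to the last bit

# A 50 ms burst generated at 40 Gbit/s:
print(extra_delay_s(40e9, 10e9, 0.05))   # 10G link: the burst queues
print(extra_delay_s(40e9, 100e9, 0.05))  # 100G link: no queue builds at all
```

On the slower link the burst spends three times its own duration draining from the buffer; on the faster link, capacity always exceeds the burst rate and no queue forms.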

Finally, such high-speed infrastructure has tremendous implications in terms of widening the available choices of data centre location.

Location, colocation

After all, you don’t need your data centre to be nearby when higher speeds mean you’ve all but eradicated latency in the network link.

This level of wide-area network performance can enable organisations to look beyond metropolitan data centres, taking advantage of the lower space costs and greater power resources available in out-of-town facilities, while still being able to guarantee performance for data-intensive, critical applications.

This has immediate cost benefits: space in extra-urban facilities can be up to 50% cheaper than metropolitan rates. Power availability, in particular, is also critical to data centre choice.

More advanced applications and greater computing muscle demand more electricity and more cooling – both of which are at a premium in existing city-centre sites. It’s no surprise that the Jones Lang LaSalle Data Centre Barometer of Autumn 2010 found that power continues to be the most important factor in choosing a new data centre, with over half of respondents ranking availability of power as their absolute priority.

Newer, purpose-built facilities outside metropolitan areas, made accessible by high-bandwidth connectivity, escape these constraints on power availability and can support even the most demanding requirements.

Low-latency data centre interconnects also have implications for international data traffic, for example linking offices in New York and London. By shortening international data transit times, organisations can once again improve their network performance and robustness.

So with higher-bandwidth, lower-latency data centre interconnects, organisations can resolve the apparently contradictory demands of driving down latency, boosting performance and reducing costs, gaining access to a wider range of colocation options and more space and power for growth. Speed really does matter.
