Mattias Fridström is Chief Evangelist for Telia Carrier.
A reduction in the cost of networking hardware is forcing a fundamental change in how data centers and metro networks are conducting business. Any location with fiber can now become a data center, opening up new opportunities for designing, managing, and operating cloud and on-demand computing resources.
When high-speed, high-performance network interconnections were first needed, data traffic had to be moved between parties at a common central exchange point. High-speed data transfer required physical proximity of a few hundred meters or less, because proprietary telecom and network equipment for both short-distance and long-haul transport was expensive and complex, filling a rack or two of gear depending on the requirements and speeds of the day. Connecting data centers to each other and to the rest of the world was, therefore, expensive. This typically resulted in enterprises building one or two major data centers, depending on their needs for geographic diversity and business continuity.
As technology evolved, switches and routers became smaller and faster at the same time. What once took a rack of gear now fits in a "pizza box" form factor with a corresponding reduction in power. Carriers can now offer multi-gigabit speeds to any location served by fiber, drastically opening up a portfolio of data center location options for enterprises within a metro area.
Long-haul data center traffic, however, will still be bound by the cost and availability of long-haul carrier fiber. While new gear will have some cost impact on long-distance offerings, the driving factor on price remains the relative scarcity of long-haul fiber and the ability of carriers to reliably operate long-distance networks.
Efforts to promote off-the-shelf "white box" standards-based solutions, such as Facebook's Telecom Infra Project (TIP), are delivering tangible results in the real world. Earlier this year, Telia Carrier successfully tested 100G and 200G DWDM open-network optical gear developed under TIP on a thousand-kilometer route between Stockholm and Hamburg.
Open white box networking hardware provides carriers with a cost-effective capability to deliver data center speeds across metro, country, and continental distances. As a result, the concept of traditional interconnection meeting points is becoming less important. Enterprises can build and operate high-speed networks without the need for interconnection points to shuffle around data traffic.
However, multi-vendor open-standards gear presents challenges as well. Carriers will take on more operational responsibility in integrating and operating a multi-vendor solution, for both fault analysis and repair. Fortunately, carriers have accumulated, and continue to accumulate, the necessary depth of experience to successfully set up, integrate, and operate multi-vendor networks as open-standards gear moves into their infrastructure. Now more than ever, enterprises should look to carriers to support their networks and rely on their carrier partners to manage this sea change and to ensure that they are connected to everything, everywhere, without interruption.
Establishing high-speed network connections is now a function of the availability of physical media (fiber) rather than being dependent upon more expensive proprietary network equipment. It is now almost as easy to set up a 100 Gbps or faster optical connection across hundreds to a thousand kilometers or more as it is for a short run of a few hundred meters between racks at an interconnect point.
With new white box standards-based equipment handling basic transport, network functionality moves out of dedicated hardware and into software. The combination of Software Defined Networking (SDN) and Network Function Virtualization (NFV) enables greater network control and the ability to scale services up and down as needed. Network designers should realize, however, that SDN/NFV is still in its early days, and the ability to scale up and down does not negate the need to plan for the maximum capacity required at peak usage.
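That peak-capacity caveat comes down to simple arithmetic: elastic services can follow demand, but the physical transport underneath must still be provisioned for the busiest hour plus a safety margin. The sketch below illustrates the idea; the hourly traffic figures and the headroom factor are hypothetical, not drawn from any real network.

```python
# Back-of-the-envelope capacity planning: even with elastic SDN/NFV
# services, the underlying transport must be sized for peak demand,
# not average demand. All numbers below are illustrative.
hourly_demand_gbps = [12, 10, 9, 11, 18, 35, 60, 82, 95, 88, 70, 40]

average_gbps = sum(hourly_demand_gbps) / len(hourly_demand_gbps)
peak_gbps = max(hourly_demand_gbps)
headroom = 1.25  # hypothetical margin for bursts and failover

required_gbps = peak_gbps * headroom  # what must actually be provisioned

print(f"average demand: {average_gbps:.1f} Gbps")
print(f"peak demand:    {peak_gbps} Gbps")
print(f"provision for:  {required_gbps:.2f} Gbps")
```

Sizing to the average (about 44 Gbps here) would leave the network saturated at peak; provisioning must follow the peak-plus-headroom figure even if software only activates that capacity when needed.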
Since the greatest limiting factor for high-speed connectivity is the availability of fiber rather than equipment cost and requirements, enterprises can now put data centers in multiple locations rather than concentrating resources in one or two. Cloud and on-demand resources can be distributed across multiple locations, with the number of sites and allocations now based upon any number of factors, including more robust business continuity and disaster recovery, the relative cost of commercial real estate, and utility costs.
Enterprises are already moving to multiple data centers across national boundaries due to regulatory requirements, with some countries requiring sensitive data, such as financial transactions and personally identifiable information (PII), to remain within the country for privacy and security reasons. While the need for multiple data centers in this case is driven by government directives, lower-cost networking hardware and the availability of fiber within and between metros make it practical to put physical data centers in a number of different countries.
With the availability of lower cost hardware and plentiful fiber, data centers are no longer limited to one or two centralized locations. Instead, data centers can be any place that has a fiber connection, with information collected, stored, and transported to where it is needed for business processes and/or regulatory requirements. CIOs and IT staff need to explore all options considering that a data center can reside anywhere there's fiber.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.