Data Center Interconnect technology hasn’t evolved all that much in terms of design in the past decade, though its use cases have almost completely changed. Time was, its main objective was to move VMs between facilities across an optical connection without disrupting workflows. “These were large pipes that carried an aggregate of traffic between the data centers and were terminated at the edges of these networks,” Tom Nadeau, Red Hat NFV technical director and a globally renowned SDN expert, explained in a note to Data Center Knowledge. “Tunnel technologies such as VXLAN and NVGRE were then used to get traffic from those points all the way down to the VTEP points on corresponding top-of-rack switches, or even all the way down to the servers themselves.”
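The encapsulation Nadeau describes boils down to a small header that a VTEP wraps around each Ethernet frame before tunneling it over UDP. As a minimal, illustrative sketch (not any vendor's implementation), the 8-byte VXLAN header defined in RFC 7348 can be built and parsed like this:

```python
import struct

# VXLAN header per RFC 7348: 8 bytes carried over UDP (port 4789).
# Byte 0 holds the flags (bit 0x08 set means a valid VNI is present);
# the 24-bit VXLAN Network Identifier sits in the upper bits of the
# final 32-bit word, with the last byte reserved.
VXLAN_FLAG_VNI = 0x08

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # flags byte, three reserved bytes, then VNI shifted into the top 24 bits
    return struct.pack("!BBBBI", VXLAN_FLAG_VNI, 0, 0, 0, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the VNI, as a VTEP would on decapsulation."""
    flags, _, _, _, tail = struct.unpack("!BBBBI", header)
    if not flags & VXLAN_FLAG_VNI:
        raise ValueError("VNI flag not set")
    return tail >> 8

print(unpack_vni(pack_vxlan_header(5001)))  # → 5001
```

The VNI is what lets one physical interconnect carry many isolated tenant networks between sites.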
Today, more distributed, automated workloads — particularly those orchestrated by Kubernetes — are driving data center connectivity. It’s the application that’s defining the network today, Nadeau said, “due to the nearly completely automated and loosely policy-driven approach that project has taken.” In their zeal, however, the architects of the born-at-Google open source container orchestration project “overlooked the need for robust networking under the platform and instead went for a single interface model.”
Automating network operations and automating the application should, in a perfect world, be handled through the same interface. That has never been the case. As a result, data center operators need their data center interconnects to keep sites synchronized, which requires both low latency and high reliability. These five DCI platforms seek to fulfill those requirements:
Nokia 1830 PSI-2T
“Better programmability and optical performance equate to a network with more capacity, lower costs, and more flexibility,” Kyle Hollasch, director of product marketing for optical networking at Nokia, said in a note to Data Center Knowledge. “This allows carriers to offer their customers a lower cost per bit per kilometer, and more responsive reconfiguration and failure recovery.”
Nokia’s 1830 Photonic Service Interconnect platform continuously evaluates every “constellation point” in its 64QAM modulation and dynamically shapes each wavelength to the measured optical characteristics of the route it follows. This way, a switch designed for a maximum reach of 3,000 km can adjust its wavelength dynamics for shorter distances to take full advantage of the higher signal-to-noise ratio. The technique is called probabilistic constellation shaping, or PCS. It’s part of the bounty Nokia received from its 2016 Alcatel-Lucent acquisition, which gave it Bell Labs, the storied New Jersey-based research and development organization.
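The core idea behind PCS can be shown with a toy model (this is an illustration of the principle, not Nokia’s implementation): weight the eight amplitude levels on one axis of a 64QAM constellation with a Maxwell-Boltzmann distribution, so that low-energy points are transmitted more often than high-energy ones, cutting the average symbol energy for the same constellation:

```python
import math

# One I or Q axis of 64QAM has eight amplitude levels.
LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]

def shaped_distribution(lam: float) -> list:
    """Maxwell-Boltzmann weighting: p(a) proportional to exp(-lam * a^2).
    lam = 0 recovers uniform (unshaped) signalling."""
    weights = [math.exp(-lam * a * a) for a in LEVELS]
    total = sum(weights)
    return [w / total for w in weights]

def average_energy(probs: list) -> float:
    """Mean per-dimension symbol energy under a probability assignment."""
    return sum(p * a * a for p, a in zip(probs, LEVELS))

uniform = [1 / len(LEVELS)] * len(LEVELS)
shaped = shaped_distribution(0.05)  # lam chosen for illustration only
print(average_energy(uniform))  # 21.0 for uniform 8-level signalling
print(average_energy(shaped))   # noticeably lower with shaping
```

The energy saved can be spent on margin, letting a link rated for a long route run closer to the Shannon limit on a short one — the dynamic adjustment described above.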
Edgecore Wedge 100-32X
The proliferation of open standards in the networking space has forced even Cisco to change its approach to delivering connectivity. Facebook directly and intentionally disrupted the market when it successfully submitted its Wedge 100 design to the Open Compute Project in 2016.
Edgecore Networks — which isn’t a name at the top of most people’s minds, let alone their racks — is making a name for itself in the DCI space with its Wedge 100-32X implementation. It’s based on the reference specification that Edgecore’s parent Accton Technology validated for OCP.
Facebook was looking for affordable, adaptable technology that could be deployed in bulk in hyperscale data centers. The switch’s silicon is Broadcom’s BCM56960 Tomahawk chipset, built to deliver 32 ports of 100 Gbps Ethernet (3.2 Tbps of capacity) at cloud scale. And its CPU is the readily available Intel Atom E3800 series.
The benefit of using common hardware, as MarkoInsights analyst Kurt Marko told us, is that its abundance in the market, coupled with the application of open standards such as Facebook’s Open/R routing protocol, leads to greater programmability. “The motivation is programmability and extensibility,” he said, “since large data center operators need to completely automate as many tasks as possible, particularly provisioning, configuration, and updates.”
Ciena 8180 Coherent Networking Platform
The push to reduce latency both in and between data centers has led to a flattening of their network architectures. In place of the traditional core routers, aggregation routers, and access switches, more modern facilities such as “metro data centers” are adopting spine/leaf switch configurations, removing one tier, shortening traffic paths, and ensuring all traffic passes through a spine switch on its way to its destination.
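Why removing a tier shortens paths can be sketched with a toy hop count (real fabrics vary in oversubscription and routing, so treat the numbers as illustrative):

```python
# Rough east-west hop comparison: a classic three-tier design
# (access -> aggregation -> core) versus a two-tier leaf/spine fabric,
# where any leaf reaches any other leaf through a single spine switch.

def three_tier_hops(same_agg_pair: bool) -> int:
    # access -> aggregation -> access when both racks hang off the same
    # aggregation pair; otherwise the path climbs to the core and back down.
    return 3 if same_agg_pair else 5

def leaf_spine_hops() -> int:
    # leaf -> spine -> leaf, regardless of which two racks are talking
    return 3

print(three_tier_hops(same_agg_pair=False))  # worst case: 5 switch hops
print(leaf_spine_hops())                     # always 3 switch hops
```

The leaf/spine fabric also makes latency uniform: every leaf-to-leaf path has the same length, which simplifies the synchronization demands placed on the interconnect.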
In this situation the spine switch may have to peer with both packet switching devices and coherent optical devices, which modulate the phase and amplitude of light. This is where Ciena believes it has an edge. Its 8180 Coherent Networking Platform is a 2U box that supports both non-blocking packet switching and its WaveLogic coherent optical modulation, with a selectable baud rate of 35 or 56 Gbaud to increase capacity per wavelength where feasible.
The 8180 represents Ciena’s strategy for addressing what Nokia’s Hollasch calls the “rate/reach trade-off” — the drop-off in the signal-to-noise ratio as line distances increase, partly because each in-line signal amplifier adds noise of its own.
Mellanox SN2000 series
One of Mellanox’s key value propositions since its inception has been connecting storage: making disparate volumes act as single units. With containerization via Docker and Kubernetes altering the meaning of storage volumes in the data center, Mellanox is seizing the opportunity to distinguish itself again. Its Spectrum line of switches, including the SN2000 series, is equipped with an operating system called Onyx, which provides a hardware-based infrastructure foundation for storage volumes. While Onyx perceives these volumes as separate physical devices, it gives containers the means to perceive them as unified data volumes, without forcing developers to recompile their applications or rebuild their containers.
Mellanox says the SN2000 series delivers a maximum switching capacity of 6.4 Tbps while maintaining a 1U form factor.
Huawei CloudEngine 12800 series
You rarely hear about a device like a network switch contributing to a data center’s airflow and cooling strategy. But Huawei’s value proposition for its CE12800 series switches concerns, in no small measure, airflow. Its fans are unidirectional, drawing cool air in through the front of the unit and expelling it through the back, before it can recirculate inside the chassis.
Taking a page from SDN architecture, Huawei separates the processors allocated to the service channel from those on the control channel. With traffic split across the two planes, the control plane processors can freely optimize service quality on the data plane without adversely affecting data plane traffic in the process.
Given that rate and reach are inversely related, would certain DCI switches be better suited to longer distances and others to shorter ones?
“The key to meeting the extremes of possible data center distances comes down to performance and flexibility,” Nokia’s Hollasch responded. “The latest generation of digital signal processors support up to 600Gbps per wavelength, yet only at relatively short distances — perhaps two hundred kilometers. Future DSPs will explore higher-order modulation (such as 1024 or 4096 QAM) to increase spectral efficiency even further. However, once distances become sufficiently short, it can make more economic sense to simply deploy multiple fibers compared to sophisticated DWDM [multiplexing].”
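Hollasch’s point can be sketched with a back-of-envelope Shannon-limit model. The span length and baseline SNR below are illustrative assumptions, not vendor figures: each amplified span adds noise, so SNR falls roughly in proportion to the number of spans, and the achievable bits per symbol fall with it.

```python
import math

SPAN_KM = 80        # typical amplifier spacing (assumption)
SNR_ONE_SPAN = 200  # linear SNR after a single span (illustrative)

def snr_after(distance_km: float) -> float:
    """SNR after traversing a route, assuming amplifier noise
    accumulates linearly with the number of spans."""
    spans = max(1, round(distance_km / SPAN_KM))
    return SNR_ONE_SPAN / spans

def shannon_bits_per_symbol(distance_km: float) -> float:
    """Single-polarization Shannon limit; real DSPs back off from this."""
    return math.log2(1 + snr_after(distance_km))

for d in (200, 1000, 3000):
    print(d, "km:", round(shannon_bits_per_symbol(d), 2), "bits/symbol")
```

The curve falls steeply at first and then flattens, which is why the highest per-wavelength rates (and the densest modulation formats) are confined to short metro routes, while long-haul links trade rate for reach — or, as Hollasch notes, why very short routes may simply justify more fiber instead.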