
Is a Fabric Architecture in Your Future?

High-traffic data centers need a robust, flexible, automated network to support virtualization, cloud computing, and a diverse end-point ecosystem. The move to fabric architectures is addressing these needs.

Industry Perspectives

August 4, 2011


Shehzad Merchant is vice president of technology for Extreme Networks, where he drives strategy and technology direction for advanced networking, including LANs and data centers. With more than 17 years of industry experience and several patents, Shehzad is a veteran of wired and wireless Ethernet and communications.


Virtualization technology has truly transformed IT. Despite its benefits, however, virtualization has also strained data center networks. Computational density and the number of virtual machines (VMs) per physical server are rapidly increasing. As VM-to-VM traffic grows, traffic patterns within the cloud are taking on an east-west character in addition to the traditional north-south flows. In response, high-traffic data centers need a robust, flexible, automated network to support virtualization, cloud computing, and a diverse end-point ecosystem. Additionally, storage and LAN convergence is driving the need for more predictable, high-performance network architectures. These inflection points have led vendors to develop "fabric" Ethernet architectures that mold to the new network requirements.

Defining a Network Fabric

While there are many definitions floating around, a data center switching fabric ultimately provides:

  • High-speed, low-latency interconnectivity

  • Non-blocking/non-oversubscribed interconnectivity

  • Layer 2-type connectivity

  • Multiple active paths with fast failover

  • Mesh connectivity rather than a tree-type topology

  • Simple management, configuration and provisioning

Is a Fabric For You?

As virtualization ratios increase from 8 VMs per server to 16, 32, or more, the growing need for low-latency server-to-server communication and higher bisection bandwidth will push network architectures to flatten. The goal is a network with the minimal number of hops between any pair of end points, be it VM to VM, server to server, or initiator to target. Traditional multi-tier network architectures will need to move toward a flatter, fabric-based design to meet the scale, traffic-forwarding efficiency, and latency requirements of next-generation clouds.
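The hop-count argument for flattening can be made concrete with a back-of-the-envelope sketch. The numbers below are illustrative, not vendor figures: in a symmetric tree, worst-case traffic climbs through every tier and back down.

```python
# Hypothetical sketch: worst-case number of switches traversed between two
# servers in a symmetric tree with `tiers` switching tiers.
def worst_case_hops(tiers: int) -> int:
    # Traffic goes up through every tier to the top and back down,
    # so it crosses 2 * tiers - 1 switches in the worst case.
    return 2 * tiers - 1

# Traditional access/aggregation/core design:
three_tier = worst_case_hops(3)   # 5 switches end to end
# Flattened two-tier fabric:
two_tier = worst_case_hops(2)     # 3 switches end to end
print(three_tier, two_tier)
```

Each switch removed from the path cuts both latency and the opportunity for congestion, which is why the fabric model favors one or two tiers.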

How to Build a Fabric for the Cloud

For many cloud providers, infrastructure is the cost of goods sold. It follows that they seek to avoid proprietary technologies that lead to vendor lock-in and a lack of pricing leverage. One could argue that a non-proprietary, open, interoperable, standards-based approach is the way to go.

The second requirement, again a cost consideration, is that 40GbE pricing is far more palatable to cloud providers than 100GbE pricing. Given that servers are moving to 10GbE, the access layer of the network is going the 10GbE route as well. That means the interconnectivity, or fabric, tying the access-layer switches together will move to 40GbE as the more cost-effective technology. So start with a high-density, high fan-out, non-blocking 40GbE base; it provides a good, standards-based foundation for high-speed, low-latency interconnectivity.
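The non-blocking requirement translates into simple arithmetic: uplink capacity must match downlink capacity. A quick sketch, with illustrative port counts:

```python
import math

# How many 40GbE uplinks does an access switch need to stay non-blocking
# (1:1, no oversubscription)? Port counts below are assumed examples.
def uplinks_needed(downlink_ports: int, downlink_gbps: float,
                   uplink_gbps: float = 40.0) -> int:
    # Total server-facing capacity divided by uplink speed, rounded up.
    return math.ceil(downlink_ports * downlink_gbps / uplink_gbps)

# 48 x 10GbE server ports = 480 Gbps of downlink capacity,
# which calls for 12 x 40GbE uplinks for a non-blocking design.
print(uplinks_needed(48, 10))
```

Anything less than that uplink count introduces an oversubscription ratio, which undermines the predictable performance a fabric is supposed to deliver.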

Taking Down the Tiers

One of the key concepts in building fabric-based architectures is eliminating tiers in the network. Traditionally, three network tiers were commonplace: access, aggregation, and core. However, with the broad adoption of virtualization, the virtual switch adds a fourth switching tier. And as blade servers gain traction, blade switches are adding a fifth tier to the end-to-end network architecture.

As such, the first step is to see how one can move from this five-tier model to a true one- or two-tier model. For example, technologies such as VEPA can help eliminate the virtual switch. Using pass-through modules in blade servers, in conjunction with cabling solutions such as MRJ21 or QSFP+, can help eliminate blade switches while reducing cabling challenges. Finally, going with high fan-out switch solutions, such as a high-density end-of-row chassis supporting dense 10GbE and 40GbE ports, can help eliminate traditional switch tiers by providing the fan-out the access layer needs without adding an access switching tier.
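The tier-reduction steps above can be summarized as a simple mapping. This is a reading of the argument, not a vendor checklist; the tier names and remedies are as described in the text:

```python
# The five switching tiers of a virtualized blade-server deployment,
# and the technique (per the discussion above) that removes each one.
tiers = ["virtual switch", "blade switch", "access", "aggregation", "core"]

removed_by = {
    "virtual switch": "VEPA (hand VM traffic to the adjacent physical switch)",
    "blade switch":   "pass-through modules + MRJ21/QSFP+ cabling",
    "access":         "high fan-out end-of-row chassis (10/40GbE)",
}

remaining = [t for t in tiers if t not in removed_by]
print(remaining)  # the surviving tiers form the flattened fabric
```

What survives is a two-tier design, matching the "true one- or two-tier model" the fabric approach targets.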

Another important consideration is to move away from Spanning Tree Protocol (STP) approaches, for two reasons. One is to avoid taking multiple hops up and down a traditional tree-type architecture, which creates inefficiency and latency. The other is to utilize bandwidth on all available paths. Approaches such as Multi-System Link Aggregation (MLAG), which enable active-active redundancy in a flat, fabric-type architecture, provide a simple, cost-effective alternative to STP. Since MLAG is available from many vendors and works on many existing platforms, it offers a simple migration path to a fabric architecture. Additionally, other standards-based approaches to multi-path forwarding, such as the IEEE's Shortest Path Bridging (SPB) and the IETF's Transparent Interconnection of Lots of Links (TRILL), are viable as well. However, both TRILL and SPB require new packet encapsulations and will therefore require a newer generation of network equipment.
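The bandwidth argument against STP can be sketched in a toy model. Assume a switch dual-homed over two 40GbE uplinks (numbers are illustrative):

```python
# A switch dual-homed to two upstream switches, one 40GbE link to each.
uplinks_gbps = [40.0, 40.0]

# Under STP, the redundant uplink is blocked to break the loop,
# so only one path carries traffic at a time.
stp_usable = max(uplinks_gbps)

# Under MLAG, the two upstream switches present a single logical LAG,
# so both links run active-active.
mlag_usable = sum(uplinks_gbps)

print(stp_usable, mlag_usable)  # MLAG doubles usable uplink bandwidth
```

The same comparison scales with link count: STP-style designs strand every redundant path, while multi-path approaches (MLAG, SPB, TRILL) put all of them to work.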

To support storage and LAN convergence, Data Center Bridging (DCB) technology provides a mechanism to segregate traffic on a common Ethernet fabric into different traffic classes, each of which can be flow-controlled individually. Bandwidth guarantees can also be applied to the individual classes. In effect, DCB delivers more predictable performance for traffic classes such as storage on a common Ethernet fabric. DCB technology is typically available on newer 10GbE and 40GbE solutions.
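A minimal sketch of how those per-class bandwidth guarantees work out on a converged link. The class names and percentage shares below are made-up examples, not DCB defaults:

```python
# ETS-style bandwidth allocation on a converged 10GbE link.
# Shares are percentages assigned per traffic class (illustrative values).
link_gbps = 10.0
ets_shares = {"storage (FCoE)": 50, "LAN": 30, "cluster/IPC": 20}

# Guaranteed minimum bandwidth per class under congestion; each class is
# also flow-controlled independently, and idle bandwidth can be borrowed.
guarantees = {cls: link_gbps * pct / 100 for cls, pct in ets_shares.items()}
print(guarantees)
```

The point is that storage traffic gets a hard floor (here, 5 Gbps) on a shared Ethernet fabric, which is what makes LAN/SAN convergence predictable.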

Open Standards Necessary

Lastly, the requirement for simplified management is being met, once again, through an open-standards approach. OpenFlow holds great promise as an up-and-coming, breakthrough technology for provisioning, configuration, and administration. The Open Networking Foundation (ONF), which will drive the OpenFlow effort, is backed by a large cross-section of both consumers and providers of the technology. OpenFlow provides a centralized approach to building intelligence into the network, reducing the complexity associated with distributed management and control planes.
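The OpenFlow model reduces to a match-action table that a centralized controller populates and switches merely apply. The sketch below is conceptual: the field names and action strings are illustrative, not the OpenFlow wire format.

```python
# Conceptual OpenFlow sketch: controller installs match -> action entries;
# the switch data path applies the highest-priority matching entry.
flow_table = []

def install_flow(match: dict, action: str, priority: int = 0) -> None:
    """What a centralized controller would push down to a switch."""
    flow_table.append({"match": match, "action": action, "priority": priority})

def forward(packet: dict) -> str:
    """Switch data path: match the packet against installed flows."""
    hits = [f for f in flow_table
            if all(packet.get(k) == v for k, v in f["match"].items())]
    if not hits:
        # Table miss: punt the packet to the controller for a decision.
        return "send-to-controller"
    return max(hits, key=lambda f: f["priority"])["action"]

install_flow({"dst_mac": "aa:bb:cc:00:00:01"}, "output:port1", priority=10)
print(forward({"dst_mac": "aa:bb:cc:00:00:01"}))   # matches installed flow
print(forward({"dst_mac": "ff:ff:ff:ff:ff:ff"}))   # table miss
```

The design choice this illustrates is the separation of concerns: forwarding stays simple and fast in the switch, while policy lives in one place at the controller.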

The combination of these technologies (high-density, non-blocking, standards-based 40GbE interconnectivity; active-active redundancy using technologies such as MLAG; fewer tiers and a flatter network; convergence via standards-based DCB; and a standards-based provisioning solution such as OpenFlow) should provide a flexible foundation for the Layer 2 switching fabrics that cloud architectures demand.

Impact on Cloud Adoption

Fabrics will drive cloud adoption to the next level, supporting heavy traffic volumes and demand. There are a handful of fabric architectures on the market: Extreme Networks' Open Fabric architecture, Juniper Networks' QFabric, and HP's FlexFabric, among others. Gauge whether a fabric is a fit for your business's needs and build on your existing network.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
