Network Convergence: Challenges and Solutions
March 21st, 2012 By: Industry Perspectives
Tim Lustig is Director of Corporate Marketing at QLogic. With more than 15 years of experience in the storage networking industry, Lustig has authored numerous papers and articles on all aspects of IT storage.
With the enticements of convergence promising the holy grail of networks, enterprises are evaluating running storage and data traffic over a single network. Advantages include reduced deployment costs, lower capital and operating expenditures (CapEx and OpEx), and simplified network management. In this article, we'll look at the challenges and solutions for convergence in the data center.
Convergence means using a common cabling and switching infrastructure to replace what are now disparate server and storage networks. Data center convergence is in its infancy, but it has been well demonstrated in IP voice networks, where telephony and Ethernet data traffic share the same infrastructure for lower costs and simpler management. The goal of data center convergence should be to enable IT to share, manage and protect data assets more strategically and efficiently.
A converged network carries Ethernet and Fibre Channel traffic over a common infrastructure. Ethernet is the foundation for converged networks because of its ubiquity in connecting data traffic between computers. With the introduction of 10Gb Ethernet, the bandwidth required for convergence is now available.
NAS subsystems use network file-sharing protocols to transport data over Ethernet networks, so storage is not new to Ethernet, and the larger pipe that 10Gb offers will help in combining traffic. Fibre Channel is the basis for storage area networks (SANs). The key difference is that Fibre Channel handles data in blocks over a lossless protocol, while NAS moves it in files over a lossy, best-effort Ethernet network.
Advantages of Convergence
Convergence takes the best of both: lossless Ethernet enables block-based transmission, making applications that lend themselves to block-based I/O – for example, large structured databases – ideal candidates for the converged network. Convergence also simplifies the infrastructure, which facilitates the deployment of high-availability solutions and provides the underlying foundation for service-oriented, utility-based computing.
A converged network runs over converged network adapters (CNAs), which support Ethernet TCP/IP, FCoE (Fibre Channel over Ethernet) and iSCSI. These adapters send traffic to a switch that supports four capabilities added to the Ethernet protocol:
- Enhanced Transmission Selection (ETS, IEEE 802.1Qaz) for classifying traffic and allocating bandwidth among traffic classes,
- Priority-based Flow Control (PFC, IEEE 802.1Qbb) for pausing individual traffic classes instead of dropping frames, which makes lossless service possible,
- Congestion Notification (CN, IEEE 802.1Qau) for monitoring the network and signaling sources to throttle before congestion causes drops, and
- the Data Center Bridging Exchange protocol (DCBX) for negotiating these capabilities between devices.
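The interplay of the first two mechanisms above can be sketched in a few lines. This is a minimal illustration, not a real DCB implementation: the traffic-class names, share percentages, and pause threshold are all hypothetical values chosen for the example.

```python
# Illustrative sketch of ETS-style bandwidth shares and a PFC-style
# pause threshold on a single 10GbE port. All names and numbers are
# hypothetical; real DCB behavior is implemented in hardware.

LINK_GBPS = 10.0

# ETS: each traffic class is guaranteed a share of the link (shares sum to 1.0).
ets_shares = {"lan": 0.4, "fcoe": 0.4, "iscsi": 0.2}

def ets_bandwidth(cls):
    """Guaranteed bandwidth (Gbps) for a traffic class under ETS."""
    return LINK_GBPS * ets_shares[cls]

# PFC: when a priority's ingress queue fills past a threshold, that one
# priority is paused rather than having its frames dropped -- this is
# what allows FCoE to behave losslessly on Ethernet.
PAUSE_THRESHOLD = 0.8  # fraction of queue capacity

def pfc_action(queue_fill):
    """Per-priority flow-control decision for a given queue fill level."""
    return "PAUSE" if queue_fill >= PAUSE_THRESHOLD else "FORWARD"

# Example: FCoE is guaranteed 4.0 Gbps, and a 90%-full FCoE queue
# triggers a pause for that priority only, leaving other classes running.
```

The point of the sketch is the division of labor: ETS answers "how much of the pipe does each class get," while PFC answers "what happens when a lossless class's queue fills."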
Typically, this transformation begins at the SAN edges, where cabling densities are highest. Starting there also allows a gradual migration to convergence that reuses the existing network infrastructure and end devices.
Converged networks present unique technological challenges. The storage network has emerged as a primary component of the IT infrastructure, with the delivery and protection of information as its primary goal. Network convergence transfers several security and performance risks onto the shared network.
- Converged networking encompasses a mix of computing platforms, communication protocols, storage devices, and network topologies. Various standards, communications types, file system protocols, and interface buses exist to connect hosts to storage devices and form a storage network or LAN. Data center operators must decide which traffic is best suited to run over which technology and protocol.
- Since storage is no longer on a segregated network, a shared network carries inherent risks. SANs typically enforce a one-to-one relationship between the application and the storage location, which appears directly attached from the server's perspective and gives the administrator a greater level of manageability. In a converged environment, data and storage traffic combine, so it is imperative to separate traffic and apply QoS; otherwise, high-priority traffic will not have the bandwidth it needs when it needs it, and performance will suffer.
- Virtualization enables one-to-many relationships. Data center operators use network partitioning at the adapter level (NPAR) or Switch Independent Partitioning to resolve the one-to-many relationships by enabling each physical port to be logically divided into four logical ports with flexible allocation of 10GbE bandwidth. This eliminates the cost of installing multiple, physical networking and storage adapters that are dedicated to specific server applications or other tasks.
- QoS over one physical hardware item. The network must deliver content consistently in real time. Once the pipe is logically divided, dedicated bandwidth must be assigned: for example, virtual NICs for clustering virtual machines, a Fibre Channel HBA for storage access, and a dedicated migration path on another logical device so virtual machines can move in the event of a failure. As you segregate the network, QoS guarantees that high-priority applications have the bandwidth they need for real-time delivery.
- Multiple path networks. Data networks traditionally use the Spanning Tree protocol to connect switches, but Spanning Tree blocks redundant links to prevent loops, forcing all traffic onto a single path and creating congestion points in a converged network. TRILL (Transparent Interconnection of Lots of Links) is the recommended method of managing multiple links, so switches in the converged network that support TRILL will make your journey much smoother.
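The NPAR partitioning and per-partition QoS described in the list above can be sketched as a simple bandwidth-allocation model. This is a hypothetical illustration of the idea, not any vendor's partitioning API: the partition names, weights, and caps are invented for the example.

```python
# Hypothetical sketch of NPAR-style partitioning: one physical 10GbE
# port divided into four logical ports, each with a relative weight
# that determines its guaranteed share under contention, plus an
# optional hard cap. Names and numbers are illustrative only.

PORT_GBPS = 10.0

class LogicalPort:
    def __init__(self, name, weight, max_gbps=PORT_GBPS):
        self.name = name
        self.weight = weight      # relative guaranteed share
        self.max_gbps = max_gbps  # cap for this partition

def guaranteed_gbps(partitions):
    """Minimum bandwidth each logical port is guaranteed when all compete."""
    total = sum(p.weight for p in partitions)
    return {p.name: min(PORT_GBPS * p.weight / total, p.max_gbps)
            for p in partitions}

parts = [
    LogicalPort("vm-lan", 2),           # virtual NIC for VM cluster traffic
    LogicalPort("fcoe-hba", 4),         # storage access (FCoE function)
    LogicalPort("migration", 2),        # dedicated VM migration path
    LogicalPort("mgmt", 2, max_gbps=1), # management, capped at 1 Gbps
]
# guaranteed_gbps(parts) -> vm-lan 2.0, fcoe-hba 4.0, migration 2.0, mgmt 1
```

The design point this models is the one the article makes: the storage function gets the largest guaranteed slice, while unused headroom (here, the Gbps the capped management partition cannot use) remains available to the others.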
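The Spanning Tree limitation noted above can be made concrete with a toy link count. This is a deliberately simplified model (a full mesh of switches, ignoring real STP timers and TRILL's routing details): Spanning Tree leaves only a loop-free tree forwarding, while a TRILL fabric can route over every link.

```python
# Toy comparison, illustrative only: link utilization in a full mesh
# of n switches under Spanning Tree versus a TRILL fabric.

def mesh_links(n):
    """Total links in a full mesh of n switches."""
    return n * (n - 1) // 2

def spanning_tree_active(n):
    """Spanning Tree keeps one loop-free tree: n-1 forwarding links."""
    return n - 1

def trill_active(n):
    """TRILL routes over shortest paths, so every link stays usable."""
    return mesh_links(n)

# With 6 switches: 15 links exist, but only 5 forward under Spanning
# Tree; a TRILL fabric can use all 15.
```

The wasted-link gap grows quadratically with fabric size, which is why the article recommends TRILL-capable switches for converged networks.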
Network and data center managers must not simply provide bandwidth. They must ensure that each system, function and application on the network has the amount of bandwidth and network quality of service it needs while attaining interoperability and avoiding bottlenecks.
Legacy architectures constrain today's data centers due to an exponential increase in applications, servers, storage, and network traffic. Converged networks present challenges, but they give customers a long-term, future-proof strategy: a single, converged data center fabric with the flexibility and performance to scale.
Convergence is the basis for cloud computing and paves the way to harness, scale, and dynamically allocate any resource – including routing, switching, security services, storage systems, appliances and servers – without compromising performance.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Mark McClure – Posted March 22nd, 2012
It may be simplified network management to the C-suite, but the Net Ops folks will need tools that monitor intelligently and help them quickly understand what might be happening when problems occur.
I’d love to see a ‘before’ and ‘after’ assessment of a data center once a critical mass of convergence has been achieved – it would make a great case study. Much more likely will be years of supporting pre- and post-convergence storage worlds in the same data centers.