Kevin Wade is Senior Director of Product Marketing, Force10 Networks, where he is responsible for the company’s go-to-market strategies across all product portfolios.
We often use the term “data center” as a catch-all for any computing environment, but in reality the architecture of a data center varies considerably depending on the applications it is designed to serve. And because one size does not fit all when it comes to data center networks, it can be difficult to know how best to evolve a data center architecture so that it scales to handle increasing load as end users continue to demand higher performance for rich media and other applications.
Let’s take a quick look at different types of data center applications and architectures, how their challenges differ, and how their criteria for adopting networking technologies and selecting equipment are impacted by these differences, today and in the future.
Data Center Variations
There are three distinct types of data center architectures, each designed to support a specific business model and each with its own operational goals and challenges:
- Enterprise data centers
- Hosting or IaaS data centers
- Portal or Web 2.0 data centers
Some factors that can vary significantly between types of data centers include:
- whether traffic stays primarily within the data center (east-west), is more client-server oriented (north-south), or is mixed;
- the use of Layer 2 (L2) and/or Layer 3 (L3) for traffic management in the core and at the ToR (top of rack);
- the storage technology employed;
- the degree to which server virtualization is being used; and
- the overall size of the data center (in number of servers).
Form Follows Function
Depending on which type of data center you have, you may make different architectural choices when it comes to networking technologies and equipment.
An enterprise data center typically serves many applications to a user base. It may have fewer than 200 servers in a smaller company or more than 1,000 in a larger one. While most enterprise data centers are internally facing, built to optimize IT applications and services, some vertical industries (oil and gas exploration, biotech, and others where IT is strategic to the company’s competitiveness) may incorporate high-performance cluster computing (HPCC) in their data centers for detailed scientific analysis. A number of enterprise data centers are also public-facing to serve customers. In these environments, there is a mixture of north-south and east-west traffic, depending on the applications being served.
Virtualization, Converged Networks Are on the Rise
Two major trends at work in enterprise data centers include the rising use of server virtualization to make more efficient use of resources, and the push for a converged network that combines the Ethernet-based LAN and the Fibre Channel-based SAN. Enterprise data centers have traditionally used a blend of L2 and L3 networking services, and a mix of virtualized and non-virtualized servers.
Portals, or Web 2.0 companies (whose data centers are typically public-facing to provide an online user experience such as search, gaming, or social networking), have very different needs. Theirs are the largest data centers purpose-built for a specific application, and above all else they are required to scale. Portal data centers primarily carry east-west traffic within the data center as users access applications and content from multiple sources, and they tend to keep traffic within the ToR, cluster, or data center whenever possible. Typically non-virtualized, portal environments use L2 at the ToR and L3 in the core, and they generally rely on direct-attached storage (with limited use of IP storage in some deployments).
Web Hosts Have Unique Needs
Hosting data centers range in size from huge operations spanning dozens of global data centers to small operations with a few dozen customers. Application-agnostic by design, these data centers offer different levels of infrastructure as a service (IaaS) to multiple customers based on defined SLAs, and they carry primarily north-south traffic. They scale through extensive use of virtualization, so they rely on flat L2 topologies for VM (virtual machine) migration, which in turn tends to create more east-west traffic. These data centers meet storage needs with iSCSI, NFS over IP, or other IP-based storage protocols.
With these variables in mind, we can draw some conclusions about network capabilities that are most important in each environment.
Virtualization is important because it improves resource utilization. However, it also makes the management of network moves, additions, or changes more complex. Within the enterprise and hosting data centers that rely heavily on virtualization, this is driving the need for technologies that simplify virtual network management by adding VM-awareness and automated configuration and provisioning capabilities to the network fabric. In addition to efforts from leading switch manufacturers, standards activities such as EVB (Edge Virtual Bridging), which addresses requirements that server virtualization imposes on the network, will also increase in importance over time.
The rapid growth of server virtualization in enterprise and hosting data centers is also accelerating the need to adopt flat L2 network topologies that consist of a ToR tier and a core tier. This simplified, two-tier network architecture better accommodates VM mobility while also lowering costs and improving network uptime. Important related standards that will enable more scalable L2 networks include TRILL (Transparent Interconnection of Lots of Links) and VPLS (Virtual Private LAN Services).
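To illustrate why VM mobility pushes architects toward flat L2 topologies, here is a minimal sketch (the function name, addresses, and subnets are hypothetical assumptions, not from the article): a live-migrated VM keeps its IP address and open connections, which only works if the destination host shares the source host’s L2 subnet; crossing an L3 boundary would force the VM to be readdressed.

```python
# Hypothetical sketch: a live-migrated VM keeps its IP address, so the
# destination host must sit in the same L2 subnet as the source host.
# Addresses and subnets below are illustrative assumptions.
import ipaddress

def can_migrate_without_readdressing(vm_ip: str, src_subnet: str, dst_subnet: str) -> bool:
    """True if a VM at vm_ip can move from src_subnet to dst_subnet while
    keeping its address, i.e. both hosts sit in one flat L2 domain."""
    src = ipaddress.ip_network(src_subnet)
    dst = ipaddress.ip_network(dst_subnet)
    return src == dst and ipaddress.ip_address(vm_ip) in dst

# Same flat L2 domain: migration preserves the VM's address.
print(can_migrate_without_readdressing("10.0.1.25", "10.0.0.0/16", "10.0.0.0/16"))  # True
# Destination behind a different L3 subnet: the VM would need a new IP.
print(can_migrate_without_readdressing("10.0.1.25", "10.0.0.0/16", "10.1.0.0/16"))  # False
```

The larger the flat L2 domain a technology like TRILL or VPLS can scalably provide, the larger the pool of hosts a VM can migrate to without readdressing.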
Within enterprises, the push for the convergence of disparate data center networks into a single Ethernet-based fabric is driving the need for DCB (data center bridging) and FCoE (Fibre Channel over Ethernet). These standards and related initiatives will become increasingly relevant in purpose-built ToR switches over time.
As the largest and among the most specialized types of data centers, portals can be seen as a major driver for improved data center network performance, increased bandwidth, and higher densities. Faced with the requirement to architect their networks for high levels of east-west traffic and the lowest possible application latency, portals are driving the need for fully non-blocking, line-rate switching and low end-to-end latency. Again, this goes hand in hand with the adoption of a flat, two-tier L2 network topology that also offers adequate buffering capacity to absorb the traffic spikes caused by bursty applications. Further, portals are driving the requirement for high-performance, high-density data center core platforms optimized for line-rate 40 and 100 Gigabit Ethernet (GbE) switching and scalability to hundreds of line-rate 10 GbE ports.
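To make “fully non-blocking, line-rate switching” concrete, a quick back-of-the-envelope sketch helps (port counts and speeds below are illustrative assumptions, not from the article): a switch is non-blocking when its fabric-facing uplink capacity at least matches its server-facing downlink capacity.

```python
# Illustrative sketch: a ToR switch is non-blocking (line-rate) when its
# uplink capacity matches or exceeds its downlink capacity. The port
# counts and speeds here are hypothetical examples.

def oversubscription_ratio(downlinks: int, down_gbps: int,
                           uplinks: int, up_gbps: int) -> float:
    """Ratio of server-facing capacity to fabric-facing capacity.
    A value of 1.0 or less means every port can run at line rate."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# 48 x 10 GbE server ports with only 4 x 40 GbE uplinks:
print(oversubscription_ratio(48, 10, 4, 40))   # 3.0 -- 3:1 oversubscribed

# The same 48 x 10 GbE ports with 12 x 40 GbE uplinks:
print(oversubscription_ratio(48, 10, 12, 40))  # 1.0 -- fully non-blocking
```

Portals’ latency- and bandwidth-sensitive east-west workloads are what push this ratio toward 1:1, and why dense line-rate 40/100 GbE core platforms matter.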
This brief look at data center variables shows that one size or architecture does not fit all. It’s important to have a clear understanding of the data center’s mission and key requirements so the network architect can optimize its design for peak performance.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.