Andy Ingram is vice president of product marketing and business development for the Fabric and Switching Technologies Business at Juniper Networks.
Cloud computing represents a new way to deliver and consume services on a shared network and IT infrastructure. Previously, IT hardware and software were acquired and physically provisioned on site. With cloud computing, the value of these same software and hardware products is delivered on demand, in the form of services over the network. Cloud computing is relevant not only to network service providers and internet-based service providers offering cloud services to customers; enterprise and public-sector IT organizations are also becoming acutely aware of its relevance to their own internal operations.
It is now possible for IT to build out private clouds, or to augment internal resources with public clouds, so that data centers benefit from this powerful computing model. The lessons learned from cloud computing can vastly improve the scale, agility, and application service levels of enterprise data centers, as well as reduce costs. Achieving these results requires close examination of the network itself, which is the foundation of the cloud-ready data center.
It can be daunting to interconnect a growing number of virtual and physical devices while trying to keep the network simple enough to manage at scale. Management complexity grows far faster than the device count itself as more devices are added. This often forces physical segmentation, which runs counter to building large, shared resource pools that maximize economies of scale.
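A quick back-of-the-envelope calculation makes that scaling pressure concrete (this model is an illustration, not from the article): if every managed device can potentially interact with every other, the number of relationships to track grows quadratically with the device count.

```python
def pairwise_relationships(device_count: int) -> int:
    """Potential device-to-device management relationships in a
    full mesh of n devices: n * (n - 1) / 2."""
    return device_count * (device_count - 1) // 2

# Doubling the device count roughly quadruples the relationships to track.
for n in (10, 20, 40):
    print(n, pairwise_relationships(n))
```

Going from 10 to 40 devices takes the relationship count from 45 to 780, which is why consolidating autonomous devices pays off so quickly.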
Overcoming these obstacles requires a fundamental shift in the way enterprise IT organizations build out their legacy data center networks. Success in building a scalable, cloud-ready data center network requires following three critical steps: (1) simplify, (2) share and (3) secure.
Simplification starts with reducing the number of autonomous devices. In the future, a single logical switch will be able to scale securely and reliably across the data center to connect all servers, storage and appliances. Until that happens, interim measures can be taken to consolidate network layers, increase scale and performance without adding complexity and reduce costs:
- Leverage device density to reduce the number of physical devices.
- Employ technologies that enable multiple physical devices to act as one logical device.
- Reduce layers of switching to two or fewer.
- Ensure reliable routing connections into and out of the data center.
- Maintain a common OS and a single point to monitor and manage the network with open APIs.
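To see why reducing switching layers matters, consider worst-case hop counts in a simple tree topology (a hypothetical illustration; the tier structure below is an assumption, not from the article): traffic between servers in different pods must climb to the top tier and back down, so each extra tier adds two switches to the path.

```python
def worst_case_switch_hops(tiers: int) -> int:
    """Worst-case number of switches traversed between servers in
    different pods of a simple tree: up through each tier to the
    top, then back down, i.e. 2 * tiers - 1 switches."""
    return 2 * tiers - 1

# Classic three-tier design: access -> aggregation -> core -> aggregation -> access
print(worst_case_switch_hops(3))  # 5 switches
# Collapsed two-tier design: access -> core -> access
print(worst_case_switch_hops(2))  # 3 switches
```

Fewer switches in the path means less latency, fewer devices to manage, and fewer places for a configuration error to hide.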
With a simpler, scalable network to support large resource pools, the next step enables the dynamic sharing of resources for greater agility. This necessitates virtualization at two levels:
- The virtualization of servers, storage and appliances
- The virtualization of the network itself
Virtualization minimizes the need for physical segmentation and allows capacity and bandwidth to be shared efficiently and flexibly, supporting multi-tenancy and a high quality of service. VLANs, zones, MPLS and VPLS offer effective ways to virtualize the network within and between enterprise data centers.
Another challenge involves maintaining trusted environments and scaling security for pooled resources. To complement the simplification and sharing of the cloud-ready data center, security services should also be consolidated and virtualized. It is vital to secure data and services both at rest and in transit, using these and other security measures:
- Secure flows into the data center. Authenticate and encrypt connections to network endpoints (SSL) and enterprise devices (IPSec) while reducing device proliferation. It is also essential to prevent denial-of-service attacks and deploy firewalls to guard the edge and perimeter.
- Secure flows within the data center. Segment the network with VLANs, zones, virtual routers and VPNs, and use firewalls to protect application-to-application traffic – between servers, between virtual machines and between pods. Also employ application aware and identity-based security policies.
- Set network-wide policies from a central location to ensure security compliance. Centralized reporting engines provide historical and real-time visibility into applications and data, and enable IT to perform scheduled vulnerability assessments.
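One way to picture zone- and application-aware segmentation is as an ordered policy table that a firewall consults per flow, with a default deny for anything unmatched. The zone names, applications, and first-match semantics below are illustrative assumptions, not a description of any particular product:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src_zone: str
    dst_zone: str
    app: str      # application name, or "*" for any
    action: str   # "permit" or "deny"

# Ordered, first-match policy table for application-to-application traffic.
POLICY = [
    Rule("web", "app", "https", "permit"),
    Rule("app", "db", "sql", "permit"),
    Rule("*", "db", "*", "deny"),   # nothing else reaches the database zone
]

def evaluate(src_zone: str, dst_zone: str, app: str) -> str:
    """Return the action of the first matching rule; deny by default."""
    for rule in POLICY:
        if (rule.src_zone in (src_zone, "*")
                and rule.dst_zone in (dst_zone, "*")
                and rule.app in (app, "*")):
            return rule.action
    return "deny"  # implicit default deny for unmatched flows

print(evaluate("web", "app", "https"))  # permit
print(evaluate("web", "db", "sql"))     # deny: web tier may not touch the database
```

Because policy is expressed in terms of zones and applications rather than physical ports, the same rules keep working as virtual machines move between servers and pods.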
By rethinking traditional legacy approaches and preparing for the advent of cloud computing, IT organizations can build data center networks that offer greater economies of scale, improved application service levels, simpler management and lower costs. Simplifying, sharing and securing the network are critical to success in building out cloud-ready data centers. As Moore's Law drives the technological advances that make cloud-ready data center networks a reality, IT organizations can take decisive steps today that bring businesses closer to the promise of tomorrow.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.