You’ve seen all of the big statistics around cloud growth. It’s clear that demand for new kinds of data center services continues to grow. Through it all, cloud providers and data center partners are working around the clock to make their environments as efficient as possible. Why? To maximize their bottom line and to stay competitive.
The reality in today’s highly competitive data center and cloud market is that the provider who can run most optimally and cost-effectively, while still delivering prime services, leads the market. To accomplish this goal, there are a few things to consider. First of all, getting ahead doesn’t always mean adding more gear. Smart data center and cloud providers learn to use what they have and make the absolute most out of every resource. New kinds of questions are being asked around data center efficiency. Is there a new technology coming out that improves density? Does the ROI help improve long-term management costs? Does a new kind of platform allow me to achieve more while requiring less?
In many cases, creating better efficiency and a more competitive data center revolves around consolidating data center resources. With that in mind, here are three key areas managers should examine when it comes to data center consolidation: hardware, software, and users.
There are many new kinds of tools we can use to consolidate services, resources, and physical data center equipment. Solutions ranging from advanced software-defined technologies to new levels of virtualization help create a much more agile data center architecture. When it comes to hardware and consolidation, you have several options:
- Network, route, switch: We have officially virtualized the entire networking layer. If an organization chooses, they can run on an entirely commodity networking architecture and still provide enterprise capabilities. For example, Cumulus Networks has its own Linux distribution, Cumulus Linux, designed to run on top of industry-standard networking hardware. Basically, it’s a software-only solution that provides the ultimate flexibility for modern data center networking designs and operations with a standard operating system, which is Linux. Further capabilities revolve around direct network virtualization integration with the hypervisor. When working with networking components, look for virtual services that can consolidate networking functions and reduce the need for more gear.
- Storage and data: Much like networking, you now have the ability to create and control your own storage architecture. Software-defined storage goes much further than simply virtualizing the storage controller layer. This logical component allows you to aggregate siloed storage resources and control all of them under one management layer. You no longer have to worry about lost storage resources and can now control all data points from an intelligent storage management platform. Furthermore, new kinds of app-level policies allow you to maximize storage resources, like flash, by pointing applications to specific repositories.
- Blades, servers, and convergence: Within the actual compute layer, data center architects have quite a few options. Convergence allows you to create powerful environments that couple several data center functions into a node-based architecture. Even traditional rack-mount servers now come with better resource control mechanisms and improved density. Meanwhile, new kinds of blade architectures allow for direct fabric backplane integration and even more throughput. Furthermore, hardware policies allow you to dynamically re-provision resources, so new sets of users can take on entirely new hardware profiles on the same blade chassis. Creating a “follow-the-sun” data center model allows you to add less gear while still supporting a diverse set of users.
- Managing your rack: Cooling, power, and airflow are all critical considerations when you examine the overall data center consolidation spectrum. How much power are you drawing? Do you have hot spots? Are your servers running efficiently? Are you utilizing some of the latest mechanisms around airflow management? Creating an ideal data center and rack architecture can go a long way toward controlling how much gear you actually need. Remember, user density and workload performance are directly impacted by the health of your data center environmental variables.
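To make rack-level questions like these concrete, here is a minimal Python sketch of two common checks: computing PUE (Power Usage Effectiveness, total facility power divided by IT equipment power) and flagging hot spots against the ASHRAE-recommended inlet temperature ceiling. The rack names and readings below are hypothetical.

```python
# Sketch: spotting power-efficiency problems across racks.
# Rack names and readings are made up for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; most facilities fall well above it."""
    return total_facility_kw / it_equipment_kw

def hot_spots(inlet_temps_c: dict, limit_c: float = 27.0) -> list:
    """Flag racks whose inlet temperature exceeds the ASHRAE-recommended
    upper bound of the A1 envelope (27 C)."""
    return sorted(rack for rack, t in inlet_temps_c.items() if t > limit_c)

facility_kw = 1200.0   # total draw: IT load + cooling + power distribution
it_kw = 800.0          # draw of servers, storage, and network gear alone
print(f"PUE: {pue(facility_kw, it_kw):.2f}")   # PUE: 1.50

readings = {"rack-a1": 24.5, "rack-a2": 29.1, "rack-b1": 26.0}
print(hot_spots(readings))                     # ['rack-a2']
```

The closer PUE gets to 1.0, the less of your power bill is going to overhead like cooling, which is exactly the kind of signal that tells you whether you need more gear or better airflow.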
The software piece of the data center puzzle is absolutely critical. In this case, we’re talking about management and visibility. How well are you able to see all of your resources? What are you doing to optimize workload delivery? Because business is now directly tied to the capabilities of IT, it’s more important than ever to have proactive visibility into both the hardware and software layers of the modern data center.
Having good management controls spanning virtual and physical components will allow you to control resources and optimize overall performance. When working with various management tools, consider the following:
- How well are you able to monitor everything ranging from chip to chiller?
- Can you see virtual workloads and how they’re distributed?
- Are you able to see hardware resource utilization?
- Can you control load-balancing dynamically?
- Is your DCIM solution integrated with your virtual systems and the cloud?
- Can you proactively make decisions around resource utilization?
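As an illustration of what proactively making decisions around resource utilization can look like, here is a hedged Python sketch that turns a snapshot of virtual and physical metrics into suggested actions. The metric names and thresholds are illustrative assumptions, not any particular DCIM product’s API.

```python
# Sketch: a minimal "proactive" capacity check a DCIM-style tool might run,
# combining virtual (CPU), physical (power), and storage signals.
# Thresholds and metric names are illustrative assumptions.

def capacity_actions(metrics: dict) -> list:
    actions = []
    if metrics["cpu_util"] > 0.80:
        actions.append("rebalance VMs off the busiest hosts")
    if metrics["power_kw"] > 0.90 * metrics["rack_power_budget_kw"]:
        actions.append("defer new provisioning in this rack")
    if metrics["storage_used"] > 0.85 * metrics["storage_total"]:
        actions.append("expand or tier storage before hitting capacity")
    return actions or ["no action needed"]

snapshot = {
    "cpu_util": 0.86,             # cluster-wide average utilization
    "power_kw": 7.4,              # current rack draw
    "rack_power_budget_kw": 8.0,  # provisioned rack budget
    "storage_used": 40.0,         # TB consumed
    "storage_total": 50.0,        # TB available
}
for action in capacity_actions(snapshot):
    print(action)
```

The point is the workflow, not the thresholds: when monitoring spans chip to chiller, the same loop can act on any of those signals before they become outages.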
Visit the Data Center Knowledge DCIM InfoCenter for guidance on DCIM products on the market, as well as help with selection, deployment, and day-to-day operation of Data Center Infrastructure Management software.
The first iPhone was released in 2007. Over the course of just eight years we’ve seen the vast adoption of cloud, the consumerization of IT, and now the Internet of Things. Behind the scenes, the data center is churning away to support all of this new data and so many new users. These users are requesting applications, services, and a variety of other critical functions that allow them to go about their daily lives and be productive. Still, at the core of it all sits the data center, churning.
Data center consolidation must never negatively impact the user experience. Quite the opposite: a good consolidation project should actually improve overall performance and how the user connects. New technologies allow you to dynamically control and load-balance where users get their resources and data. New WAN control mechanisms allow for the delivery of rich resources from a variety of points. For the end user, the entire process is completely transparent. For the data center, you reduce resource requirements by leveraging cloud, convergence, and other optimization tools.
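The dynamic, transparent routing described above can be sketched in a few lines: a controller picks the lowest-latency site that passes health checks, and the user never sees the decision. The site names, latency figures, and health set below are made up for illustration.

```python
# Sketch: latency-based site selection, the kind of decision a WAN
# controller or global load balancer makes on behalf of a user.
# All site names and numbers are hypothetical.

def best_site(latency_ms: dict, healthy: set) -> str:
    """Pick the lowest-latency site among those passing health checks."""
    candidates = {site: lat for site, lat in latency_ms.items() if site in healthy}
    if not candidates:
        raise RuntimeError("no healthy sites available")
    return min(candidates, key=candidates.get)

latencies = {"dc-east": 18.0, "dc-west": 42.0, "cloud-region-1": 25.0}
print(best_site(latencies, healthy={"dc-west", "cloud-region-1"}))
# dc-east is lowest-latency but fails health checks here, so cloud-region-1 wins
```

When a site drains for maintenance or consolidation, it simply drops out of the healthy set and users are routed elsewhere with no change on their end.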
Moving forward, careful control of data center operations will mean involving users and the business process. It also means that data center managers must look at new options to consolidate their data centers while still supporting next-gen use-cases.