Boost Rack Densities Without Racking Your Brain
Gary Bunyan is Global DCIM Solutions Specialist at iTRACS Corporation, a Data Center Infrastructure Management (DCIM) company. This is the seventh in a series of columns by Gary about "the user experience." See Gary's previous columns: Unlocking the Data, DCIM Simplified, Why Flexibility in a DCIM Tool is So Important, and 3-D Is Great But Insight Is What Counts.
What is the value of using real-time power data vs. faceplate or derated estimates in Data Center Infrastructure Management (DCIM)? It seems I have heard that question quite a bit recently. Customers ask about this because there’s been a lot of buzz about real-time data and they want to know – how significantly does it help you manage your physical ecosystem?
I can tell you that real-time data makes a measurable impact. I'm seeing it firsthand with a U.S.-based media company that is using our partnership with Intel to bring real-time data into its DCIM deployment.
Intel Data Center Manager provides real-time data about power and environmentals at the device level. It's ideal for monitoring and managing large blocks of servers and other intelligent devices in mission-critical infrastructure. Because Intel goes right to the device to gather the data, it's agentless and extremely cost-efficient: there's no need for expensive intelligent power strips or other hardware-based intelligence. This keeps costs low, and because the data is collected directly from the device, it gives you a more accurate view of what is happening. You can see the power draw at the individual device level, not just at the power strip or branch circuit.
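Intel DCM's own interfaces aren't shown here, but the flavor of agentless, device-level collection can be illustrated with the standard DCMI power-reading query that most modern server BMCs expose over IPMI. A minimal sketch, assuming ipmitool is installed; the hosts and credentials are placeholders:

```python
import re
import subprocess

# Placeholder BMC addresses; a real deployment would pull these from inventory.
SERVERS = ["10.0.0.11", "10.0.0.12"]

def read_power_watts(host: str, user: str, password: str) -> int:
    """Query one server's instantaneous power draw via the standard DCMI command."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if match is None:
        raise ValueError(f"unexpected DCMI output from {host}")
    return int(match.group(1))

for host in SERVERS:
    print(host, read_power_watts(host, "admin", "password"), "W")
```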
Intel's live data is automatically and continuously fed into the iTRACS DCIM solution and modeled to show its contextual impact on the infrastructure through Interactive 3-D Visualization. The result? Data from Intel does not remain isolated at the device level. Rather, it's given context within the interconnected physical infrastructure, unlocking its true value. The value of live data is not in the data itself; it's in how the data is used to create a contextual understanding of the data center, one of the most complex entities on earth, featuring millions of points of interconnectivity between assets, systems, and resources, always in a fluid state of change and evolution.
This is where the DCIM vendor must have expertise in two key areas:
- Interconnectedness – an intuitive understanding of how every asset impacts every other asset in a complex web of inter-relationships and inter-dependencies
- Visualization – the ability to visualize this interconnectedness in a virtual model that makes it instantly meaningful, understandable, and actionable
Expertise in interconnectedness and visualization works together to unlock the data that had been buried at the device level, giving you a clear, holistic picture of the entire interconnected infrastructure:
- What is happening in terms of power usage and inlet/outlet temperatures at each server: not guesstimates, but live data (the sketch after this list shows what such per-server readings might look like)
- The alignment of physical capacity (servers and other IT assets) to logical demand (throughput required by the applications) and how that impacts energy consumption and the power chain
- How to leverage real-time visibility into the infrastructure to improve energy efficiency, optimize capacity utilization, and conduct other value-generating DCIM tasks
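To picture what the first bullet means in practice, here is a minimal sketch of live, per-server readings once they're unlocked from the device level. The record layout and numbers are hypothetical; the 18-27 C band is ASHRAE's recommended inlet envelope.

```python
from dataclasses import dataclass

# Hypothetical per-server telemetry record; field names are illustrative only.
@dataclass
class ServerReading:
    name: str
    power_w: int      # live power draw from the device, not a faceplate guess
    inlet_c: float    # inlet air temperature
    outlet_c: float   # outlet air temperature

readings = [
    ServerReading("web-01", 312, 22.5, 34.0),
    ServerReading("web-02", 298, 28.1, 39.5),  # inlet running warm
]

# Flag any server whose inlet temperature drifts outside ASHRAE's
# recommended 18-27 C envelope.
for r in readings:
    if not 18.0 <= r.inlet_c <= 27.0:
        print(f"{r.name}: inlet {r.inlet_c} C is outside the recommended range")
```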
Here’s what I mean:
Meeting the capacity needs of the business
Let's say one of your business clients has a new initiative and they come to you with a request for more capacity, but your hands are tied: you cannot bring additional power into your facility. You must add the capacity within your existing footprint. This means finding racks with available space and power and then filling them with more servers, safely, without putting power or cooling at risk, because if you overcommit, the whole environment could come down.
Here’s how real-time data can help you meet the challenge:
Confirming the maximum number of servers per rack
(1) You establish a threshold of <45% of available power capacity per rack so full redundancy is assured (staying under half of capacity means either redundant feed can carry the entire load on its own if its partner fails, with a little margin to spare).
(2) FINDING THE RIGHT RACKS: Using live power readings, you determine current power utilization within your existing racks and confirm the remaining power available to you. It turns out you have more racks with “stranded” (available) power than you thought.
(3) CONFIRMING THE RIGHT SERVERS FOR THOSE RACKS: Using hardware profiling, you confirm which server models currently on your floor deliver the best energy efficiency based on live power readings (their actual performance) – these are the models you’ll want to replicate.
(4) You correlate information about both – the racks that you KNOW have available power based on live data, and the servers that you KNOW offer the highest work-per-watt, also based on live data.
(5) What you confirm is this: you can deploy many more servers in these racks than you originally thought possible. While the manufacturer's faceplate values indicate only 3 servers per rack, and derated values imply you could maybe increase that to 5, Intel's real-time data confirms that you can put 8 servers into each rack. Since the servers are drawing less power than you thought, the racks are safe to fill without exceeding the <45% power threshold. (The sketch after this list walks through the arithmetic.)
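To make the arithmetic in steps (1) through (5) concrete, here is a minimal sketch. Every number in it (rack capacity, faceplate rating, derating factor, measured draw, per-rack utilization) is hypothetical, chosen only to reproduce the 3 / 5 / 8 outcome above; in practice the inputs would be your live readings.

```python
# All numbers are hypothetical, chosen to reproduce the 3 / 5 / 8 result above.
RACK_CAPACITY_W = 6000            # power provisioned to each rack
THRESHOLD = 0.45                  # stay under 45% to preserve full redundancy

FACEPLATE_W = 750                 # manufacturer nameplate rating per server
DERATED_W = FACEPLATE_W * 0.70    # a typical rule-of-thumb derating
MEASURED_W = 320                  # live, device-level power reading

budget_w = RACK_CAPACITY_W * THRESHOLD   # 2,700 W of usable budget per rack

# Step 2: find racks with "stranded" power using live utilization readings.
measured_rack_draw_w = {"R01": 900, "R02": 2900, "R03": 1400}
headroom = {rack: budget_w - draw
            for rack, draw in measured_rack_draw_w.items()
            if draw < budget_w}
print("racks with stranded power:", headroom)   # {'R01': 1800.0, 'R03': 1300.0}

# Step 5: how many servers fit in a rack under each power assumption.
for label, per_server_w in [("faceplate", FACEPLATE_W),
                            ("derated", DERATED_W),
                            ("measured", MEASURED_W)]:
    print(f"{label:>9}: {int(budget_w // per_server_w)} servers per rack")
# faceplate: 3, derated: 5, measured: 8
```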
You now have a clear plan of attack.