Boost Rack Densities Without Racking Your Brain

There’s been a lot of buzz about real-time data, and everyone wants to know how significantly it helps you manage your physical ecosystem, writes Gary Bunyan of iTRACS. This column describes how DCIM users can leverage real-time data.

Gary Bunyan is Global DCIM Solutions Specialist at iTRACS Corporation, a Data Center Infrastructure Management (DCIM) company. This is the seventh in a series of columns by Gary about “the user experience.” Gary’s previous columns include: Unlocking the Data; DCIM Simplified; Why Flexibility in a DCIM Tool is So Important; and 3-D Is Great But Insight Is What Counts.

[Photo: Gary Bunyan, iTRACS]

What is the value of using real-time power data versus faceplate or derated estimates in Data Center Infrastructure Management (DCIM)? I’ve heard that question quite a bit recently. Customers ask because there’s been a lot of buzz about real-time data, and they want to know how significantly it helps you manage your physical ecosystem.

I can tell you that real-time data makes a quantitative impact. I’m seeing it firsthand with a U.S.-based media company that’s leveraging our partnership with Intel to utilize real-time data in their DCIM deployment.

Intel Data Center Manager provides real-time data about power and environmentals at the device level. It’s ideal for monitoring and managing large blocks of servers and other intelligent devices in mission-critical infrastructure. Because Intel gathers the data directly from the device, the approach is agentless and extremely cost-efficient – there’s no need for expensive intelligent power strips or other hardware-based intelligence. This keeps costs low, and because the data comes straight from the device, it also gives you a more accurate view of what is happening. You can see the power draw at the individual device level, not just at the power strip or branch circuit.
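To illustrate the difference, here is a minimal sketch contrasting strip-level and device-level visibility. The server names and wattages are invented for illustration; this is not Intel Data Center Manager’s actual API, which exposes readings through its own interfaces.

```python
# Hypothetical device-level power readings for one rack (illustrative values).
rack_readings = {            # watts reported by each server's own sensors
    "server-01": 142.0,
    "server-02": 151.5,
    "server-03": 138.2,
}

# A metered power strip would only show the aggregate draw...
strip_total = sum(rack_readings.values())

# ...while device-level data also exposes each server's individual share.
for name, watts in sorted(rack_readings.items()):
    print(f"{name}: {watts:.1f} W ({watts / strip_total:.0%} of rack)")
print(f"rack total: {strip_total:.1f} W")
```

With only the strip total, an unusually hungry server is invisible; with per-device readings, it stands out immediately.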

Intel’s live data is automatically and continuously fed into the iTRACS DCIM solution and modeled, through Interactive 3-D Visualization, to show its contextual impact on the infrastructure. The result? Data from Intel does not remain isolated at the device level. Rather, it’s given context within the interconnected physical infrastructure, unlocking its true value. The value of live data is not in the data itself – it’s in how the data is used to create a contextual understanding of the data center, one of the most complex entities on earth, featuring millions of points of interconnectivity between assets, systems, and resources, always in a fluid state of change and evolution.

This is where the DCIM vendor must have expertise in two key areas:

  • Interconnectedness – an intuitive understanding of how every asset impacts every other asset in a complex web of inter-relationships and inter-dependencies
  • Visualization – the ability to visualize this interconnectedness in a virtual model that makes it instantly meaningful, understandable, and actionable

Expertise in interconnectedness and visualization works together to unlock data that had been buried at the device level, giving you a clear, holistic picture of the entire interconnected infrastructure:

  • What is happening in terms of power usage and inlet/outlet temperatures at each server – not guesstimates, but live data
  • The alignment of physical capacity (servers and other IT assets) to logical demand (throughput required by the applications) and how that impacts energy consumption and the power chain
  • How to leverage real-time visibility into the infrastructure to improve energy efficiency, optimize capacity utilization, and conduct other value-generating DCIM tasks

Here’s what I mean:

Meeting the capacity needs of the business

Let’s say one of your business clients has a new initiative and comes to you with a request for more capacity – but your hands are tied: you cannot bring additional power into the facility. You must add the capacity within your existing footprint. This means finding racks with available space and power, then filling them with more servers safely, without putting power or cooling at risk. If you over-commission, the whole environment could come down.

Here’s how real-time data can help you meet the challenge:

Confirming maximum number of servers per rack

(1) You establish a threshold of <45% of available power capacity per rack so full redundancy is assured.

(2) FINDING THE RIGHT RACKS: Using live power readings, you determine current power utilization within your existing racks and confirm the remaining power available to you. It turns out you have more racks with “stranded” (available) power than you thought.

(3) CONFIRMING THE RIGHT SERVERS FOR THOSE RACKS: Using hardware profiling, you confirm which server models currently on your floor deliver the best energy efficiency based on live power readings (their actual performance) – these are the models you’ll want to replicate.

(4) You correlate information about both – the racks that you KNOW have available power based on live data, and the servers that you KNOW offer the highest work-per-watt, also based on live data.

(5) What you confirm is this – you can deploy many more servers in these racks than you originally thought possible. While the manufacturer’s faceplate values indicate only 3 servers per rack, and derated values imply you could perhaps increase that to 5, Intel’s real-time data confirms that you can put 8 servers into each rack. Since the servers are drawing less power than you thought, the racks can be filled safely without exceeding the 45% power threshold.
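The arithmetic behind steps (1)–(5) can be sketched as follows. The 5 kW rack feed, the 750 W faceplate rating, the 60% derating, and the 275 W live draw are all assumed numbers, chosen only so the sketch reproduces the article’s 3/5/8 result.

```python
# Assumed rack feed and the article's 45% redundancy threshold.
RACK_CAPACITY_W = 5000
THRESHOLD = 0.45
budget = RACK_CAPACITY_W * THRESHOLD   # 2250 W usable per rack

# Three ways to estimate per-server draw (illustrative wattages).
estimates = {
    "faceplate": 750.0,   # manufacturer's nameplate rating per server
    "derated":   450.0,   # a typical 60% derating of the nameplate
    "live":      275.0,   # actual draw observed via real-time readings
}

# Servers per rack = power budget divided by per-server draw, rounded down.
for basis, watts_per_server in estimates.items():
    print(f"{basis:9s}: {int(budget // watts_per_server)} servers per rack")
```

The faceplate figure yields 3 servers per rack, the derated figure 5, and the live figure 8 – the same 60% uplift over derated values the article describes.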

You now have a clear plan of attack.

Deploying servers based on live data, not guesswork

(6) Before commissioning, you need to be sure. So you run what-if scenarios to confirm maximum rack densities and to predict impacts on power, cooling, space, and connectivity, including aggregated impacts along the power chain – all based on live data, not guesstimates. You confirm there is no adverse impact to filling out the racks.

(7) You create Work Orders for the various operations teams, with visual diagrams showing exact placement of servers and provisioning of power and connectivity. iTRACS outputs these Work Orders automatically. Each work order includes explicit instructions for both power and network connectivity. For power, the work orders graphically show which power strip outlets to plug the left and right power supplies into, and which breaker positions on the PDU to use to maintain phase balance. For network, they detail the entire patching scheme for each port on each server.

(8) You dispatch teams to complete the project free of errors or delays, since the Work Orders tell them exactly what to do.
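A what-if check along the power chain, as in step (6), amounts to projecting each rack’s new load and rolling it up to every upstream element. The topology, ratings, and loads below are invented for illustration; a real DCIM tool models a far richer chain.

```python
# Assumed live average draw per newly added server (illustrative).
PER_SERVER_W = 275.0

# Hypothetical power chain: element -> rated capacity in watts.
power_chain = {"UPS-A": 40000, "PDU-1": 20000, "PDU-2": 20000}
feeds = {"PDU-1": "UPS-A", "PDU-2": "UPS-A"}   # each PDU's upstream UPS

# (rack, feeding PDU, current load in W, servers to add)
racks = [
    ("R01", "PDU-1", 1400.0, 5),
    ("R02", "PDU-1", 1100.0, 6),
    ("R03", "PDU-2", 1250.0, 5),
]

# Project each rack's post-deployment load and aggregate it upstream.
projected = {name: 0.0 for name in power_chain}
for rack, pdu, load, added in racks:
    new_load = load + added * PER_SERVER_W
    projected[pdu] += new_load
    projected[feeds[pdu]] += new_load

# Flag any element that would exceed the 45% redundancy threshold.
for element, rating in power_chain.items():
    util = projected[element] / rating
    status = "OK" if util < 0.45 else "OVER THRESHOLD"
    print(f"{element}: {projected[element]:.0f} W ({util:.0%}) {status}")
```

If every element stays under the threshold, the deployment is safe to commission; any “OVER THRESHOLD” line tells you exactly where the plan must be scaled back.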

The bottom line – 60 percent higher rack densities

Real-time power data gives you new levels of insight and informed decision-making. The benefits, as I’ve said, are palpable:

  • Increased capacity – you now have 60% more servers in your racks than if you had settled for using inaccurate derated values.
  • Higher energy efficiency – you’ve eliminated “stranded power” in under-utilized racks.
  • Extended data center life – you’ve extended the life of the data center and delayed or eliminated the need for capital expenditures.


1. Nameplate values indicate a max of 3 servers in these racks

2. Derated values indicate a max of 5 servers

3. Intel real-time readings show a max of 8 servers – 60% more. (Images courtesy of iTRACS.)

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
