(Photo by Sean Gallup/Getty Images)

“Right-Sizing” The Data Center: A Fool’s Errand?

Overprovisioned. Undersubscribed. Those are some of the most common adjectives people apply when speaking about IT architecture or data centers. Both can cause operational problems, ranging from outages to milder reliability issues in the mechanical and electrical infrastructure. The simple solution to this problem is to “right-size your data center.”

Unfortunately, that is easier to say than to actually do. For many, the quest to right-size turns into an exercise akin to a dog chasing its tail. So, we constantly ask ourselves the question: Is right-sizing a fool’s errand? From my perspective, the process of right-sizing is invaluable; the process provides the critical data necessary to build (and sustain) a successful data center strategy.

When it comes to right-sizing, the crux of the issue always comes down to which IT assets are being supported and which applications are required to operate the organization. However, given the variability of compute load, the ability to load-balance and shift loads within the data center without any disruption to operations, and even the ability to direct these IT loads to other data centers, picking the size of the mechanical and electrical infrastructure is the real challenge.

When it comes to poor performance of IT applications, too often the knee-jerk reaction is to “throw hardware” at the problem. This becomes a challenge for the facilities team, as we end up chasing phantom IT loads. Moreover, when identifying the IT load for a data center, whether a build or a colo, the IT architecture is often sloppy and over-projected. Facilities engineers, knowing this, over-project the mechanical and electrical infrastructure, which only exacerbates the problem.

For example, my team was recently commissioned to analyze an application that was underperforming. Users were complaining of slow response times, an inability to use the application during peak load, and general system underperformance. Before we could even complete our analysis, the prevailing opinion in the IT department was that the system was under-provisioned from a server standpoint and that we were just there to validate that assumption. However, when the analysis was completed, the results told a different story. From a server standpoint there was plenty of capacity. The root cause resided in how the application used the memory available to the system: paging to spinning disk at a slower bit rate was what degraded the end-user experience. The issue really boiled down to how the virtual machine was configured. The analysis proved worthwhile, as it kept IT from throwing additional hardware at the issue.

This example is all too common in the industry: slow application? Throw hardware at it. But IT isn’t always to blame. Data center architects can also overprovision mechanical and electrical systems. Granted, we get our data from IT and sometimes the load never shows up, but it should be our passion to make our big-M, big-E systems flexible, scalable and able to handle low-load conditions. The fault can reside on both sides of the table (IT and facilities) if we do not design for these variations and changes in technology.

Reliability is almost always the number one goal, but efficiency of operations is a close second. When it comes to efficiency, the biggest part of the equation is right-sizing the equipment to match the load. But you also need to factor in growth potential. Not provisioning enough capacity in your data center (IT or facilities) and then being forced into an early, unplanned capital expenditure could be fatal to an organization.

The same can be said if you are heading into a colo solution. The process of right-sizing allows the colo to plan better and ensures that you are not reserving capacity you may never use. This type of over-provisioning hurts not only your bottom line, but also leaves the colo with stranded power and/or space.

When it comes to approaching the right-sizing process, here are some key steps to consider:

#1 Identify and Assess

Get the IT inventory. Do an analysis on this inventory. Go beyond the name-plate rating; do your homework on how it’s truly operating. Assess how IT plans to use this load and know the applications being demanded by that specific data center’s functions.
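Going beyond the nameplate rating can be sketched with a little arithmetic. The following is a minimal, hypothetical illustration (all asset names and wattages are made up) of how summing measured draw instead of nameplate ratings changes the load you would provision for:

```python
# Illustrative sketch: compare nameplate ratings against measured peak draw
# for a hypothetical IT inventory. All assets and numbers are invented.

inventory = [
    # (asset, nameplate watts, measured peak watts)
    ("web-servers", 500, 210),
    ("db-cluster", 800, 460),
    ("storage-array", 1200, 700),
]

nameplate_total = sum(nameplate for _, nameplate, _ in inventory)
measured_total = sum(measured for _, _, measured in inventory)

print(f"Nameplate total: {nameplate_total} W")
print(f"Measured peak total: {measured_total} W")
print(f"Nameplate overstates load by {nameplate_total / measured_total:.1f}x")
```

In this invented example the nameplate sum overstates the real peak by nearly 2x, which is exactly the kind of gap that leads facilities teams to over-project plant capacity.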

#2 Know your Data Center Architecture

If this is your data center, understand the minimum load required to provide a stable environment from both IT and facilities perspectives. Know the bare minimum for IT requirements, know the maximum, and work within those bounds. And by all means, collaborate with the teams responsible for those decisions. Too many times we see over-provisioned chiller plants that cannot run stably at low load, especially in a dual-path configuration. This can cause all sorts of issues, from corroded pipes to frozen cooling towers. The same goes for generators running at low load; these situations can be detrimental to a data center’s reliability.
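The idea of working within minimum and maximum bounds can be expressed as a simple check. This is a sketch only; the 30% turndown floor is an assumed figure for illustration, not a vendor specification:

```python
# Illustrative sketch: check a projected IT load against the assumed stable
# operating window of the supporting plant.

def check_operating_window(projected_kw, plant_capacity_kw, min_load_fraction=0.3):
    """Flag loads outside the plant's assumed stable range.

    min_load_fraction models turndown limits such as chiller low-load
    instability or generator low-load operation (assumed 30% here).
    """
    floor = plant_capacity_kw * min_load_fraction
    if projected_kw < floor:
        return f"Below stable minimum ({floor:.0f} kW): risk of low-load issues"
    if projected_kw > plant_capacity_kw:
        return "Above capacity: plant is undersized"
    return "Within stable operating window"

print(check_operating_window(150, 1000))   # day-one load far below turndown
print(check_operating_window(600, 1000))   # load comfortably inside the window
```

A day-one load of 150 kW against a 1 MW plant trips the low-load warning, which is the dual-path, over-provisioned-chiller scenario described above.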

#3 Know your Colo

If it’s a colo situation, understand the colo’s system limitations at the low end. This is a question that is too infrequently asked; many times all that is asked or considered is the maximum density allowed. Asking it will no doubt impress the colo, and it also provides an opportunity for your team to work with them on a scalable contract. In any case, be sure to set up a contract that tracks your actual demand.
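The cost of a contract that does not track actual demand is easy to quantify. Here is a hedged back-of-the-envelope sketch; the capacities and the $/kW/month rate are invented for illustration:

```python
# Illustrative sketch: annual cost of reserving colo capacity that the
# load never uses. All figures are hypothetical.

reserved_kw = 500           # contracted capacity
actual_peak_kw = 280        # what the load actually reaches
price_per_kw_month = 150.0  # assumed reservation rate, $/kW/month

stranded_kw = reserved_kw - actual_peak_kw
stranded_cost_per_year = stranded_kw * price_per_kw_month * 12

print(f"Stranded capacity: {stranded_kw} kW")
print(f"Annual cost of stranded reservation: ${stranded_cost_per_year:,.0f}")
```

Under these assumed numbers, the unused reservation quietly costs hundreds of thousands of dollars a year, in addition to the power and space the colo cannot sell to anyone else.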

#4 Think about Efficiency

It’s not here yet, but the proverbial target will be on data centers to reduce energy; arguably, that time has already arrived. It all begins with IT provisioning and matching the infrastructure to that load. The IT equipment in our data centers has some of the best technology to “scale” load (think of it as a VFD for IT processing). Big-M and big-E equipment is starting to follow suit as we adopt variable refrigerant flow technology, improved IGBT technology and DC power provisioning.

Despite seeming like a fool’s errand, right-sizing your data center is a critical step before determining your strategy, whether it’s build, colo or cloud. And while right-sizing can help provide a more efficient operation, it is also critical to ensuring the overall reliability of data center operations.

About the Author:

Tim Kittila is Parallel Technologies’ (www.ptnet.com) Director of Data Center Strategy. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data center, whether it is a privately-owned data center, colocation facility or a combination of the two. Earlier in his career at Parallel Technologies Kittila served as Director of Data Center Infrastructure Strategy and was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.

