Lisa Rhodes has had a long career with Verne Global.
There are two types of people in this world: those who were excited for report cards at the end of each semester, and those who were not. Which camp you fell into usually depended on how much work and time you put into your courses, and how much attention you paid to the daily grades you received leading up to the end of the semester. Taken together, those things were a telltale sign of your personal success…or failure.
But what if your teachers could predict your success or failure in a course mere weeks into the class? The CIO of Marist College in Poughkeepsie, NY figured out that predictive analytics can tell whether a student is likely to fail a course by the third week of the semester. The system analyzes class performance and online activity, collecting student data from more than a dozen digital sources, including participation in class-related online forums. By analyzing in-class behavior alongside online participation and engagement, teachers can identify potential issues and address them before a student is in danger of failing a course.
Now apply those same predictive analytics to a company. Based on the data-intensive projects planned or underway, such as HPC clusters or data analytics, could a model determine whether the company is at risk of failing? What would the criteria be? Surely something about server density, application security and network reliability. But given the importance of data availability, the uptime factors would need to go beyond the software and the network.
One hidden risk is starting to present itself: power, or more precisely, the stability, reliability and availability of power from the electrical grid. Many CIOs may see this as outside their control, but in reality it needs to be factored into any decision involving the power-hungry, data-intensive applications they are implementing for their businesses.
A Data-Intensive World
Enterprises worldwide are finding long-term, strategic business benefits by better analyzing and extracting more value from the data they create and gather. With data volume expected to roughly double every two years, that data represents trillions of dollars of potential economic value to society. For this reason, the data center is playing a more strategic role in a company's IT strategy, and quick access to critical information is more important than ever before.
The Crippling Cost of a Power Outage
Data centers rely on a continuous feed of power from the electric utility, so for many companies a grid failure has an immediate financial impact. Data center outages are no longer just an inconvenience; they carry a real business cost to the organization. As a result, the demands on, and risks to, the data center are higher than ever before.
Consider the Power Grid Profile of a Data Center Location
While any individual power outage may look random, there is a clear pattern: electricity demand from data centers is rising while the power grids and infrastructure needed to support that growth are falling behind. Many of these grids are running on aging infrastructure and facing increasing reliability issues and cost pressures, as well as a mandate to decarbonize electricity supply resources. As data centers put more stress on already brittle power systems, it's time to ask not only 'will there be enough electricity?' but 'will it be there when my data center needs it?'
So, what is a forward-thinking CIO to do? First, make sure you understand the full scope of the HPC projects currently underway in your company. There may be a special project or two hidden away in another department, and you need to know about them because they now likely affect your budget and data center resources.
Next, get a report card on the power grid for every location where you have data centers. The utility contract may not be part of the CIO's usual purview, but everything the office of the CIO is responsible for, including the business continuity of the organization, relies heavily on the power infrastructure.
Third, consider the cost of downtime for the applications in those data centers in the event of a grid outage, and how that affects your business in terms of operational and opportunity cost. A 2016 survey of 63 US data centers that reported an outage within the previous 12 months put the average cost of a data center outage at more than $740,000, up 7 percent from 2013 and 38 percent from 2010.
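The arithmetic behind that exercise is straightforward to sketch. The figures below (hourly revenue at risk, idled labor cost, recovery effort) are hypothetical placeholders for illustration, not numbers from the survey:

```python
# Rough downtime-cost model; all dollar figures are hypothetical examples.

def outage_cost(duration_hours, revenue_per_hour, idle_labor_per_hour,
                recovery_cost):
    """Estimate the total cost of a single data center outage.

    operational cost = idled staff during the outage + recovery work
    opportunity cost = revenue lost while systems are down
    """
    operational = idle_labor_per_hour * duration_hours + recovery_cost
    opportunity = revenue_per_hour * duration_hours
    return operational + opportunity

# Example: a 4-hour outage at a site supporting $150k/hour of revenue,
# with $10k/hour of idled staff and $100k of recovery effort.
total = outage_cost(duration_hours=4, revenue_per_hour=150_000,
                    idle_labor_per_hour=10_000, recovery_cost=100_000)
print(f"Estimated outage cost: ${total:,.0f}")  # Estimated outage cost: $740,000
```

Even with conservative inputs, the opportunity-cost term usually dominates, which is why the per-incident averages in the survey climb so quickly as businesses become more data-dependent.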
Finally, think about the applications you have running at each location. Some, like financial trading, will dictate location based on latency, resiliency and other requirements, but many won't. Others have high compute requirements but low latency or resiliency needs. Applications such as data analytics, HPC and scientific computing may be ideal candidates to move to a location with a more stable power grid, minimizing your total risk exposure to a fragile or capacity-limited grid.
Going back to the original idea of a model that predicts whether a CIO will succeed or fail with the projects driving the business forward: the model only works if it factors in all the variables, both those specific to the application and the underlying support systems (including power) that make it possible. The CIO at Marist College built a system that looked at a wide variety of factors, both overt and underlying, to identify students at risk. Enterprise CIOs need to do the same.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.