Data centers and other cloud computing operations are now thought to account for up to 1% of global power use. The carbon expended in running these massive server farms, and especially in cooling them, is far from insignificant: roughly 50% of a data center's electricity use is thought to go to basic operational costs, and up to 40% to cooling.
Data centers are searching high and low for solutions, from leveraging more renewable energy to submerging entire facilities under the sea to save on cooling costs.
Some of the most economical and practical solutions involve using artificial intelligence to locate and correct inefficiencies. A report by Gartner estimates that AI will be operational in half of all data centers within the next two years; a 2019 report by IDC suggests that threshold may already have been crossed. With workloads set to increase 20% year over year, the problem is an urgent one.
Ian Clatworthy, director of data platform product marketing at Hitachi Vantara, and Eric Swartz, VP of engineering for DataBank, speak about the possibilities and limitations of AI solutions in data centers.
Collecting the Proper Data
To create and calibrate useful AI instruments, data centers must collect and input the proper data. This has proven challenging because certain types of data that have not historically been useful in day-to-day operations have simply been ignored. Some data may be collected but go unused; other data is not collected at all, meaning operators must start from scratch or extrapolate from what already exists.
Necessary hardware data includes the available storage, the ease of access, the number of machines running at a given time, and the machines to which traffic is directed under any given circumstance. Data on the energy expended powering machines and cooling them is also essential, as is related data on environmental conditions inside and outside the center.
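To make the categories above concrete, here is a minimal sketch of what a single telemetry record feeding such an AI model might look like. All field names and values are illustrative assumptions for this article, not any vendor's actual schema.

```python
# Hypothetical telemetry record covering the data categories described above:
# hardware state, energy draw for compute and cooling, and environmental
# conditions. Field names are illustrative, not a real vendor schema.
from dataclasses import dataclass


@dataclass
class TelemetrySample:
    storage_available_tb: float  # available storage capacity
    machines_running: int        # machines powered on at sample time
    traffic_target: str          # machine currently receiving routed traffic
    compute_kw: float            # energy drawn powering the machines
    cooling_kw: float            # energy drawn by the cooling plant
    inlet_temp_c: float          # environmental condition inside the center
    outdoor_temp_c: float        # environmental condition outside the center

    def cooling_fraction(self) -> float:
        """Share of total electrical draw spent on cooling."""
        total = self.compute_kw + self.cooling_kw
        return self.cooling_kw / total if total else 0.0


sample = TelemetrySample(120.0, 48, "rack-07", 300.0, 200.0, 24.5, 31.0)
print(round(sample.cooling_fraction(), 2))  # 200 / 500 -> 0.4
```

A derived metric like `cooling_fraction` mirrors the article's point that cooling can consume up to 40% of a facility's power; an AI system would watch how such ratios shift across thousands of samples to flag inefficiencies.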