Michael McNerney is VP of Marketing and Network Security at Supermicro.
The short answer: Yes.
Adoption of new technologies, like smartphones and wearables, may have slowed down significantly in the last few years, but data usage is only continuing to grow—massively. In 2012, there were only 500,000 data centers worldwide to handle global traffic, but today there are more than 8 million, according to IDC. The rapid rise in smartphone usage, IoT adoption, and big data analytics has led to massive growth in data centers, and that growth comes with a cost.
Every year, millions of data centers worldwide are purging metric tons of hardware, draining country-sized amounts of electricity, and generating as much carbon as the global airline industry. Technological advancements are difficult to forecast, but several models predict that data center energy usage could engulf over 10% of the global electricity supply by 2030 if left unchecked. Similar increases in greenhouse gas emissions and e-waste would follow. Data center researchers, including Britain’s foremost expert Ian Bitterlin, note that the amount of energy used by data centers continues to double every four years.
Additionally, Informa recently surveyed hundreds of IT leaders on their data center practices, and the findings are intriguing. While data centers use 3 percent of the worldwide electrical supply, energy efficiency ranked only fourth on the list of priorities when building or leasing a new data center. Furthermore, most respondents did not know their data center’s Power Usage Effectiveness (PUE), the primary measure of data center efficiency, and often kept their data centers at needlessly cold temperatures, wasting large amounts of power.
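For readers unfamiliar with the metric, PUE is simply the ratio of a facility's total energy draw to the energy that actually reaches the IT equipment; a value of 1.0 would mean every watt goes to computing. A minimal sketch of the calculation (the function name and example figures are illustrative, not from the survey):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. Values closer to 1.0 are more efficient."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh while its servers consume 1,000 kWh
# has a PUE of 1.5: a third of the power goes to cooling, power
# conversion, lighting, and other overhead rather than computing.
print(pue(1500, 1000))  # 1.5
```

Operators who don't track this number have no baseline against which to judge whether cooling changes are actually saving energy.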
Altogether, this paints a challenging picture for the future of our environment. Luckily, some forward-thinking industry leaders have been innovating their way around this conflict.
The long answer: Not anymore.
For the last half-decade, the U.S. Department of Energy found that rapidly increasing Internet traffic and data loads were being countered by a wide swath of new technologies and designs, limiting increases in data center energy consumption. The Lawrence Berkeley National Laboratory estimated that if 80 percent of servers in the U.S. were moved over to optimized hyperscale facilities, their energy usage would drop by 25 percent.
For the enterprises that don’t need or can’t afford to establish a hyperscale data center, a new category of resource-optimized systems has arisen on the market. In the last few years, many new server technologies and data center architectures have focused on maximizing resources and efficiency while minimizing energy needs. These solutions look to new design improvements, rethinking how standard data centers are built to achieve breakthrough performance and efficiency.
One big area of improvement is to develop superior cooling techniques. A popular solution is to simply locate data centers in cold or windy climates. Another is leaving fewer servers on so as not to waste time idling: Facebook invented a system called Autoscale in 2014 that reduces the number of servers that need to be on during low-traffic hours, leading to power savings of about 10–15 percent. Some companies, like Google, have turned to AI to optimize their internal cooling systems by matching weather and operational conditions, reducing cooling energy usage by almost 40 percent.
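The idea behind a system like Autoscale can be sketched simply: size the active server pool to current traffic plus some headroom, and let the rest idle in a low-power state. The function below is a hypothetical illustration of that principle, not Facebook's actual implementation; the capacity and headroom figures are assumptions.

```python
import math

def servers_needed(requests_per_sec: float,
                   capacity_per_server: float,
                   headroom: float = 1.2,
                   minimum: int = 2) -> int:
    """Estimate how many servers to keep active for the current load,
    with a safety margin for traffic spikes. Servers beyond this count
    can be idled to save power."""
    needed = math.ceil(requests_per_sec * headroom / capacity_per_server)
    return max(needed, minimum)

# At a busy hour the pool stays large; overnight it shrinks to the
# floor, saving the power that fully awake idle servers would burn.
print(servers_needed(800, 100))  # 10
print(servers_needed(150, 100))  # 2
```

The savings come from the gap between an idle server's draw (often a large fraction of its peak draw) and a sleeping one's, which is why trimming the active pool during low-traffic hours yields the roughly 10–15 percent figure cited above.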
A contrary approach growing in popularity is to simply design server systems to perform at higher temperatures. Instead of cooling the systems to a certain temperature, newer hardware can run at higher temperatures without impacting reliability. Naturally this requires significantly less cooling—and thus less electricity—for the systems.
Another area of focus is making power usage more efficient. A recent study from ControlUp found that up to 77 percent of the 140,000 servers it researched were overprovisioned with hardware, which increased the power they consumed when active. To counter this issue, pooled resources can be incorporated into the design, allowing computing resources to be shared across multiple servers rather than being locked to each individual device.
Another recent innovation, disaggregated system design, breaks the 3-5 year “forklift upgrade” data center model by enabling a modular, sustainable infrastructure in which only the lagging elements of the system are upgraded. By composing a server from independently upgradeable sub-systems, disaggregated designs allow enterprises to be much more selective and efficient, preserving hardware that doesn’t need to be replaced. For example, Intel has been heavily deploying disaggregated system designs with its latest generation of CPUs, contributing significantly to e-waste reduction.
The Story Isn’t Over Yet
NASA’s Center for Environmental Research has been implementing data center solutions that are in line with green computing efforts. Lesley Ort from NASA’s Global Modeling and Assimilation Office noted that “[NASA] doesn’t want to be creating the problem of greenhouse gas pollution at the same time that we are studying it.” While organizations like NASA are making strides in researching and tackling the environmental dilemma of data centers, many technology companies have yet to come to grips with the environmental impact of their products and services.
The most important next step right now is simply education: getting companies to realize the importance and benefits of more eco-friendly data centers. The technologies to counter this growing data center dilemma are available and ready to use, and they deliver the double advantage of optimizing performance while reducing environmental impact. Our data centers don’t have to harm the environment if we take the proper actions today.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.