Critical Thinking, a weekly column on innovation in data center infrastructure.
The idea of a “lights-out” data center is not new, but it is evolving. Operators such as Hewlett Packard Enterprise and AOL have long been proponents of remote monitoring and management as a way to reduce, or entirely replace, dedicated on-site staff. The best-known current advocate is probably colocation provider EdgeConneX, which has integrated a lights-out approach into the fabric of its business.
However, despite the efficiency benefits, lights-out, or “dark,” sites are still viewed with skepticism in some quarters; not having staff readily on hand to deal with outages is deemed just too high-risk. Data center certification body Uptime Institute, for example, recommends that one to two qualified staff be on-site at all times to support the safe operation of a Tier III or IV facility.
But while lights-out may be a niche option now, developments in remote monitoring, analytics, AI, and robotics could eventually see it taken much further.
These technologies combined with the elimination of all concessions to human comfort will enable ever more efficient and available data centers, some experts argue. Technology analyst firm 451 Research recently coined the phrase “Datacenter as a Machine” to define unstaffed facilities that are primarily designed, built, and operated as units of IT rather than buildings. “As data centers become more complex, with tighter software-controlled integration between components, they will increasingly be viewed as complex machines rather than real estate,” the analyst group argues.
A facility designed and optimized exclusively for IT, rather than human operators, could enjoy a range of advantages over more conventional sites:
Improved cooling efficiency: There is good evidence that facilities could be operated at higher temperatures and humidity without impacting the reliability and performance of IT equipment. Progressive operators have made efforts to move into the upper reaches of ASHRAE’s recommended, or even allowable, temperature ranges. But the approach isn’t more pervasive, due in part to its impact on human comfort. IT equipment may be functional at 80°F and up, but that is not a pleasant working environment for staff. Other highly efficient forms of cooling could make things even more uncomfortable. For example, close-coupled cooling technologies, such as direct liquid immersion, capture more than 90 percent of the IT heat load in a dielectric fluid but make no concession to the human operator. For the technology to become widely deployed in conventional sites, additional, inefficient perimeter cooling would be required in some locations just to keep the operators cool.
Better capacity management: Everything from rack height to access-aisle width is designed to make it easier for staff to install and maintain equipment rather than to optimize for efficiency. But if this space requirement were eliminated, equipment (power and cooling permitting) could be fitted into a much smaller footprint with, for example, potentially much taller, robot-accessible racks.
Reduced downtime and improved safety: According to a 2016 study by the Ponemon Institute, human error was the second-highest cause (behind power chain failures) of data center downtime. Electrocution, via arc flash or other causes, also remains a real and present threat without the correct safety precautions. Hypoxic fire suppression, which lowers oxygen levels, likewise improves fire safety but again makes for a difficult working environment. A facility that was essentially off-limits to all but periodic or emergency access by qualified specialists could reduce the potential for human error and minimize the risk of injury to inexperienced staff.
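To make the cooling point above concrete, the ASHRAE envelopes can be sketched as a simple setpoint check. The range endpoints below are the commonly cited figures for ASHRAE’s recommended envelope and its Class A1 allowable envelope; treat them as illustrative assumptions for the sketch, not as design guidance.

```python
# Classify a data hall supply-air temperature against ASHRAE envelopes.
# Endpoints are the commonly cited recommended (18-27 C) and Class A1
# allowable (15-32 C) figures, used here purely for illustration.
RECOMMENDED_C = (18.0, 27.0)
ALLOWABLE_A1_C = (15.0, 32.0)

def classify_supply_temp(temp_c: float) -> str:
    """Return which ASHRAE envelope a supply-air temperature falls into."""
    lo_r, hi_r = RECOMMENDED_C
    lo_a, hi_a = ALLOWABLE_A1_C
    if lo_r <= temp_c <= hi_r:
        return "recommended"
    if lo_a <= temp_c <= hi_a:
        return "allowable"
    return "out of range"
```

The 80°F (about 26.7°C) figure mentioned above still sits inside the recommended envelope; the “upper reaches” operators are pushing toward lie in the allowable band beyond it.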
But if on-site staff were effectively designed out of facilities, who or what would replace them? The kind of pervasive remote monitoring platforms already used at lights-out sites, such as EdgeConneX’s edgeOS, would likely play an instrumental role. Emerging tools such as data center management as a service (DMaaS), effectively cloud-based data center infrastructure management (DCIM) software, could also enable suppliers to take remote control (including predictive maintenance) of specific equipment or even an entire site. Eventual integration with AI and machine learning could lead to more IT and facilities tasks being automated and self-regulated. Robotics is also likely to play a greater role in future data center management. Indeed, if facilities are designed to optimize space, so-called dexterous robots may be the only way to access some parts of the site.
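As a rough illustration of the kind of rule such monitoring platforms apply before escalating to a human, here is a minimal sketch: flag a sensor reading that drifts several standard deviations from its recent history. The function name and thresholds are hypothetical, not part of edgeOS or any DMaaS product.

```python
from statistics import mean, stdev

def flag_anomaly(history: list[float], current: float, sigmas: float = 3.0) -> bool:
    """Return True when `current` deviates more than `sigmas` standard
    deviations from the recent `history` of readings: the kind of simple
    rule a lights-out monitoring platform might apply before paging a
    remote operator or dispatching a field technician."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu  # flat history: any change is anomalous
    return abs(current - mu) > sigmas * sd
```

In practice, platforms layer far richer models (trend forecasting, cross-sensor correlation) on top of rules like this, but the escalation pattern, machine first, specialist second, is the same.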
But despite the potential, a number of impediments will need to be overcome before unstaffed data centers become widely adopted. The biggest of these is obviously the perception that such designs would introduce additional risk. As such, early adopters would probably be limited to companies that are already comfortable with some form of lights-out approach. Facilitating technologies, such as DMaaS, AI-driven DCIM, and advanced robotics, are also still very nascent.
But there are still good reasons to think that, in specific use cases, unstaffed sites will eventually become the norm. For example, new micro-data center form factors to support edge computing are expected to proliferate in the next five to ten years and are likely to be monitored remotely, requiring only periodic visits from specialist maintenance staff.
The prognosis doesn’t necessarily have to be all bad for facilities staff. To be sure, there will be fewer in-house positions in the future, but specialist third-party facilities management services providers, capable of emergency or periodic visits, could expand headcount to meet the expected growth in new colocation and cloud capacity.
Ironic as it may sound, the future looks rather bright for the next generation of lights-out data centers.