The tech staffing crisis has been an ongoing problem for large enterprises, and with a record number of Americans leaving their jobs in 2021, it is reaching critical levels. Research from Korn Ferry shows that the talent shortage could leave $8.5 trillion in revenue unrealized by 2030. At the same time, as edge computing and IoT devices demand more bandwidth and multiply the access points touching a network, data center management is becoming increasingly complex.
Simplifying data center automation so that it is accessible to new staff members is critical to keeping data centers running without interruption.
In their quest for greater speed and optimization, technology leaders have historically focused on application requirements and performance above all other concerns. As a result, the underlying infrastructure was bent and molded to fit whatever the application requirements were. This bespoke infrastructure worked for a time.
However, in a modern environment affected by the rise of video, hybrid work and IoT devices, there has been an explosion in users (both people and devices), applications, data, domains (private and public), and, due to the pandemic, the locations over which all of this must work. All of these factors compound to change the operational scale requirements. It’s no longer feasible to maintain bespoke operations without a huge increase in cost.
As a result, leaders have turned to the cloud. Cloud providers have prioritized building a sustainable infrastructure at scale. They forced applications to conform to standard architectures so that operations could scale. The economics of managing these vast cloud properties are consequently far better than those of any reasonably complex enterprise. However, moving to cloud computing presents its own challenges regarding ownership, security and compliance.
So, what should IT leaders do? They need to build architecturally sound data centers. That starts with open protocols that have been tested in the most demanding environments available – specifically, EVPN and BGP. But building with better frameworks is only step one. Streamlining setup, design and management via the operations layer is critical as well.
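To illustrate the kind of standard building block involved, here is a minimal sketch of a BGP EVPN peering in FRRouting-style syntax. The AS numbers and addresses are hypothetical, and a production fabric would add route reflectors, VNI mappings and policy on top of this:

```
router bgp 65001
 neighbor 10.0.0.2 remote-as 65002
 !
 address-family l2vpn evpn
  neighbor 10.0.0.2 activate
  advertise-all-vni
 exit-address-family
```

Because both EVPN and BGP are open, vendor-tested standards, the same handful of stanzas describes the fabric regardless of whose hardware terminates the session.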
One of the major factors complicating data center management is a lack of reliability. Networks are notoriously fragile, and operators are conservative about change. The interfaces needed to automate changes in data center networks, which can radically simplify and streamline processes for both new and existing employees, have existed for almost two decades, but IT leaders have been reluctant to adopt them for fear of potential problems.
If operations could trust that changes would always work as intended, or at least not cause problems, they wouldn't need to be beholden to change controls that slow processes down, complicate procedures for new employees and make it harder to leverage AI tools. IT leaders who want to improve efficiency and speed must understand that building out their operations layer to prioritize reliability above all else is step one. Many data centers freeze changes over major holidays to avoid complications – but nothing is slower than stopping completely!
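One way to earn that trust is to express changes as data and validate them against the running state before anything is committed. The sketch below is a hypothetical, simplified model of that validate-before-commit pattern; real deployments would push the payload to devices over a standard interface such as NETCONF rather than mutating a dictionary:

```python
# A minimal sketch of validate-before-commit change automation.
# The change format, state model and checks here are illustrative
# assumptions, not any vendor's actual API.

def build_vlan_change(vlan_id: int, name: str) -> dict:
    """Describe the intended change as structured data, not ad-hoc CLI."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError(f"VLAN {vlan_id} out of range")
    return {"op": "create-vlan", "vlan_id": vlan_id, "name": name}

def validate(change: dict, running_state: dict) -> list:
    """Return a list of problems; an empty list means safe to commit."""
    problems = []
    if change["vlan_id"] in running_state.get("vlans", {}):
        problems.append(f"VLAN {change['vlan_id']} already exists")
    return problems

# Usage: commit only when validation passes.
state = {"vlans": {100: "mgmt"}}
change = build_vlan_change(200, "storage")
issues = validate(change, state)
if not issues:
    state["vlans"][change["vlan_id"]] = change["name"]
```

When every change is checked this way before it touches the network, change windows and holiday freezes become less necessary, because the validation step, not the calendar, is what guards reliability.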
Technology requirements should also be examined closely by leaders looking to improve and simplify operations. If the current supply chain mess has taught us anything, it's that we need choice. The problem with choice, from a data center perspective, is that it comes with a cost. If each supplier of hardware and software in a data center brings its own management stack, staffed by its own operations team, costs scale linearly with the number of suppliers. This is not a sustainable structure, and it's a big reason many multivendor strategies have been difficult to execute in the past.
However, vendors are beginning to design software solutions that work regardless of whose boxes and switches a data center uses. If enterprises are intelligent and thoughtful about how they design their operations – prioritizing reliability over speed and infrastructure over applications – and intentional about whom they choose to work with, they can find the right mix of vendors to suit their needs. But those initial steps are critical.
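The idea behind such vendor-neutral software can be sketched in a few lines: declare the intent once, then render it into each supplier's syntax. The vendor names and CLI snippets below are illustrative placeholders, not exact commands from any real platform:

```python
# A hedged sketch of vendor-neutral operations: one declared intent,
# rendered into per-vendor configuration. "vendor_a" and "vendor_b"
# are hypothetical; the snippets only approximate real CLI dialects.

def render_vlan(vendor: str, vlan_id: int, name: str) -> str:
    """Translate a single VLAN intent into a vendor's config syntax."""
    templates = {
        "vendor_a": f"vlan {vlan_id}\n name {name}",
        "vendor_b": f"set vlans {name} vlan-id {vlan_id}",
    }
    if vendor not in templates:
        raise ValueError(f"unsupported vendor: {vendor}")
    return templates[vendor]

# The same intent drives every device, so adding a supplier means
# adding a template, not hiring a new operations team.
configs = {v: render_vlan(v, 300, "iot") for v in ("vendor_a", "vendor_b")}
```

The point of the design is that operational cost stops scaling with the number of suppliers: the operations team maintains one intent model, and the per-vendor rendering is the only part that varies.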
Data centers of 2022
Data centers are becoming too complex and too distributed to stick to old-fashioned operating processes and disjointed management tools. And as more and more data centers open their doors to employees without traditional tech experience, they will need to rely on automation to manage the day-to-day. To build a data center that can be run without experts, leaders need to rethink what it means to have an "expert data center."
Simplifying processes, focusing on reliability instead of custom functionality, and leaning on the newest technologies – with an open, extensible fabric to simplify and automate operations – can ease the staffing crisis and make experienced technicians' lives easier, unifying the day-to-day work while preserving the flexibility to optimize for the physics and the economics of each site.
Mike is VP of Data Center Product Management at Juniper Networks. Mike spent 12 years at Juniper in a previous tour of duty, running product management, strategy, and marketing for Junos Software. In that role, he was responsible for driving Juniper's automation ambitions and incubating efforts across emerging technology spaces (notably SDN, NFV, virtualization, portable network OS, and DevOps).