Malicious hacking against enterprise and government computing infrastructure is relentless. Cyberattack statistics are widely available but, in one sense, beside the point: the number of attacks has never stopped increasing, and neither has the severity of the potential damage. Attacks keep getting more sophisticated, malicious hackers are increasingly prone to coordinating their efforts, and hacking tools are more accessible and easier to use than ever.
Data centers and large server clusters are particularly enticing targets. If a hacker's goal is to steal information, data centers are virtual treasure chests of valuable data. If a hacker's goal is to cause disruption, however, then interfering with data centers can be just as damaging as cutting off a public utility or service.
The industry's response is becoming more sophisticated and coordinated as well. Addressing cybersecurity was once left mostly to individual companies, but the National Institute of Standards and Technology (NIST) and the trade group the Open Compute Project (OCP) now play leading roles, encouraging the coordinated development of cybersecurity countermeasures (technologies, techniques, and best practices) and disseminating information about them.
The recommendations from NIST and OCP have proven effective enough that they are now being adopted in contracts for new equipment, conferring de facto standards status on their specifications.
NIST and OCP address how to mitigate both data theft and denial-of-service (DoS) attacks, but with the largest and most advanced data center operators now largely resistant to the former, hackers appear to be attempting the latter more frequently.
Denial of Service
DoS attacks that succeed are usually highly visible, almost always embarrassing, frequently costly, and sometimes dangerous. All of that makes it especially important to be cognizant of the best approaches for fending them off.
A DoS attack can take different forms, but the concept underlying most of them is to corrupt or disrupt some system device, process, or procedure, usually by injecting malicious code. There are plenty of places to hide malicious code. Corrupting an operating system is bad enough, but malware that makes its way into firmware can be particularly insidious.
Data centers can be inoculated against most DoS attacks by protecting several key areas that hackers are known to probe most frequently for vulnerabilities: devices and the code for those devices, along with data addresses and data movement.
The equipment in data centers is perpetually being supplemented or upgraded. Operators add new cards and new drives to servers all the time. It is important to verify that each of these devices is authentic.
The device might be authentic, but it may still have been compromised with malicious code. Therefore, it is critically important to verify the authenticity of device code.
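One common way to verify device code is to hash the firmware image and compare the result against a known-good digest published by the vendor. The following is a minimal sketch of that idea in Python; the digest value and function names are illustrative assumptions, not part of any specific vendor's tooling.

```python
import hashlib
import hmac

# Hypothetical known-good digest for a firmware image, standing in for
# a digest the device vendor would publish (illustrative value only).
GOOD_IMAGE = b"vendor-signed firmware v1.2.3"
KNOWN_GOOD_SHA256 = hashlib.sha256(GOOD_IMAGE).hexdigest()

def firmware_is_authentic(image: bytes, expected_digest: str) -> bool:
    """Hash the firmware image and compare it to the vendor's digest.

    hmac.compare_digest performs a constant-time comparison, avoiding
    information leaks through timing differences.
    """
    actual = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(actual, expected_digest)

# An unmodified image passes; a tampered one fails.
tampered = GOOD_IMAGE + b"\x00malicious payload"
print(firmware_is_authentic(GOOD_IMAGE, KNOWN_GOOD_SHA256))  # True
print(firmware_is_authentic(tampered, KNOWN_GOOD_SHA256))    # False
```

In practice, production systems verify a cryptographic signature over the image rather than a bare hash, so that the reference value itself cannot be swapped out by an attacker.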
Devices that are swapped out of the most advanced data centers may no longer be cutting edge, but they are typically still fully functional, and they often go to other data centers. The next owners should be just as diligent in verifying that those devices and their code are authentic when they receive them.
Every single transaction in a data center is an opportunity to hijack or corrupt data. As data moves, it is important to verify that the addresses the data is being saved to are secure, and that the devices being read from and written to are secure as well.
Lastly, do not neglect the equipment in a data center that is not integral to the main function of processing data. A poorly defended connected device can become a launching point for attacks on other, more critical nodes. A security camera connected via Wi-Fi could be used to flood a financial server with pings, rendering it useless. That is only one example of a DoS attack coming from a seemingly innocent direction. Paying attention to every connection makes for better security.
Cybersecurity starts with a root of trust. The idea is that if you start with a completely trustworthy reference in a cryptographic system, then every check in a chain of checks building off that reference should be trustworthy too.
The process should start with the microprocessors (MPUs) and microcontrollers (MCUs) themselves. MPUs and MCUs should boot from an immutable boot source. Because peripherals are a potential attack vector, it is recommended that peripherals, including debug interfaces, be disabled during the boot process.
The security technology should then verify (or authenticate) each subsequent code block prior to its execution.
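The chain of checks described above can be sketched in a few lines: an immutable first stage holds the digest of the next stage, which in turn holds the digest of its successor, and nothing executes until its digest matches. The stage names and digest placement here are illustrative assumptions, not a description of any particular boot ROM.

```python
import hashlib

# Hypothetical boot stages for illustration.
stage2 = b"second-stage bootloader code"
stage3 = b"operating system kernel code"

# Reference digests fixed ahead of time. In real hardware, the first
# digest lives in ROM or fuses, so it cannot be altered after the fact.
rom_digest_of_stage2 = hashlib.sha256(stage2).digest()
stage2_digest_of_stage3 = hashlib.sha256(stage3).digest()

def verify_then_run(code: bytes, expected: bytes, name: str) -> None:
    """Refuse to execute any stage whose digest does not match."""
    if hashlib.sha256(code).digest() != expected:
        raise RuntimeError(f"{name} failed verification; halting boot")
    print(f"{name} verified, executing")

# Boot chain: the immutable ROM verifies stage 2; stage 2 verifies stage 3.
verify_then_run(stage2, rom_digest_of_stage2, "stage 2")
verify_then_run(stage3, stage2_digest_of_stage3, "stage 3")
```

Because every link is checked against a digest anchored, ultimately, in the immutable root, tampering with any stage breaks the chain and halts the boot.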
That can provide adequate security in many cases, but attestation adds yet another layer of protection. With attestation, the device itself must prove its identity and integrity before it is allowed to participate in its electronics ecosystem.
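A simple form of attestation is a challenge-response exchange: the verifier sends a fresh nonce, and the device returns its own firmware measurement signed together with that nonce. The sketch below uses a symmetric HMAC for brevity; real attestation schemes typically use asymmetric keys and certified identities, and all names and values here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret provisioned into the device at manufacture.
DEVICE_KEY = b"per-device attestation secret"
GOOD_FIRMWARE = b"approved firmware image"

def device_quote(nonce: bytes, firmware: bytes, key: bytes) -> bytes:
    """Device side: measure own firmware, then sign nonce + measurement."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(key, nonce + measurement, hashlib.sha256).digest()

def attest(device_firmware: bytes) -> bool:
    """Verifier side: challenge the device and check its quote."""
    nonce = secrets.token_bytes(16)  # fresh nonce prevents replay attacks
    quote = device_quote(nonce, device_firmware, DEVICE_KEY)
    expected_measurement = hashlib.sha256(GOOD_FIRMWARE).digest()
    expected_quote = hmac.new(DEVICE_KEY, nonce + expected_measurement,
                              hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected_quote)

print(attest(GOOD_FIRMWARE))               # True: device admitted
print(attest(b"tampered firmware image"))  # False: device rejected
```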
Different companies supply integrated circuits or trusted platform modules that provide that root of trust. NIST and OCP keep track of levels of cryptographic strength that should provide adequate cybersecurity. Suppliers are free to exceed recommendations, either to satisfy customer requests or simply to differentiate their products.
The chips and modules that establish a root of trust can also be used to perform a variety of other checks, for example authenticating boards and cards, measuring platform configuration registers at various steps during boot sequences and providing real-time firmware monitoring and protection, among other tasks.
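The platform configuration register (PCR) measurements mentioned above follow a simple rule in TPM-style designs: a PCR is never written directly, only "extended," with the new value being the hash of the old value concatenated with the new measurement. A minimal sketch of that extend operation, with illustrative stage names:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = H(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start at all zeros; each boot stage extends in its measurement.
pcr = bytes(32)
for stage in (b"boot ROM", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())

# The final value depends on every measurement and on their order, so a
# change anywhere in the boot sequence produces a different PCR value.
print(pcr.hex())
```

This is why a PCR can summarize an entire boot sequence in one value: no sequence of extends can be forged or reordered to reproduce the digest of a different sequence.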
Chips and modules of this nature are available not only for servers themselves, and not only for add-in cards, but also for peripheral devices, which can include anything from attached storage to power supplies.
Security is a Complex Issue
The front lines of data center cybersecurity may be in servers and their peripherals, but potential vulnerabilities can exist almost anywhere.
Consider the humble timing chip. Servers generate timestamps for files and for every transaction, but most computer timing devices drift over time. That makes it tempting to rely on internet-based clocks. All data on the public internet, however, including clock data, is subject to packet manipulation, opening a data center operator to DoS attacks.
Windows, for example, relies on Kerberos as its default authentication protocol, and Kerberos uses workstation time as part of its authentication ticket generation process. If system timing is off, the authentication/logon process between domain controllers and clients may not succeed, which can impair the proper operation of the network.
Data center operators can inoculate themselves against this type of DoS attack by making sure they have their own clocks or dedicated clock servers, which should be as reliable and accurate as possible.
The reason to offer the preceding example has less to do with alerting network operators of that specific vulnerability than it does with conveying that cybersecurity is a vastly complex issue, and that security vulnerabilities can be inherent in even the most basic functions of a data communications network. It is always advisable to consult with partners with established and extensive expertise in cybersecurity technologies, techniques, and best practices.
NIST and OCP have emerged as two of the most authoritative sources for techniques and technologies to guard against cyberattacks of all kinds.
NIST offers its Cybersecurity Framework (CSF), a set of recommendations that covers everything from servers, smartphones, and Internet of Things devices to the networks that connect them.
NIST published its original CSF in 2014. The agency has issued several modest revisions and was working on CSF 2.0 at the time of writing.
The OCP was formed with the specific goal of moving away from closed systems. A closed system has the potential to be a particularly secure system because the manufacturer can thoroughly understand and control every aspect of that system, both in terms of how it operates and in terms of what devices can or cannot be attached, and under what circumstances.
On the other hand, an open computer system allows data center architects to mix and match equipment from different suppliers as suits their needs. Moving to open systems almost always serves to drive costs down.
OCP members were aware that moving to open systems would likely create security vulnerabilities and made sure that cybersecurity was an intrinsic component of the open compute specs.
The OCP's open specification for a Data Center-ready Secure Control Module (DC-SCM) with a standardized Datacenter-ready Secure Control Interface (DC-SCI) is just one example. Server management, security, and control features commonly used to reside on the motherboard; the DC-SCM spec moves them onto a smaller, separate module. A single DC-SCM design can serve multiple platforms. DC-SCMs are also upgradeable, so when more sophisticated security technology becomes available, data center operators can swap out the module rather than the entire server.
As noted above, the recommendations from NIST and OCP are now being adopted as requirements in bids for data networking gear and in contracts for data networking services. Companies vying for such business are increasingly demanding that their equipment and semiconductor vendors comply with OCP and NIST specs.