Technical Debt: A Data Center Security Risk You Can’t Afford

Legacy applications can leave massive security holes and must be dealt with, no matter how critical they are.

Maria Korolov

November 6, 2019

Windows Server 2003 running in a data center

Legacy applications, old operating systems, and other past-expiration-date technologies pose big risks to data centers, and the risks grow bigger with each day that passes. But many of these legacy applications play critical roles for companies.

"Did you know that the persistence layer of the SWIFT network often runs on AIX and Solaris systems?" Matt Glenn, VP of product management at Illumio, a cloud computing security company, asked.

Companies delay resolving their technical debt for different reasons, the most common ones being that it’s time consuming and expensive and will take away from other, presumably higher-priority projects. But it’s not a problem data center teams can continue to ignore.

Both legacy applications and the legacy operating systems they run on often have significant security vulnerabilities. If someone does get into these environments, security managers might not even know it, because logging is often woefully inadequate in old systems.

It’s a double threat, Chris Kennedy, CISO and VP of customer success at AttackIQ, said. "Attackers are taking advantage of the flaws in the applications, and the lack of logging makes it difficult to understand if they're being attacked."

Some organizations deal with the problem by building a big wall around their insecure legacy systems. "But they have to put a bunch of holes in the wall so that the applications can continue to work," Kennedy said. "And the attackers take advantage of the holes."

According to a recent survey, only five percent of security operations centers can see everything they need to see. The biggest blind spot? Legacy applications that don't produce events that can be fed into a security information and event management system. That’s according to 45 percent of the security pros surveyed for Exabeam's 2019 State of the SOC report.

In another security survey, released by the Ponemon Institute in October, 56 percent of companies said lack of visibility was the reason behind continuing breaches.

Some IT teams don't want to touch old but business-critical systems for fear of breaking them. Some don't have the resources to tackle these projects and cannot convince senior management to make it a top priority. Yet others may be simply unaware of the legacy systems in their infrastructure and the associated risks.

"I don't think the boards, or the senior executives of companies, understand the risks they're taking from letting these legacy applications age," Kennedy said. "Look at the EternalBlue WannaCry series of attacks – most of those exploits were against legacy Windows operating systems."

It’s Time to Fix What's Not Broken

Some older software was created before security was as much of a concern as it is now. Some was designed for environments that were not exposed to the public internet.

To fix the problem, someone would need to dive into the applications and add the required logging infrastructure, Rohit Dhamankar, VP of threat intelligence products at Alert Logic, a Houston-based cybersecurity company, said.
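One way to retrofit that logging is to wrap the legacy application's entry points so each call emits a structured event a SIEM can ingest. Below is a minimal, hypothetical Python sketch; the `transfer_funds` function and its behavior are invented for illustration, not taken from any real system.

```python
import functools
import json
import logging
import sys

# Emit structured JSON events on stdout so a log shipper / SIEM can pick them up.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("legacy_audit")

def audited(func):
    """Wrap a legacy entry point and log every call and its outcome."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        event = {"event": "call", "function": func.__name__}
        try:
            result = func(*args, **kwargs)
            event["status"] = "ok"
            return result
        except Exception as exc:
            event["status"] = "error"
            event["error"] = repr(exc)
            raise
        finally:
            # Always record the event, whether the call succeeded or failed.
            log.info(json.dumps(event))
    return wrapper

@audited
def transfer_funds(account, amount):
    # Hypothetical legacy function standing in for the real application logic.
    return f"moved {amount} from {account}"
```

The decorator approach matters here because it adds auditing around the legacy code without modifying the fragile logic inside it.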

For older systems, the original developers might no longer be around, and messing with the code could break it. "Say you have mission-critical applications that are used by a small group of people in the data center," he said. If the original developers are gone, nobody is going to want to touch the software.

The mechanics behind neglect of older operating systems are similar. Since upgrading an operating system might break an old but critical application running on top, nobody wants to touch it. That's part of the reason why there are so many obsolete operating systems still being used in data centers, Dhamankar added.

Most companies are still running, somewhere in their environment, operating systems that will fall out of support in 2020, he said. "People are still running stuff on 2008 and 2012 Windows servers. That should be an area of major concern. Microsoft is not going to patch them. You have nothing to protect the Microsoft application stack you are building." Some of the most typical culprits here are financial applications, old payroll systems, and legacy web apps.

"And we have the same on the Linux side," Dhamankar said. "There are applications that are built on Apache or JBoss but are running on a version of Linux that is really outdated, like 2.6, which has been out of support for three years now."

Meanwhile, these operating systems may have known, easy-to-exploit critical vulnerabilities. "That is what data center managers are dealing with."

Moving away from legacy apps can be “like another engineering project," Dhamankar said. "You can rearchitect the whole thing or create a new app and make sure the new functionality is working before you move away from the legacy app."

But that can be a hard sell. "Everyone wants to be on the latest greatest thing. They don't want to invest in recreating something that's already existing, such as a legacy app. Plus, people are looking at ROI, or what gets them more customers. When it comes to legacy applications, if the data center or security person is not very pushy and can present the business case to the highest executives, those projects tend not to get funded."

If You Can't Fix, Mitigate

Eventually, legacy apps will have to go. But until then, there are steps data center managers can take to minimize the risks.

For example, if a legacy app is only being used by internal users, then the system it's on should be isolated.

"Make sure that that particular system is not reachable from the internet or from segments of the company that don't need access to it," Dhamankar recommended. "That's the strongest mitigation you can have – access controls around who can touch the legacy app."
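That kind of access control boils down to a deny-by-default policy: a connection is allowed only if its source falls inside an approved internal segment. A minimal sketch, assuming two hypothetical internal networks (the CIDR ranges below are invented for illustration):

```python
import ipaddress

# Hypothetical policy: only these internal segments may reach the legacy host.
ALLOWED_SEGMENTS = [
    ipaddress.ip_network("10.20.0.0/16"),    # assumed finance VLAN
    ipaddress.ip_network("192.168.5.0/24"),  # assumed admin jump hosts
]

def may_reach_legacy_host(source_ip: str) -> bool:
    """Deny by default: allow only sources inside an approved segment."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in segment for segment in ALLOWED_SEGMENTS)
```

In practice this policy would live in a firewall or microsegmentation tool rather than application code, but the logic is the same: enumerate who legitimately needs access and block everyone else.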

If the legacy app requires internet access, companies can use firewalls to block the most common types of attacks, like SQL injection or cross-site scripting, or apply whitelisting rules so that only a small set of approved communications can get through.
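A web application firewall does this by matching requests against known attack signatures. The sketch below shows the idea with two deliberately crude regexes; a real WAF such as one built on the OWASP ModSecurity Core Rule Set uses far larger and more carefully tuned rule sets.

```python
import re

# Illustrative-only signatures for two common attack classes.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+\d+=\d+)", re.IGNORECASE)
XSS_PATTERN = re.compile(r"(<script\b|javascript:|onerror\s*=)", re.IGNORECASE)

def looks_malicious(query_string: str) -> bool:
    """Flag a request whose query string matches a known attack pattern."""
    return bool(SQLI_PATTERN.search(query_string)
                or XSS_PATTERN.search(query_string))
```

Signature matching like this is only a stopgap: it blocks the most common probes against an app that can't be patched, but determined attackers can often evade simple patterns, which is why it belongs alongside, not instead of, access controls.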

It may also be possible to implement whitelisting on the host system itself, in the form of a local agent that allows only the legacy application to run in that environment: only certain approved commands can execute, and nothing else.
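The core of such an agent is again a deny-by-default check, this time against an allow-list of executables. A minimal sketch follows; the paths are hypothetical, and a real agent (e.g., an OS-level application-control tool) would verify binaries by hash or signature, not just by path.

```python
# Hypothetical allow-list: the only binaries permitted to run on the legacy box.
APPROVED_COMMANDS = {
    "/opt/legacy/payroll",   # assumed path of the legacy application itself
    "/usr/sbin/logrotate",   # assumed housekeeping task the app depends on
}

def should_allow(executable_path: str) -> bool:
    """Deny by default: anything not explicitly approved is blocked."""
    return executable_path in APPROVED_COMMANDS
```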

"But that can be risky," said Dhamankar. If the legacy application is fragile enough, adding new functions to its environment – like logging or whitelisting – has the potential to break things.

Other steps can include, in some cases, purchasing third-party support for discontinued operating systems, such as Microsoft's extended security patching support, AttackIQ's Kennedy said.

At the end of the day, the security problem of legacy applications shouldn't rest only on the shoulders of the security team.

"Organizations put too much pressure on SOCs and not enough on the people that own the services on legacy applications," Illumio's Glenn said.

Obviously, it's the security team's responsibility to step in with forensics when something goes wrong, but it's the owner of the application who's responsible for its operation, he said.

To understand how these applications function, and to get the logs needed for security, these two groups should work together.

"They should be closely aligned," Glenn said. "Especially since many of those legacy applications are some of the most critical."

About the Author(s)

Maria Korolov

Maria Korolov is an award-winning technology journalist who covers cybersecurity, AI, and extended reality. She also writes science fiction.
