The number of systems and applications that data center security managers have to keep attackers out of nowadays is staggering. One way to make sure all the safeguards are working, and all potential attack vectors are closed off, is penetration testing.
Traditionally, this has meant "white hat" hackers sitting around trying to get in or running automated scripts to launch a variety of attacks. But neither people nor scripts can try everything that’s possible.
Imagine, for example, an application that crashes only after a user types more than 1,000 characters into a text field, ending with a particular sequence of characters. Finding that input by brute force would mean trying more potential combinations than there are atoms in the known universe.
"If our average input is 10 bytes long, then you'd have to try 25610, which is an enormously large number," said Daniel Crowley, IBM's Security X-Force Red research director. "If each attempt takes half a second, it takes months or years just to fuzz a ten-character string."
It's not practical to test everything, he said. "You have to try things at random."
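The arithmetic behind Crowley's point can be seen in a minimal sketch. The `target` function below is hypothetical, standing in for an application that crashes on exactly one specific 10-byte input; purely random fuzzing almost never stumbles onto it:

```python
import random

def target(data: bytes) -> None:
    """Hypothetical application under test: crashes on one
    specific 10-byte input and nothing else."""
    if data == b"MAGICBYTES":
        raise RuntimeError("crash!")

def random_fuzz(attempts: int, length: int = 10) -> int:
    """Throw random byte strings at the target; count crashes."""
    crashes = 0
    for _ in range(attempts):
        data = bytes(random.randrange(256) for _ in range(length))
        try:
            target(data)
        except RuntimeError:
            crashes += 1
    return crashes

# There are 256**10 (about 1.2e24) possible 10-byte inputs,
# so blind luck is effectively hopeless:
print(256**10)             # 1208925819614629174706176
print(random_fuzz(10_000)) # almost certainly 0
```

Even at millions of attempts per second, exhausting that input space would take longer than the age of the universe, which is why testers sample at random or guide the search instead.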
People can also try to guess where problems could come up based on experience and knowledge of common attacks and vulnerabilities. They could then write scripts to test those attacks. But, again, they can't test every possible version of an attack.
Plus, creating such scripts takes a long time. And if anything in the infrastructure changes, all the scripts must be rewritten. As a result, many companies skimp on this kind of security testing.
But today all that is changing, with machine learning and artificial intelligence now being applied to the problem. AI-powered tools can figure out potential avenues of attack and generate likely test cases. If a test case offers promising paths to explore, some of the new tools will even follow up and delve deeper to see if a problem in one area of an application leads to exploitable vulnerabilities elsewhere.
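The follow-up behavior these tools share grows out of coverage-guided fuzzing: keep any mutated input that reaches code no earlier input reached, and mutate it further. Below is a rough sketch of that loop; the `coverage` function is a hypothetical stand-in for the real instrumentation such tools compile into the target, and the byte-level mutations are deliberately simplistic:

```python
import random

def mutate(seed: bytes) -> bytes:
    """Apply one small random mutation: flip a bit, insert a byte,
    or delete a byte."""
    data = bytearray(seed)
    op = random.choice(("flip", "insert", "delete"))
    if op == "flip" and data:
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)
    elif op == "insert":
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif op == "delete" and data:
        del data[random.randrange(len(data))]
    return bytes(data)

def coverage(data: bytes) -> frozenset:
    """Hypothetical instrumentation: pretend each matched prefix of a
    'dangerous' input corresponds to a new code path being reached."""
    dangerous = b"CRASH"
    n = 0
    while n < min(len(data), len(dangerous)) and data[n] == dangerous[n]:
        n += 1
    return frozenset(range(n + 1))

def fuzz(rounds: int = 10_000) -> bytes:
    """Coverage-guided loop: keep any mutant that reaches code
    the corpus hasn't seen before, and mutate the corpus further."""
    corpus = [b"A"]
    seen = coverage(b"A")
    best = b"A"
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        cov = coverage(candidate)
        if not cov <= seen:   # candidate reached a new path
            corpus.append(candidate)
            seen |= cov
            best = candidate
    return best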
And it's not just smaller niche vendors like PeachTech and FuzzBuzz offering this technology. There’s now a web fuzzer in the Microsoft Security Risk Detection product, and Google has recently open-sourced its ClusterFuzz tool.
The Microsoft tool is an AI-powered fuzzing platform, said Adam Kujawa, director of Malwarebytes Labs, a San Jose-based cybersecurity vendor. "They roll out all these virtual machines, put the applications in there, and have an AI act as a user would do," he said.
That's good news for security managers; the bad news is that attackers can do the same thing.
"It's the future of fuzzing," said Kujawa. "Doing it manually doesn't make sense anymore, when you can have an AI do it for you. I can guarantee that it will be a big focus and push on the cyberscrime side and may become a service thing in the future."
The technology is new, and there's no evidence that attackers are using it yet. "But it's better to get ahead of it, in my opinion," he said.
Malwarebytes put out a report in mid-June outlining some of the potential problems.
More broadly, "examples of using AI for malicious purposes have already emerged," said Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies. "Cybercrime proves to become more and more technologically advanced, and there is no doubt that we will witness the bad guys employing AI in various additional sophisticated scenarios."
Today, if your systems are under attack, chances are it's impossible to know whether the attacker is using an AI-powered fuzzer or a traditional fuzzing system operated by a very smart, or very lucky, human being. With AI, however, every hacker could become both smart and lucky, and as the old saying goes, they only have to be lucky once.