Output of an Artificial Intelligence system from Google Vision, performing Facial Recognition on a photograph. Smith Collection/Gado/Getty Images

How AI Is Used in Data Center Physical Security Today

While there is still a lot of baseless hype around AI, it is already making a real difference in some areas, including physical security.

Machine learning and artificial intelligence are touted as the cure-all for everything that ails a data center. While much of it is hype and baseless optimism, AI-powered tools are already useful and practical in some areas. Those areas include data center physical security, where AI is making a difference on three fronts: image and sound recognition, anomaly detection, and predictive analytics.

Image and Sound Recognition

Image recognition is one of the big success stories in AI, and the technology is quickly being embedded everywhere. And so is its close cousin, sound recognition.

In physical security, obviously, image recognition is most often used for facial authentication.

But it's about more than just confirming that someone is who they say they are when they enter a building. Image recognition can also be used to find out whether there are people in a certain room during a fire or another emergency. It can be used to tell whether motion being detected is a branch swaying in the wind or an intruder trying to climb over a fence.

Image recognition can also be used to identify people carrying guns or other weapons – or not wearing masks.
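These detection use cases share a common shape: a model emits labeled detections with confidence scores, and the security system decides which ones warrant an alert. The sketch below assumes a detector (e.g., a vendor model) already exists and represents its output as simple (label, confidence) tuples; the label names and threshold are illustrative, not any particular product's API.

```python
# A minimal sketch of filtering object-detection output for security alerts.
# The detector itself is assumed; its results are (label, confidence) tuples.

ALERT_LABELS = {"person", "gun", "knife"}  # hypothetical label set
CONFIDENCE_THRESHOLD = 0.8

def security_alerts(detections):
    """Return labels that should trigger an alert for one camera frame."""
    return sorted(
        {label for label, conf in detections
         if label in ALERT_LABELS and conf >= CONFIDENCE_THRESHOLD}
    )

frame = [("tree_branch", 0.95), ("person", 0.91), ("gun", 0.65)]
print(security_alerts(frame))  # ['person'] -- the low-confidence gun is suppressed
```

This is also where the branch-versus-intruder distinction lives in practice: a swaying branch classifies as vegetation and never reaches the alert set, while a person at the fence line does.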

Speaking of health-related issues, once a data center visitor has been diagnosed with COVID-19, image recognition can be used to identify all the locations the infected person visited (so the places can be thoroughly cleaned) and everyone that person had contact with, so they can get tested.

For large data centers with highly specific needs, there are many commercial and open source image recognition algorithms and training sets available. For smaller data centers, ones that don’t have the resources for a dedicated AI development team, or for those with very common issues, vendors are increasingly including these features in their security products.

According to Stockholm-based research firm Memoori, AI analytics will become a standard feature of video surveillance solutions over the next decade.

"There is a critical need to make full use of the massive amounts of data being generated by video surveillance cameras, and AI-based solutions are the only practical answer," Memoori managing director James McHale said in a recent report.

AI systems can also be used to analyze thermal images. "Thermal cameras have been a significant growth area this year as a direct consequence of the COVID-19 pandemic," he told us.

Today, many thermal cameras capture only thermal information, but customers are increasingly looking for systems whose cameras collect both thermal and conventional images and apply neural network algorithms to process them.

But there's a general lack of understanding about how to use this technology appropriately for pandemic controls, he added. Plus, the pandemic is negatively affecting some sectors of the economy, impacting spending and changing the way that companies buy technology.

"Customers will be demanding more value from their investments and will be less willing to commit to upfront capital expenditure," he said. "This is making Access Control as a Service and Video Surveillance as a Service even more attractive."

Anomaly Detection

Another very common, and practical, use of machine learning is for anomaly detection. The system is trained on a baseline of data, identifies common patterns, then looks for unusual events that don't fit those patterns.

So, for example, it might be normal for random cars to drive past a facility. But if the same car has driven by several times in the past hour, slowing down each time and then speeding away, a guard gets an alert that something suspicious is going on.
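The repeat-vehicle example can be sketched in a few lines. Assume camera footage has already been reduced to (license plate, timestamp) sightings (the plate-reading step itself is a separate image-recognition task); the window and threshold below are illustrative:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
THRESHOLD = 3  # sightings within the window considered anomalous

def find_suspicious(sightings):
    """sightings: list of (plate, timestamp) pairs. Return plates seen
    THRESHOLD or more times within any one-hour window."""
    by_plate = defaultdict(list)
    for plate, ts in sightings:
        by_plate[plate].append(ts)
    suspicious = set()
    for plate, times in by_plate.items():
        times.sort()
        for i, start in enumerate(times):
            # count sightings in [start, start + WINDOW]
            in_window = sum(1 for t in times[i:] if t - start <= WINDOW)
            if in_window >= THRESHOLD:
                suspicious.add(plate)
                break
    return suspicious

sightings = [
    ("ABC123", datetime(2020, 9, 1, 14, 0)),
    ("XYZ789", datetime(2020, 9, 1, 14, 5)),
    ("ABC123", datetime(2020, 9, 1, 14, 20)),
    ("ABC123", datetime(2020, 9, 1, 14, 45)),
]
print(find_suspicious(sightings))  # {'ABC123'}
```

A real system would learn the baseline traffic pattern rather than hard-code a threshold, but the alerting logic has this shape.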

Similarly, someone being in an area of the data center where they don't normally go, or at a time when they aren't normally at work, could be a sign of trouble.

"It can help people focus on potential areas of compromise," said Michael Perreault, senior security architect for cloud and data center transformation at Insight, a Tempe, Arizona-based technology consulting and system integration company.

Anomaly detection helps data centers spot problems that security teams might otherwise miss.

Predictive Analytics

Pattern recognition can be used in another way – to predict events. In data centers today this capability is mostly used for predictive maintenance.

So, for example, if a piece of equipment heats up to an unusual level, an AI system can flag the problem and get a service request out before the equipment fails completely.
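A minimal sketch of that flagging step, using a mean-plus-standard-deviations threshold as a stand-in for whatever baseline a real predictive-maintenance system learns (the readings and the 3-sigma cutoff are hypothetical):

```python
import statistics

def temperature_alert(history, reading, k=3.0):
    """Flag a reading more than k standard deviations above the
    historical mean -- a simple stand-in for a learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return reading > mean + k * stdev

history = [41.0, 42.5, 40.8, 41.9, 42.1, 41.5]  # degrees C, hypothetical

print(temperature_alert(history, 42.3))  # within normal range -> False
print(temperature_alert(history, 55.0))  # well above baseline -> True
```

When the function returns True, the system would open a service ticket automatically rather than wait for the equipment to fail.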

Currently predictive analytics isn’t used much in data centers outside of maintenance, said Perreault.

But some vendors are working on technologies that can help spot security problems before they happen, by combining in-house data such as emails or video surveillance with external data such as arrest reports or social media posts.

For example, if someone connected to a data center or one of its employees has been arrested in a violent incident, has sent threatening emails to the company, and authored aggressive social media posts, those could all be signs that this person might be about to escalate further. Guards could be advised to keep an eye out for that individual.
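At its core, correlating those signals is a scoring exercise. The sketch below is purely illustrative -- the signal names, weights, and threshold are invented, not any vendor's product -- but it shows how multiple weak indicators can combine into one actionable flag:

```python
# Hypothetical weighted-signal risk score combining internal and external data.
# Signal names, weights, and the threshold are illustrative only.

WEIGHTS = {
    "arrest_report": 0.5,
    "threatening_email": 0.3,
    "aggressive_posts": 0.2,
}
ALERT_THRESHOLD = 0.6

def risk_score(signals):
    """signals: set of signal names observed for one individual."""
    return sum(WEIGHTS.get(s, 0.0) for s in signals)

def should_flag(signals):
    return risk_score(signals) >= ALERT_THRESHOLD

print(should_flag({"aggressive_posts"}))                    # one weak signal -> False
print(should_flag({"arrest_report", "threatening_email"}))  # combined -> True
```

No single signal crosses the threshold on its own; it is the combination that triggers the advisory to guards -- which is exactly what makes the weighting choices, and the ethics behind them, so consequential.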

Of course, using AI to predict when a machine may break down and using it to predict when a person might break down are two different things; the latter raises some thorny ethical questions.

How much information gathering is too intrusive? Is the intrusiveness worth it, if it helps keep employees and critical infrastructure safe?

Today, the biggest problem with these kinds of systems is that they're still relatively new, not very accurate, and subject to bias, as police departments around the world have found out.

But the thing about AI systems is that they keep getting more accurate all the time. Soon, the predictive security technology will be broadly available, inexpensive, and easy to set up. There are already vendors offering this as a service, where they correlate internal company data with external sources to predict human behavior, and the technology isn't going away.

Data center security managers and senior executives should put some effort into putting an ethical framework in place ahead of time. How intrusive should security systems be when it comes to employees, their family members, or the general public? What data sources is it appropriate to harness? What is the threshold for action?

Minority Report-style predictive security technology isn't science fiction anymore, and the moral issues the movie raised are no longer theoretical.

TAGS: Security