
Bugs in the data center: How social engineering impacts physical security

A closer look at how one penetration testing firm was able to breach the physical security of a data center in less than a week, posing as the pest control company.

One data center management team learned the hard way that bugs can be a menace -- or, to be more specific, the people who hunt them. And we're talking about real, six-legged bugs, not the computer kind.

It started last November when NetSPI, a Minneapolis-based penetration testing firm, was hired by a company that owned several colocation facilities. NetSPI's job was to use social engineering to physically breach the data center: get into one of the company's facilities and into a position where the team could access the networks.

"This was a highly secured facility," said Dalin McClellan, senior security consultant at NetSPI. "All the doors have retina scanners and badge readers. And there are man traps. You go through the door into a small room and wave to wait for the first door to close before you can open the second door and come in." That means that McClellan’s team couldn't just follow someone into the building. Worse yet, there are only two employees who work at the facility, plus a security guard. Strangers would immediately stick out. "Plus, we only had a week to prepare," said McClellan.

Normally, NetSPI would conduct deep research on the facility: find out about all the external visitors who are allowed in, collect copies of stationery and sample emails, and connect with the employees via social media or other channels. The team typically starts with Google, the company's own website, and LinkedIn, and then proceeds to learn anything and everything it can about the facility and about the people who work there.

"And we would do physical reconnaissance, where we sit in a car outside the building and watch employees go in and out, and watch vendors go in and out," he said. "Normally, this could take up to several weeks."

But the client only gave them a week.

"Our first thought, as far as pretexts go, was that we found that this data center does give tours to prospective new clients that you can arrange through their website," said McClellan. "So we wanted to bring several people on a tour and see if we can split off and see what we can get access to."

NetSPI ran this idea past their customer, but it was a non-starter. "They said that, in this particular instance, the sales person we’d be paired up with has a history of taking security into his own hands," said McClellan. "And the likelihood of detainment or injury would be high. That’s not something we particularly want to hear. There’s always some risk involved but our appetite for that is not unlimited."

So the client met them halfway, by providing the background information that NetSPI would normally dig up on their own, such as the names of the employees, their contact information, names of vendors commonly used on site, and hours that people worked, he said. "And general information about what communication looks like, what do email signatures look like, what the letterhead looks like, what terminology they use."

Then NetSPI's team went back to the drawing board. Now there were only a couple of days left.

Using social engineering to breach physical security

"We started looking at the vendor list and noticed that they use a very well-known national pest control brand," said McClellan. "One of the consultants we work with just had that same company in to work on their home and they had all the confirmation emails. We took those emails and modified them and then we sent a spoofed email that looked like it came from one of the employees at the data center and sent it to the other employee at the data center."

The email said that the pest control company was coming on Friday, that it had been difficult to get them on the schedule, and to make sure they had everything they need. "The next day, we got an email back saying 'Great, sounds good.' They hadn’t recognized that the email was fraudulent," he said.

McClellan and his team spent the next couple of days running around. They made authentic-looking shirts. They rented a truck that was the right make, model and color and got a magnetic company logo to put on the side. Then they drove to a hardware store and rented a ladder, tool bags, and other equipment that a pest control person might use.

"We showed up looking fairly legitimate, even though we spent under $200 for everything," he said.

They pulled up at the entry gate to the facility, which was surrounded by an eight-foot fence. "But the security guard had been alerted that we were going to be there and opened the gate for us," said McClellan. They were asked to show identification, and they just showed their real drivers' licenses.

"Then the guy who we emailed met us outside, scanned his retina, and escorted us through the entire building," he said.

All the computers were inside cages, and they were not allowed to access them directly. "We tried, but the employee said no -- but he did let us get into the ceiling tiles to check for pests, where it would have been easy to install microphones, video cameras, or splice a device into the cables." McClellan said that his team didn't do any of those things. The goal of this particular engagement was to test the interactions with people on the site, not to actually hack into the networks.

"We do do red team assessments where the social engineering is just the first step," he added. "Had this been a red team assessment, we would have done that, dropped a device in there, gotten on their network, then gone on to do more technical hackery things. But for this test, the focus was just on the people."

The team spent a couple of hours walking around the building, looking inside the ceilings, under the floorboards, in every nook and cranny.

"We finally left the data center, were sitting back in the truck, and decided we wanted to push this a little further," he said. "So we called our contact again and told him that we had some paperwork for them to sign and asked if they had a printer we could use. He let us back inside, gave us credentials to get on the WiFi network."

The guest WiFi network was fairly well segmented, he added. There was no direct access to customer data, for example. "But it was still a foothold and a place to start."

The guest network also didn't have printer access, so they wound up sending the employee an email with the documents attached for him to open and print out. "Had this been a red team engagement, there could have been something on there to trigger malware and get access," he said.

That marked the end of the test, said McClellan. "Then we contacted our customer, let them know how things had gone, packed up our stuff and left the site without anyone at the site actually knowing that anything unusual had happened."

 

The post-mortem on physical security breach

The big take-away from this engagement was that the client had good policies in place for employees and for customers, but not for vendors. Some email security controls would also have helped.

McClellan's team faked the email from one employee to the other by registering a new domain name very similar to the client's actual domain name. "If they had looked closely, they would have seen an extra letter in the domain we were using," he said. "But that's easy to miss."

There are email security tools, however, that could help recipients notice if an email is from an external source.
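
As a rough illustration of the kind of check such a tool might perform, here is a minimal Python sketch that tags a message as internal or external based on its sender domain and flags domains that look suspiciously close to the organization's real one. The domain name and similarity threshold are hypothetical, not drawn from the engagement described here.

```python
import difflib
from email.utils import parseaddr

# Hypothetical corporate domain, used only for illustration.
CORPORATE_DOMAIN = "example-datacenter.com"

def classify_sender(from_header: str) -> str:
    """Return a banner label for an inbound message based on its sender domain."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    if domain == CORPORATE_DOMAIN:
        return "INTERNAL"

    # A lookalike domain (one extra or swapped letter) scores very high on similarity.
    similarity = difflib.SequenceMatcher(None, domain, CORPORATE_DOMAIN).ratio()
    if similarity > 0.85:
        return "EXTERNAL - POSSIBLE SPOOF"
    return "EXTERNAL"

# A spoofed domain with one extra letter gets flagged rather than silently delivered.
print(classify_sender("Ops <ops@example-datacenters.com>"))  # EXTERNAL - POSSIBLE SPOOF
```

Real secure email gateways do far more than this, of course, but even a simple external-sender banner makes a lookalike domain easier to spot.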

In addition, there are services that look for domain name squatting, alerting companies if anyone registers a similar-sounding domain name.
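
A sketch of how that kind of monitoring can work, under the assumption that a freshly registered squat domain will resolve in DNS: generate simple one-character variants of the company's domain and check whether any of them resolve. The domain here is again hypothetical, and commercial services cover many more permutation types (swaps, homoglyphs, alternate TLDs).

```python
import socket
import string

def lookalike_candidates(domain: str):
    """Yield simple one-character-insertion variants of a domain name."""
    name, _, tld = domain.rpartition(".")
    for i in range(len(name) + 1):
        for ch in string.ascii_lowercase:
            candidate = f"{name[:i]}{ch}{name[i:]}.{tld}"
            if candidate != domain:
                yield candidate

def is_registered(domain: str) -> bool:
    """True if the domain currently resolves -- a hint that someone has registered it."""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

# Hypothetical domain; alert on any close variant that already resolves.
for candidate in lookalike_candidates("example-datacenter.com"):
    if is_registered(candidate):
        print(f"possible squat: {candidate}")
```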

"These controls can definitely slow an attacker down but they can’t stop an attacker," said McClellan. "If an attacker is focused and dedicated enough, they will eventually land an email into employee inboxes."

So the most important take-away, he said, was to put a solid vendor policy in place: preferably, a separate system for tracking on-site vendor visits, one that includes credential checks. "That is much more difficult to spoof," he said.

Finally, someone should probably have called the pest control vendor directly to confirm the visit.

"That is a time-consuming step," McClellan admitted. "But when you’re talking about a highly secure location, one where you need retina scanners and man traps, you should probably be spending the extra time to verify the identification of everyone coming into the building."

 

The next steps for data center security

The most important thing for clients to do after a test like this is not to punish the employees who fell for the scam, said McClellan. "The immediate thought is -- ooh, that person who helped you is going to get fired. But that's the worst thing you can do at that moment."

Penetration tests should not be punitive, he said. "The purpose of these tests is to find flaws in your controls, in your policies, in your training processes, and to improve your security," he said. "And you’ve just paid a lot of money to provide really valuable training to the employees who interacted with your penetration testers, in dealing with an unauthorized attacker trying to come into the building. Knowing what that feels like, how that sounds, isn’t something a person can learn on a computer very well. Experience is the best training you can get."

If a company then fires that employee, the one who just got some very valuable experience, that person will simply take the experience to their next job instead.

"And you’re not going to get any value out of that," he said.

Instead, employees should see the results of the test and understand what happened, but in a respectful way.

"Show dedication to the employees," he said. "Make them an advocate for security. This person is now the most well-trained person in your organization in how to spot social engineering. Make that person your advocate, your ambassador for security, particularly around people. To me, that is the best possible outcome of a social engineering test."

The next time something like this happens, he said, red flags will immediately go up and the employees will know how to respond appropriately.
