It was a rough couple of weeks for cloud, networking, security and application professionals. The Heartbleed bug touched nearly everyone, from vendors like Cisco and Juniper to a wide variety of online services. As the dust settles, security engineers must analyze what happened and how to make sure something of this nature doesn't happen again.
The core issue was that so many different types of services relied on a single, very popular cryptography library. Heartbleed was a flaw in OpenSSL's TLS heartbeat extension that let attackers read chunks of server memory, potentially exposing private keys, session data and passwords. OpenSSL remains a widely adopted security tool, and as we all change or update our passwords and read the numerous software releases, it is important to know what some shops did right during the fallout caused by the vulnerability. Several organizations were prepared (to some extent) to deal with this type of issue.
Here’s what they did:
- Effective security policies. Good security policies, user controls and general infrastructure best practices can help control or mitigate a situation very quickly. Here’s the important piece to understand: even though Heartbleed was a software vulnerability, security policies must span the physical aspect as well. Numerous breaches happen because of unlocked doors or poorly monitored systems. Remember, when creating a good security policy, take into consideration your entire infrastructure. This will span everything from passwords to locked and monitored server cabinets.
- Proactive monitoring across the entire platform. Many organizations have monitoring set up on their local as well as cloud-based environments. Physical appliances monitor traffic and flag malicious users. Newer monitoring systems can aggregate firewalls, virtual services and even cloud components. There are many new aspects to consider within the modern infrastructure: the logical layer is only continuing to grow, and monitoring it is becoming even more critical. With that in mind, ask yourself a few visibility questions about your cloud and data center platform. How well can you see data traverse your cloud? How secure is your data at rest and in motion? Can you effectively monitor traffic extending out to your end-users? Proactive monitoring can help find spikes, anomalies and even security holes in your environment.
- Using next-gen security services. This is where it gets interesting. Powerful physical appliances can sit at the edge or internally within an environment. One security professional at a large enterprise told me how his team was impacted by Heartbleed: although they had vulnerable services, their IPS/IDS solution spotted the probing bots and alerted the engineers to shut down the affected services. They still released a bulletin to alert their users, but the ramifications were much smaller. Virtual security appliances can be application firewalls, virtual firewalls or simply security services running within your infrastructure. These agents can form a very good proactive system capable of advanced security monitoring.
- Logging and event correlation. The expansion of the cloud has created a bit of a logging challenge. Organizations must build event correlation and security logging into their security planning methodologies. Correlation engines can spot issues before they escalate and alert administrators to change or update specific settings. Here's the other reality: if a breach happens, these logs will be your best audit trail. In the case of Heartbleed, many organizations were able to trace the source of a bot or scanning tool on their network. Not only were they able to block the sources, they were able to quickly shut down unauthorized access into corporate resources.
- Vulnerability testing. How well is your system running? How secure are your virtual servers? What about your physical infrastructure? When was the last time you ran an application vulnerability test? For some it's an easy answer, while for others it's a bit more eye-opening. The only way to stay ahead of the bad guys is to find problems before they do. All the technologies mentioned earlier help with this process, but actively finding faults in scripts, open ports, missing security updates and even user actions lets you fix problems before they become breaches. Mature organizations maintain a healthy vulnerability testing program: some test on a fixed schedule, others run ongoing randomized tests, and others include specific application and data testing protocols. Regardless of the approach, you'll be much better off finding the issue before anyone else does.
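As a concrete illustration of the vulnerability-testing point, here is a minimal sketch of one check such a pass might include: flagging hosts that report an OpenSSL version in the Heartbleed-affected range (1.0.1 through 1.0.1f; 1.0.1g shipped the fix, and the 0.9.8 and 1.0.0 branches never had the heartbeat bug). The function name and the source of the version string (say, a package inventory or `openssl version` output) are illustrative assumptions; wire-level probes such as nmap's `ssl-heartbleed` script would complement a check like this.

```python
import re

def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True if an OpenSSL version string falls in the
    Heartbleed-affected range: 1.0.1 through 1.0.1f.
    (Fixed in 1.0.1g; 0.9.8 and 1.0.0 branches were never affected.)
    Illustrative helper -- not a substitute for a wire-level probe."""
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)([a-z]?)", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version!r}")
    major, minor, patch, letter = m.groups()
    if (major, minor, patch) != ("1", "0", "1"):
        return False
    # 1.0.1 (no patch letter) through 1.0.1f are vulnerable.
    return letter <= "f"

print(is_heartbleed_vulnerable("1.0.1e"))  # True
print(is_heartbleed_vulnerable("1.0.1g"))  # False
print(is_heartbleed_vulnerable("0.9.8y"))  # False
```

A version check like this is cheap to run across an inventory, which is why it pairs well with the scheduled and randomized testing cycles described above.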
Ultimately, there is no silver bullet for every security issue out there. New types of advanced persistent threats are taking aim at the modern data center. Remember, as cloud computing adoption and IT consumerization continue, there will be more data targets for the bad guys to go after. Staying proactive means continuously testing your own systems and ensuring effective infrastructure monitoring. Regardless of the industry, data security and integrity are critical pieces of the overall IT infrastructure plan.