Heartbleed Happened – What You Can Do to Stay Proactive

It was a rough couple of weeks for cloud, networking, security and application professionals. The Heartbleed bug affected nearly everyone, from vendors such as Cisco and Juniper to a wide variety of online services. As the dust settles, security engineers must analyze what happened and determine how to prevent something of this nature from happening again.

The core issue was that so many different types of services relied on a single, very popular cryptography library. Heartbleed (CVE-2014-0160) was a flaw in OpenSSL's TLS heartbeat extension that allowed attackers to read chunks of server memory, potentially exposing private keys, session tokens and passwords. OpenSSL is still a widely adopted security tool, and as we all change or update our passwords and read through the numerous software releases, it is worth examining what some shops did right during the fallout. Several organizations were prepared, at least to some extent, to deal with this type of issue.
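For many shops, the first proactive step was simply checking which OpenSSL build their services were running. Below is a minimal sketch of that check in Python; it assumes the `openssl` binary is on the PATH, and note that distributions often backport fixes without changing the reported version string, so matching version numbers is only a first-pass heuristic, not a definitive test.

```python
import re
import shutil
import subprocess

# Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f.
# 1.0.1g and later, and the 0.9.8/1.0.0 branches, are not affected.
VULNERABLE = re.compile(r"^OpenSSL 1\.0\.1([a-f]|\s|$)")

def openssl_is_vulnerable(version_string: str) -> bool:
    """Return True if the reported version falls in the vulnerable range.

    Caveat: vendors backport patches without bumping the version, so a
    match here means "investigate further," not "confirmed vulnerable."
    """
    return bool(VULNERABLE.match(version_string.strip()))

if __name__ == "__main__":
    if shutil.which("openssl"):
        out = subprocess.run(["openssl", "version"],
                             capture_output=True, text=True).stdout
        verdict = "in the vulnerable range" if openssl_is_vulnerable(out) \
                  else "not in the vulnerable range"
        print(out.strip(), "->", verdict)
```

A definitive answer requires actually probing the heartbeat behavior (as the public proof-of-concept testers did) or confirming the patch level with your vendor.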

Here’s what they did:

  • Effective security policies. Good security policies, user controls and general infrastructure best practices can help control or mitigate a situation very quickly. Here’s the important piece to understand: even though Heartbleed was a software vulnerability, security policies must span the physical aspect as well. Numerous breaches happen because of unlocked doors or poorly monitored systems. Remember, when creating a good security policy, take into consideration your entire infrastructure. This will span everything from passwords to locked and monitored server cabinets.
  • Proactive monitoring across the entire platform. Many organizations have monitoring set up on their local as well as cloud-based environments. Physical appliances monitor traffic and report malicious users, and newer monitoring systems can aggregate firewalls, virtual services and even cloud components. There are many new aspects to consider within the modern infrastructure: the logical layer continues to grow, and monitoring it is becoming even more critical. With that in mind, ask yourself a few visibility questions about your cloud and data center platform. How well can you see data traverse your cloud? How secure is your data at rest and in motion? Can you effectively monitor traffic extending out to your end users? Proactive monitoring can help find spikes, anomalies and even security holes in your environment.
  • Using next-gen security services. This is where it gets interesting. Powerful physical appliances can sit at the edge or internally within an environment. One security professional at a large enterprise told me how he was impacted by Heartbleed: although his organization had vulnerable services, its IPS/IDS solution spotted the bots and alerted engineers to shut down the affected services. The company still released a bulletin to alert its users, but the ramifications were much smaller. Virtual security appliances can be application firewalls, virtual firewalls or other security services running within your infrastructure. These powerful agents can form a very good proactive system capable of advanced security monitoring.
  • Logging and event correlation. The expansion of the cloud has created a bit of a logging problem. Organizations must build event correlation and security logging into their security planning methodologies. Powerful correlation engines can surface warning signs before an incident fully unfolds and alert administrators to change or update specific settings. Here's the other reality: if a breach happens, these logs will be your best piece of trackable documentation. In the case of Heartbleed, many organizations were able to trace the source of a bot or tracking tool on their network. Not only were they able to block those sources, they were able to quickly cut off unauthorized access to corporate resources.
  • Vulnerability testing. How well is your system running? How secure are your virtual servers? What about your physical infrastructure? When was the last time you ran an application vulnerability test? For some it's an easy answer; for others it's a bit more eye-opening. The only way to stay ahead of the bad guys is to find problems before they do. All of the technologies mentioned earlier help this process, but actively finding faults in scripts, ports, security updates and even user actions lets you fix problems before they become breaches. Mature organizations maintain a healthy vulnerability testing program: some test on a fixed schedule, others run ongoing randomized tests, and still others add specific application and data testing protocols. Regardless of the scenario, you'll be much better off finding the issue before anyone else does.
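The monitoring and event-correlation points above boil down to one habit: establish a baseline for normal traffic and flag deviations from it. Here is a minimal sketch of the per-source counting at the core of that idea, assuming a simplified access-log format of `timestamp source_ip request`; the format and threshold are illustrative assumptions, not any standard.

```python
from collections import Counter

def flag_suspicious_sources(log_lines, threshold=100):
    """Count requests per source IP and return the IPs at or above threshold.

    A real SIEM correlates many signals (time windows, geolocation,
    known-bad feeds); this only illustrates the counting step that
    surfaces a single chatty source, such as a scanning bot.
    """
    counts = Counter(
        line.split()[1] for line in log_lines if len(line.split()) >= 2
    )
    return sorted(ip for ip, n in counts.items() if n >= threshold)

# Example: one chatty source hammering a login page amid normal traffic.
logs = (["2014-04-10T12:00 10.0.0.5 GET /login"] * 150
        + ["2014-04-10T12:00 10.0.0.9 GET /index"] * 3)
```

During the Heartbleed fallout, exactly this kind of per-source view is what let organizations spot and block the scanners probing their networks.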
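On the vulnerability-testing point, even a basic TCP sweep of your own hosts can reveal services you forgot were listening. A minimal sketch using only the standard library follows; run it only against systems you are authorized to test, and treat it as an illustration of the idea rather than a substitute for dedicated scanners such as Nmap or OpenVAS.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port; return those that accept.

    connect_ex returns 0 on success instead of raising an exception,
    which keeps the loop simple. A short timeout keeps the sweep fast
    on filtered ports that silently drop packets.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Feeding the resulting list into your patching and change-management process is what turns a one-off scan into the ongoing testing cycle described above.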

Ultimately, there is no silver bullet for every security issue out there. New types of advanced persistent threats are taking aim at the modern data center. Remember, as cloud computing adoption and IT consumerization continue, there will be more data targets for the bad guys to go after. Staying proactive means continuously testing your own systems and ensuring effective infrastructure monitoring. Regardless of the industry, data security and integrity are critical pieces of the overall IT infrastructure plan.


About the Author

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. His architecture work includes virtualization and cloud deployments as well as business network design and implementation. Currently, Bill works as the Vice President of Strategy and Innovation at MTM Technologies, a Stamford, CT-based consulting firm.

Comments


  1. Ulf Mattsson

    I agree that “you’ll be much better off finding the issue before anyone else” and that “new types of advanced persistent threats are taking their aim at the modern data center.” The 2014 Verizon Data Breach Investigations Report concluded that enterprises are losing ground in the fight against persistent cyber-attacks. We simply cannot catch the bad guys until it is too late, and this picture is not improving. This is how I read the recent Verizon reports: the 2013 and 2014 reports concluded that less than 14% of breaches are detected by internal security tools. Detection by third-party entities increased from approximately 10% to 25% during the last three years, and notification by law enforcement increased from around 25% to 33% over the same period. For theft of payment card information specifically, in 99% of cases someone else told the victim they had suffered a breach. This is no different than in years past, and we continue to see notification by law enforcement and fraud detection as the most common discovery methods. One reason is that our current approach with monitoring and intrusion detection products can't tell you what normal looks like in your own systems, and SIEM technology is simply too slow to be useful for security analytics. Big Data security analytics may help over time, but we don't have time to wait. We need to protect the sensitive data itself. Studies have shown that users of data tokenization experience up to 50% fewer security-related incidents (e.g. unauthorized access, data loss, or data exposure) than non-users. Ulf Mattsson, CTO, Protegrity

  2. After a bug like this, the only real solution is to monitor the whole system, and the sooner you start, the better your chances of avoiding damage altogether. There are good options for monitoring right now, e.g. Anturis, a tool for monitoring the whole IT infrastructure, where you can check every part of the system.