Matthew McKenna is chief commercial officer for SSH Communications Security.
Two-thirds of all websites rely on OpenSSL to encrypt traffic and protect user information across the web. Launched in 1998, the toolkit arrived just in time to solve the dilemma of how to safely and securely transfer personal data – including financial information – from end user to website. Ultimately, OpenSSL—an open source project that any coder or programmer can contribute to—makes e-commerce and other types of online transactions and interactions possible.
Though the majority of websites use OpenSSL as their encryption standard, the OpenSSL project runs on a tiny budget with just one full-time employee and a handful of part-time workers and volunteers. Deploying software so widely with so little oversight creates significant security risk, as 2014’s notorious Heartbleed vulnerability made plain. Heartbleed woke everyone up to the risks that open source software can present when its management, development and design lack strong oversight and funding.
As a defense mechanism, Google last year created BoringSSL, its own fork of OpenSSL. The company really had no other choice: it had been managing more than 70 patches to OpenSSL, and that figure was growing. The constant patching was making it difficult for Google to maintain consistency across multiple code bases and was raising security concerns. With BoringSSL, Google is seeking an encryption library that interfaces more securely and efficiently with its Chrome and Android products.
BoringSSL was born from the realization that open source vulnerabilities can pose serious security risks. Another case in point is the hacker group that took up a challenge by Cloudflare and exploited Heartbleed to steal private Secure Shell keys, which can be used to gain access to an organization’s most sensitive assets.
The theft of Secure Shell keys is a serious issue. Secure Shell works quietly behind the scenes in virtually every network worldwide, encrypting connections and controlling access to the organization’s network. Associated with each key is an identity (a person or a machine) that is granted access to information assets and performs specific tasks. Because they are often used to secure remote administrator access to the network, Secure Shell keys provide access to some of the most critical information within an organization.
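For illustration, each line in an OpenSSH `authorized_keys` file binds one public key to an identity and can restrict what that key is allowed to do. The host address, command path and key material below are hypothetical:

```
# Allow this key only from one source host, only to run a backup script
from="10.0.0.12",command="/usr/local/bin/backup.sh",no-port-forwarding ssh-ed25519 AAAAexamplekeymaterial backup@db01
```

Without such restrictions, a single stolen key can mean unrestricted login to every host that trusts it.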
Managing these keys properly is clearly essential. In a recent report, IDC listed the following identity and access management (IAM) risks caused by mismanaged Secure Shell keys:
- Unused keys that still grant access to critical hosts
- No visibility into the purpose of key pairs
- Limited control over the creation of Secure Shell keys
- Secure Shell key usage that circumvents IAM controls
- Ease of copying and moving private keys
- Limited ability to identify and remove revoked, orphaned and unauthorized keys
These risks must be taken into account when creating a security plan. With today’s boom in machine-to-machine (M2M) activity, identifying and following Secure Shell key management best practices is more important than ever.
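Several of the risks above come down to not knowing which keys grant access. A first step is simply inventorying `authorized_keys` entries and flagging any key not in an approved list. The sketch below is a minimal, simplified illustration (it does not handle quoted options containing spaces); the file contents and approved set are hypothetical:

```python
def parse_authorized_keys(text):
    """Yield (key_type, key_blob, comment) for each key line."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # Options such as from="..." may precede the key type; find it.
        parts = line.split()
        for i, part in enumerate(parts):
            if part.startswith(("ssh-", "ecdsa-")):
                blob = parts[i + 1] if i + 1 < len(parts) else ""
                comment = " ".join(parts[i + 2:])
                yield part, blob, comment
                break

def unapproved_keys(text, approved_blobs):
    """Return entries whose key material is not in the approved inventory."""
    return [(t, b, c) for t, b, c in parse_authorized_keys(text)
            if b not in approved_blobs]

sample = """
# deploy key for app01
ssh-ed25519 AAAAfakedeployblob deploy@app01
from="10.0.0.12" ssh-rsa AAAAfakelegacyblob old-admin@db01
"""
approved = {"AAAAfakedeployblob"}
for key_type, blob, comment in unapproved_keys(sample, approved):
    print("unapproved:", key_type, comment)
```

Run across all hosts, a report like this answers the basic question of which identities can still log in, and which should not.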
The Challenge of M2M Transfers
IAM is a critical part of any comprehensive security plan that helps organizations control access to cloud infrastructure, applications, servers, and both structured and unstructured data. IAM solutions are good at managing the identities assigned to human users, but fall short with the identities assigned to the automated processes that drive computing in large-scale data centers. As these M2M, non-human identities grow in number, it is becoming clear that traditional IAM solutions cannot manage the identities performing the bulk of operations.
Because a secure, encrypted channel is needed for M2M data transfers, most identities that enable these processes use Secure Shell for authentication and authorization. However, gaps exist in the IAM governance processes for identities that use Secure Shell. Instead of taking the secure route of centralizing key provisioning, for example, application developers, application owners and process owners might all have privileges to create and assign identities. Taking this approach results in a lack of proper control and oversight over creation of identities and their authorizations. Without central management and visibility, enterprises cannot be sure how many identities have been created, what these identities are authorized to perform, and which are no longer needed.
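The centralized provisioning described above can be as simple as one authoritative record per key: who created it, why, and when it was last used. The sketch below is a hypothetical in-memory registry to illustrate the idea (the fingerprints, owners and dates are made up); a real deployment would back this with a database and automated key-usage feeds:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class KeyRecord:
    fingerprint: str
    owner: str                     # person or process responsible for the key
    purpose: str
    created: date
    last_used: Optional[date] = None

class KeyRegistry:
    """Central record of provisioned keys: who created them, why,
    and which are candidates for removal."""
    def __init__(self):
        self.records = {}

    def provision(self, record: KeyRecord):
        self.records[record.fingerprint] = record

    def stale(self, cutoff: date):
        """Keys never used, or unused since the cutoff date."""
        return [r for r in self.records.values()
                if r.last_used is None or r.last_used < cutoff]

registry = KeyRegistry()
registry.provision(KeyRecord("SHA256:aaa", "deploy-bot", "app deploys",
                             created=date(2014, 1, 10),
                             last_used=date(2015, 3, 1)))
registry.provision(KeyRecord("SHA256:bbb", "old-batch-job", "nightly sync",
                             created=date(2012, 6, 5)))
for record in registry.stale(date(2015, 1, 1)):
    print("stale key:", record.fingerprint, record.owner)
```

With a record like this, the questions "how many identities exist, what may they do, and which are no longer needed" become queries rather than guesses.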
Many companies have begun to re-evaluate how they use and manage open source technologies, both in their products and within their organization, as a result of the Heartbleed vulnerability. That’s a good thing. The point here is not that open source is bad. Rather, it is an opportunity for technology executives to take another look at the necessary but often-neglected infrastructure that their businesses run on, especially when it is something as ubiquitous and critical as encryption technologies like SSL or Secure Shell.
When evaluating the security level of infrastructure, ask these fundamental questions:
- Do we know who is creating keys?
- Do we know who has access to what?
- Can we tell if someone has acted maliciously?
- Are our enterprise open source technologies properly managed?
- Can we rapidly respond to vulnerabilities by rotating keys or updating to new versions?
- Is our open source software properly supported, either by a vendor or by internal resources, or are we relying solely on someone’s good will?
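On the last point about responding to vulnerabilities by rotating keys: at its core, rotation means installing a new public key and removing the compromised one, atomically enough that access is never silently lost. A minimal sketch of that swap on `authorized_keys` content (the key material here is hypothetical):

```python
def rotate_key(authorized_keys_text, old_blob, new_entry):
    """Replace the entry containing old_blob with new_entry.
    Raises if the old key is absent, so a failed rotation is visible."""
    out, replaced = [], False
    for line in authorized_keys_text.splitlines():
        if old_blob in line and not line.lstrip().startswith("#"):
            out.append(new_entry)   # preserve position, drop the old key
            replaced = True
        else:
            out.append(line)
    if not replaced:
        raise ValueError("old key not found; nothing rotated")
    return "\n".join(out)

before = "ssh-rsa OLDFAKEBLOB admin@bastion"
after = rotate_key(before, "OLDFAKEBLOB",
                   "ssh-ed25519 NEWFAKEBLOB admin@bastion")
print(after)
```

The hard part in practice is not this edit but knowing every host the old key reaches, which is exactly why the inventory and provisioning controls above matter.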
Best Practices for a Safer Network
Untold numbers of individuals and organizations have used OpenSSL for more than 15 years to encrypt and safely transmit sensitive information. Vulnerabilities exist in any software, but when they are discovered in the software that encrypts your data, they become a call to action. That call grows more urgent in light of hackers’ ability to steal Secure Shell keys by exploiting OpenSSL vulnerabilities like Heartbleed.
In light of increased M2M activity, organizations need greater visibility into the use and authorization of keys; stronger IAM controls and centralized provisioning will help keep track of them all. Together, these practices form a comprehensive security plan that will help close the door on outside threats.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.