In our previous article, we looked at how cloud providers are actively creating policies to ensure optimal cloud security. With so much cloud growth, it’s clear that more organizations are adopting some kind of cloud model to optimize their businesses. Still, while the big cloud service providers are doing a good job around security, there are areas for improvement within the private data center. Smaller cloud providers, too, must always ensure the integrity of their client base.
Consider this very recent Ponemon study looking at data breaches. Although the study looks at a number of different security elements, here are three important points to consider:
- The cost of data breaches increased. Breaking a downward trend over the past two years, both the organizational cost of data breaches and the cost per lost or stolen record have increased. On average, the cost of a data breach for an organization represented in the study increased from $5.4 million to $5.9 million. The cost per record increased from $188 to $201.
- Malicious or criminal attacks result in the highest per capita data breach cost. Consistent with prior reports, data loss or exfiltration resulting from a malicious or criminal attack yielded the highest cost at an average of $246 per compromised record. In contrast, both system glitches and employee mistakes resulted in much lower average per capita costs at $171 and $160, respectively.
- The results show that the probability of a material data breach involving a minimum of 10,000 records over the next two years is nearly 19 percent.
With that in mind, what areas need improvement when it comes to cloud security, and what overlooked security aspects should be considered when creating a cloud platform? Let’s look at a few areas that have challenged organizations when it comes to cloud and multi-tenancy.
- Checking for open ports. If you’re a small organization, this might be fairly easy. But what if you’re a large data center or cloud organization? What if you have multiple data center locations and different firewalls to manage? How well are you keeping an eye on port controls, policies, and how resources are distributed? Above all, if you decommission an application that uses a specific port, do you have policies in place to shut that port down? Port, network, and security policy misconfigurations are all potential causes of a breach. Even if you have a heterogeneous security architecture, know that there are tools that can monitor security appliances from different manufacturers.
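To make the port-audit idea concrete, here is a minimal sketch of comparing observed port state against a policy. The host, port list, and expected-open set are hypothetical examples, not values from any specific environment:

```python
import socket

# Hypothetical policy: ports that should be reachable on a given host.
EXPECTED_OPEN = {22, 443}
PORTS_TO_AUDIT = [22, 80, 443, 3306, 8080]

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit_ports(host, ports, expected_open):
    """Compare observed port state against policy and return mismatches."""
    findings = []
    for port in ports:
        if is_port_open(host, port):
            if port not in expected_open:
                findings.append((port, "unexpectedly OPEN"))
        elif port in expected_open:
            findings.append((port, "expected open but CLOSED"))
    return findings
```

A decommissioned application’s port showing up as “unexpectedly OPEN” is exactly the kind of drift a scheduled audit like this would catch; commercial tools do the same across multi-vendor firewalls at scale.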
- Improperly positioning hypervisors and VMs to be outside-facing. I still see this happen every once in a while. In some cases, a VM must be externally facing or a hypervisor needs to be positioned in the DMZ. However, it’s critical to take extra care with these kinds of infrastructure workloads. Are they interacting with other internal resources? How well are network policies controlling access to that hypervisor and its VMs? Remember, your hypervisor has access to many critical components within your data center. Even host-level access can be dangerous if not properly locked down.
- Not properly locking down portals, databases, and applications. You can have the best underlying server, hypervisor, and even data center architecture; but if your applications have holes in them, you’ll have problems regardless. Some very large breaches have happened because a database wasn’t properly locked down or an application wasn’t patched. This is a critical piece that can’t be overlooked, especially if these applications are being provided via the cloud.
- Not ensuring critical data is locked down properly. There are powerful new tools around IPS/IDS and data loss prevention (DLP). Are you deploying them? Do you have policies in place for monitoring anomalous traffic hitting an application? Do you know if a user is accidentally (or maliciously) copying data from a share or network drive? How good are your internal data analytics? These are critical questions to ask to ensure that your environment is locked down and that data isn’t leaking. Big cloud providers go out of their way to ensure that multi-tenant architectures stay exactly that: multi-tenant. Your data must be isolated where needed and have tightly restricted access. Furthermore, that isolation must be tested regularly and enforced using next-generation networking and security policies. If not, the results can resemble what Sony, Target, or even Anthem experienced.
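As a simple illustration of the internal analytics these questions point to, here is a minimal sketch that flags users copying an unusually large volume of data from a share. The event format and threshold are entirely hypothetical:

```python
from collections import defaultdict

def flag_anomalous_transfers(events, threshold_bytes):
    """Sum bytes copied per user and return users exceeding a policy threshold.

    `events` is an iterable of (user, bytes_copied) records, e.g. parsed
    from file-share audit logs (a hypothetical format for illustration).
    """
    totals = defaultdict(int)
    for user, nbytes in events:
        totals[user] += nbytes
    return sorted(user for user, total in totals.items()
                  if total > threshold_bytes)
```

A real DLP deployment would baseline per-user behavior over time rather than use a fixed threshold, but the principle (aggregate, compare against policy, alert) is the same.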
- What are you monitoring externally vs. internally? Visibility and monitoring are critical to keeping a cloud and data center architecture secure. Log correlation and management allow you to catch issues quickly and isolate them to a network segment, a VM, or even a physical server. New security tools let you control the flow of information very granularly within your own ecosystem: you can specify that a single server communicates only over a specific VLAN pointing to a specific port on a unique switch. And you can encrypt that data both internally and externally. The key is being able to monitor all of this and automate responses. This not only creates better visibility, but also makes your security model more proactive.
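To make the log-correlation idea concrete, here is a minimal sketch (with a hypothetical normalized log format and thresholds) that flags sources generating repeated authentication failures within a rolling time window, the sort of signal you would then isolate to a segment, VM, or physical server:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_failures(entries, window=timedelta(minutes=5), min_count=3):
    """Return sources with at least `min_count` auth failures inside any
    `window`-long span. `entries` are (timestamp, source, event_type)
    tuples from already-normalized logs (a hypothetical format).
    """
    by_source = defaultdict(list)
    for ts, source, event in entries:
        if event == "auth_failure":
            by_source[source].append(ts)
    flagged = []
    for source, times in by_source.items():
        times.sort()
        for i, start in enumerate(times):
            # count failures falling inside the window starting at `start`
            in_window = [t for t in times[i:] if t - start <= window]
            if len(in_window) >= min_count:
                flagged.append(source)
                break
    return sorted(flagged)
```

In practice this correlation happens inside a SIEM across many log feeds, and the automated response (quarantining a VM, tightening a firewall rule) would be driven by the same kind of rule.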
Remember, the cloud has a lot of moving parts. Much like gears, these parts all work together to allow complex workloads to be delivered to a variety of users spanning the world. It’s important to note that cloud adoption will only continue to grow. By monitoring and testing your own cloud and data center environment and applying security best practices, you will be prepared for whatever comes your way.