Today, many organizations are looking at cloud through a new lens. Specifically, they are looking to cloud to enable a service-driven architecture capable of keeping up with enterprise demands. With that in mind, we’re seeing businesses leverage more cloud services to help them stay agile and competitive. The challenge, however, revolves around uptime and resiliency – and it is compounded by often complex enterprise environments.
When working with cloud and data center providers, it’s critical to understand just how costly an outage can be. Consider this – only 27% of companies received a passing grade for disaster readiness, according to a 2014 survey by the Disaster Recovery Preparedness Council. At the same time, increased dependency on data center and cloud providers means that outages and downtime are growing costlier over time. Ponemon Institute and Emerson Network Power have just released the results of the latest Cost of Data Center Outages study. Previously published in 2010 and 2013, this third study continues to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (or a 38 percent net change).
In its research across 63 data center environments, the study found that:
- The cost of downtime has increased 38 percent since the first study in 2010.
- Downtime costs for the most data center-dependent businesses are rising faster than average.
- Maximum downtime costs increased 32 percent since 2013 and 81 percent since 2010.
- Maximum downtime costs for 2016 are $2,409,991.
With this in mind – it’s important to understand that there are a number of different cloud options. Arguably, the dominant form is the hybrid cloud.
From its starting point, cloud computing has evolved into distinct models, delivering a range of services. A hybrid IT architecture typically consists of an interconnected combination of:
- Public cloud computing – where computing power, applications and services are provided wholly by a third party and typically delivered over the Internet.
- Private cloud computing – where the principles and technologies behind the public cloud model are dedicated to a single enterprise, either within a company’s own data center or hosted by a third party within a secure hosting or colocation center. Management may be handled internally or outsourced.
- Legacy infrastructure – consisting of all existing mainframe and client-server infrastructure, running bespoke line-of-business applications that would be difficult and expensive, if not impossible, to migrate to cloud today.
Public + Private (Colo) + Legacy = Hybrid Cloud
Services offered through cloud models are often divided into three groups:
- Infrastructure as a Service (IaaS), referring to provision of compute, storage and network resources.
- Platform as a Service (PaaS), involving services and frameworks that can be built into applications.
- Software as a Service (SaaS), referring to typically web-based applications for corporate and personal use.
In your own environment, you may be utilizing one or two of these service types, or a combination of them. However, there are key steps you have to take to actually create an enterprise cloud model. These include:
- Bandwidth between the organization and the Internet. Because of the massive levels of distribution happening with data, applications, and critical workloads – you will absolutely need to take bandwidth and Internet requirements into consideration. There are a couple of aspects to pay attention to here when creating an enterprise cloud. First of all, you need to have redundant links supporting your most critical applications. Whether you’re doing this through SD-WAN (software-defined WAN) or through a provider – make sure you test out your resiliency levels. Link aggregation, data deduplication, and WAN controls are becoming critical management components for enterprise cloud environments. The second point is WAN optimization.
New types of technologies allow single resources (like applications) to leverage multiple WAN links if needed. Most of all, you can leverage powerful data compression and reduction features to optimize the flow of information cross-cloud and down to your users. Finally, you can make protocol acceleration a part of your cloud strategy. This means prioritizing specific data streams – VoIP, for example. So, in creating an enterprise cloud which is hosting your critical apps, always plan around your WAN and its supporting technologies.
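The redundant-link testing described above can be sketched in a few lines. This is a minimal, hypothetical example – the gateway addresses and health endpoints are placeholders, and a real SD-WAN product would handle failover at the network layer – but it shows the basic priority-ordered failover logic worth validating:

```python
import urllib.request

# Hypothetical health-check endpoints for each WAN link's gateway; the
# addresses below are documentation placeholders, not real endpoints.
LINKS = {
    "primary-mpls": "http://192.0.2.1/health",
    "backup-broadband": "http://198.51.100.1/health",
}

def link_is_up(url, timeout=2):
    """Return True if the link's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def select_active_link(links, probe=link_is_up):
    """Pick the first healthy link in priority order, or None if all are down."""
    for name, url in links.items():
        if probe(url):
            return name
    return None
```

Running a probe like this on a schedule – and deliberately failing the primary link during a test window – is one simple way to verify that your resiliency levels are what you think they are.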
- Resourcing, technical skills, and knowledge available in-house. Technology aside, let’s talk skill levels and capabilities. First of all, you don’t need to do this on your own. In fact, I highly recommend that you don’t. In designing an enterprise cloud environment, you’ll need to take into consideration several key design points. These include business-level integration, security, data planning, network design, compute resource considerations, user experience, and much more. You absolutely should work with a cloud partner which can carefully guide you through the enterprise cloud process.
Remember, you’re not deploying just another enterprise cloud; you are deploying your own, unique, enterprise cloud platform. Ensure that you have good partners in place who can support both technical as well as business requirements.
- Security constraints and data compliance criteria. It goes without saying that this will probably be the lengthiest section. Consider this: a recent Ponemon study analyzing the cost of data breaches found that the average cost of a breach has jumped past $4 million per incident, a 29% increase since 2013 and a 5% increase since last year. The study found that average dwell time for breaches stands at 201 days, with organizations requiring another 70 days to contain breaches once they’d been identified.
Furthermore, the report pointed out that the average cost per record equaled about $158. However, being prepared for a cybersecurity incident can diminish that cost. For example, having an incident response plan and team in place can reduce that figure by $16 per record. A notable IDC study of U.S. businesses reveals a wide spectrum of attitudes and approaches to the growing challenge of keeping corporate data safe. While a minority of cybersecurity “best practitioners” set an admirable example, the study findings indicate that most U.S. companies today are underprepared to deal effectively with potential security breaches from outside or inside their firewalls.
Simply put, every aspect of your enterprise cloud design must have security incorporated into it. There are several important considerations here. First of all, you can absolutely deploy compliance-bound workloads into the cloud; as long as you’re working with the right type of provider. Now, you can leverage a variety of cloud services specifically aimed at regulation and compliance. From there, security from a holistic perspective will be critical. Within the realm of cloud security, there are almost countless technologies which can help you stay secure. This includes everything from securing cloud-to-cloud traffic, to leveraging cloud access security brokers (CASBs) for specific workload access.
Remember, the cloud itself can be very secure. However, how you deploy workloads and utilize that cloud can create vulnerabilities. Gartner recently predicted that through 2020, 95% of cloud security failures will be the customer’s fault. This means organizations must be constantly vigilant in their design of cloud-based workloads. Your unique deployment methodology will require you to look at very granular security methods. This could range from very secure cloud-to-cloud connections, to leveraging internal virtual firewalls for better security.
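Securing cloud-to-cloud traffic, mentioned above, often comes down to enforcing verified TLS on every connection between environments. The sketch below uses Python’s standard `ssl` module; the host and port are placeholders for your own endpoints, and this is a minimal illustration rather than a full security architecture:

```python
import socket
import ssl

def secure_client_context():
    """TLS context that requires certificate and hostname verification."""
    # create_default_context() loads the system trust store and enables
    # CERT_REQUIRED plus hostname checking, so a peer that fails validation
    # raises an ssl.SSLError instead of silently connecting.
    return ssl.create_default_context()

def connect_verified(host, port=443, timeout=5):
    """Open a socket to host:port and wrap it in a verified TLS session."""
    sock = socket.create_connection((host, port), timeout=timeout)
    return secure_client_context().wrap_socket(sock, server_hostname=host)
```

The design point is that verification is the default and cannot be skipped accidentally – an unverifiable peer fails loudly rather than degrading to plaintext.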
- Performance and availability stipulations. Although your enterprise cloud will help you optimize performance based on specific metrics – you can also leverage outside help. There are some powerful application and network monitoring solutions (New Relic, AppDynamics, Dynatrace – just to name a few) which can help with performance, end-user issue resolution, and resource forecasting. From there, you can leverage powerful load-balancing technologies which will help you even out the load across your cloud. Most of all, your enterprise cloud must be designed around availability. In that sense, make sure you conduct a business impact analysis (BIA) around your cloud platform. This helps you understand dependencies, how long certain resources can remain down, and which systems must come up first. Never take anything for granted and never assume availability. You should always test out your systems; especially in an enterprise cloud environment.
- Flexibility requirements and service level guarantees. This is a very important consideration. Creating your service level agreement must be a careful process focusing on current and future states. In creating your SLA, work with a provider which can be flexible and adjust to your business needs. If there is a huge penalty or change fee for modifying SLAs, think about it before you sign. Enterprise cloud environments are dynamic, critical systems requiring room for flexibility. The last thing you’d want to do is have to move your entire enterprise cloud ecosystem because of a bad SLA.
- Available budgets and procurement constraints. Does your provider have a stock of spare parts for the servers hosting your critical apps within your enterprise cloud? Can your cloud vendor procure the proper components for your ecosystem on time? What if your cloud experiences a burst and requires more resources – do you have budget to support this? There have been situations where an enterprise cloud required more resources for a given quarter, but there wasn’t budget for it. In those situations, money has to be pulled from somewhere, and it can become a messy situation. In creating an enterprise cloud – make sure you align with your business and ensure that there is enough money to support your environment. The dynamic nature of cloud requires there to be some flexibility when deploying critical cloud applications.
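The burst scenario above can be budgeted for with simple overage math. The rates and units here are hypothetical placeholders – actual pricing models vary by provider – but the shape of the calculation is the same:

```python
def burst_overage_cost(used_units, committed_units, overage_rate):
    """Cost of usage beyond the committed baseline, billed at the on-demand rate.

    Example (hypothetical figures): 1,200 vCPU-hours consumed against a
    1,000-hour commitment at $0.05/hr of overage adds $10.00 of
    unbudgeted spend for the period.
    """
    overage = max(0, used_units - committed_units)
    return overage * overage_rate
```

Running this against a few plausible burst scenarios each quarter gives the business a concrete number to hold in reserve, rather than scrambling for funds mid-quarter.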
Such criteria can be used as a checklist for decision making. However, an important point to note is that ‘hybrid’ does not mean a “cloud free-for-all.” Rather, it means working proactively at the architectural level, ensuring workloads are in the right places and that the right (high bandwidth, low latency, high availability, high security) communications paths exist between them.
The final, and possibly most important, step is finding the right provider to work with when it comes to enterprise cloud deployment. If you require national or regional distribution, work with a partner which can support your distributed needs. If you require certain levels of uptime, make sure you sign up with a partner which can design an SLA that meets your needs. Most of all, an enterprise cloud strategy will require enterprise-level communication with your provider. Creating a proactive support and management model will ensure a healthier cloud ecosystem – one which will be less costly to maintain, and one that will help the business compete in today’s digital economy.