Next-Gen Agility Avoids Unplanned Downtime

Reducing unplanned downtime, next-generation “2.0” testing and migration combine into an approach that enables IT to overcome today’s challenges and deliver next-gen agility.

Rick Vanover is Director of Product Strategy for Veeam Software.

These are crazy days for IT staff at all levels. The pace of change is unlike anything we have seen before. Data volume is at an all-time high and continually growing. Expectations for IT are dizzying. Organizations need agility, yet one wrong move can cause unplanned downtime and jeopardize service levels.

Take heart. There is a way to handle the challenges of today and the unforeseen ones of tomorrow, and it has a reassuring set of qualities that should appeal to any organization. It revolves around three vital areas: reducing unplanned downtime, next-generation “2.0” testing and migration. Together they yield an approach that delivers the agility IT needs to stay ahead of issues and increase competitiveness.

Anything IT pros can do to avoid unplanned downtime is a good idea, and the best way to avoid it is to plan. By that I mean leveraging the data and resources already in place, with some isolation and automation, to simulate changes and accommodate new developments in the data center before they touch production.

How many times have you asked for a change window, only to cancel it because the work took longer than anticipated due to unforeseen requirements? Simulating the scenario in a data lab is a great way to go into a change completely prepared for how it will impact production systems. Better still, this approach increases the confidence of success when downtime is requested. After all, the worst that can happen is having to ask for a maintenance window again because the first attempt didn’t go as planned.
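
To make the lab rehearsal concrete, here is a minimal Python sketch (the step names and sleep calls are hypothetical placeholders for your real change commands) that times each step of a runbook in the lab and suggests a padded maintenance window:

```python
import time

# Hypothetical runbook steps rehearsed in the isolated lab; replace the
# lambdas with the actual commands used for the production change.
RUNBOOK = [
    ("Stop application services", lambda: time.sleep(1)),
    ("Apply database upgrade",    lambda: time.sleep(2)),
    ("Run smoke tests",           lambda: time.sleep(1)),
    ("Restart application",       lambda: time.sleep(1)),
]

def rehearse(runbook, padding=1.5):
    """Time each step in the lab and suggest a padded maintenance window."""
    total = 0.0
    for name, step in runbook:
        start = time.monotonic()
        step()
        elapsed = time.monotonic() - start
        total += elapsed
        print(f"{name}: {elapsed:.1f}s")
    print(f"Measured total: {total:.1f}s; "
          f"request a window of ~{total * padding:.0f}s")

if __name__ == "__main__":
    rehearse(RUNBOOK)
```

Running the rehearsal a few times in the lab gives a measured duration to put in the change request, rather than a guess that later forces a cancellation.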

Given all of the changes in the data center and the capabilities of the cloud, we need a next-gen approach to testing in order to ensure things go as planned. For example, consider a company that needs to make a critical change to a multi-tiered application, such as upgrading from SQL Server 2014 to SQL Server 2016 while also moving to Windows Server 2016 to meet new requirements for a line-of-business application.
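
As a sketch of the kind of pre-flight check such an upgrade calls for, the following Python snippet (assuming the pyodbc package, a SQL Server ODBC driver, and a hypothetical lab server name) reads the server version and each database’s compatibility level, values worth verifying before and after a SQL Server 2014-to-2016 move:

```python
import pyodbc  # pip install pyodbc; requires a SQL Server ODBC driver

# Hypothetical connection string pointing at the *lab copy*, not production.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=lab-sql01;DATABASE=master;Trusted_Connection=yes;"
)

with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()

    # Server version string, e.g. "Microsoft SQL Server 2014 ..."
    cur.execute("SELECT @@VERSION")
    print(cur.fetchone()[0].splitlines()[0])

    # Compatibility level per database: 120 = SQL Server 2014, 130 = 2016.
    cur.execute("SELECT name, compatibility_level FROM sys.databases")
    for name, level in cur.fetchall():
        print(f"{name}: compatibility_level={level}")
```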

There are a lot of moving parts in this scenario, and some of the changes are hard to reverse. In fact, I’ve seen situations where the Active Directory domain functional level needs to be raised, which is even harder to undo once you’ve started.
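
Before touching a functional level, it helps to know exactly where you stand; the rootDSE of any domain controller advertises it. Here is a minimal sketch using the ldap3 package (the domain controller name is hypothetical, and some hardened domains refuse anonymous rootDSE reads):

```python
from ldap3 import Server, Connection, ALL  # pip install ldap3

# Hypothetical domain controller; rootDSE is often readable without binding
# with credentials, though hardened environments may require them.
server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server, auto_bind=True)

# The rootDSE reports functional levels as small integers:
# 5 = Windows Server 2012, 6 = 2012 R2, 7 = 2016.
info = server.info.other
print("Domain functional level:", info["domainFunctionality"][0])
print("Forest functional level:", info["forestFunctionality"][0])
```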

How do we test in a way that’s non-disruptive to production and allows organizations to tackle the upgrade challenge easily? As it turns out, there’s a robust, easy way to test changes to applications, ensuring organizations won’t be stuck with “dinosaurs” in the data center or blocked from going to the cloud. This can be accomplished with an automation layer that isolates the test environment from production network traffic (and even reproduces the production network in isolation) while still using production-class compute, storage and memory resources. The agility to upgrade and keep line-of-business applications modern will pay dividends in the future.
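
A minimal sketch of the verification step inside such an isolated lab, using only the Python standard library (the lab hostname and endpoints are hypothetical): once the copy of the application is up on the isolated network, run a smoke test against it and fail loudly before anyone schedules the production change.

```python
import sys
import urllib.request

# Hypothetical endpoints exposed by the application copy inside the
# isolated lab network; production is never touched.
LAB_CHECKS = [
    "http://lab-app01:8080/health",
    "http://lab-app01:8080/login",
]

def smoke_test(urls, timeout=5):
    """Probe each lab endpoint and return the number of failures."""
    failures = 0
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = 200 <= resp.status < 300
        except OSError:
            ok = False
        print(f"{'PASS' if ok else 'FAIL'} {url}")
        if not ok:
            failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if smoke_test(LAB_CHECKS) else 0)
```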

Migrating applications and systems is the last piece needed to complete the puzzle. But first, ask yourself a few questions: If a change has been fully tested, do we have what we need to migrate it to production? If it makes sense to run a workload in the public cloud, do we have the technology to restore workloads to Microsoft Azure or Amazon Web Services? Keep in mind, this also includes moving workloads between different on-premises platforms.
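
As one small illustration of the “do we have what we need” question on the AWS side (assuming the boto3 package, configured credentials, and hypothetical instance-type requirements), a script can confirm the target region actually offers the instance types the restored workloads would need:

```python
import boto3  # pip install boto3; assumes AWS credentials are configured

# Hypothetical requirements for the workloads being considered for migration.
TARGET_REGION = "us-east-1"
REQUIRED_TYPES = {"m5.xlarge", "r5.2xlarge"}

ec2 = boto3.client("ec2", region_name=TARGET_REGION)
resp = ec2.describe_instance_type_offerings(
    LocationType="region",
    Filters=[{"Name": "instance-type", "Values": sorted(REQUIRED_TYPES)}],
)
offered = {o["InstanceType"] for o in resp["InstanceTypeOfferings"]}

missing = REQUIRED_TYPES - offered
if missing:
    print(f"Not offered in {TARGET_REGION}: {', '.join(sorted(missing))}")
else:
    print(f"All required instance types are available in {TARGET_REGION}.")
```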

One thing I’ve noticed over the years is that organizations don’t necessarily move to the cloud to save money; they do so because it is the right platform. The same goes for applications that are not virtualized today. There is likely a good reason those systems are still running on bare metal.

At a higher level, these advanced technical capabilities add up to a broader data management strategy that allows companies to adjust to changing conditions with ease. Yes, it’s critical to reduce unplanned downtime, but it’s also essential to use new testing and migration capabilities to fully vet changes and eliminate production disruption. These capabilities are aligned with the expectations placed on IT services today and offer the best set of options for being ready tomorrow. It’s all about next-gen agility.

What do you see as the biggest challenge to managing downtime, testing and migration in your environment? 

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.

 
