New Tools Are Simplifying Backup and Replication


Creating a truly distributed environment that is both resilient and agile isn't always easy. Site-to-site replication is far more practical now than it was even a few years ago, but there are still considerations around deploying such a solution. Things get even more interesting when large amounts of data have to be either backed up or replicated over the Wide Area Network (WAN). Fortunately, increased hardware performance and greater bandwidth availability make creating such an environment a realistic goal.

As data is moved from one site to another, numerous variables come into play. Aside from just bandwidth, infrastructure components must be in place to help support this type of initiative. In designing this type of environment, administrators must be aware of the tool sets that they are using.

Using Native Tools

In deploying any type of new technology – especially storage controllers and virtualization solutions – administrators need to start with the native tool set. The first step in deploying a system that will be used in a replication scenario is to understand the environment at a granular level. For storage platforms, native tools can deliver almost all of the functionality an environment may require. However, it's not quite that easy. Vendors like NetApp, EMC, IBM, and VMware all release native tool sets that are extremely diverse and powerful, and the learning process becomes the real challenge for administrators. Before jumping into any sort of third-party tool, admins first need solid knowledge of what they have in front of them. In cases where the technology is new to the company, it's recommended that engineers and even IT managers take training courses to help them better administer their platforms. From a virtualization perspective, native tools can accomplish the following tasks:

  • Isolate the data which needs to be replicated.
  • Set up replication scheduling.
  • Help with bandwidth control.
  • Copy/Clone workloads as needed.
  • Interface directly between storage controllers and virtual platforms.
  • Connect to remote storage systems for replication.
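To make the scheduling and bandwidth-control items above concrete, here is a minimal sketch of how a site-to-site replication job might be assembled with a generic tool such as rsync. This is not any vendor's native tool set; the host names and paths are hypothetical, and rsync's `--bwlimit` flag simply illustrates the idea of capping WAN throughput.

```python
def build_replication_command(source, dest_host, dest_path, bwlimit_kbps=None):
    """Assemble an rsync invocation for a site-to-site copy.

    --archive preserves permissions and timestamps, --delete keeps the
    remote copy in sync with the source, and --bwlimit (optional) caps
    throughput in KiB/s so the WAN link is not saturated.
    """
    cmd = ["rsync", "--archive", "--delete", "--compress"]
    if bwlimit_kbps is not None:
        cmd.append(f"--bwlimit={bwlimit_kbps}")
    cmd.append(source)
    cmd.append(f"{dest_host}:{dest_path}")
    return cmd

# Hypothetical example: replicate an isolated dataset to a DR site,
# capped at roughly 5 MB/s during business hours.
cmd = build_replication_command("/data/vm-exports/", "dr-site.example.com",
                                "/replica/vm-exports/", bwlimit_kbps=5000)
print(" ".join(cmd))
```

A job like this would typically be driven by a scheduler (cron, or the platform's own replication scheduler) rather than run by hand.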

Always take the time to learn what your technology has to offer and how your organization can best utilize it.

Using Third-Party Tools

Once a solid understanding of the native tool set has been established, administrators can look outside of their existing tool bag for some help. In some cases, organizations may need to back up large amounts of data or transfer it over longer distances. Although native tools can help with that, there may be a need to control the process at a very granular level. In those cases, third-party tools that plug directly into your environment can really help. Some examples include products like Veeam or even Microsoft's SCCM/SCOM platform. With third-party tools, administrators gain more granular control over certain portions of their environment, including:

  • Enhanced data migration, replication, and control policies.
  • Enhanced data distribution.
  • Better backup capabilities – onsite and remote.
  • Better control over security.
  • Better resource management and dynamic resource allocation.
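As an example of the granular backup policy control listed above, here is a hedged sketch of a simple retention check – the kind of setting a third-party backup product exposes. The policy counts (seven dailies, four weeklies) are illustrative defaults, not any vendor's actual configuration.

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, daily=7, weekly=4):
    """Return the set of backup dates retained under a simple policy:
    the most recent `daily` daily backups plus the most recent
    `weekly` Sunday (weekly) backups."""
    ordered = sorted(backup_dates, reverse=True)
    keep = set(ordered[:daily])                    # most recent daily copies
    sundays = [d for d in ordered if d.weekday() == 6]
    keep.update(sundays[:weekly])                  # recent weekly (Sunday) copies
    return keep

# Thirty consecutive daily backups ending on a Sunday (June 30, 2024):
today = date(2024, 6, 30)
dates = [today - timedelta(days=i) for i in range(30)]
kept = backups_to_keep(dates)
```

Real products layer monthly and yearly tiers on top of this, but the principle – pruning by policy rather than by hand – is the same.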

There are numerous other third-party tools which can promise to enhance your existing environment. In selecting the right tool set, make sure that it’s able to directly tie into your current infrastructure and can support your organization’s growing needs. Some solutions can have a bit of power behind them – but lack support over the long run. Take appropriate planning steps and test out the software, if possible.

Using Management/Monitoring Tools

Both native and third-party tools are able to offer good amounts of visibility into an environment. However, in some cases, native tools may simply not be enough. In the backup and replication process, there are a lot of components that need to be monitored to ensure that the entire job completes properly. If the native tool set isn't enough, find a good third-party solution that can help. Either way, from a monitoring and management perspective, it's important to have visibility into the following:

  • Bandwidth usage.
  • Data usage.
  • Latency and transmission operations.
  • Resource utilization.
  • Alerts, alarms, and administrative notifications.
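The visibility items above boil down to comparing observed job metrics against thresholds and alerting when they drift. Here is a minimal sketch of that idea; the metric names and threshold values are illustrative, not drawn from any specific monitoring product.

```python
def check_replication_health(metrics, thresholds):
    """Compare observed replication-job metrics against alert thresholds.

    Returns a list of human-readable alerts; an empty list means the
    job is within its expected operating envelope.
    """
    alerts = []
    if metrics["bandwidth_mbps"] < thresholds["min_bandwidth_mbps"]:
        alerts.append(f"Throughput low: {metrics['bandwidth_mbps']} Mbps")
    if metrics["latency_ms"] > thresholds["max_latency_ms"]:
        alerts.append(f"WAN latency high: {metrics['latency_ms']} ms")
    if metrics["cpu_percent"] > thresholds["max_cpu_percent"]:
        alerts.append(f"Resource utilization high: {metrics['cpu_percent']}% CPU")
    return alerts

# Hypothetical sample reading from one replication window:
sample = {"bandwidth_mbps": 12.0, "latency_ms": 180, "cpu_percent": 45}
limits = {"min_bandwidth_mbps": 20.0, "max_latency_ms": 150, "max_cpu_percent": 90}
for alert in check_replication_health(sample, limits):
    print(alert)
```

In practice these alerts would feed a notification channel (email, SNMP trap, or the monitoring platform's own alarm system) so an administrator sees the problem before the job fails.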

Having a proactive eye on a backup and replication process is essential since catching a problem early in the process can mean the difference between a hiccup and downtime.

Bringing It All Together

It’s important to understand that native and third-party tools aren’t an either/or matter. Good administrators will leverage the power of both to better accomplish their duties. A solid backup and replication plan will certainly include both approaches: where the native tools fall short, the third-party provider can take over. In combining the two, administrators should always understand how the solutions operate. Storage, bandwidth, and computing power are all very precious – and often very expensive – IT resources. Because of that, using a variety of tools to best control an environment can save time and money and reduce management overhead.

The power of technology comes in the flexibility of its design. This means that if engineers deploy a well-planned storage replication infrastructure, they’ll have more control over what they need to do. Flexibility doesn’t just mean growth capabilities. An environment that is capable of integrating with third-party tools is able to stay agile. Although there are many great products out there that help control replication, backup, and recovery, native tools should never be disregarded. For an organization to truly be elastic, there has to be an ability to adapt to both the needs of the business and the demands of the market.

About the Author

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. His architecture work includes virtualization and cloud deployments as well as business network design and implementation. Currently, Bill works as the National Director of Strategy and Innovation at MTM Technologies, a Stamford, CT based consulting firm.
