Planning to Succeed in the Face of Failure: Big Data and the Data Center

What defines Big Data is still up for debate, writes K.G. Anand of Avocent. Regardless of its definition, data must be kept both secure and available in your data center.

K.G. Anand is director, Global Solutions and Product Marketing, Avocent Products and Services, Emerson Network Power. He leads these efforts globally for the company’s data center hardware, software and services offerings.

In our high-tech world, it feels as though a new acronym (or “buzzcronym”) springs up every day, and it occurs to me that the most successful terms are those we use without a second thought. I believe this happens when a term becomes more than marketing lingo and instead represents something of real value.

Cloud computing, for example, was just a term several years ago. Today, like it or not, it has come to represent an entire realm of functionalities (each with its own acronym, of course).

Likewise, other buzzcronyms have come to represent things we take for granted today, like SAN/NAS in storage, IDS/IPS in security and yes, Big Data. This term is seemingly everywhere.

Data security no matter the size

What defines Big Data is still up for debate. One prominent analyst firm says all data is Big Data to the companies that need it to operate. Others talk about enormous amounts of unstructured data, like video. I would suggest, however, that regardless of the size of your data or even what you call it, it must be kept both secure and available in your data center.

As such, it is incumbent upon you to increase the efficiency of your data center and improve the availability of the underlying infrastructure that provides access to your data. Centralized management has proven itself to be a reliable method of achieving this goal, allowing you to monitor and access the devices throughout your data center, regardless of their type, vendor or location.
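The idea behind centralized management can be sketched in a few lines: collect every device into one inventory, regardless of vendor or type, and report availability by location. The device names, vendors and the `reachable` flag below are purely illustrative; in a real deployment that flag would be set by an SNMP, IPMI or ping probe.

```python
# Minimal sketch of vendor-agnostic centralized monitoring.
# Device names, addresses, and the reachability logic are illustrative only.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    vendor: str
    location: str
    reachable: bool  # in practice, set by an SNMP/IPMI/ping probe

def availability_report(devices):
    """Group device status by location, regardless of vendor or type."""
    report = {}
    for d in devices:
        report.setdefault(d.location, []).append((d.name, d.vendor, d.reachable))
    return report

inventory = [
    Device("ups-01", "VendorA", "row-3", True),
    Device("switch-07", "VendorB", "row-3", False),
    Device("pdu-12", "VendorC", "row-5", True),
]

for location, entries in sorted(availability_report(inventory).items()):
    down = [name for name, _, ok in entries if not ok]
    status = "DEGRADED" if down else "OK"
    print(f"{location}: {status} ({len(entries)} devices)")
```

The point of the single inventory is that a degraded row shows up in one report, whichever vendor's gear caused it.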

Another way to reduce your risk is to optimize your Disaster Recovery and Business Continuity strategy. Remote access to your network assets enables you to make things right when they go wrong, in a fraction of the time it would take to physically visit the site of the failure. Conversely, remote management also lets you shut things down in a hurry, if that is the best course of action.
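The choice that paragraph describes, repair remotely when you can and shut down remotely when you must, amounts to a small decision rule. The severity scale, the isolation flag and the action names below are all illustrative assumptions, not a prescribed policy.

```python
# Hedged sketch: deciding between remote repair and remote shutdown
# during an incident. Thresholds and action names are illustrative.
def respond_to_failure(severity, can_isolate):
    """Return the remote action for a failed asset.

    severity: 1 (minor) .. 5 (critical); can_isolate: True if the
    fault can be contained without affecting dependent services.
    """
    if severity <= 2:
        return "remote-restart"        # make things right without a site visit
    if can_isolate:
        return "failover-then-repair"  # shift load, then remediate
    return "emergency-shutdown"        # best course when risk outweighs uptime

print(respond_to_failure(1, True))   # a minor fault
print(respond_to_failure(5, False))  # an uncontainable critical fault
```

Whatever the exact thresholds, the value of remote management is that either branch executes in seconds rather than the hours a site visit would take.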

Finally, remote monitoring of your devices allows you to determine how each is operating and what resources it is consuming, and even to identify potential power overloads before they happen. To do that, you need to monitor down to the individual node level, with automatic updates by zone, rack, PDU, outlet and more.
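Node-level monitoring of this kind is essentially an aggregation problem: roll outlet-level readings up to each PDU and flag any load approaching capacity before it trips. The readings, capacities and the 80 percent warning threshold below are illustrative assumptions, not vendor defaults.

```python
# Sketch of node-level power monitoring: roll outlet readings up to
# PDU totals and warn on loads approaching capacity, before an
# overload occurs. All figures here are illustrative.
WARN_FRACTION = 0.8  # assumed warning threshold: 80% of rated capacity

# (rack, pdu, outlet) -> watts drawn, as a monitoring agent might report
readings = {
    ("rack-1", "pdu-a", "outlet-1"): 350,
    ("rack-1", "pdu-a", "outlet-2"): 460,
    ("rack-1", "pdu-b", "outlet-1"): 120,
}
pdu_capacity_w = {("rack-1", "pdu-a"): 1000, ("rack-1", "pdu-b"): 1000}

def pdu_loads(readings):
    """Aggregate outlet-level watts up to each (rack, pdu) pair."""
    totals = {}
    for (rack, pdu, _outlet), watts in readings.items():
        totals[(rack, pdu)] = totals.get((rack, pdu), 0) + watts
    return totals

for (rack, pdu), load in sorted(pdu_loads(readings).items()):
    cap = pdu_capacity_w[(rack, pdu)]
    if load >= WARN_FRACTION * cap:
        print(f"{rack}/{pdu}: WARNING, {load} W of {cap} W capacity")
```

Because the warning fires on the aggregate rather than any single outlet, the overload is caught while there is still time to rebalance the load.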

When all is said and done, it comes down to delivering a consistent level of service that your business demands. In a world filled with buzzcronyms that actually mean something, that is doubly important.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
