Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions intended to store huge data sets cost-effectively.
A “smart and safe city” initiative was recently launched in Kazan, Russia. The goal of this initiative is to transform the city gradually by creating a network of Internet-connected sensors and devices that will serve its population with greater efficiency and a better quality of life. As an example, connected cameras have been installed in the famous Gorky Park to enhance security and safety.
Kazan provides a real-world example of what the Internet of Things means for the data storage industry. Imagine all the new data the city’s interconnected devices and sensors will generate, and the storage it will require. Current storage approaches are already bursting at the seams, and the requirements for scaling up have proven costly. Service providers need to think about how to accommodate the incoming data deluge at a price they can afford.
Redundancy and Bottlenecks in the Hardware Age
Appliances are the primary building blocks of most modern-day data centers. Storage appliances come with proprietary, mandatory software that is designed for the hardware and vice versa; the two are sold tightly wedded together as a package. The benefits of this configuration include convenience and ease of use.
Redundancy is built into the appliance model to guard against failure: because the appliance relies on a single point of entry, traditional designs include redundant copies of expensive components. This model is effective but expensive. These redundant extra components also bring with them greater energy usage and additional layers of complexity. When companies, in anticipation of growth events like the Internet of Things, begin to consider how to scale out their data centers, costs for this traditional architecture skyrocket.
These standard appliances also suffer from their vertical construction. All requests come in via a single point of entry and are then re-routed. Think about a million users connected to that one entry point at the same time. That is a setup for a bottleneck, which prevents service providers from scaling to the capacity needed to support the Internet of Things.
Freedom from Appliance Dependency in the Software-Defined Age
Another option in data center architecture is software-defined storage (SDS). By taking features typically found in hardware and moving them to the software layer, a software-defined approach to data center architecture eliminates the dependency on server “appliances” with software hard-wired into the system. This option provides the scalability and speed that the Internet of Things demands.
Because software and hardware do not have to be sold together as a package, administrators can choose inexpensive commodity servers. This provides a real cost savings. When coupled with lightweight, efficient software solutions, the use of commodity servers can result in substantial savings for online service providers seeking ways to accommodate their users’ growing demand for storage.
In addition to choosing commodity servers, administrators can also choose the specific components and software that best support their growth goals; they are no longer bound to the software that’s hard-wired into the appliances. While this approach does require more technically trained staff, the flexibility afforded by software-defined storage delivers a simpler, stronger and more tailored data center for the company’s needs.
Storage at Scale
Software-defined storage offers the benefit of scalability as well. A telco servicing one particular area will have different storage needs than a major bank with branches in several countries, and a cloud services host provider will have different needs still. While appliances might be good enough for most of these needs, fully uncoupling the software from the hardware can extract substantial economies of scale.
Using a software-defined approach eliminates the potential for bottlenecks caused by vertical, single-entry-point architecture. Its horizontal architecture streamlines and redistributes data so that it is handled faster and more efficiently, and this non-hierarchical construction can be scaled out easily and cost-effectively.
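The horizontal, non-hierarchical data distribution described above can be sketched with consistent hashing, one common technique for letting any node locate data directly, with no central entry point to bottleneck. This is an illustrative example, not a description of any specific vendor's implementation; the node names and parameters are hypothetical:

```python
import hashlib
from bisect import bisect, insort

class ConsistentHashRing:
    """Toy consistent-hash ring: every client computes an object's
    owner locally, so there is no single routing point."""

    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes   # virtual nodes smooth the key distribution
        self._ring = []        # sorted list of (hash, node) points
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def locate(self, key):
        if not self._ring:
            raise LookupError("empty ring")
        # First ring point at or after the key's hash, wrapping around
        idx = bisect(self._ring, (self._hash(key),)) % len(self._ring)
        return self._ring[idx][1]

# Adding a fourth server remaps only a fraction of the keys --
# horizontal scale-out without funneling traffic through one controller.
ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
keys = [f"object-{n}" for n in range(1000)]
before = {k: ring.locate(k) for k in keys}
ring.add_node("node-d")
moved = sum(1 for k in keys if ring.locate(k) != before[k])
print(f"{moved / len(keys):.0%} of keys moved")  # roughly a quarter, not all
```

Because ownership is computed from the key itself, growing the cluster means adding commodity servers and letting a bounded share of the data rebalance, rather than upgrading a central appliance.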
To accommodate the ballooning ecosystem of storage-connected devices all over the world, service providers, enterprises and telcos need to be able to spread their storage layers over multiple data centers in different locations worldwide. With millions of devices needing to access storage, the current model built around a single point of entry cannot scale to meet that demand. It is becoming increasingly clear that one data center is not enough; storage must instead be distributed so that it can run in several data centers globally.
Technology has produced a veritable Cambrian explosion with the vast web of interconnected sensors and devices known as the Internet of Things. It will touch every industry and organizations of every size, requiring greater storage capacity than ever before. Because traditional data center architecture has been so expensive, service providers in search of a more budget-friendly alternative are finding the answer in software-defined storage. By uncoupling from the hardware and using a horizontal architecture, software-defined storage enables cost-effective scalability and speed, both of which will serve customer needs for the long haul.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.