Stefan Bernbo is Founder and CEO of Compuverde.
A recent data forecast from Cisco predicts that mobile data traffic will grow ten-fold globally from 2014 to 2019 – a compound annual growth rate of 57 percent. By 2019, 57 percent of mobile connections will be “smart” connections, up from 26 percent in 2014. Add the growth of mobile devices and cloud-based services, and it’s enough to make a database administrator’s head spin trying to figure out where all that data is going to be stored.
It is obvious that more storage will be required than traditional architectures can provide.
These architectures have bottlenecks that, while merely inconvenient for legacy data, are simply untenable for the scale of storage needed today. To adapt to this exponential growth trajectory, major enterprises are deploying web-scale architectures that enable virtualization, compute and storage functionality on a tremendous scale.
Overcoming the Single-Point-of-Entry Challenge
A bottleneck that functions as a single point of entry can become a single point of failure, especially with the demands of cloud computing on Big Data storage. Adding redundant, expensive, high-performance components to alleviate the bottleneck, as most service providers presently do, adds cost and complexity to a system very quickly. However, a horizontally scalable web-scale system designed to distribute data among all nodes makes it possible to choose cheaper, lower-energy hardware while eliminating bottlenecks.
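One common technique for distributing data among all nodes without a single point of entry is consistent hashing. The sketch below is illustrative only – it is not Compuverde's implementation, and the node names and virtual-node count are hypothetical – but it shows why any node can locate any object without a central index:

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Map object keys to storage nodes so that load spreads evenly
    and any node can compute an object's location independently,
    with no single entry point to bottleneck or fail."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets many "virtual" positions on the ring
        # so data spreads evenly even with few nodes.
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}:{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first ring position at or past the key's hash.
        idx = bisect(self._ring, (self._hash(key),)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c", "node-d"])
owner = ring.node_for("user42/photo.jpg")  # same answer from every node
```

Because placement is a pure function of the key, there is no gateway appliance to buy in duplicate: cheap commodity nodes can each answer "where does this object live?" on their own.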
This is a huge win for cloud providers, which must manage far more users and greater performance demands than do enterprises. While the average user of an enterprise system demands high performance, these systems typically have fewer users, and those users can access their files directly through the local network. Furthermore, enterprise system users are typically accessing, sending and saving relatively low-volume files like documents and spreadsheets, using less storage capacity and alleviating performance load.
Outside the enterprise, though, the situation is quite different. The system is being accessed simultaneously over the Internet by exponentially more users, which itself becomes a performance bottleneck. The cloud provider’s storage system not only has to scale to each additional user, but must also maintain performance across the aggregate of all users. Significantly, the average cloud user is accessing and storing far larger files – music, photo and video files – than does the average enterprise user. Web-scale architectures are designed to prevent the bottlenecks that this volume of usage causes in traditional legacy storage setups.
Scaling Storage Economically
Freedom from reliance on specialized hardware is important for web-scale architecture. Since hardware inevitably fails at any number of points within a machine, traditional appliances – storage hardware with proprietary software built in – typically include multiple copies of expensive components to anticipate and tolerate failure. These extra layers of identical hardware drive up energy costs and add complexity to a single appliance. Because the cost per appliance is quite high compared with commodity servers, cost estimates often skyrocket when companies begin examining how to scale out their data centers. One way to avoid this is to run software-defined vNAS or vSAN in a hypervisor environment, both of which offer ways to build out servers at a web-scale rate.
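The cost gap can be made concrete with a back-of-envelope model. Every figure below is a placeholder, not vendor pricing; the point is only that per-unit cost and redundancy overhead multiply as capacity scales out:

```python
import math

def scale_out_cost(capacity_tb, unit_tb, unit_cost, overhead=1.0):
    """Illustrative cost model: number of units needed for the target
    capacity, times unit cost, times a multiplier for redundant
    components and energy. All inputs are hypothetical."""
    units = math.ceil(capacity_tb / unit_tb)
    return units * unit_cost * overhead

# Placeholder figures for a 1 PB build-out (not real prices):
appliance = scale_out_cost(1000, unit_tb=100, unit_cost=120_000, overhead=1.3)
commodity = scale_out_cost(1000, unit_tb=50, unit_cost=8_000, overhead=1.1)
```

Even granting the appliance a density advantage and the commodity build its own redundancy overhead, the multiplied per-unit premium is what makes appliance-based scale-out estimates balloon.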
Solving Problems at the Storage Level
To accommodate web-scale architecture, distributed storage offers the best model – even though the trend has been to move toward centralization. This is because there are now ways to improve performance at the software level that neutralize the performance advantage of a centralized data storage approach.
To minimize load times, service providers need data centers located across the globe, since users access cloud services from anywhere at any time. Global availability, however, brings its own challenges. Each user’s load is served by the data center in that user’s region, yet the data stored in every location must stay in sync. From an architectural point of view, it’s important to solve these problems at the storage layer instead of further up at the application layer, where solutions become more difficult and complicated.
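One simple way multi-region sync can work at the storage layer is timestamp-based, last-writer-wins replication. This is a minimal sketch under that assumption – real systems (Compuverde's included) may use more sophisticated conflict resolution – with hypothetical region names:

```python
import time

class RegionStore:
    """Sketch of storage-layer sync: each region accepts writes locally,
    tags them with a timestamp, and replicates asynchronously.
    Conflicts resolve last-writer-wins, so regions converge."""

    def __init__(self, name):
        self.name = name
        self.data = {}  # key -> (timestamp, value)

    def put(self, key, value, ts=None):
        ts = time.time() if ts is None else ts
        current = self.data.get(key)
        # Keep only the newest write for each key.
        if current is None or ts > current[0]:
            self.data[key] = (ts, value)

    def replicate_to(self, other):
        # Push every (timestamp, value) pair; the receiver's put()
        # discards anything older than what it already holds.
        for key, (ts, value) in self.data.items():
            other.put(key, value, ts)

eu = RegionStore("eu-west")
us = RegionStore("us-east")
eu.put("profile/42", "v1", ts=100)
us.put("profile/42", "v2", ts=200)  # a later write in another region
eu.replicate_to(us)
us.replicate_to(eu)
# Both regions now hold the later write, "v2".
```

Because convergence happens inside the store itself, every application above it simply reads and writes locally – nothing at the application layer has to know other regions exist.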
Events like natural disasters that cause power outages can take a local server farm offline, which means that global data centers must be resilient. If a local data center or server goes down, the remaining data centers must reroute traffic quickly to available servers to minimize downtime. While there are certainly solutions today that address these problems, they do so at the application layer. Attempting to solve these issues that high in the data center infrastructure stack – instead of at the storage level – brings significant cost and complexity disadvantages. Solving them directly at the storage level through web-scale architectures delivers significant gains in efficiency, time and cost.
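The rerouting behavior described above can be sketched as a preference-ordered failover: requests go to the nearest data center that is still healthy. The data center names and the router itself are hypothetical, purely to illustrate the logic:

```python
class FailoverRouter:
    """Route each request to the nearest healthy data center,
    falling back down an ordered preference list when one goes dark."""

    def __init__(self, preference):
        self.preference = list(preference)  # ordered, nearest first
        self.healthy = set(preference)

    def mark_down(self, dc):
        self.healthy.discard(dc)

    def mark_up(self, dc):
        self.healthy.add(dc)

    def route(self):
        for dc in self.preference:
            if dc in self.healthy:
                return dc
        raise RuntimeError("no data center available")

router = FailoverRouter(["eu-west", "us-east", "ap-south"])
router.mark_down("eu-west")  # e.g. a power outage takes the local farm offline
target = router.route()      # traffic fails over to the next-nearest region
```

When this decision lives in the storage layer, recovery is automatic and uniform; pushed up to the application layer, every application must reimplement it.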
Future-Proofed Through Web-Scale
The exponential demand for more storage means that companies can no longer rely on expensive, inflexible appliances in their data centers and remain financially viable. Sticking with appliances would force them to lay out significant funds to build the storage capacity they need to meet customer demand.
Having an expansive, rigid network environment locked into configurations determined by an outside vendor severely curtails an organization’s ability to react nimbly to market demands, much less anticipate them. Web-scale storage philosophies enable major enterprises to “future proof” their data centers. Since the hardware and the software are separate investments, either can be switched out for a better, more appropriate option as the market dictates, and at minimal cost.
This new model is necessarily the future of storage for organizations faced with the volumes of data that the modern world presents. Software-defined storage and hyper-converged infrastructures create an agile and cost-effective framework for major enterprises, global organizations and internet service providers to serve their constituents with high performance in a distributed framework that won’t break the bank.