Today, a slew of disruptive trends is placing enormous demands on existing networks and data center infrastructure. Requirements for speed and bandwidth have grown so far, so fast, that the IT world simply can't keep up.
The answer doesn't lie in Band-Aid fixes or short-term plans. The solution, says Data Center World speaker Chris Crosby, requires a completely new data center strategy with a shelf life of at least five years. It's a huge red flag if that subject isn't being broached by folks in the data center or by the C-suite in the boardroom, says the founder and CEO of Compass Datacenters.
So, what's causing all the hoopla? It pretty much boils down to a whole lot of people trying to access the same data, files, or movies at the same time. It's no different than the too-many-cars-on-the-freeway scenario, or trying to connect an Xbox to the Internet to play the new version of Halo on Christmas night.
The results are bottlenecks, lag, or downtime, and a lot of unhappy campers. It's one thing to talk about the need for speed when it comes to traffic or streaming video; it's a whole different issue when businesses, hospitals, or governments come to a screeching halt.
Start with the BYOD trend, where mobile workers use every device they own, from every corner of the earth, and expect lightning-fast access to data. Never mind the security issues these willy-nilly connections cause.
Then there's the shift to software-as-a-service, where an application resides either on a public cloud like AWS or on a shared network site. In either scenario, end users in all different regions of the world are logging into the same cloud space. Crosby suggested that the influx of millennials in the workplace will only exacerbate the situation.
Calling the new generation "the elephant in the room," he talked about how younger people today have always worked on computers and will switch Internet or cable service providers on a dime based on their response times.
And then there's a force equal to all of the above combined: the Internet of Things (IoT). Tens of millions more wireless-enabled devices will be sending and receiving enormous volumes of information. What do the IoT, streaming video, and cloud applications have in common? They all require real-time processing, explained Crosby.
“Many existing data centers and their supporting network structures weren’t designed and built to effectively process the heterogeneous volumes of data,” he said.
What Crosby says needs to happen—not over a week or month—is a long-term shift from a centralized data center strategy to a stratified system of data centers.
“Big, centralized facilities that are built to process huge amounts of data and that rely on the network to reach out to where customers and IoT devices are may work for certain types of applications, but you couldn’t design a worse data center infrastructure for IoT because it leads to significant latency, flexibility, and computing load issues. The bottom line is that IoT doesn’t like this kind of infrastructure.”
A stratified system of data centers, as described by Crosby, includes edge facilities and even micro data centers very close to where traffic is being sent and received.
“There’s still a central data center or two at the center of the hub, but having these additional strata of localized facilities is a better fit for the processing, data exchange, storage and other support that IoT depends on to work as expected.”
Think of micro data centers as "filtering agents" that determine what data flows where, delivering data as directly to end users as possible. This sharply reduces the latency issues that come with centralized data centers.
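To make the "filtering agent" idea concrete, here is a minimal sketch of the kind of routing decision a stratified system implies: serve a request from the closest micro or edge facility that can handle it, and fall back to the central data center only when no local tier fits. The facility names, tiers, latency figures, and the `route_request` function are all illustrative assumptions, not anything Crosby or Compass Datacenters describes.

```python
# Hypothetical sketch of stratified routing: prefer a micro/edge facility in
# the request's region; send heavy batch work (or anything with no local
# option) to the central tier. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    region: str
    tier: str          # "micro", "edge", or "central"
    latency_ms: float  # assumed round-trip latency from the request's region

def route_request(workload, region, facilities):
    """Pick the lowest-latency facility that can serve the workload."""
    if workload == "batch-analytics":
        # Big, non-interactive jobs go to the central hub.
        candidates = [f for f in facilities if f.tier == "central"]
    else:
        # Real-time traffic prefers a local micro/edge site.
        local = [f for f in facilities
                 if f.region == region and f.tier != "central"]
        candidates = local or [f for f in facilities if f.tier == "central"]
    return min(candidates, key=lambda f: f.latency_ms)

facilities = [
    Facility("micro-dallas", "us-south", "micro", 4.0),
    Facility("edge-atlanta", "us-east", "edge", 9.0),
    Facility("central-chicago", "us-central", "central", 38.0),
]

print(route_request("video-stream", "us-south", facilities).name)    # micro-dallas
print(route_request("batch-analytics", "us-south", facilities).name) # central-chicago
```

The point of the sketch is the fallback structure: the stratified system only reaches back to the central facility when the edge cannot do the job, which is exactly how the added strata cut latency for real-time IoT traffic.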
The bottom line is that what works today won’t work tomorrow. The massive volumes of data will continue to place unprecedented demands on data centers and network architectures.