Werner Vogels, CTO, Amazon, speaking at AWS re:Invent 2015 in Las Vegas (Photo: AWS)

What 2016’s Top Data Center Stories Say About the Industry

After we ran the list of the most popular stories that appeared on Data Center Knowledge this year, we couldn’t help pondering the reasons those stories resonated with so many people. The most obvious reason that applies to all of them is that they illustrate some of the biggest changes the data center industry is undergoing.

Here are our thoughts on what those changes are and how some of our stories illustrate those macro-level trends.

The Days of Cloud Doubt are Gone

In February, a short blog post by Yuri Izrailevsky, who oversees cloud and platform engineering at Netflix, notified whoever cared that the online movie streaming pioneer had completed its migration from its own (or leased) data centers to AWS. As it turned out, a lot of people cared. This was hands-down the most widely read story we ran this year.

What Netflix’s cloud migration tells us is that you can definitely rely on public cloud to provide a high-quality digital service at global scale and do it cost-effectively. The factors a company has to weigh as it decides whether or not to outsource to a public cloud provider no longer include whether it will work or not. It’s now primarily a cost calculation, with some compliance considerations.
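For readers who want to see what "primarily a cost calculation" looks like in its simplest form, here is a toy break-even sketch: a fixed monthly cost for running your own (or leased) footprint versus a cloud bill that scales with usage. Every number and name below is hypothetical, purely for illustration.

```python
# Hypothetical break-even sketch: own/leased footprint vs. public cloud.
# All figures are invented for illustration, not real pricing.

OWN_FIXED_MONTHLY = 50_000    # colo lease, power, staff ($/month, hypothetical)
CLOUD_COST_PER_UNIT = 0.12    # $ per compute-unit-hour (hypothetical)

def cheaper_option(units_per_month: int) -> str:
    """Return which option costs less at a given monthly usage level."""
    cloud_bill = units_per_month * CLOUD_COST_PER_UNIT
    return "cloud" if cloud_bill < OWN_FIXED_MONTHLY else "own"

print(cheaper_option(100_000))    # light usage -> "cloud"
print(cheaper_option(1_000_000))  # heavy, steady usage -> "own"
```

The real calculation is far messier (egress fees, reserved pricing, staffing, compliance), but the shape is the same: variable cloud spend against largely fixed in-house spend.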

Not only did the migration underline the fact that the years of cloud doubt are well behind us, but it also demonstrated that this approach to outsourcing infrastructure can help a company rapidly expand its service on a global scale. Izrailevsky’s announcement came one month after Netflix announced it was launching its service in more than 130 new countries simultaneously. There’s no doubt that AWS was instrumental in its ability to scale so quickly.

It’s worth noting that Netflix still operates a network of content caching sites in leased colocation data centers, according to a source involved in the company’s infrastructure operations.

Cloud Infrastructure is a Giants’ Game

While the Netflix story is an illustration of the cloud’s inevitability, the second most-popular story illustrates that it’s really hard to build a successful public cloud business nowadays, even for a company as big as Verizon. In February, the company announced that it was shutting down its public cloud service, giving customers two months to move their data elsewhere.

Verizon was just one of the giants that tried and failed to compete in cloud infrastructure services with Amazon, Microsoft, and increasingly Google. General-purpose cloud infrastructure efforts by HPE, Dell, Rackspace, and most recently Cisco all met the same fate.

The leading hyperscale cloud providers have built infrastructure so vast and brought prices down so low that others who want to play in the market have to either differentiate by building highly specialized niche offerings or build services around the cloud offerings of the handful of hyperscalers. Even then, there’s no guarantee that the hyperscalers will not one day build a specialized offering targeted at your niche or add a managed service that will compete directly with yours.

See also: Top Cloud Providers Made $11B on IaaS in 2015, but It’s Only the Beginning

No Stress Test is Too Bold

Hurricane Sandy’s massive disruption of internet infrastructure on the East Coast made the people overseeing Facebook’s data centers realize that they had to think about availability differently than they had been.

You can build redundant systems within a single site and design sophisticated multi-site failover schemes, but you don’t really know how well all this protection will work until an incident occurs. And since you never know how big an incident you’re going to face, no stress test is too bold.

That’s why Facebook’s infrastructure team has made it a regular practice to intentionally shut down entire data centers to see how well failover systems perform. Our coverage of the lessons-learned presentation about this practice by Jay Parikh, Facebook’s head of engineering and infrastructure, was one of the most read and shared stories this year.
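The core question behind such a drill is simple: if an entire site goes dark, can the rest of the fleet absorb its load? This is not Facebook’s actual tooling, just a toy sketch of that check, with invented site names and capacity numbers.

```python
# Toy sketch of a data-center "drain" drill: take one site offline and
# check whether the remaining sites can absorb the total load.
# Site names and capacity figures are hypothetical.

REGIONS = {"site_a": 40, "site_b": 35, "site_c": 30}  # capacity units
TOTAL_LOAD = 70                                       # load to serve

def survives_loss(regions: dict, lost: str, load: int) -> bool:
    """Return True if remaining capacity still covers the load."""
    remaining = sum(cap for name, cap in regions.items() if name != lost)
    return remaining >= load

for site in REGIONS:
    status = "OK" if survives_loss(REGIONS, site, TOTAL_LOAD) else "OVERLOADED"
    print(f"drill: lose {site} -> {status}")
```

A real drill, of course, tests far more than raw capacity: failover automation, traffic routing, and the human runbooks around them, which is exactly why teams exercise it live rather than on paper.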

No, Data Center Energy Use Is Not Out of Control

One of the most popular stories on DCK this year was perhaps also the most eye-opening one about the data center industry. Contrary to what most assumed, the rate of growth in total energy consumption by all US data centers has slowed substantially in recent years, even as data center capacity has grown at an unprecedented pace.

That’s according to a study by the US Department of Energy in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University. The last time the US government conducted such a study was in 2007. That study showed total data center energy consumption in the country growing by 24 percent over the preceding five years, so even the authors of the new study were surprised to find that consumption grew by only 4 percent between 2010 and 2014.
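Because the two cumulative figures cover periods of different lengths, they are easier to compare as compound annual growth rates. A quick sketch of that arithmetic (the period lengths are taken from the figures above):

```python
def cagr(total_growth: float, years: int) -> float:
    """Convert a cumulative growth figure into a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

# 24% cumulative growth over the five years before the 2007 study
print(f"{cagr(0.24, 5):.1%}")  # roughly 4.4% per year
# 4% cumulative growth over the four years from 2010 to 2014
print(f"{cagr(0.04, 4):.1%}")  # roughly 1.0% per year
```

On an annualized basis, growth slowed from roughly 4.4 percent per year to about 1 percent per year.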

The researchers attributed the slow-down to two macro-level industry trends: improvements in energy efficiency and the emergence of hyperscale data centers. Servers have gotten a lot more efficient, crunching more data using less energy. That and the widespread use of server virtualization meant that even though demand for compute capacity grew, fewer physical servers were required to deliver it. The 2008 market crash also contributed to the slow-down in global server shipment growth.

Hyperscale data centers built by cloud and internet giants as well as the major colocation service providers are a lot more efficient than the typically older and smaller enterprise facilities. Annual server shipment growth between 2010 and 2014 was 3 percent, but most of the servers responsible for that growth went into hyperscale data centers rather than single-user enterprise ones.

See also: Here’s How Much Water All US Data Centers Consume

Data Center Switch Incumbents Aren’t Disruption-Proof

In 2016, LinkedIn emerged as the newest hyperscale data center player, complete with custom data center designs and home-grown servers and switches. One of the biggest DCK stories this year was coverage of the professional social network’s announcement that it had designed its own 100G data center switch. Facebook also recently designed its own 100G switch.

This illustrates two trends. One: 100G networking is arriving in the hyperscale data center in a big way. Two: incumbent vendors’ data center networking market share no longer looks safe from the kind of disruption the server market underwent when hyperscalers and ODMs cut out the middlemen.

See also: With Its 100G Switch Facebook Sees Disaggregation in Action

About the Author

San Francisco-based business and technology journalist. Editor in chief at Data Center Knowledge, covering the global data center industry.
