Twitter data centers, like those of Google, Facebook, and other so-called “web-scale” server farms, house custom servers designed by the company’s own staff of hardware engineers and optimized for its own applications.
Using hardware designed for a particular company’s needs and produced by original design manufacturers in Asia is now a common approach to infrastructure that must deliver web services on a global scale. It’s no huge surprise that Twitter uses custom servers, but the company’s head of engineering, Alex Roetter, recently confirmed it to Wired.
Unlike Facebook, which makes most of its hardware and data center designs available for public consumption through its open source community called the Open Compute Project, Twitter rarely shares details about its infrastructure.
Sharing Infrastructure Software Tools
That’s not to say it doesn’t share anything. The company relies heavily on open source software and has open sourced many of the tools it created in-house.
The most widely known implementation of Apache Mesos runs in Twitter data centers. Benjamin Hindman, one of the open source cluster management system’s creators, is credited with deploying Mesos during his time at Twitter and making the notorious “Fail Whale” largely a thing of the past.
Another example of Mesos in action is the server cluster underneath JARVIS, a Platform-as-a-Service that supports Siri, the natural-language interface for Apple’s iPhone.
Apple also keeps its infrastructure strategy close to the vest, but we know that it is at least interested in custom hardware. After several years of quietly participating in the Open Compute Project, Apple was revealed as an official member earlier this year.
Since Facebook launched it in 2011, OCP has become a hub for companies interested in using, making, and selling custom hardware for web-scale data centers. That hardware is customized not only for performance but also for the lowest possible cost and speed of procurement.
Learning to Scale
Web-scale giants tend to grow at breakneck speed, and data center capacity planning in support of those growth rates is a science. Spend too much too soon and face stranded capital; deploy too little, and watch your service get knocked out by a traffic spike when Ellen DeGeneres decides to tweet a selfie together with everyone from Angelina Jolie and Brad Pitt to Jennifer Lawrence and Bradley Cooper.
As Raffi Krikorian, at the time Twitter’s VP of platform engineering, admitted about a year ago, it was only around then that the company’s infrastructure team could finally say with some degree of confidence that they “know how to do this.” And knowing how to do it involved creating hardware and lots of software in-house.
Web Scale Headed for Mainstream?
As with other web-scale companies, off-the-shelf hardware just didn’t cut it for Twitter’s purposes. But that’s changing. Slowly but surely, every “incumbent” hardware vendor has joined OCP, and most now have some sort of a commodity line.
The market for this kind of hardware is growing faster than any other category, and it is gradually expanding beyond the small circle of web giants. In the financial-services world, for example, Goldman Sachs and Fidelity Investments, which were two of the earliest participants in OCP, have been joined by Bank of America and Capital One. JP Morgan Chase and Bloomberg have also been looking at OCP hardware.
Market analysts at Gartner have predicted that as many as half of all global enterprises would adopt web-scale IT as their architectural approach. Today, about 15 percent of servers in data centers around the world are customized computers designed for scale, Jason Waxman, general manager of Intel’s Cloud Infrastructure Group, told Wired.