In keeping with its image of a forward-looking, technologically sophisticated hosting company, Rackspace held its first Solve event in San Francisco on Monday, with speaker presentations sandwiched between Docker CEO Ben Golub's opening keynote and CoreOS CEO Alex Polvi's closing keynote.
Docker and CoreOS are venture-capital-backed San Francisco startups working to bring Web-scale computing to the masses. Both have open source technology at their core, and both are trying to convince the world that the infrastructure approach taken by the likes of Google and Facebook is the right one for anybody doing computing at scale.
The goal is to make it easy for developers to build and deploy applications quickly, without having to worry about the infrastructure. Rackspace has designed its services and marketing messaging to drive home the narrative that the cloud may be great, but it still needs management; management is hard, and Rackspace is there to make it easier.
The company recently changed its tagline to “#1 managed cloud company,” but, more importantly, added a series of managed services for cloud. “We’re all in on managed cloud,” Rackspace CTO John Engates said during his opening remarks. “Managed cloud is everything we do.”
Earlier this month, Rackspace launched two managed services offerings – an entry-level service with around-the-clock access to engineers and general guidance and assistance for deploying applications in the Rackspace cloud, and a high-end one that enables a customer to essentially outsource management of their entire cloud infrastructure to Rackspace.
“It’s about taking the pain out of managing the cloud,” Engates said.
Building WordPress for developers
If Rackspace is about taking the pain out of managing the cloud, Docker is about taking the pain out of deploying applications on any cloud – Rackspace's or anyone else's. It's that simple. The startup's ultimate goal is to be a WordPress, Blogger or Tumblr for applications: all you'll need is an idea and some free time, and you'll never have to worry about the platform or the infrastructure underneath.
Golub said we’re not there yet, but sounded 100 percent certain when promising that Docker would get us there eventually. To get there, however, some fundamental notions about infrastructure will have to change, he said. It will not be easy, since the majority of IT infrastructure that exists today has been built on these notions.
The three notions the Internet has been built on over the past 15 years that are now breaking down are that applications are long-lived, that they are monolithic and built on a single stack, and that each application is deployed on a single server. None of these is true of modern applications, which are under constant iterative development, are built from loosely coupled components and are deployed across multiple servers. Manually reasoning about how all these components interact and how the servers that host them are set up, and reconfiguring the infrastructure every time something new is deployed, simply isn’t going to work.
“It’s simply impossible,” Golub said.
Docker’s solution is to pack the application and all of its diverse components into a standard “container.” While the contents of two containers may be different, everything looks exactly the same on the outside. Outside, in this case, means what the application’s infrastructure requirements look like to an infrastructure platform, be it Amazon Web Services’ or Google’s cloud, a dedicated server in a Rackspace data center or a developer’s sticker-ridden MacBook.
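A container like the one Golub describes is typically defined by a Dockerfile that declares everything the application needs. A minimal sketch might look like the following (the base image, file names and port here are hypothetical, chosen only for illustration):

```dockerfile
# Hypothetical Dockerfile for a small Python web application.
# Everything the app needs -- base image, dependencies, code -- is
# declared here, so the resulting container presents the same
# "outside" to any infrastructure platform that runs it.
FROM python:2.7
# Install third-party dependencies declared in requirements.txt
COPY requirements.txt .
RUN pip install -r requirements.txt
# Add the application code itself
COPY app.py .
# The app listens on port 8000; start it when the container runs
EXPOSE 8000
CMD ["python", "app.py"]
```

Building and running it takes the same two commands – `docker build -t myapp .` and `docker run -p 8000:8000 myapp` – whether the host is a developer's laptop or a cloud server.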
“If you really want to understand what the future of applications is, all you really need to remember is that developers, at the end of the day, are authors,” Golub said. “They’re content creators.”
To explain himself, he drew parallels between IT and publishing. It used to be that the only way to create a book was to sit in a dark cave and write one. When movable type arrived, the act of creation was separated from the act of replication and distribution, and when the Internet arrived, anybody could be an author without ever having to worry about the nuts and bolts of getting their content in front of an audience.
CoreOS builds resilient compute clusters
CoreOS, the startup with a lightweight Linux distribution for servers that updates automatically across every node it is deployed on (much like Google’s Chrome browser, which served as inspiration for the product), incorporates Docker containers as a “design requirement,” according to Polvi, its founder and CEO.
The startup has generated a lot of buzz and attracted some heavyweight venture capital backing by promising essentially the same future Docker is promising: a Web-scale infrastructure for everyone. Though still in its early stages, the company has already attracted a number of customers large and small, including Rackspace.
The Texas hosting firm uses CoreOS to enable its new bare-metal cloud offering called OnMetal, which went into general availability last week. Before anything is loaded onto one of the OnMetal servers, the server boots CoreOS temporarily, which provisions it with whatever operating system and other software the customer needs.
CoreOS launched its first stable release last week. “It’s a Linux-based operating system built for running large server deployments,” Polvi said. The product makes management of server clusters easier by providing consistency among the nodes with its global updates. A major value proposition of such consistency is the ability to build compute clusters where outage of a single node does not affect uptime of the cluster as a whole and the applications it is running.
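The consistency Polvi describes comes from every node in a cluster booting from the same declarative configuration, which starts etcd (CoreOS's shared configuration store) and fleet (its cluster scheduler) on each machine. A minimal cloud-config sketch, with the cluster's discovery token left as a placeholder, might look like:

```yaml
#cloud-config
# Hypothetical CoreOS cloud-config. Every node boots from the same
# file: etcd replicates cluster state, fleet schedules workloads.
coreos:
  etcd:
    # Each cluster gets its own discovery URL; the token is a placeholder.
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```

Because etcd replicates state across nodes and fleet can reschedule work, the loss of a single node does not take down the applications the cluster is running – the resiliency Polvi points to.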
Will data center reliability be less important in the future?
While there are applications for which something like CoreOS is not necessarily the ideal technology – massive Oracle databases are one example Polvi brought up, saying they are better off running on the big, expensive metal they run on today – a big chunk of the world’s workloads would benefit from the resiliency of CoreOS or the flexibility of Docker. If the future does shape up along the lines people like Golub and Polvi envision (and they are not the only ones who envision it that way), the implications for the data center industry are huge.
A big part of what a data center provider brings to the table is reliability. That’s their expertise, and that’s where they spend the big bucks so that their customers don’t have to. If an environment using CoreOS is designed to withstand an outage of several individual servers in a large cluster, the value proposition of redundant generators, UPS systems and expensive switchgear, all carefully orchestrated to make sure servers don’t lose power, is diminished. That is a lot of ifs, but the movement to bring what Google in 2009 referred to as “warehouse-scale machines” to the masses is not a trend to ignore.