How Google Routes Around Outages

Making changes to Google's search infrastructure is akin to "changing the tires on a car while you're going at 60 down the freeway," according to Urs Holzle, who oversees the company's massive data center operations. Google updates its software and systems on an ongoing basis, usually without incident. But not always. On Feb. 24 a bug in the software that manages the location of Google's data triggered an outage in Gmail, the widely used webmail component of Google Apps.

Just a few days earlier, Google's services remained online during a power outage at a third-party data center near Atlanta where Google hosts some of its many servers. Google doesn't discuss operations of specific data centers. But Holzle, the company's Senior Vice President of Operations and a Google Fellow, provided an overview of how Google has engineered its system to manage hardware failures and software bugs. Here's our Q-and-A:

Data Center Knowledge: Google has many data centers and distributed operations. How do Google’s systems detect problems in a specific data center or portion of its network?

Urs Holzle: We have a number of best practices that we suggest to teams for detecting outages. One way is cross monitoring between different instances. Similarly, black-box monitoring can determine if the site is down, while white-box monitoring can help diagnose smaller problems (e.g. a 2-4% loss over several hours). Of course, it's also important to learn from your mistakes, and after an outage we always run a full postmortem to determine if existing monitoring was able to catch it, and if not, figure out how to catch it next time.
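
Holzle doesn't name specific tooling, but the black-box/white-box distinction is easy to sketch. The following is a minimal, hypothetical Python illustration: the probe URL, the counters and the loss threshold are our own stand-ins, not Google's monitoring stack.

    # Hypothetical sketch contrasting black-box and white-box checks; the probe
    # URL, counters and thresholds are illustrative, not Google's actual setup.
    import urllib.request

    def black_box_probe(url: str, timeout: float = 5.0) -> bool:
        """Black-box check: hit the service from the outside and ask 'is it up?'"""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    def white_box_check(request_count: int, error_count: int,
                        loss_threshold: float = 0.02) -> bool:
        """White-box check: use internal counters to spot a small, sustained loss
        (e.g. 2-4% of requests failing over several hours) that a simple
        up/down probe would never notice."""
        if request_count == 0:
            return True
        return (error_count / request_count) < loss_threshold

    if __name__ == "__main__":
        print("site up:", black_box_probe("https://www.google.com"))
        # Internal counters for the last few hours (made-up numbers).
        print("within loss budget:", white_box_check(request_count=1_000_000,
                                                     error_count=31_000))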

DCK: Is there a central Google network operations center (NOC) that tracks events and coordinates a response?

Urs Holzle: No, we use a distributed model with engineers in multiple time zones. Our various infrastructure teams serve as "problem coordinators" during outages, but this is slightly different from a traditional NOC, as the point of contact may vary based on the nature of the outage. On-call engineers are empowered to pull in additional resources as needed. We also have numerous automated monitoring systems, built by various teams for their products, that directly alert an on-call engineer if anomalous issues are detected.
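
As a rough illustration of alerts going straight to the owning team's on-call engineer rather than through a central NOC, here's a toy dispatcher; the team names, contacts and page() helper are invented, not Google's alerting system.

    # Illustrative only: a toy dispatcher that pages the owning team's on-call
    # engineer when a product's monitoring flags an anomaly. Team names, the
    # on-call roster, and the page() helper are all hypothetical.
    ON_CALL = {
        "gmail": "gmail-oncall@example.com",
        "websearch": "search-oncall@example.com",
    }

    def page(contact: str, message: str) -> None:
        # Stand-in for a real paging system.
        print(f"PAGE {contact}: {message}")

    def handle_anomaly(product: str, description: str) -> None:
        """Route the alert straight to the responsible team's on-call engineer."""
        contact = ON_CALL.get(product, "infra-oncall@example.com")  # fallback coordinator
        page(contact, f"{product}: {description}")

    handle_anomaly("gmail", "error rate above threshold in one cluster")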

DCK: How much of Google’s ability to "route around" problems is automated, and what are the limits of automation?

Urs Holzle: There are several different layers of "routing around" problems - a failing Google File System (GFS) chunkserver can be routed around by the GFS client automatically, whereas a datacenter power loss may require some manual intervention. In general, we try to develop scalable solutions and build the "route around" behavior into our software for problems with a clear solution. When the interactions are more complex and require sequenced steps or repeated feedback loops, we often prefer to put a human hand on the wheel.
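
The automatic end of that spectrum can be sketched simply: a client that quietly tries the next replica when one chunkserver is unreachable. The sketch below is our own illustration of the idea, not the actual GFS client protocol; fetch_from() and the replica names are hypothetical.

    # A minimal sketch of the automatic "route around" idea: a client reading a
    # chunk tries replicas in turn and silently fails over when one server is
    # down. The replica list and fetch_from() are hypothetical.
    import random

    class ReplicaUnavailable(Exception):
        pass

    def fetch_from(server: str, chunk_id: str) -> bytes:
        # Stand-in for a network read; randomly simulate a dead chunkserver.
        if random.random() < 0.3:
            raise ReplicaUnavailable(server)
        return f"data for {chunk_id} from {server}".encode()

    def read_chunk(chunk_id: str, replicas: list[str]) -> bytes:
        """Try each replica in order; only give up (and involve a human or a
        higher-level system) if every copy is unreachable."""
        for server in replicas:
            try:
                return fetch_from(server, chunk_id)
            except ReplicaUnavailable:
                continue  # route around the failed chunkserver automatically
        raise RuntimeError(f"all replicas of {chunk_id} unreachable")

    print(read_chunk("chunk-0042", ["cs-a", "cs-b", "cs-c"]))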

DCK: How might a facility-level data center power outage present different challenges than more localized types of reliability problems? How does Google’s architecture address this?

Urs Holzle: Google's within-datacenter infrastructure (GFS, machine scheduling, etc.) is generally designed to manage machine-specific outages transparently, and rack/machine-group outages as long as the mortality is a fraction of the total pool of machines. For example, GFS prefers to store replicated copies of data on machines on different racks, so that the loss of a rack may create a performance degradation but won't lose data.
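
As a rough sketch of what rack-diverse placement looks like in practice (machine and rack names are invented, and this is not GFS's actual placement logic), a placer simply refuses to put two copies of a chunk on the same rack:

    # Hedged sketch of rack-aware placement: choose replica machines so that no
    # two copies share a rack, so losing one rack degrades performance but never
    # destroys the only copy. Machine and rack names are made up.
    import random
    from collections import defaultdict

    def place_replicas(machines: dict[str, str], copies: int = 3) -> list[str]:
        """machines maps machine name -> rack name; return `copies` machines
        on distinct racks (raises if there aren't enough racks)."""
        by_rack = defaultdict(list)
        for machine, rack in machines.items():
            by_rack[rack].append(machine)
        if len(by_rack) < copies:
            raise ValueError("not enough racks for rack-diverse placement")
        chosen_racks = random.sample(list(by_rack), copies)
        return [random.choice(by_rack[rack]) for rack in chosen_racks]

    machines = {f"m{i}": f"rack{i % 4}" for i in range(12)}
    print(place_replicas(machines))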

Datacenter-level and multi-region unplanned outages are infrequent enough that we use manual tools to handle them. Sometimes we need to build new tools when new classes of problems arise. Also, teams regularly practice failing out of or routing around specific datacenters as part of scheduled maintenance.
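
Failing out of a datacenter amounts to shifting its share of traffic onto the remaining sites. The toy sketch below illustrates that idea with made-up site names and weights; it is not Google's traffic-management tooling.

    # Toy illustration of "failing out of" a datacenter: zero its traffic weight
    # and renormalize the rest before maintenance. Site names and weights are
    # invented; real traffic management is far more involved.
    def drain(weights: dict[str, float], site: str) -> dict[str, float]:
        """Set `site` to zero and rescale the remaining weights to sum to 1.0."""
        if site not in weights:
            raise KeyError(site)
        remaining = {s: w for s, w in weights.items() if s != site}
        total = sum(remaining.values())
        if total == 0:
            raise RuntimeError("cannot drain the only serving site")
        drained = {s: w / total for s, w in remaining.items()}
        drained[site] = 0.0
        return drained

    weights = {"dc-east": 0.4, "dc-central": 0.35, "dc-west": 0.25}
    print(drain(weights, "dc-central"))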

DCK: A "Murphy" question: Given all the measures Google has taken to prevent downtime in its many services, what are some of the types of problems that have actually caused service outages?

Urs Holzle: Configuration issues and rate of change play a pretty significant role in many outages at Google. We're constantly building and re-building systems, so a trivial design decision six months or a year ago may combine with two or three new features to put unexpected load on a previously reliable component. Growth is also a major issue: someone once likened the process of upgrading our core websearch infrastructure to "changing the tires on a car while you're going at 60 down the freeway." Very rarely, the systems designed to route around outages actually cause outages themselves; fortunately, the only recent example is the February Gmail outage (here's the postmortem in PDF format).

DCK: How does Google respond to outages and integrate the "lessons learned" into its operations?

Urs Holzle: In general, teams follow a postmortem process when an outage occurs and produce action items such as "monitor timeouts to X" or "document failover procedure and train on-call engineers". Engineers from affected teams are also quite happy to ask for and supplement a postmortem as needed. Human beings tend to be quite fallible, so if possible we like to write either a specific or a general automated monitoring rule to notice problems. This is true of both software/configuration problems and hardware/datacenter problems.
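
An action item like "monitor timeouts to X" can be captured as an automated rule rather than a human habit. The sketch below is a hypothetical illustration of that step; the service name, window and threshold are invented.

    # Hypothetical sketch of turning a postmortem action item ("monitor timeouts
    # to X") into an automated rule instead of relying on people to remember.
    from dataclasses import dataclass

    @dataclass
    class TimeoutRule:
        service: str          # the backend "X" named in the action item
        max_timeouts: int     # alert when exceeded...
        window_minutes: int   # ...within this window

        def evaluate(self, timeouts_in_window: int) -> bool:
            """Return True if the rule fires and the on-call should be alerted."""
            return timeouts_in_window > self.max_timeouts

    rule = TimeoutRule(service="backend-X", max_timeouts=50, window_minutes=10)
    if rule.evaluate(timeouts_in_window=73):
        print(f"ALERT: {rule.service} exceeded {rule.max_timeouts} timeouts "
              f"in {rule.window_minutes} minutes")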
