Custom Google Data Center Network Pushes 1 Petabit Per Second
Inside a Google data center (Photo: Google)

Latest generation of the giant’s in-house network has enough bandwidth to move 5,000 two-hour-long HDTV videos in one second

In a rare peek behind the curtain, a top Google data center network engineer this week revealed some details about the network that interconnects servers and storage devices in the giant’s data centers.

Amin Vahdat, Google Fellow and technical lead for networking at the company, said Google’s infrastructure team follows three main principles when designing its data center networks: it arranges them in a Clos topology, manages the thousands of switches in a single data center with a centralized software stack, and builds its own software and hardware, relying on custom protocols.
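Vahdat did not disclose Jupiter’s switch radix or tier count, but the scaling logic of a Clos fabric can be sketched with a toy two-tier (leaf-spine) example. The port counts and link speed below are illustrative assumptions, not Jupiter’s actual parameters.

    # Toy two-tier folded-Clos (leaf-spine) fabric.
    # All port counts and link speeds are illustrative assumptions,
    # not Jupiter's actual parameters.
    LEAF_PORTS = 48    # ports per leaf (top-of-rack) switch, assumed
    SPINE_PORTS = 96   # ports per spine switch, assumed
    LINK_GBPS = 10     # speed of every link in Gbps, assumed

    # For a non-blocking fabric, each leaf splits its ports evenly:
    # half face servers (downlinks), half face spines (uplinks).
    downlinks_per_leaf = LEAF_PORTS // 2          # 24 server-facing ports
    uplinks_per_leaf = LEAF_PORTS // 2            # 24 spine-facing ports

    # With one uplink from every leaf to every spine, the fabric needs as
    # many spines as each leaf has uplinks, and each spine port serves
    # exactly one leaf.
    spines = uplinks_per_leaf                     # 24 spine switches
    max_leaves = SPINE_PORTS                      # 96 leaf switches

    servers = max_leaves * downlinks_per_leaf     # 2,304 servers
    aggregate_gbps = servers * LINK_GBPS
    print(f"{servers} servers, {aggregate_gbps / 1e6:.3f} Pbps aggregate")
    # Reaching Jupiter's scale takes higher-radix switches and more tiers.

The centralized software stack Vahdat describes is what keeps a fabric like this manageable: one control plane programs forwarding state across all the leaves and spines, rather than every switch running its own full routing protocol.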

Vahdat spoke about Google’s data center network design at the Open Networking Summit in Santa Clara, California, on Wednesday morning, and wrote a blog post about it.

The company’s latest-generation network architecture is called Jupiter. It has more than 100 times the capacity of Google’s first in-house network technology, which was called Firehose, Vahdat wrote. The company has gone through five generations of data center network architecture.

A Jupiter fabric in a single data center can provide more than 1 petabit per second of total bisection bandwidth. According to Vahdat, that is enough for more than 100,000 servers to exchange data at 10 Gbps each, or to transmit the entire scanned contents of the Library of Congress in under one-tenth of a second. According to a 2013 paper by NTT, 1 Pbps is equivalent to transmitting 5,000 two-hour-long HDTV videos every second.
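Those headline figures hang together as straightforward unit arithmetic. The short check below uses only the round numbers quoted above; the roughly 25 GB per two-hour HDTV video is simply what NTT’s comparison implies, not a figure from the article.

    # Sanity-checking the headline figures with unit arithmetic.
    PBIT = 10**15  # bits in one petabit

    # 100,000 servers exchanging data at 10 Gbps each:
    servers, per_server_gbps = 100_000, 10
    aggregate_bits_per_sec = servers * per_server_gbps * 10**9
    print(aggregate_bits_per_sec / PBIT)   # 1.0 -> exactly 1 Pbps

    # NTT's comparison: 1 Pbps carries 5,000 two-hour HDTV videos per second,
    # which works out to about 200 Gbit (25 GB) per video.
    bits_per_video = PBIT / 5_000
    print(bits_per_video / (8 * 10**9))    # 25.0 GB per video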

Google is a pioneer of the do-it-yourself approach to data center network gear and other hardware. The company started building its own network hardware and software about 10 years ago because the products then on the market could not support the scale and speed it needed.

Other massive data center operators that provide services at global scale – companies like Facebook, Microsoft, and Amazon – have taken a similar approach to infrastructure. If an off-the-shelf solution that does exactly what they need (nothing less and nothing more) isn’t available, they design the gear themselves and have it built by the same Asian manufacturers that produce the incumbent vendors’ equipment.

It has been known for some time that Google designs its own hardware and that it relies on software-defined networking technologies. The company published a paper on B4, its SDN-powered WAN, in 2013, and last year revealed details of Andromeda, the network virtualization stack that powers its internal services.

It comes as no surprise, then, that the network interconnecting the myriad devices inside a Google data center is also managed with custom software. Nor is Google’s use of the Clos topology surprising, since it is one of the most common data center network topologies.

Other keynotes at the summit came from Mark Russinovich, CTO of Microsoft Azure, who talked about how Microsoft uses SDN to enable its cloud infrastructure’s 22 global availability regions, and John Donovan, SVP of technology and operations at AT&T, who revealed the telco’s plans to open source the Network Function Virtualization software and hardware specs it has designed to enable its services.
