Two Google VCU chips on a PCBA. Source: "Warehouse-scale video acceleration: co-design and deployment in the wild" (Google)

Google’s New Custom Data Center Chip Improves YouTube Videos

The company says its Video Coding Unit (VCU) enabled it to handle the massive spike in YouTube usage during the pandemic.

You may or may not have noticed, but Google says YouTube videos should now look better and load faster. That’s thanks to a new chip the company has designed in-house and deployed in its data centers to compress video content.

Google says the chips, called Video (Trans)Coding Units, or VCUs, do that faster and more efficiently than was possible before. Traditional CPUs, the company found, aren’t great at video transcoding.

The VCU gives you the highest YouTube video quality possible on your device while consuming less bandwidth than before. On Google’s end, it optimizes performance and reduces infrastructure costs. The company said YouTube met a huge spike in usage during the pandemic thanks to this innovation.

The VCU is 20 to 33 times more compute-efficient than Google’s previous optimized system, which ran on traditional servers. The improvement takes into account performance and total cost of ownership over a three-year period, including the cost to design and build the custom chip, as well as the cost to run it in Google data centers, Jeff Calow, a Google software engineer and one of the engineers behind the new chip, told DCK in an email.
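
For a sense of how such a figure is derived, the paper’s headline metric is encoding throughput per total cost of ownership. A minimal sketch of that comparison, using entirely invented numbers (Google discloses only the resulting 20-to-33x ratio, not the underlying cost or throughput figures), might look like this:

```python
# Hypothetical illustration of a throughput-per-TCO comparison.
# All numbers here are invented; the paper reports only the resulting
# 20-33x efficiency ratio, not the underlying cost or throughput figures.

def perf_per_tco(videos_per_hour: float, three_year_tco_dollars: float) -> float:
    """Encoding throughput delivered per dollar of three-year TCO."""
    return videos_per_hour / three_year_tco_dollars

cpu_system = perf_per_tco(videos_per_hour=100, three_year_tco_dollars=50_000)
vcu_system = perf_per_tco(videos_per_hour=2_500, three_year_tco_dollars=60_000)

print(f"Relative efficiency: {vcu_system / cpu_system:.1f}x")  # ~20.8x with these numbers
```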

Google VCU Chip Born Because YouTube Users Wanted Higher Quality

A group of Google engineers spent the last six years designing and optimizing the VCU to transcode YouTube video in Google data centers. The first version of the chip, which supports both the VP9 and H.264 codecs, has now been rolled out across those data centers worldwide.

“The VCU helps us enable new capabilities like live streaming VP9 at scale or delivering 4K video faster than before. YouTube viewers benefit by saving bandwidth because VP9 is available sooner,” Calow told DCK.

Google engineers began building a custom chip for transcoding video in 2015, when YouTube noted rising demand for higher-quality video, such as 1080p and 4K. To meet demand, it needed to shift to more data-efficient video codecs (video compression standards). Data-efficient video codecs like VP9, however, use five times more compute resources to encode than the widely used H.264 format, Calow explained.

“The combination of these dynamics led us to pursue a dramatically more efficient and scalable infrastructure,” Calow wrote in a blog post.
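
To make the H.264-versus-VP9 trade-off concrete, here is what CPU-based software transcoding looks like using the open-source ffmpeg encoders libx264 (H.264) and libvpx-vp9 (VP9). This is an illustrative stand-in for the kind of work the VCU offloads, not Google’s internal pipeline:

```python
# What CPU-based software transcoding looks like, using ffmpeg's libx264
# (H.264) and libvpx-vp9 (VP9) encoders. This is an illustrative stand-in
# for the work the VCU offloads, not Google's internal pipeline.
import subprocess

def encode_h264(src: str, dst: str) -> None:
    subprocess.run(
        ["ffmpeg", "-i", src, "-c:v", "libx264", "-preset", "medium",
         "-crf", "23", dst],
        check=True,
    )

def encode_vp9(src: str, dst: str) -> None:
    # VP9 produces smaller files at comparable quality, but encoding it in
    # software takes several times more CPU than H.264.
    subprocess.run(
        ["ffmpeg", "-i", src, "-c:v", "libvpx-vp9", "-crf", "31", "-b:v", "0",
         dst],
        check=True,
    )

encode_h264("upload.mp4", "out_h264.mp4")
encode_vp9("upload.mp4", "out_vp9.webm")
```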

Google has already developed a second-generation VCU chip that supports the AV1 codec, a next-generation standard that delivers even higher-quality video, with less buffering for users, according to YouTube. The company has begun installing the second-generation chip in its data centers, so it’s already transcoding some videos on YouTube, Calow told us.

A single VCU card has two VCU chips, each chip packing 10 encoding cores. Each VCU system deployed in a Google data center has 20 chips attached via 10 cards.
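
Those figures imply the following per-system totals (simple arithmetic on the numbers above, not additional detail from the paper):

```python
# Per-system totals implied by the figures above.
cards_per_system = 10
chips_per_card = 2
encoder_cores_per_chip = 10

chips_per_system = cards_per_system * chips_per_card            # 20 chips
cores_per_system = chips_per_system * encoder_cores_per_chip    # 200 encoding cores

print(f"{chips_per_system} chips, {cores_per_system} encoding cores per VCU system")
```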

Google this month published a paper on the VCU, titled “Warehouse-scale video acceleration: co-design and deployment in the wild,” listing more than 50 Googlers (including Calow) as authors.

500-Plus Hours of Video Uploaded to YouTube Every Minute

YouTube has to transcode every video users upload, compressing it into multiple versions at different resolutions so it can support the myriad devices used to view it (from phones to laptops to TVs) and the varying amounts of bandwidth viewers have available.
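
A minimal sketch of that fan-out, using a hypothetical rendition “ladder” (the resolutions, bitrates, and codec choices are illustrative, not YouTube’s actual settings):

```python
# Minimal sketch of a transcoding "ladder": one upload fans out into several
# renditions at different resolutions and codecs for different devices and
# connection speeds. Values are illustrative, not YouTube's actual settings.
from dataclasses import dataclass

@dataclass
class Rendition:
    height: int        # output resolution, e.g. 1080 for 1080p
    bitrate_kbps: int  # target bitrate for this rendition
    codec: str         # e.g. "vp9" or "h264"

LADDER = [
    Rendition(2160, 18_000, "vp9"),
    Rendition(1080, 4_500, "vp9"),
    Rendition(720, 2_500, "h264"),
    Rendition(360, 700, "h264"),
]

def transcode(upload_path: str) -> list[str]:
    """Produce one output name per rung of the ladder (encoder call omitted)."""
    outputs = []
    for r in LADDER:
        out = f"{upload_path}.{r.height}p.{r.codec}"
        # ...call a software or hardware encoder here...
        outputs.append(out)
    return outputs

print(transcode("upload.mp4"))
```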

More than 500 hours of video content is uploaded to YouTube every minute, Calow wrote in his blog post. YouTube watch time went up by 25 percent in the first quarter of last year, as the pandemic took hold and much of the world went into lockdown.

“Because we had this system in place, we were able to rapidly scale up to meet this surge,” he wrote. “Practically, this meant that videos were available to viewers promptly after the creator uploaded them.”

Hyperscalers Design Custom Chips for Targeted Use Cases

The VCU (named “Argos” internally) isn’t the first custom data center chip Google has designed. The two others it’s talked about publicly are the Tensor Processing Unit (TPU), an ASIC for AI workloads; and the Titan chip for security.

Other tech giants that operate hyperscale cloud platforms have also been designing their own server chips. Their enormous scale and deep pockets enable them to design custom hardware to meet specific needs, Kevin Krewell, a principal analyst at TIRIAS Research, told DCK.

They make the heavy upfront capital investment back as the custom chips improve their computing infrastructure’s total cost of ownership over time. In the case of cloud service providers, they make the money back by adding a new cloud infrastructure option to their service portfolio.

AWS runs custom Arm-based Graviton and Graviton2 processors in its data centers. Microsoft is also reportedly designing its own Arm-based chip for servers that power its Azure cloud platform.

“Google has the design capability in-house,” Krewell said, commenting on the Google VCU chip announcement. “They can crank out a design, and it’s relatively cost-effective for them. And, as they pointed out, this ASIC is very efficient in handling streaming video. It costs money to build and source it, but if it saves money by lowering total cost of ownership, it’s a worthwhile investment.”
