Data hall at Facebook’s Altoona, Iowa, data center. (Photo: © 2014 Jacob Sharp Photography)

How Facebook Made Its Data Warehouse Faster

New compression algorithm shrinks storage, CPU requirements

Facebook says it’s time to change the way we compress digital files because the methods used in nearly all compression today were created in the nineties, when both hardware and the problems compression needed to solve were very different.

The company has created a new compression algorithm – addressing a similar problem the fictional startup Pied Piper became famous for solving in the TV series Silicon Valley – which has already resulted in a significant reduction of storage and compute requirements in Facebook data centers.

Facebook open-sourced the algorithm, called Zstandard, this week, hoping it will replace Deflate, the core algorithm in Zip, gzip, and zlib – in other words, the data compression standard that has reigned for more than two decades.

Zstandard compresses faster, produces smaller files, and offers more options for balancing the essential tradeoff between speed and compression ratio. Another big advantage: it scales. When Deflate was created, the global scale of data center infrastructure at which internet giants like Facebook operate today was unimaginable.
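To make that tradeoff concrete, here is a minimal sketch of one-shot compression with the open source zstd C library, where a single compression-level argument trades speed against ratio. The buffer sizes and level shown are illustrative choices, not settings from Facebook's production systems.

```c
#include <stdio.h>
#include <string.h>
#include <zstd.h>   /* link with -lzstd */

int main(void) {
    const char *src = "Example payload; real inputs would be much larger buffers.";
    size_t srcSize = strlen(src) + 1;

    /* For real inputs, size the output buffer with ZSTD_compressBound();
     * 1024 bytes is more than enough for this tiny example. */
    char dst[1024];

    /* Level 3 is a balanced default; higher levels trade speed for a
     * better compression ratio, lower levels do the reverse. */
    size_t cSize = ZSTD_compress(dst, sizeof(dst), src, srcSize, 3);
    if (ZSTD_isError(cSize)) {
        fprintf(stderr, "compression failed: %s\n", ZSTD_getErrorName(cSize));
        return 1;
    }
    printf("compressed %zu bytes down to %zu bytes\n", srcSize, cSize);
    return 0;
}
```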

Facebook’s new compression algorithm was created by Yann Collet, who started working on Zstandard’s predecessor, the lz4 compression algorithm, in his spare time while working in the marketing department of the French telco Orange. Last year he moved from Paris to Menlo Park to work for Facebook.

He has made more than 1,350 commits to lz4 on GitHub and 400 to Finite State Entropy, a method for transforming symbols into bits during compression that is an alternative to the Huffman coding used in Deflate. Finite State Entropy is instrumental in making Zstandard both faster and able to deliver better compression ratios, according to a blog post by Collet and his colleague Chip Turner, a software engineer at Facebook.

Compression at Exabyte Scale

The new algorithm has been running in production in Facebook data centers for the last six months, Collet said Tuesday during a presentation at the company’s @Scale conference in San Jose. “We don’t have to sacrifice compression,” he said. “We get more compression with speed.”

Facebook uses it to compress data stored in its exabyte-scale data warehouses. A single Facebook data warehouse spans thousands of server cabinets, according to Collet. At this scale, even a small optimization can turn into a significant reduction in demand.

“One percent of an Exabyte is still a huge amount of infrastructure,” he said. “So we pay attention to any efficiency gain we can get.”

Replacing zlib with Zstandard resulted in a 6 percent reduction in storage in Facebook data warehouses, a 19 percent reduction in CPU requirements for compression, and a 40 percent reduction in CPU requirements for decompression, Collet said.

Taking Advantage of Modern Hardware

The processors around when Deflate was created executed one task after another, in order. Modern CPUs, by contrast, are very good at doing many things at once, and Zstandard takes advantage of this capability, known as parallel execution.

Another modern CPU trait it is built around is branch prediction. Put simply, when performing a task depends on the outcome of another task, instead of waiting for that outcome the processor guesses the most likely result and performs the task speculatively.

“Today's CPUs gamble,” Collet and Turner write. “They do so intelligently, thanks to a branch predictor, which tells them in essence the most probable result.”

The problem is when they guess wrong: the CPU has to discard all of the operations it started speculatively and start over. This event, called a “pipeline flush,” is very costly in terms of CPU resources. Zstandard is designed around branchless algorithms, which give the processor fewer branches to mispredict in the first place.
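As a toy illustration of the difference (not code from Zstandard itself), here are branchy and branchless ways of selecting the smaller of two numbers in C. The second computes the result with arithmetic alone, so there is no conditional jump for the branch predictor to get wrong.

```c
#include <stdint.h>

/* Branchy: the CPU must predict which way the comparison will go,
 * and a misprediction triggers a costly pipeline flush. */
static uint32_t min_branchy(uint32_t a, uint32_t b) {
    if (a < b)
        return a;
    return b;
}

/* Branchless: build a mask of all 1s when a < b (all 0s otherwise)
 * and blend the two values, so no conditional jump is needed at all. */
static uint32_t min_branchless(uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)-(int32_t)(a < b);
    return (a & mask) | (b & ~mask);
}
```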

It can also use far more memory than zlib, which was limited to a 32KB window. Technically there is no limit to the amount of memory Zstandard can take advantage of, but to keep it compatible with a wide range of receiving systems, Collet recommends limiting its memory usage to 8MB.

“This is a tuning recommendation, though, not a compression format limitation,” he and Turner write in the blog post.
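In current releases of the open source zstd library, that 8MB recommendation corresponds to a window of 2^23 bytes. Here is a hedged sketch of applying the cap through the library's advanced parameter API; those calls postdate the release described in this article, so they are an assumption about today's zstd, not something taken from the blog post.

```c
#include <stddef.h>
#include <zstd.h>   /* link with -lzstd */

/* Compress src into dst with the Zstandard window capped at 2^23 bytes (8MB),
 * per the tuning recommendation above. Returns the compressed size, or an
 * error code that ZSTD_isError() will recognize. */
size_t compress_with_8mb_window(void *dst, size_t dstCapacity,
                                const void *src, size_t srcSize) {
    ZSTD_CCtx *cctx = ZSTD_createCCtx();   /* sketch: assumes allocation succeeds */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 23);   /* 2^23 bytes = 8MB window */
    size_t cSize = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
    ZSTD_freeCCtx(cctx);
    return cSize;
}
```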
