Cloudflare Wants to Eat AWS’s Serverless Lunch

Its CEO says the new serverless platform is cheaper and faster than Amazon’s, Microsoft’s, and Google’s.

Yevgeniy Sverdlik

July 27, 2020

5 Min Read
Cloudflare CEO and co-founder Matthew Prince. Image: S3studio/Getty Images

Matthew Prince says the new cloud service engineers at his company, Cloudflare, have cooked up beats competing services from the three cloud giants – AWS, Azure, and Google Cloud – on both performance and price. On price alone, it beats the largest of them, AWS, by the widest margin, according to him.

The new service builds on the work Cloudflare has done for Workers, its three-year-old platform for serverless computing at the edge. Called Workers Unbound, it lifts the limits Workers placed on how resource-intensive a workload could be and how long it could run.

With hundreds of thousands of developers using Workers, “we’ve learned a lot… about what developers really want,” Prince, one of Cloudflare’s three founders and its CEO, told Data Center Knowledge. “They wanted to have an unrestricted version of Cloudflare Workers that removed a lot of those limits and made it a more robust, true serverless platform.”

The old Workers platform, now called Workers Bundle, limits scripts to 50 milliseconds of CPU time. Workers Unbound does away with that restriction, allowing up to 15 minutes of execution time, Prince told us.

The company plans to continue supporting Workers Bundle, which clients use for simple tasks like webpage transformations, changing security headers, or A/B testing, he said.

Related: How Google Cloud Plans to Win Enterprises from AWS and Azure

Serverless, With an Edge Computing Angle

Serverless computing essentially abstracts cloud infrastructure from the developer further than a cloud VM does. Using a serverless platform, the developer doesn’t have to think about how their computing resources are allocated.

Also, instead of charging for a set amount of computing resources in set time increments, serverless providers charge only for the exact compute time and resources a customer’s application actually consumes, which presumably makes for a more efficient way to use and pay for computing.
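That metered model can be sketched with back-of-the-envelope arithmetic. The function and the rates below are illustrative placeholders invented for this example, not any provider's actual prices:

```javascript
// Serverless billing sketch: pay per request plus per GB-second of memory-time,
// rather than renting a fixed VM by the hour. All rates here are made up.
function serverlessCost(invocations, avgDurationMs, memGb, perRequest, perGbSecond) {
  // GB-seconds = invocations * seconds per invocation * memory footprint
  const gbSeconds = invocations * (avgDurationMs / 1000) * memGb;
  return invocations * perRequest + gbSeconds * perGbSecond;
}

// One million calls to a 50 ms, 128 MB function, at example rates of
// $0.0000002 per request and $0.000017 per GB-second:
const metered = serverlessCost(1e6, 50, 0.125, 2e-7, 1.7e-5);
console.log(metered.toFixed(2)); // well under a dollar for a million calls
```

An idle function costs nothing, which is the efficiency argument: the bill tracks actual use instead of provisioned capacity.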

Having built out a massive, globally distributed platform to support its web security and later also content delivery services, Cloudflare put its own spin on serverless with Workers. A customer’s code is automatically deployed in all 200-plus Cloudflare data centers and gets executed from the closest location to the application’s end user when called upon.

“You can simply deploy code, and our network will optimize to make sure that we have the capacity to run that as close to users as possible,” Prince said.
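The deploy-everywhere model Prince describes is easiest to see in code. Below is a minimal sketch in the shape of a Cloudflare Workers fetch handler; it is an illustrative example (using a plain object in place of the platform's Response type so it runs anywhere), not code from the article:

```javascript
// A Worker is essentially a fetch handler; Cloudflare copies it to all
// 200-plus data centers and runs it at whichever one is closest to the
// requester, with no server or region selection by the developer.
const worker = {
  async fetch(request) {
    // A simple task of the kind the article mentions: setting a security header.
    return {
      status: 200,
      headers: { "x-frame-options": "DENY" },
      body: `handled at the edge for ${request.url}`,
    };
  },
};

// Local demonstration with a plain-object request:
worker.fetch({ url: "https://example.com/" }).then((res) => {
  console.log(res.status, res.headers["x-frame-options"]);
});
```

The developer writes only the handler; placement, scaling, and routing are the platform's job.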

This is one of the ways, the company claims, it’s able to give users better performance than the cloud giants. Its network is by design an edge computing network. It’s distributed across many more locations than the number of availability regions (each a massive concentration of data center capacity) operated by any of the hyperscale clouds, and thus has compute resources physically closer to more of the world’s population.

Related: Explaining Knative, the Project to Liberate Serverless from Cloud Giants

Serverless -- and Containerless

Another way Cloudflare says Workers achieves better performance is by replacing traditional application containers, the infrastructure mainstay among hyperscale clouds, with a different approach to multi-tenancy. Called “isolates,” these lightweight sandboxes were created by Google Chrome engineers for Chromium, the open source project powering the market-dominating browser.

There are no virtual machines or containers underneath Workers, and Prince said he and his colleagues believe isolates, not containers, are “the future of cloud computing.” Modified to fit Cloudflare’s needs, its isolates provide the same level of sandboxing and protection as containers but allow for much faster startup times, he said.

Startup time is key to making Workers effective. Since the approach is to store a customer’s code in every Cloudflare data center, it’s crucial that the code is executed instantly whenever called upon in any given location.

For the release of Workers Unbound (now in private beta), Prince said he set his engineers a goal of zero-nanosecond cold-start time. “And the team found a way to do that,” he said. “Out of the box, we support zero-nanosecond cold-start time.”

In reality, the cold start does take a few nanoseconds, but it doesn’t appear that way to the user. The first thing that happens when a request comes into Cloudflare’s system is a TLS handshake (an encryption process), he explained. Workers Unbound uses that signal to trigger the cold start, so by the time the handshake is finished, the code is ready to fire.
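The trick Prince describes amounts to overlapping two latencies: start warming the code the moment the handshake begins, so the warm-up hides behind it. Here is a hedged sketch of that scheduling idea; the function names and timings are invented for illustration:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Stand-ins for the two operations; real durations are Cloudflare's, not these.
const tlsHandshake = () => sleep(100); // pretend the handshake takes 100 ms
const coldStart = () => sleep(40);     // pretend warming the isolate takes 40 ms

async function handleNewConnection() {
  const t0 = Date.now();
  // Kick both off together: the cold start finishes while the handshake is
  // still in flight, so the client never observes it as added latency.
  await Promise.all([tlsHandshake(), coldStart()]);
  return Date.now() - t0; // roughly max(100, 40), not 100 + 40
}

handleNewConnection().then((elapsed) => console.log(`${elapsed} ms total`));
```

Run sequentially, the two steps would cost their sum; overlapped, the cold start is "free" as long as it is shorter than the handshake.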

All this adds up to some remarkable performance advantages claimed by Cloudflare. Here are the results of tests the company says it has run to compare Workers against its largest competitors’ serverless products: AWS Lambda, Google Cloud Functions, and Azure Functions. (Chart by Cloudflare)

[Chart: Cloudflare serverless performance comparison]

Prince said Workers Unbound was 24 percent cheaper to use than Azure Functions, 52 percent cheaper than Google Cloud Functions, and 75 percent cheaper than AWS Lambda.

Cloudflare doesn’t have “those nickel-and-dimy extras that traditional serverless platforms charge on top,” Prince said. Those would be things like API gateway or DNS request charges, he explained.

Here’s another chart by Cloudflare, comparing its pricing to Lambda:

[Chart: Cloudflare serverless price comparison]

AWS, Google, and Microsoft spokespeople could not provide comment in time for publication. We’ll update this story with their comments as they come in.

Architecture Optimized for Computing Power

Asked whether launching Workers Unbound required a significant expansion of computing capacity at Cloudflare data centers, Prince said it didn’t, because the company already had a lot of compute power to support its security services.

One of the services it’s best known for is DDoS mitigation. To provide it, the network needs to be able to absorb massive floods of malicious traffic on its clients’ behalf.

Analyzing traffic for signs of malice also requires a lot of CPU horsepower. “Because Cloudflare started out very much as a security company, we have always had fairly beefy machines that make up our network,” Prince said. It needs the ability to “pull a packet apart and look inside of it.”

Because it provides a CDN service, Cloudflare is often compared to other big CDN providers, such as Limelight, Fastly, or Akamai. But its security-driven architecture makes its network different from the big CDNs, which tend to optimize more for things like disk space, memory, and network bandwidth, he explained.

Still, if there is a spike in demand for Workers Unbound, Cloudflare’s infrastructure team is ready to beef up data center capacity anywhere in the world. “Our infrastructure team is really tuned up to make sure that, as demand increases, we’re able to meet that demand,” Prince said.
