At last year’s Velocity conference, Google detailed how faster web pages were boosting its bottom line. This year the search giant is showcasing how it is using its software, servers and infrastructure to create a faster Internet – and calling on site owners to join the effort.
The average web page takes 4.9 seconds to load and includes 320 KB of content, according to Urs Hölzle, Google’s Senior Vice President of Operations. In his keynote Wednesday morning at the O’Reilly Velocity 2010 conference in Santa Clara, Calif., Hölzle was preaching to the choir, addressing a crowd of 1,000 attendees focused on improving the performance and profitability of their web operations.
‘Ensemble’ of Elements
“Speed matters,” said Hölzle. “The average web page isn’t just big, it’s complicated. Web pages aren’t just HTML. A web page is a big ensemble of things, some of which must load serially.”
At Velocity 2009 Google’s Marissa Mayer discussed how latency had a direct impact on Google’s bottom line. When Google’s pages loaded faster, Mayer said, users searched more and Google made more ad revenue. This year Hölzle discussed a broad spectrum of initiatives by Google to extend those benefits to the wider web. That includes efforts to accelerate Google’s own infrastructure, advance standards to speed the web’s core protocols and provide tools for site owners to create faster sites.
He cited the Chrome web browser as an example of how Google’s efforts can generate benefits for a larger audience. Chrome is designed for speed, Hölzle said, noting independent research that showed Chrome loading pages faster than competing browsers.
“Our goal isn’t to take 100 percent market share for browsers. Our goal is to move the web forward,” Hölzle said, noting that Firefox and Internet Explorer have improved the speed of their browsers with each new release since Chrome’s debut. “This competition is working. They (Microsoft and Mozilla) are doing a better job working on speed.”
Targeting Core Protocols
One of the foundational problems is that core Internet protocols such as TCP/IP, DNS and SSL/TLS haven’t been updated to reduce overhead that slows the loading of complex web pages. Google has developed refinements to address these challenges, which Hölzle said have been submitted to Internet standards bodies.
Hölzle said Google has seen a 12 percent improvement in the performance of its sites through refinements in its implementation of TCP/IP. He said Google also could serve pages faster if it had more information from DNS servers about an end user’s location. The nature of the DNS system means that Google sometimes can’t tell whether a request is from Washington state or southern California. That makes it hard for Google to achieve its goal of serving content from the data center located closest to the end user.
Header Compression Brings Big Gains
Hölzle noted a glaring inefficiency in the handling of web page headers, which provide information about a user’s IP address, browser and other session data. The average web page makes 44 calls to different resources, with many of those requests including repetitive header data. Hölzle said compressing the headers produces an 88 percent page load improvement for some leading sites.
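The gain comes from the fact that those 44 requests carry nearly identical headers, so a compression context shared across a whole connection can reduce every request after the first to a handful of bytes. Here is a minimal sketch of that idea in Python using zlib; the header names and values are invented for illustration, not taken from the talk.

```python
import zlib

# Typical request headers, repeated across the ~44 resource requests
# a page makes. Values are illustrative.
headers = (
    "GET /style.css HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Chrome/5.0\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Cookie: session=abc123; prefs=lang%3Den\r\n\r\n"
).encode()

# One shared compression context for the whole connection, so later
# requests can reference the near-identical headers sent earlier.
comp = zlib.compressobj()
sizes = []
for _ in range(10):  # ten requests on the same connection
    chunk = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
    sizes.append(len(chunk))

print(f"raw per request: {len(headers)} bytes")
print(f"first compressed: {sizes[0]} bytes, later: {sizes[-1]} bytes")
```

Because every request repeats the same User-Agent, Accept and Cookie strings, the shared dictionary makes each request after the first cost only a few bytes on the wire.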
Google is also using its own data center infrastructure to offer speed enhancements for web surfers and site owners. An example is Google Public DNS, which allows anyone to use Google’s distributed system of DNS servers. “We saw that DNS performance was often a contributor to slowness,” said Hölzle.
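Trying Google Public DNS is a matter of pointing the system resolver at Google’s well-known addresses, 8.8.8.8 and 8.8.4.4 — on Linux, for example, in the resolver config:

```
# /etc/resolv.conf — use Google Public DNS instead of the ISP's resolvers
nameserver 8.8.8.8
nameserver 8.8.4.4
```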
Serving Popular Files
Google also hosts widely-used files on its own servers for public use. An example is jquery.js, a file often cited as a performance bottleneck for web applications like WordPress.
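Instead of serving jQuery from their own servers, sites can reference Google’s hosted copy, so a version cached from one site is reused on every other site that references the same URL. A sketch of the script tag (the version number here is illustrative):

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>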
Building faster sites can matter in a site’s Google ranking, Hölzle noted, referencing a recent change in which Google considers site loading speed as a factor in its ranking algorithm.
“If your web site is slow, you’ll drop down in the rankings, because we know users don’t like slow sites,” he said. Page speed isn’t as important as other factors (like content relevance) but can serve as a deciding factor between two sites with similar rankings, Hölzle said.
“There are a lot of relatively simple things you can do to speed up your web site,” Hölzle said. “They all eliminate overhead that is avoidable. We can do this on our own sites, but this isn’t something Google can do by itself.”
UPDATE: The Velocity team has posted full video of Urs’ presentation: