Google: How We’re Making the Web Faster
June 23rd, 2010 By: Rich Miller
At last year’s Velocity conference, Google detailed how faster web pages were boosting its bottom line. This year the search giant is showcasing how it is using its software, servers and infrastructure to create a faster Internet – and calling on site owners to join the effort.
The average web page takes 4.9 seconds to load and includes 320 KB of content, according to Urs Hölzle, Google’s Senior Vice President of Operations. In his keynote Wednesday morning at the O’Reilly Velocity 2010 conference in Santa Clara, Calif., Hölzle was preaching to the choir, addressing a crowd of 1,000 attendees focused on improving the performance and profitability of their web operations.
‘Ensemble’ of Elements
“Speed matters,” said Hölzle. “The average web page isn’t just big, it’s complicated. Web pages aren’t just HTML. A web page is a big ensemble of things, some of which must load serially.”
At Velocity 2009 Google’s Marissa Mayer discussed how latency had a direct impact on Google’s bottom line. When Google’s pages loaded faster, Mayer said, users searched more and Google made more ad revenue. This year Hölzle discussed a broad spectrum of initiatives by Google to extend those benefits to the wider web. That includes efforts to accelerate Google’s own infrastructure, advance standards to speed the web’s core protocols and provide tools for site owners to create faster sites.
He cited the Chrome web browser as an example of how Google’s efforts can generate benefits for a larger audience. Chrome is designed for speed, Hölzle said, noting independent research that showed Chrome loading pages faster than competing browsers.
“Our goal isn’t to take 100 percent market share for browsers. Our goal is to move the web forward,” Hölzle said, noting that Firefox and Internet Explorer have improved the speed of their browsers with each new release since Chrome’s debut. “This competition is working. They (Microsoft and Mozilla) are doing a better job working on speed.”
Targeting Core Protocols
One of the foundational problems is that core Internet protocols such as TCP/IP, DNS and SSL/TLS haven’t been updated to reduce the overhead that slows the loading of complex web pages. Google has developed refinements to address these challenges, which Hölzle said have been submitted to Internet standards bodies.
Hölzle said Google has seen a 12 percent improvement in the performance of its sites through refinements in its implementation of TCP/IP. He said Google also could serve pages faster if it had more information from DNS servers about an end user’s location. The nature of the DNS system means that Google sometimes can’t tell whether a request is from Washington state or southern California. That makes it hard for Google to achieve its goal of serving content from the data center located closest to the end user.
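Google’s answer to this problem was a proposed DNS extension that lets a resolver forward a truncated portion of the client’s address to the authoritative server, later standardized as EDNS Client Subnet. A sketch of how that option is encoded on the wire, following the eventual RFC 7871 format (illustrative, not from the talk; the article itself names no specific mechanism):

```python
import struct

def edns_client_subnet(ip: str, prefix_len: int) -> bytes:
    """Encode an EDNS0 Client Subnet option (option code 8, IPv4 only).

    Only the leading prefix_len bits of the address are sent, so the
    resolver reveals the client's rough location, not its exact IP.
    """
    octets = bytes(int(p) for p in ip.split("."))
    addr = octets[: (prefix_len + 7) // 8]          # truncate to the prefix
    family, scope = 1, 0                            # 1 = IPv4; scope is 0 in queries
    payload = struct.pack("!HBB", family, prefix_len, scope) + addr
    return struct.pack("!HH", 8, len(payload)) + payload  # option code 8 + length

opt = edns_client_subnet("203.0.113.7", 24)
print(opt.hex())  # → 0008000700011800cb0071
```

With a /24 prefix, only three address bytes leave the resolver, enough for Google to pick the nearest data center without seeing the full client IP.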
Header Compression Brings Big Gains
Hölzle noted a glaring inefficiency in the handling of web page headers, which carry information about a user’s IP address, browser and other session data. The average web page makes 44 calls to different resources, with many of those requests including repetitive header data. Hölzle said compressing the headers produces an 88 percent improvement in page load time for some leading sites.
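The gain comes from that repetition: successive requests from the same browser carry nearly identical headers, so running them all through one shared compression stream (as Google’s SPDY protocol later did with zlib) makes every request after the first cost only a few bytes. A small illustration; the header text is invented:

```python
import zlib

# Typical request headers, repeated nearly verbatim on each of the
# ~44 requests an average page makes (text invented for illustration).
headers = (
    b"GET /logo.png HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1) Chrome/5.0\r\n"
    b"Accept: image/png,image/*;q=0.8,*/*;q=0.5\r\n"
    b"Accept-Encoding: gzip,deflate\r\n"
    b"Cookie: session=abcdef0123456789\r\n\r\n"
)

# One zlib stream shared across requests; each message is flushed so it
# can be sent immediately, but the dictionary persists between messages.
comp = zlib.compressobj()
first = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

print(len(headers), len(first), len(second))
```

The second request compresses to a handful of bytes, because the stream’s dictionary already contains the entire header block from the first.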
Google is also using its own data center infrastructure to offer speed enhancements for web surfers and site owners. An example is Google Public DNS, which allows anyone to use Google’s distributed system of DNS servers. “We saw that DNS performance was often a contributor to slowness,” said Hölzle.
Serving Popular Files
Google also hosts widely-used files on its own servers for public use. An example is jquery.js, a file often cited as a performance bottleneck for web applications like WordPress.
Building faster sites can matter in a site’s Google ranking, Hölzle noted, referencing a recent change in which Google considers site loading speed as a factor in its ranking algorithm.
“If your web site is slow, you’ll drop down in the rankings, because we know users don’t like slow sites,” he said. Page speed isn’t as important as other factors (like content relevance) but can serve as a deciding factor between two sites with similar rankings, Hölzle said.
“There are a lot of relatively simple things you can do to speed up your web site,” Hölzle said. “They all eliminate overhead that is avoidable. We can do this on our own sites, but this isn’t something Google can do by itself.”
UPDATE: The Velocity team has posted full video of Urs’ presentation:
What about content delivery networks?
It would be nice to know what they’ve changed at the TCP/IP level to get a 12% boost. If it’s tuning configurable parameters, what changes do they recommend? Also, what are the trade-offs associated with changing these defaults?
There is really no reason for a site to take longer than 1 second to load on a high-speed line. Google’s been saying for years that HTML needs to be validated, white space removed, CSS and HTML compressed, and images optimized through Photoshop. These are just the basics. I’m glad they’re now taking a proactive approach, since hardly anyone has done any of the things I mentioned.
Google’s desire to see the IP address of the end user when responding to a DNS request is an extreme case of cost shifting, as well as a fundamental and incompatible change to the Internet and DNS architecture. I recommend that Google find other ways to improve their “time to first ad” metric.
Roberto Peon (posted June 24th, 2010)
Re: Clinton Cimring.
It depends on the RTT and how many RTTs the page takes.
Take a look at:
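Peon’s point about RTTs can be made concrete with back-of-envelope arithmetic (all numbers below are illustrative assumptions, not from the thread): when fetches serialize, latency rather than bandwidth dominates page load time.

```python
rtt_s = 0.080       # assumed 80 ms round-trip time
resources = 44      # average calls per page, per the article
connections = 6     # typical parallel connections per host (assumption)

# Each connection serializes its share of requests, costing one RTT each,
# plus roughly one extra RTT for the TCP handshake. Bandwidth is ignored.
round_trips = resources / connections + 1
print(round(round_trips * rtt_s, 2))  # → 0.67 seconds spent purely on latency
```

Halve the RTT and the latency cost halves with it, which is exactly why fewer round trips (header compression, fewer requests, closer servers) matter more than fatter pipes.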
Some top tips for website uptime and speedy delivery of pages:
peterschende (posted June 24th, 2010)
Metering all the dynamics of a page is only possible by replacing TCP/IP, so that acceleration has a full view of the underlying transactions. That, of course, requires communication with the higher protocol, which in the HTTP/TCP/IP model is non-existent.
That was why, when I built my accelerator technology, I built an improved HTTP (which I call RIMP) and a replacement for TCP/IP (which I call JBT/IP), so that I could rewrite and address the problems with TCP, DNS, TLS and a host of other things. Fundamentally, TCP/IP is not suited for today’s web, so among many other things I made JBT/IP stream-based rather than transaction-based. In my tests, my accelerator technology was 50-100% faster than Google, Propel, ProxyConn or anything Mozilla is using.
Compression is no longer the key issue. Improving the speed of a web page requires replacing not only the fundamental components of a connection (the protocol stack) but also how a web browser and the web infrastructure are designed. The trick is doing all that without requiring a change in web page programming.
Take, for example, the loading of images. Why download and display each and every image at full quality and size? It delays everything. So I built my experiment so that images were first delivered at a lower quality, and only once the page is loaded do I continue to stream in a progression of bytes, slowly improving the image quality. Unlike Propel’s image technology, my approach does not require reloading the web page; it is more like focusing a lens than a rigid low-then-high quality reload from cache. Why? Initial speed and usability. A mobile user, or one navigating quickly, does not need to wait for the ultra-high-res version of a button or ad before clicking through to the next page. This of course raises design issues, but programmatically, as long as a browser sees web page elements as independent objects rather than as parts of a whole picture, it cannot separate what is important from what is not and compress, stream and display with priority.
But it’s a lot of work, it takes serious and radical thinking.
I was compressing headers and scripts to get better results years ago. I figured out that the biggest waste of all was the overhead of packet headers, a result of having such a large protocol stack. It made more sense to have a single protocol than many for most transactions.
If you take into account the average web page, including all scripts and related pulls (such as advertisements), I think the average page is about 750 KB. This also changes the playing field.
The whole concept of caching also requires fresh thinking, and I am using some new ideas which I call Reflection and Byte-Stream-Caching (BSC), a form of non-file incremental caching. I can’t yet tell you whether this is a success; it still needs more testing, but the idea seems to make sense. File-sharing apps have been doing something similar for years (I know, I built one). Why replace a whole file in cache if only a part of it has changed?
Finally, the consistent monitoring, adjusting and delivery of data (which accumulates as a web page in the browser as the final product) cannot be based on RTT or simple assumptions about a connection. That is why I rewrote NTP and created a transport-protocol notification method that much more accurately gauges and predicts the single-trip time of a connection. In other words, my approach is based not on RTT but on data collected about each packet, using a time system that tracks relative distance/time so that minor adjustments in packet size and rate can be made. JBT (my protocol that replaces TCP) does just that, and very well, I might add. But it was really hard to improve NTP 4 to accomplish this and to build the interaction between the application protocol and the object.
As an example, consider what I did.
Take an image being transferred by HTTP (in my case, my protocol RIMP). It’s a whole object of data. Transferred by TCP/IP, it’s nothing more than a series of bytes with little to no relation. But if the whole transfer is labeled (RIMP passes down to JBT “I have an object of size x; label each packet as x”), then as each packet is transferred and received, information can be passed back to the sender (JBT and RIMP) about its success rate (number of retransmissions and MTU) to improve the transfer of that image and of all images next in the web page sequence. It may be determined by RIMP that a large image of 1 MB is best split into 4 smaller images and transferred that way rather than as a whole, either to improve priority for other objects, to improve JBT burst rates, or to take advantage of multiple data streams (swarming) rather than a single linear stream.
There are so many angles, so many areas to be improved:
Education (proper programming / application of design)
Which from a technical view means to improve…
Browsing (end-user perception and utilization) which leads to prefetching and that sort of thing
Operating System Limitations & protocol stacks
All of these areas require re-thinking to improve web page load times. There is no one single problem or solution.
My approach has been to incorporate as many as possible and to keep exploring more ideas, no matter how strange.
Glad to see Google is attacking this with eyes open.
Now, if I only had a job with those guys!
LDW (posted June 29th, 2010)
Ghaouar Camij Toschian (posted July 5th, 2010)
Make the internet faster through caching.
For caching to work, the HTTP protocol should be slightly modified:
client sends http request to server
server sends back date of birth of the record
client checks its cache to see whether it has the record with a date of birth >= the date of birth of the record on the server.
If it has the record, there is no need to get it from the server.
If it does not have the up to date record, it requests it, gets it and stores it in its cache.
Also, same trick can be used for database client/server requests.
Also, there would be need for tools to manage/clear the cache if it gets corrupted.
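The scheme described here is essentially conditional revalidation, which HTTP already approximates with Last-Modified / If-Modified-Since and 304 Not Modified responses. A toy sketch of the four steps above (the class and function names are invented for illustration):

```python
import time

class Server:
    """Toy origin server that tracks a 'date of birth' per record."""
    def __init__(self):
        self.records = {}              # url -> (mtime, body)
    def put(self, url, body):
        self.records[url] = (time.time(), body)
    def last_modified(self, url):      # cheap, metadata-only round trip
        return self.records[url][0]
    def get(self, url):                # full-body transfer
        return self.records[url][1]

def fetch(url, cache, server):
    mtime = server.last_modified(url)      # 1. ask for the date of birth
    entry = cache.get(url)
    if entry and entry[0] >= mtime:        # 2. cached copy is fresh enough
        return entry[1], "hit"
    body = server.get(url)                 # 3. stale or missing: refetch
    cache[url] = (mtime, body)             # 4. store it with its timestamp
    return body, "miss"

server, cache = Server(), {}
server.put("/page", "v1")
print(fetch("/page", cache, server)[1])  # miss (first fetch)
print(fetch("/page", cache, server)[1])  # hit  (record unchanged)
server.put("/page", "v2")
print(fetch("/page", cache, server)[1])  # miss (record was updated)
```

Real HTTP improves on this by folding steps 1 and 3 into a single round trip: the client sends the timestamp it has, and the server replies with either a tiny 304 or the new body.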