Optimizing Page Load Times 186
John Callender writes, "Google engineer Aaron Hopkins has written an interesting analysis of optimizing page load time. Hopkins simulated connections to a web page consisting of many small objects (HTML file, images, external javascript and CSS files, etc.), and looked at how things like browser settings and request size affect perceived performance. Among his findings: For web pages consisting of many small objects, performance often bottlenecks on upload speed, rather than download speed. Also, by spreading static content across four different hostnames, site operators can achieve dramatic improvements in perceived performance."
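The hostname-spreading idea from the summary can be sketched in a few lines. This is a minimal illustration, not code from Hopkins's analysis; the shard hostnames (static1.example.com through static4.example.com) are invented for the example. The key property is that the mapping is deterministic, so a given asset always resolves to the same hostname and browser caches stay warm:

```python
import hashlib

# Hypothetical shard hostnames -- four, per the article's suggestion.
SHARDS = [f"static{i}.example.com" for i in range(1, 5)]

def shard_url(path: str) -> str:
    """Map an asset path to one of four hostnames deterministically,
    so the same asset is always fetched from the same host."""
    digest = hashlib.md5(path.encode()).digest()
    host = SHARDS[digest[0] % len(SHARDS)]
    return f"http://{host}{path}"

print(shard_url("/img/logo.png"))
```

Because browsers cap simultaneous connections per hostname, four hostnames let the browser open roughly four times as many parallel fetches for the page's small objects.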
Erm.. huh? (Score:2, Insightful)
I can see its use on large sites, but this seems aimed at smaller sites.
Then again, HTML isn't my thing, so I guess it goes over my head.
HTTP/1.1 Design (Score:5, Insightful)
From TFA:
And:
From RFC 2616, section 8.1.4: "Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy."
It's not a browser quirk; it's specified behavior.
Caching of dynamic content (Score:5, Insightful)
Replacing nasty long query strings with nicer, more memorable URIs is also something we should be seeing more of in "Web 2.0." Use mod_rewrite [google.com]; you'll feel better for it.
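A minimal mod_rewrite sketch of what the comment suggests; the paths and parameter names here are hypothetical, invented for illustration:

```apache
# Hypothetical example: expose /articles/42 to visitors instead of
# /index.php?page=article&id=42 (both names invented for this sketch).
RewriteEngine On
RewriteRule ^articles/([0-9]+)$ /index.php?page=article&id=$1 [L,QSA]
```

The rewrite happens server-side, so the browser and any bookmarks only ever see the clean URI.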
Gmail (Score:3, Insightful)
The fun thing is that newer AJAX products from Google (like goffice) don't suffer from this behavior; they have much cleaner code (just use View Source in your favorite browser and see). Probably Gmail's HTML/JavaScript is already showing its age, and paying the price for being the first AJAX app at Google.
Re:HTTP/1.1 Design (Score:4, Insightful)
Re:HTTP/1.1 Design (Score:3, Insightful)
In the end you have just one pipe to push that data through, even if you have, say, 100 connections.
By keeping one pipe of a certain capacity (i.e. bandwidth) but increasing the number of connections, you're wasting bandwidth on the maintenance of multiple connections.
You're also wasting the server's resources for the same reason.
In the end, you're slowing yourself down.
Yes, there are scenarios where using, for example, 4 connections instead of 1 yields better download performance, but AFAIK almost all such scenarios are very specific to a given webserver implementation, a given network, a given browser, ...
So to sum up: I think the 1-2 active connections per client mentioned in RFC 2616 was generally valid in 1997, is generally valid now, and will also be generally valid in the future.
Conversely, "the hack" of using multiple connections to speed up downloads may sometimes have been, and may sometimes still be, valid, but it generally degrades performance.
The pity is that Aaron Hopkins mentions the true solution (HTTP pipelining) only as "(Optional)" and at the very end of the article. But he does correctly describe his earlier suggestions as "tricks". :)
All the offsite stuff is ads anyway. Block them. (Score:3, Insightful)
This is an excellent argument for ad blocking. The article never mentions the basic truth: almost all offsite content on web pages is ads. (Of course, this is someone from Google talking, and Google, after all, is an ad-delivery service that runs a search engine to boost its hits.) Web page load is choking on ads. I noted previously that some sites load ads from as many as six different sources. This saturates the number of connections the browser supports. Page load then bottlenecks on the slowest ad server.
So install AdBlock and FlashBlock in Firefox, and watch your browsing speed up.
Web-based advertising looks like a saturated market. Watch for some big bankruptcies among advertising-supported services.