Optimizing Page Load Times 186

John Callender writes, "Google engineer Aaron Hopkins has written an interesting analysis of optimizing page load time. Hopkins simulated connections to a web page consisting of many small objects (HTML file, images, external javascript and CSS files, etc.), and looked at how things like browser settings and request size affect perceived performance. Among his findings: For web pages consisting of many small objects, performance often bottlenecks on upload speed, rather than download speed. Also, by spreading static content across four different hostnames, site operators can achieve dramatic improvements in perceived performance."
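A rough sketch of the hostname-spreading idea from the summary (the static0-3.example.com hostnames and the helper names below are made up for illustration, not taken from the article): map each asset path to one of four hostnames deterministically, so every asset keeps a stable, cacheable URL while the browser opens its per-host connections to all four hosts in parallel.

import zlib

# Sketch only: static0-3.example.com and the helper names are hypothetical.
# Each asset path maps to one hostname deterministically, so its URL never
# changes (and stays cacheable) while the browser parallelizes across hosts.
STATIC_HOSTS = ["static0.example.com", "static1.example.com",
                "static2.example.com", "static3.example.com"]

def shard_host(path):
    """Return the same hostname for a given asset path every time."""
    return STATIC_HOSTS[zlib.crc32(path.encode("utf-8")) % len(STATIC_HOSTS)]

def asset_url(path):
    return "http://%s%s" % (shard_host(path), path)

if __name__ == "__main__":
    for p in ("/img/logo.png", "/css/site.css", "/js/app.js"):
        print(p, "->", asset_url(p))

The crc32 hash is just one convenient way to keep the asset-to-host mapping stable from page to page; any deterministic function would do.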
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • HTTP Pipelining (Score:5, Informative)

    by onion2k ( 203094 ) on Monday October 30, 2006 @06:13AM (#16640007) Homepage
    If the user were to enable pipelining in his browser (such as setting Firefox's network.http.pipelining in about:config), the number of hostnames we use wouldn't matter, and he'd make even more effective use of his available bandwidth. But we can't control that server-side.

    For those who don't know what that means: http://www.mozilla.org/projects/netlib/http/pipelining-faq.html [mozilla.org]

    I've had it switched on for ages. I sometimes wonder why it's off by default.
  • Re:Erm.. huh? (Score:4, Informative)

    by rf0 ( 159958 ) <rghf@fsck.me.uk> on Monday October 30, 2006 @06:15AM (#16640019) Homepage
    If you are on a fast broadband pipe you are correct, but there are still a lot of people on small connections with low upload limits (64-256 kbit), and I can see how this could be a bottleneck, since the browser can't get the requests out fast enough. That said, there are things users can do to help themselves.

    Firstly, if the ISP has a proxy server, using it will reduce the trip time for cached content, which then only has to travel a few hops rather than perhaps all the way across the world. You can also look at something like Onspeed [onspeed.com], a paid-for product that compresses images (though it makes them look worse) and other content; it can give a decent boost on very slow (GPRS/3G) connections and also lets you get more out of your transfer quota.

  • by leuk_he ( 194174 ) on Monday October 30, 2006 @06:17AM (#16640029) Homepage Journal
    "Regularly use your site from a realistic net connection. Convincing the web developers on my project to use a "slow proxy" that simulates bad DSL in New Zealand (768Kbit down, 128Kbit up, 250ms RTT, 1% packet loss) rather than the gig ethernet a few milliseconds from the servers in the U.S. was a huge win. We found and fixed a number of usability and functional problems very quickly."

    What (free) simulation tools are available for this? I only know dummynet, which requires a Linux server and some advanced routing. But surely there is more. Is there?
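    One crude do-it-yourself option, as a sketch only: a small TCP relay in front of your own dev web server that adds latency and caps throughput, which you then browse through instead of hitting the server directly. The upstream name dev.example.com, the ports, and all of the numbers below are placeholders, and real tools (dummynet, netem, and the like) do this far more accurately.

import socket
import threading
import time

# Crude "slow link" simulator: relay traffic to a dev web server while adding
# latency and a bandwidth cap. Works best when the dev server answers
# regardless of the Host header it receives.
UPSTREAM = ("dev.example.com", 80)   # hypothetical dev web server
LISTEN   = ("127.0.0.1", 8888)
DELAY    = 0.250                     # extra latency per connection direction
RATE     = 128 * 1024 // 8           # ~128 kbit/s expressed in bytes per second

def pump(src, dst):
    """Copy bytes src -> dst with added delay and a rough bandwidth cap."""
    try:
        time.sleep(DELAY)
        while True:
            data = src.recv(4096)
            if not data:
                break
            time.sleep(len(data) / float(RATE))   # throttle to roughly RATE
            dst.sendall(data)
    except OSError:
        pass

def handle(client):
    server = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, server), daemon=True).start()
    pump(server, client)                          # relay the response side here
    client.close()
    server.close()

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(LISTEN)
listener.listen(5)
print("Browse http://127.0.0.1:8888/ to see the site over the simulated link")
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()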

  • Css and Scripts (Score:5, Informative)

    by Gopal.V ( 532678 ) on Monday October 30, 2006 @06:36AM (#16640111) Homepage Journal

    I've done some benchmarks and measurements in the past which will never be made public (I work for Yahoo!), and the most important factors in those have been CSS and scripts. A lot of performance has been squeezed out of the HTTP layer (Akamai, Expires headers), but not enough attention has been paid to the rendering side of the experience. You could possibly reproduce the benchmarks with a PHP script that does a sleep() to introduce delays at various points, and a weekend to waste [dotgnu.info] (a rough stand-in is sketched at the end of this comment).

    The page does not start rendering until the last CSS stream is complete, which means that if your CSS has @import url() entries, the delay before rendering increases (until that file is pulled and parsed too). It really pays to load the CSS faster than anything else, because without it all you'll get is a blank page for a while.

    Scripts marked defer do not always defer, and so much inline code in <script> tags depends on those scripts that a lot of browsers just pull the scripts in as and when they find them. There also seem to be only two threads downloading data in parallel (from one hostname), which means a couple of large (but rarely used) scripts in the page will block the rest of the CSS/image fetches. See Flickr's Organizr [flickr.com] for an example of that in action.

    You should understand that these resources have different priorities where rendering is concerned, and you should really only venture here after you've optimized the other bits (server [yahoo.com] and application [php.net]).

    All said and done, good tutorial by Aaron Hopkins - a lot of us have had to rediscover all that (& more) by ourselves.
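    The sleep() experiment suggested above is easy to reproduce. Here is a rough Python stand-in for the suggested PHP script (the file names and the 5-second delay are arbitrary): load the page in a browser and watch it stay blank until the deliberately slow stylesheet referenced from <head> finally arrives.

import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Rough Python stand-in for the "PHP script with sleep()" idea above: the
# stylesheet referenced from <head> is served slowly, so the browser shows a
# blank page until it arrives.
PAGE = (b"<html><head><link rel=\"stylesheet\" href=\"/slow.css\"></head>"
        b"<body><p>If you can read this, the CSS has arrived.</p></body></html>")
CSS = b"p { color: green; }"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slow.css":
            time.sleep(5)                       # simulate a slow stylesheet
            body, ctype = CSS, "text/css"
        else:
            body, ctype = PAGE, "text/html"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

ThreadingHTTPServer(("127.0.0.1", 8000), Handler).serve_forever()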

  • by Anonymous Coward on Monday October 30, 2006 @06:52AM (#16640183)
    hostname != domain name

    Why would a sub-domain confuse anyone?

    rss.slashdot.org
    apple.slashdot.org
    ask.slashdot.org
    backslash.slashdot.org
  • by giafly ( 926567 ) on Monday October 30, 2006 @06:53AM (#16640187)
    If a big part of your job involves using a Web-based application, reducing page-load times really helps. My real job is writing one of these applications, and getting the caching right is much more important than sexier topics like AJAX. There's some good advice in TFA.
  • Re:Css and Scripts (Score:3, Informative)

    by Evets ( 629327 ) on Monday October 30, 2006 @06:56AM (#16640203) Homepage Journal
    I've found that once a page has layout it will begin rendering, and not before. CSS imported in the body does not prevent rendering; CSS imported in the HEAD does. In fact, the CSS and JavaScript in the head section seem to need to be downloaded before rendering starts.

    I have also found that cached CSS and JavaScript can play with you a little bit. When developing a site you tend to see an expected set of behaviors based on your own experience with it, but you can find later that having the external files either cached or not cached changes things (e.g. a cached JavaScript file's load event may fire before the DOM is ready if you aren't checking for the readiness of the DOM itself).

    ETag headers are very important as well. Running "tail -f access.log" while you browse your own site will show a lot of redundant requests for JavaScript, CSS, and image files that should be cached but aren't. IE has a "check for newer versions of stored pages" setting that really fouls up CSS background images without proper expiration headers (lots of flickering). A sketch of these caching headers appears at the end of this comment.

    There is still a significant portion of the web community on dialup connections, and these users are seemingly ignored by many popular sites. I try to get pages to load in under 8 seconds for dialup users, but with any significant JavaScript or CSS that is sometimes a difficult task. Consecutive page loads are much easier if you force caching, but that doesn't matter one bit if the user goes elsewhere because the initial page load was too slow.

    There are certainly a plethora of optimization techniques not even touched on in this article. I know that Google and Yahoo are very keen on these subjects, and it's worth taking a look at the source of some of their pages for ideas. Last I checked they couldn't care less about validation, though. But with the bandwidth they must use, saving a few bytes here and there can mean significant dollar differences at the end of the month, and what truly matters is whether or not the browser renders the page correctly.
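    A minimal sketch of the headers under discussion (a made-up WSGI app serving one made-up stylesheet, not anyone's production setup): a far-future Expires/Cache-Control pair so repeat visits skip the request entirely, plus an ETag so that when the browser does revalidate, the answer is a body-less 304.

import hashlib
import time
from email.utils import formatdate
from wsgiref.simple_server import make_server

# Hypothetical example: serve one stylesheet with far-future expiration and an
# ETag, answering conditional If-None-Match requests with a 304 and no body.
CSS = b"body { font-family: sans-serif; }"
ETAG = '"%s"' % hashlib.md5(CSS).hexdigest()
ONE_YEAR = 365 * 24 * 3600

def app(environ, start_response):
    headers = [
        ("Content-Type", "text/css"),
        ("ETag", ETAG),
        ("Cache-Control", "public, max-age=%d" % ONE_YEAR),
        ("Expires", formatdate(time.time() + ONE_YEAR, usegmt=True)),
    ]
    if environ.get("HTTP_IF_NONE_MATCH") == ETAG:
        start_response("304 Not Modified", headers)   # revalidation: no body
        return [b""]
    headers.append(("Content-Length", str(len(CSS))))
    start_response("200 OK", headers)
    return [CSS]

make_server("127.0.0.1", 8001, app).serve_forever()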
  • Re:Pipelining (Score:2, Informative)

    by smurfsurf ( 892933 ) on Monday October 30, 2006 @07:21AM (#16640317)
    Pipelining is not the same as keep-alive, although pipelining needs a keep-alive connection.
    Pipelining means "multiple requests can be sent before any responses are received."
  • Re:Erm.. huh? (Score:4, Informative)

    by x2A ( 858210 ) on Monday October 30, 2006 @07:23AM (#16640321)
    There are other factors.

    1 - With keep-alive/pipelined connections only one DNS lookup is performed, and since the result is often cached on your local machine, this delay is minimal.

    2 - The DNS lookup for the second host can happen while connections to the first host are still downloading, rather than everything stopping while the second host is looked up. This hides the latency of the second lookup.

    3 - Most browsers limit the number of connections to each server to 2. If you're loading lots of images, this means you can only be loading two at once (or one while the rest of the page is still downloading). If you put images on a different host, you can get extra connections to it. Also, cookies will usually stop an object from taking advantage of proxies/caches; putting images on a different host is an easy way to make sure they're not cookied.

  • Re:HTTP/1.1 Design (Score:3, Informative)

    by jakoz ( 696484 ) on Monday October 30, 2006 @07:24AM (#16640327)
    Then perhaps they need to invest in some modern systems. The following definitions are interesting:

    3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

    4. SHOULD NOT This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.

    They don't say DO NOT or MUST NOT. Like they say, the behavior can be useful... and they could see this would be the case IN 1997!

    It is time we updated things. It's particularly funny that this is the RFC, of all things, that Microsoft chose to obey.
  • Re:Pipelining (Score:4, Informative)

    by TheThiefMaster ( 992038 ) on Monday October 30, 2006 @07:26AM (#16640333)
    Pipelining is not keep-alive. Keep-alive means sending multiple requests down one connection, waiting for the response to each request before sending the next. Pipelining sends all the requests at once without waiting (see also the socket-level sketch after the diagrams).

    Keep-alive no:
    Open connection
    -Request
    -Response
    Close Connection
    Open connection
    -Request
    -Response
    Close Connection
    -Repeat-

    Keep-alive yes:
    Open connection
    -Request
    -Response
    -Request
    -Response
    -Repeat-
    Close Connection

    Pipelining yes:
    Open connection
    -Request
    -Request
    -Repeat-
    -Response
    -Response
    -Repeat-
    Close Connection
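    The same contrast at the socket level, as a quick sketch (example.com is just a placeholder, and the far end has to be an HTTP/1.1 server that tolerates pipelined requests): both requests are written before anything is read, and both responses come back in order on the one connection.

import socket

# Pipelining at the socket level: send two requests back to back, then read.
# example.com is a placeholder host for illustration.
HOST = "example.com"
requests = (
    "GET / HTTP/1.1\r\nHost: %s\r\n\r\n" % HOST +
    "GET /favicon.ico HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % HOST
)

sock = socket.create_connection((HOST, 80))
sock.sendall(requests.encode("ascii"))   # both requests go out, no waiting

raw = b""
while True:                              # read until the server closes
    chunk = sock.recv(4096)
    if not chunk:
        break
    raw += chunk
sock.close()

# Crude check: two status lines in one byte stream means two responses came
# back on the single connection, in order.
print(raw.count(b"HTTP/1.1 "), "responses received,", len(raw), "bytes total")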
  • Re:Connection Limits (Score:3, Informative)

    by MathFox ( 686808 ) on Monday October 30, 2006 @07:32AM (#16640355)
    The "max two connections per webserver" limit is to keep resource usage in the webserver down; a single apache thread can use 16 or 32 Mbyte of RAM for dynamicly generated webpages. If you get 5 page requests a second and it takes (on average) 10 seconds to handle the request and send back the results you need 1 Gb RAM in the webserver, if you can ignore Slashdot. (2-4 Gb to handle peaks)

    If you have a second webserver for all static data, that can be a simpler HTTP daemon using 1 MB per connection or less. You can handle many more parallel connections (and Akamai the setup if needed!).

    Yes, it's best to avoid inline images, Google text-ad objects, etc. But by allowing parallel loading of the objects (and that's the trick of using several separate hosts for images) you can run 8 or 16 round trips at the same time; that is your perceived speedup.
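    Spelling out the back-of-the-envelope numbers above (every input is the comment's assumption, not a measurement):

# The arithmetic behind the estimate above; all inputs are assumptions taken
# from the comment, not measurements.
req_per_sec  = 5      # page requests arriving per second
secs_per_req = 10     # average time to handle a request and send the result
concurrent   = req_per_sec * secs_per_req            # ~50 requests in flight

low_mb, high_mb = concurrent * 16, concurrent * 32   # 16-32 MB per Apache thread
print(concurrent, "concurrent requests ->", low_mb, "to", high_mb, "MB of RAM")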

  • Re:Pipelining (Score:3, Informative)

    by x2A ( 858210 ) on Monday October 30, 2006 @07:45AM (#16640397)
    Keep-alive sends the next request after the first has completed, but on the same connection (this requires the server to send a Content-Length: header, so the browser knows after how many bytes the response has finished. Without this, the server must close the connection so the browser knows it's done).

    Pipelining sends requests out without having to wait for the previous one to complete (this also requires a Content-Length: header. That is fine for static files such as images, but many scripts whose output is sent straight to the browser as it's being generated will break this, since the content length isn't known until generation has finished). A toy sketch of that framing follows.
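    Assuming a simple byte stream where every response carries Content-Length (no chunked encoding, no HEAD responses), splitting a reused connection's traffic back into individual responses looks roughly like this; without the header, the only end-of-response signal left is the server closing the connection.

# Toy illustration only: Content-Length is what lets a client cut a reused
# (keep-alive/pipelined) connection's byte stream back into responses.
def split_responses(stream):
    responses = []
    while stream:
        head, _, rest = stream.partition(b"\r\n\r\n")
        length = 0
        for line in head.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":", 1)[1])
        responses.append(head + b"\r\n\r\n" + rest[:length])
        stream = rest[length:]
    return responses

two = (b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
       b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nbye")
for response in split_responses(two):
    print(response)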
  • Some reasons (Score:3, Informative)

    by harmonica ( 29841 ) on Monday October 30, 2006 @07:56AM (#16640435)
    I've had it switched on for ages. I sometimes wonder why it's off by default.

    Some reasons against pipelining [mozillazine.org].
