Optimizing Page Load Times

John Callender writes, "Google engineer Aaron Hopkins has written an interesting analysis of optimizing page load time. Hopkins simulated connections to a web page consisting of many small objects (HTML file, images, external javascript and CSS files, etc.), and looked at how things like browser settings and request size affect perceived performance. Among his findings: For web pages consisting of many small objects, performance often bottlenecks on upload speed, rather than download speed. Also, by spreading static content across four different hostnames, site operators can achieve dramatic improvements in perceived performance."
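
As a rough sketch of the multi-hostname idea (the hostnames and the hashing helper below are made up for illustration, not taken from Hopkins's write-up): assign each static asset to one of four aliases of the same server, deterministically, so the browser opens more parallel connections while each asset still has a single, cacheable URL.

    const STATIC_HOSTS = [
      "static1.example.com",
      "static2.example.com",
      "static3.example.com",
      "static4.example.com",
    ];

    // Cheap, stable hash so a given asset always maps to the same hostname
    // (keeps browser and proxy caches effective).
    function hostFor(path: string): string {
      let h = 0;
      for (const ch of path) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
      return STATIC_HOSTS[h % STATIC_HOSTS.length];
    }

    function assetUrl(path: string): string {
      return "http://" + hostFor(path) + path;
    }

    // assetUrl("/images/logo.gif") might yield "http://static2.example.com/images/logo.gif"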
  • Re:Erm.. huh? (Score:3, Interesting)

    by mabinogi ( 74033 ) on Monday October 30, 2006 @06:27AM (#16640071) Homepage
    1.5Mbps ADSL.
    5 seconds to refresh the page on Slashdot. That's just getting the page to actually blank and refresh; there's still the time it takes to load all the comments after that.
    Sometimes it's near instant, but most of the time it's right around that.
    Most of the time is spent "Waiting for slashdot.org", or "connecting to images.slashdot.org".
    It used to be a hell of a lot worse, but I installed Adblock to eliminate all the extra unnecessary connections (Google Analytics, and the various ad servers). I didn't care about the ads or the tracking, it just bugged me that those things made my browsing experience slower.
    I find it funny that this guy is suggesting spreading across multiple hosts; it's my completely unscientific and entirely anecdotal experience that the more host names the browser has to resolve to load the page, the longer it takes before you get to see anything.

    I'm in Australia so there's a minimum 200 ms latency on roundtrips - five roundtrips and you've added 1 second to the rendering time. Approaches that add extra DNS lookups really aren't going to help. (Though the DNS lookups themselves aren't necessarily going to take 200 ms - they could be much faster if they're in my ISP's DNS cache, or they could be longer if it has to query them.)
  • by Jussi K. Kojootti ( 646145 ) on Monday October 30, 2006 @06:34AM (#16640105)
    Try trickle. It won't do fancy stuff like simulating packet loss, but a
    trickle -d 100 -u 20 -L 50 firefox
    should cap the download and upload rates and add some latency.
  • Re:HTTP Pipelining (Score:5, Interesting)

    by baadger ( 764884 ) on Monday October 30, 2006 @06:49AM (#16640171)
    This is NOT just Opera fanboyism, but Opera *does* do pipelining by default (with a safe fallback):

    Opera pipelines by default - and uses heuristics to control the level of pipelining employed depending on the server Opera is connected to
    Reference [operawiki.info]
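
    For anyone curious what pipelining looks like on the wire, here's a minimal raw-socket sketch (Node.js/TypeScript, purely illustrative - it's not how Opera or any browser implements it, and example.com is just a stand-in host). Two requests are written back-to-back on one connection before the first response arrives, and an HTTP/1.1 server must answer them in order:

        import * as net from "net";

        const host = "example.com";  // stand-in; any HTTP/1.1 server with keep-alive will do
        const request = (path: string) =>
          "GET " + path + " HTTP/1.1\r\nHost: " + host + "\r\nConnection: keep-alive\r\n\r\n";

        const socket = net.connect(80, host, () => {
          // Both requests go out immediately; there's no waiting for the first response.
          socket.write(request("/"));
          socket.write(request("/favicon.ico"));
        });

        let raw = "";
        socket.on("data", (chunk) => { raw += chunk.toString(); });
        socket.setTimeout(5000, () => {
          // Two status lines in one buffer show both responses shared the connection.
          console.log(raw.split("\r\n").filter((line) => line.startsWith("HTTP/1.1")));
          socket.end();
        });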
  • Connection Limits (Score:3, Interesting)

    by RAMMS+EIN ( 578166 ) on Monday October 30, 2006 @07:04AM (#16640253) Homepage Journal
    ``By default, IE allows only two outstanding connections per hostname when talking to HTTP/1.1 servers or eight-ish outstanding connections total. Firefox has similar limits.''

    Anybody know why? This seems pretty dumb to me. Request a page with several linked objects (images, stylesheets, scripts, ...) in it (i.e., most web pages), and lots of those objects are going to be requested sequentially, costing you lots of round trips.
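
    A back-of-the-envelope model of why that cap matters for pages full of small objects (the numbers are illustrative, not from the article): without pipelining, fetch time grows roughly with ceil(objects / parallel connections) round trips.

        const rttMs = 200;   // round-trip time, e.g. the Australian case earlier in the thread
        const objects = 24;  // small images/CSS/JS files on a single hostname

        for (const parallel of [2, 4, 8]) {
          const rounds = Math.ceil(objects / parallel);
          console.log(parallel + " connections: ~" + rounds * rttMs + " ms spent on round trips");
        }
        // 2 connections: ~2400 ms, 4 connections: ~1200 ms, 8 connections: ~600 ms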
  • Requests Too Large (Score:3, Interesting)

    by RAMMS+EIN ( 578166 ) on Monday October 30, 2006 @07:08AM (#16640273) Homepage Journal
    FTFA:

    ``Most DSL or cable Internet connections have asymmetric bandwidth, at rates like 1.5Mbit down/128Kbit up, 6Mbit down/512Kbit up, etc. Ratios of download to upload bandwidth are commonly in the 5:1 to 20:1 range. This means that for your users, a request takes the same amount of time to send as it takes to receive an object of 5 to 20 times the request size. Requests are commonly around 500 bytes, so this should significantly impact objects that are smaller than maybe 2.5k to 10k. This means that serving small objects might mean the page load is bottlenecked on the users' upload bandwidth, as strange as that may sound.''

    I've said for years that HTTP requests are larger than they should be. It's good to hear it confirmed by someone who's taken seriously. This is even more of an issue when doing things like AJAX, where you send HTTP requests and receive HTTP responses + XML verbosity for what should be small and quick user interface actions.
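
    Plugging the article's numbers into a quick calculation (assuming a 1.5 Mbit down / 128 kbit up line and a ~500-byte request) shows the effect: the request can take longer to send than the object takes to arrive.

        const upBytesPerSec = 128_000 / 8;      // 16,000 B/s upstream
        const downBytesPerSec = 1_500_000 / 8;  // 187,500 B/s downstream

        const requestBytes = 500;
        const uploadMs = (requestBytes / upBytesPerSec) * 1000;  // ~31 ms just to send the request

        for (const objectBytes of [1_000, 2_500, 10_000]) {
          const downloadMs = (objectBytes / downBytesPerSec) * 1000;
          console.log(objectBytes + " B object: request " + uploadMs.toFixed(0) +
                      " ms up, body " + downloadMs.toFixed(0) + " ms down");
        }
        // A 2.5 kB object downloads in ~13 ms, but its request took ~31 ms to send -
        // the "bottlenecked on the users' upload bandwidth" effect quoted above.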
  • by ggvaidya ( 747058 ) on Monday October 30, 2006 @07:10AM (#16640279) Homepage Journal
    You could try using Sloppy [dallaway.com]. I've only ever heard about it because its programmer has a very nice page on getting a free Thawte FreeMail certificate to work with Java WebStart [dallaway.com], so this isn't a recommendation or anything. Looks pretty decent, though.
  • Re:HTTP/1.1 Design (Score:5, Interesting)

    by x2A ( 858210 ) on Monday October 30, 2006 @07:29AM (#16640347)
    The limit's not to do with your connection speed as such - it's to do with being polite and not putting too much drain on the server you're downloading from.

  • Re:Css and Scripts (Score:1, Interesting)

    by Anonymous Coward on Monday October 30, 2006 @08:27AM (#16640585)
    The remarkable thing here is that Google is one of the major causes of slow loading web pages due to the way their adsense system works. The webmaster is not allowed to modify the code which loads the script that creates the ads. Thus the script always loads inline, and since ads are usually placed at the top of a page, delays in delivering the adsense script, which have become more frequent and severe lately, cause the rest of the page to stall.
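
    Setting aside whether the ad network's terms allow touching its loader at all, the stall itself comes from loading the script inline at the top of the page. A common workaround of this sort (sketch only - the URL is a placeholder, not the real ad script) is to inject the script element after the page has rendered, so its download no longer blocks parsing:

        window.addEventListener("load", () => {
          const s = document.createElement("script");
          s.src = "http://ads.example.com/show_ads.js";  // placeholder URL, for illustration only
          document.body.appendChild(s);                  // dynamically injected scripts don't block the parser
        });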
  • Re:Erm.. huh? (Score:4, Interesting)

    by orasio ( 188021 ) on Monday October 30, 2006 @08:32AM (#16640599) Homepage
    User perception of responsiveness on interfaces has a lower bound of 200 ms. Sometimes even lower.

    Just because 1 second seems fast, it doesn't mean that it's fast enough to stop improving.
    When you reach that 200 ms barrier, the interface has perfect responsiveness; any bigger interval can still be improved.
  • by mrsbrisby ( 60242 ) on Monday October 30, 2006 @10:34AM (#16641529) Homepage
    Nice trick with 4 hostnames, but this means 4 security contexts for your content, which may make a lot of development hard (especially client based with JavaScript).
    Why? Doesn't your javascript explicitly state document.domain to the common root?

    Not to mention the management issues of having to link to content on 4 different domains in an efficient enough manner.
    You mean creating four hostnames for the same address? Or do you mean changing a few src="" attributes?

    This leaves us with pipelining on the client, which could result in much worse load peaks on the servers though.
    Wrong. It leaves us with nothing. Didn't you read the article? HTTP pipelining isn't enabled in the big two web browsers, so as far as "reality" is concerned it doesn't exist. It's like IPv6: who cares how much "better" it is if no one is using it?
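
    For reference, the document.domain relaxation mentioned above is a one-liner on each page or frame involved (the hostnames here are hypothetical); every frame that wants to script another must opt in to the same value:

        // Run on pages served from static1.example.com, static2.example.com, etc.
        if (document.domain.endsWith(".example.com")) {
          document.domain = "example.com";  // frames that set the same value share an origin
        }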
  • by AkimAmaklav ( 1020253 ) on Monday October 30, 2006 @11:24AM (#16642179) Homepage
    Has anyone played around with multipart/mixed or such replies? These could reduce the number of requests but is there any support for them in browsers?
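
    For what it's worth, this is roughly what such a reply looks like on the wire - a small Node.js/TypeScript sketch of the server side only, bundling several objects behind one boundary. As the question says, whether browsers will actually treat the parts as usable sub-resources is the open issue:

        import * as http from "http";

        const boundary = "bundle-boundary";

        http.createServer((req, res) => {
          res.writeHead(200, { "Content-Type": "multipart/mixed; boundary=" + boundary });
          const parts: [string, string][] = [
            ["text/css", "body { margin: 0; }"],
            ["application/javascript", "console.log('hello');"],
          ];
          for (const [type, body] of parts) {
            res.write("--" + boundary + "\r\nContent-Type: " + type + "\r\n\r\n" + body + "\r\n");
          }
          res.end("--" + boundary + "--\r\n");
        }).listen(8080);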
