Optimizing Page Load Times (186 comments)

John Callender writes, "Google engineer Aaron Hopkins has written an interesting analysis of optimizing page load time. Hopkins simulated connections to a web page consisting of many small objects (HTML file, images, external javascript and CSS files, etc.), and looked at how things like browser settings and request size affect perceived performance. Among his findings: For web pages consisting of many small objects, performance often bottlenecks on upload speed, rather than download speed. Also, by spreading static content across four different hostnames, site operators can achieve dramatic improvements in perceived performance."
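A minimal sketch of the hostname-spreading idea from the summary, in Python. The static1-static4.example.com names and the asset path are made up for illustration; in practice all four names would typically be aliases for the same servers, and the point is just that the same asset always maps to the same hostname so browser caches stay warm:

  # Deterministically spread static asset URLs over four hostnames.
  import zlib

  STATIC_HOSTS = ["static1.example.com", "static2.example.com",
                  "static3.example.com", "static4.example.com"]

  def asset_url(path):
      """Pick a hostname by hashing the path, so the choice is stable across page views."""
      host = STATIC_HOSTS[zlib.crc32(path.encode("utf-8")) % len(STATIC_HOSTS)]
      return "http://%s%s" % (host, path)

  print(asset_url("/images/logo.png"))   # always the same host for this path
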
  • Erm.. huh? (Score:2, Insightful)

    I'm not quite sure how this really has any "real world" effects. Browsing speed is already insanely fast to the point where if you blink you miss the loading on most connections these days. How does speeding up this second or so really help anything?

    I can see its use on large sites but this seems aimed at smaller sites.

    Then again HTML isn't my thing so it goes over my head I guess.
    • Re:Erm.. huh? (Score:4, Informative)

      by rf0 ( 159958 ) <rghf@fsck.me.uk> on Monday October 30, 2006 @06:15AM (#16640019) Homepage
      If you are on a fast broadband pipe you are correct, but there are still a lot of other people on small connections with low upload limits (64k-256kbit), and I can see why this could be a bottleneck, as they can't get the requests out fast enough. That said, there are things a user can do to help themselves.

      Firstly, if the ISP has a proxy server then using it will reduce the trip time for some stored content, meaning it only has to go over a few hops rather than perhaps all the way across the world. You can also look at something like Onspeed [onspeed.com], which is a paid-for product but compresses images (though makes them look worse) and content, and can give a decent boost on very slow (GPRS/3G) connections and also get more out of your transfer quota.

    • Browsing speed is already insanely fast to the point where if you blink you miss the loading on most connections these days.
      Unfortunately that is not true. Many "broadband" connections are definitely not insanely fast, and at least here in Finland the upload speeds of most connections are so pathetic that the problems mentioned in the article are very easily observed.
    • Re: (Score:3, Interesting)

      by mabinogi ( 74033 )
      1.5Mbps ADSL.
      5 seconds to refresh the page on slashdot. That's just getting the page to actually blank and refresh; there's still the time it takes to load all the comments after that.
      Sometimes it's near instant, but most of the time it's around about that.
      Most of the time is spent "Waiting for slashdot.org", or "connecting to images.slashdot.org".
      It used to be a hell of a lot worse, but I installed adblock to eliminate all the extra unnecessary connections (google analytics, and the various ad servers). I did
      • Re:Erm.. huh? (Score:4, Informative)

        by x2A ( 858210 ) on Monday October 30, 2006 @07:23AM (#16640321)
        There are other factors.

        1 - keepalive/pipelined connections mean only one DNS lookup is performed, and since the result is often cached on your local machine, this delay is minimal.

        2 - the dns lookup can be happening for the second host while connections to the first host are still downloading, rather than stopping everything while the second host is looked up. This hides the latency of the second lookup.

        3 - most browsers limit the number of connections to each server to 2. If you're loading loads of images, this means you can only be loading two at once (or one while the rest of the page is still downloading). If you put images on a different host, you can get extra connections to it. Also, cookies will usually stop an object from taking advantage of proxies/caches. Putting images on a different host is an easy way to make sure they're not cookied.
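        A rough illustration of that last point (the hostnames and cookie values here are made up): once a site sets cookies, every image request to that hostname carries them, which both bloats the upstream request and usually makes shared caches refuse to cache the object. A separate, never-cookied hostname avoids both problems.

          GET /images/header.gif HTTP/1.1
          Host: www.example.com
          Cookie: session=abc123; prefs=...       (sent with every object on this host)

          GET /images/header.gif HTTP/1.1
          Host: static.example-img.com            (no cookies were ever set for this host,
                                                   so requests stay small and cache-friendly)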

        • 1. HTTP pipelining has nothing to do with DNS. Your machine's IP stack takes care of this, and caches DNS entries regardless. You'll never make multiple DNS requests for the same host in a short period of time, unless you've seriously screwed something up on your client. HTTP pipelining keeps a TCP connection open for more than one object - so, you save yourself the time of a SYN/SYN-ACK/ACK handshake for every further object from the same host. With high-latency links, this can improve performance dramatic
          • by x2A ( 858210 )
            "You'll never make multiple DNS requests for the same host in a short period of time"

            We're not talking about the same host... the parent poster said:

            "I have pipelining on, which may be why multiple hosts is a net loss for me, instead of a gain"

            and I explained that dns lookups for connections to the second host can be occurring in the background while data is being transferred over the connection to the first (ie, when a tag is found pointing to another host in an html page that's downloading, a new connect
            • I was responding to this:

              1 - keepalive/pipelining connections means only 1 dns lookup is performed, often cached on your local machine means this delay is minimal.

              Pipelining does not change behaviour to multiple hosts in the slightest. It's a keepalive of the connection to the same host.

              I think I misinterpreted your second point a bit :)
      • It's obvious how much you identify as part of the slashdot community. I mean, only dedicated slashdotters would go to the trouble to adblock the few banner adverts on this page just so they can get to the comments faster. That's a clear sign that you care more about the good of the community than yourself. I just wanted to say: Thank you for making such a commitment to this website. If everyone took the extraordinary steps to block completely unobtrusive advertising than this website would be a much better
        • Ok, I promise to unblock the ads, but I'll be damned if I'm going to actually READ them. net effect = same?
        • I agree with you in principle, but google-analytics, runner.splunk and double-click really do need to beef up their response times. I've made the changes suggested in the article to my copy of firefox 2.0 and it's made a real seat-of-the-pants difference to the perceived load times, yet the total load time seems about the same. The main effect now is that because the browser is more responsive and doesn't block, I've scrolled down below the slow-loading ads before they've even started to load.
        • by Skreems ( 598317 )
          I suppose now may not be the time to mention that I fast-forward through television commercials too...
        • Do advertisements "know" when they are adblocked? Otherwise, it's the same as not looking at them, which is what most people do anyway. If so, perhaps adblock could do something about that?

          • Do advertisements "know" when they are adblocked?

            I didn't think so. I mean, the main host doesn't know when a request is made from your browser to the ad-server, and the ad-server obviously doesn't know that a connection was NOT made to it just now.
            Nonetheless, this site [ations.net] seems to know that their google ads have been blocked.
            With Adblock Plus active, in place of the ads, they put an image with "Thank you! For blocking our ads!".
            I haven't researched whether Adblock or Firefox somehow divulges information th

            • Yay for replying to myself...

              I did what I should have done in the first place, glanced at the source, and found this little script:

              // Swap in "blocked" images when the Google ads iframe is missing
              // (i.e. when an ad blocker has stripped it out).
              function bees() {
                var e = document.getElementsByName("google_ads_frame");
                if (!e || e.length == 0) {
                  var e1 = document.getElementById('topbanner');
                  var e2 = document.getElementById('sidebanner');
                  e1.innerHTML = "<img src='/images/blockedtop.png'>";
                  e2.innerHTML = "<img src='/images/blockedside.png'>";
                }
              }

        • by mabinogi ( 74033 )
          I come here because I sometimes find interesting stories. I don't come here out of any sense of obligation to "The Slashdot Community". And that's been how it has been since 1998.
          If they want to show ads, that's fine, I don't have a problem with ads in themselves - the slashdot ones aren't _too_ intrusive most of the time. But no one can expect people to like a feature that degrades the performance of a web site. It doesn't take too long of looking at a blank window with the status bar saying "contactin
    • by jakoz ( 696484 )
      It has very big implications still. For you it obviously has no effect, but let me give you an example.

      We are in the middle of the planning of a software release that rolls out to thousands of users. So that they can access it remotely, we are toying with the idea of supporting 3G PCMCIA cards.

      In the area we're benchmarking in, latency and a retarded slow-start windowing algorithm are the limiting factors. Keep in mind that this software is crucial to the company, which is a fairly large one. Adoption
    • by giafly ( 926567 ) on Monday October 30, 2006 @06:53AM (#16640187)
      If a big part of your job involves using a Web-based application, reducing page-load times really helps. My real job is writing one of these applications and getting the caching right is much more important than sexier topics like AJAX. There's some good advice in TFA.
    • Re:Erm.. huh? (Score:4, Interesting)

      by orasio ( 188021 ) on Monday October 30, 2006 @08:32AM (#16640599) Homepage
      User perception of responsiveness on interfaces has a lower bound of around 200 ms. Sometimes even lower.

      Just because 1 second seems fast, it doesn't mean that it's fast enough to stop improving.
      When you reach that 200 ms barrier, the interface has perfect responsiveness; anything longer can still be improved.
  • HTTP Pipelining (Score:5, Informative)

    by onion2k ( 203094 ) on Monday October 30, 2006 @06:13AM (#16640007) Homepage
    If the user were to enable pipelining in his browser (such as setting Firefox's network.http.pipelining in about:config), the number of hostnames we use wouldn't matter, and he'd make even more effective use of his available bandwidth. But we can't control that server-side.

    For those that don't know what that means: http://www.mozilla.org/projects/netlib/http/pipelining-faq.html [mozilla.org]

    I've had it switched on for ages. I sometimes wonder why it's off by default.
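    If you want to try it, the preferences involved (the same ones quoted further down in this thread) can also be set in a user.js file. A minimal sketch; the maxrequests value of 8 is just an example, not a recommendation:

      // Firefox user.js sketch: enable HTTP pipelining (equivalent to flipping
      // these by hand in about:config).
      user_pref("network.http.pipelining", true);
      user_pref("network.http.proxy.pipelining", true);    // only matters when using a proxy
      user_pref("network.http.pipelining.maxrequests", 8); // arbitrary example value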
    • I always wonder why it's off by default. IE I can understand -- they still don't support XHTML -- but Firefox?
    • Some reasons (Score:3, Informative)

      by harmonica ( 29841 )
      I've had it switched on for ages. I sometimes wonder why it's off by default.

      Some reasons against pipelining [mozillazine.org].
      • Er, not a very informative page; the only caveat listed is a vague notion that it's "unsupported" and "can prevent Web pages from displaying correctly" because it's "incompatible with some Web servers and proxy servers". There may well be good reasons, but that page doesn't really explain why.
      • First, a couple of points: Asa Dotzler's comments were dated December 26, 2004, which is a few release cycles ago, but the comments he made were actually about the following:

        Here's something for broadband people that will really speed Firefox up:

        1.Type "about:config" into the address bar and hit return. Scroll down and look for the following entries:
        network.http.pipelining network.http.proxy.pipelining network.http.pipelining.maxrequests
        Normally the browser will make one request to a web page at a tim

    • by tweakt ( 325224 ) *
      Hmm. I just tested this with Firefox 2.0 and it seems to cause problems immediately.
      Not all sites will handle this correctly. It seemed like some of the pipelined requests were being ignored -- I'd get a page and none of the images would load. Certain application servers are probably written under the assumption that 1 connection = 1 request. Also, hardware load balancers, reverse proxy servers and application-layer firewalls might not handle pipelining properly either. It's surprising though, because it is
  • HTTP/1.1 Design (Score:5, Insightful)

    by keithmo ( 453716 ) on Monday October 30, 2006 @06:14AM (#16640015) Homepage

    From TFA:

    By default, IE allows only two outstanding connections per hostname when talking to HTTP/1.1 servers or eight-ish outstanding connections total. Firefox has similar limits.

    And:

    If your users regularly load a dozen or more uncached or uncachable objects per page load, consider evenly spreading those objects over four hostnames. Due to browser oddness, this usually means your users can have 4x as many outstanding connections to you.

    From RFC 2616, section 8.1.4:

    Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.

    It's not a browser quirk, it's specified behavior.

    • by jakoz ( 696484 )
      It might be specified behavior, but it's stupidly outdated, and seriously needs to get with the times.

      It has been that way since I had dialup many years ago. It might have been prudent at the time, but now it is sadly outdated.

      Things have changed. The popularity of FasterFox, which happily breaks all specifications, is a reflection of it.

      I feel that 10-20 is a much more realistic figure now. I haven't seen many webmasters complaining about FasterFox.
      • I feel that 10-20 is a much more realistic figure now. I haven't seen many webmasters complaining about FasterFox.

        I've seen webmasters complain right on FasterFox's download page on Mozilla Update.
        • Re: (Score:3, Informative)

          by jakoz ( 696484 )
          Then perhaps they need to invest in some modern systems. The following definitions are interesting:

          3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
          4. SHOULD NOT This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior
          • Perhaps I should have been more clear, but I was about to head to work.

            The problem that webmasters have with FasterFox has nothing to do with HTTP/1.1 or any RFC. It has to do with that FasterFox prefetches all of the links on a page [mozdev.org]. That's why there are webmasters figuring out how to block FasterFox requests [skattertech.com].

            As a webmaster, I happen to agree with them. I don't want people downloading pages that they're not even going to look at, wasting my bandwidth. The pipelining and max connections I don't have a probl
      • Re:HTTP/1.1 Design (Score:5, Interesting)

        by x2A ( 858210 ) on Monday October 30, 2006 @07:29AM (#16640347)
        The limit's not to do with your connection speed as such - it's to do with being polite and not putting too much drain on the server you're downloading from.

        • by jakoz ( 696484 )
          I realize that, but the limits need to be revised. 2 might have been courteous a decade ago, but now it isn't realistic.
          • Re:HTTP/1.1 Design (Score:4, Insightful)

            by x2A ( 858210 ) on Monday October 30, 2006 @07:48AM (#16640405)
            Depends on server load; how many of the objects are static vs dynamic etc. 5-10 connections for images might be okay, but for dynamic objects it might not be. Perhaps it should be specifiable within the html page?

            • by jakoz ( 696484 )
              You know... that's a damn good idea and should be modded up. It's a very good solution that should be in the specs already. Granted, some browsers could ignore it, but they could anyway.
            • Putting it within HTML code would be ugly and problematic, but putting it in the HTTP response headers seems like a mighty fine idea. I honestly don't know why they're not doing that. Laziness?
          • Re: (Score:3, Insightful)

            by hany ( 3601 )

            In the end you have just one pipe to push that data through, even if you have, say, 100 connections.

            By still having one pipe with a certain capacity (i.e. bandwidth) but increasing the number of connections, you're wasting your bandwidth on the maintenance of multiple connections.

            Also, you're wasting the resources of the server for the same reason.

            In the end, you're slowing yourself down.

            Yes, there are scenarios where using for example 4 connections as opposed to just 1 yields better download performance but AFAIK almost al

        • by bigpat ( 158134 )
          The limit's not to do with your connection speed as such - it's to do with being polite and not putting too much drain on the server you're downloading from.

          The design of the website is what would be causing greater server load, not a browser setting. The total amount of resources is the same if we are just talking about getting all of a page's images in parallel instead of serially. So, if you are just getting all the needed files more quickly then you are just getting out of the way more quickly for when th
        • by dfghjk ( 711126 )
          ...and the performance article is suggesting ways to get around such politeness by tricking the browsers into thinking they are connecting to a larger number of servers. If that's not evidence of outdated advice then what do you want?
      • by Sulka ( 4250 )
        Uhh... I bet you haven't ever administered a large website.

        When you have a lot of concurrent users, the amount of TCP sockets you can have open on a given server while still maintaining good throughput is limited. If all users out there had 20 sockets open to each server, making sites scale would be seriously hard on very large sites.

        I do agree the two-socket limit is a bit low, but 20 would be total overkill.
    • The wonderful thing about the RFC language "SHOULD" and "SHOULD NOT" is that it really is only a suggestion that does not need to be followed. It makes it wonderful to test all possible combinations of "should" and "should not" options in the protocol with both clients and servers, probably the biggest source of bugs and problems.

      rfc2119 [faqs.org] defines the terms:

      3. SHOULD This word, or the adjective "RECOMMENDED", mean that there
      may exist valid reasons in particular circumstances to ignore a
      particular item, bu

    • "A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy."

      It's not a browser quirk, it's specified behavior.


      It may be considered a quirk that browsers use hostname to determine whether two servers are the same, rather than IP address.
    • by Trogre ( 513942 )
      From RFC 2616, section 8.1.4:

              Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.

      It's not a browser quirk, it's specified behavior.


      Ah, got it: the RFC is broken and needs updating. Thanks.

      Anyone else care to provide Comments for this Request 2616?

    • Specified but not mandatory. Thus the description of "browser oddness".
  • by leuk_he ( 194174 ) on Monday October 30, 2006 @06:17AM (#16640029) Homepage Journal
    "Regularly use your site from a realistic net connection. Convincing the web developers on my project to use a "slow proxy" that simulates bad DSL in New Zealand (768Kbit down, 128Kbit up, 250ms RTT, 1% packet loss) rather than the gig ethernet a few milliseconds from the servers in the U.S. was a huge win. We found and fixed a number of usability and functional problems very quickly."

    What (free) simulation is available for this? I only know dummynet, which requires a FreeBSD box and some advanced routing. But surely there is more. Is there?
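    One option worth mentioning: on a Linux box that routes the test traffic, tc with netem and tbf can approximate a profile like the one quoted above. A sketch only; eth0 and the exact parameters are examples, and this shapes one direction, so the upstream side needs the mirror-image rule on the other interface (or an ifb device):

      # Add latency and packet loss, then rate-limit what leaves eth0.
      tc qdisc add dev eth0 root handle 1:0 netem delay 250ms loss 1%
      tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 768kbit buffer 1600 limit 3000
      # Repeat with "rate 128kbit" on the interface carrying the upstream direction.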

  • Css and Scripts (Score:5, Informative)

    by Gopal.V ( 532678 ) on Monday October 30, 2006 @06:36AM (#16640111) Homepage Journal

    I've done some benchmarks and measurements in the past which will never be made public (I work for Yahoo!). And the most important bits in those have been CSS and Scripts. A lot of performance has been squeezed out of the HTTP layers (akamai, Expires headers), but not enough attention has been paid to the render section of the experience. You could possibly reproduce the benchmarks with a php script which does a sleep() for a few seconds to introduce delays at various points and with a weekend to waste [dotgnu.info].

    The page does not start rendering till the last CSS stream is completed, which means if your css has @import url() entries, the delay before render increases (until that file is pulled & parsed too). It really pays to have the quickest load for the css data over anything else - because without it, all you'll get is a blank page for a while.

    Scripts marked defer do not always defer, and a lot of inline code in <script> tags depends on such scripts, so a lot of browsers just pull the scripts in as and when they find them. There seem to be just two threads downloading data in parallel (from one hostname), which means a couple of large (but rarely used) scripts in the code will block the rest of the css/image fetches. See flickr's organizr [flickr.com] for an example of that in action.

    You should understand that these resources have different priorities in the render land and you should really only venture here after you've optimized the other bits (server [yahoo.com] and application [php.net]).

    All said and done, good tutorial by Aaron Hopkins - a lot of us have had to rediscover all that (& more) by ourselves.
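    A small illustration of the @import point above (filenames made up): with nested imports the browser can't even start fetching the second stylesheet until the first has arrived and been parsed, while plain <link> tags expose both to the browser immediately.

      <!-- Slower: extra.css is only discovered after base.css is downloaded and parsed -->
      <style type="text/css">
        @import url("/css/base.css");  /* and base.css itself contains: @import url("extra.css"); */
      </style>

      <!-- Faster: both stylesheets are visible to the browser up front -->
      <link rel="stylesheet" type="text/css" href="/css/base.css">
      <link rel="stylesheet" type="text/css" href="/css/extra.css">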

    • Re: (Score:3, Informative)

      by Evets ( 629327 )
      I've found that once a page has layout it will begin rendering and not before. CSS imported in the body does not prevent rendering. CSS imported in the HEAD will. In fact, the css and javascript in the head section seem to need to be downloaded prior to rendering.

      I have also found that cached CSS and Javascript can play with you a little bit. When developing a site you tend to see an expected set of behaviors based on your own experience with a site - but you can find later that having the external files either
    • ``The page does not start rendering till the last CSS stream is completed''

      On the other hand, stylesheets are often static and used by many pages, so they could be cached (Opera does this). The same is true of scripts.
  • by baadger ( 764884 ) on Monday October 30, 2006 @06:43AM (#16640147)
    This [web-caching.com] is a good place to start testing the 'cacheability' of your dynamic web pages. Quite frankly it's appalling that even the big common web apps used today like most forum or blog scripts don't generate sensible Last-Modified, Vary, Expires, Cache-Control headers. With most of the metadata you need to generate this stuff stored in the existing database schema, there's just really no excuse for it.

    Abolishing nasty long query strings in favour of nicer, more memorable URIs is also something we should be seeing more of in "Web 2.0." Use mod_rewrite [google.com], you'll feel better for it.
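    For what it's worth, a minimal sketch of the kind of header handling being described, as a WSGI-style Python handler. The function names, the one-hour lifetime and the exact-match If-Modified-Since comparison are all simplifying assumptions; a real forum or blog script would pull last_changed from its own database.

      # Sketch: emit Last-Modified / Expires / Cache-Control and answer
      # conditional GETs with 304 Not Modified.
      import time
      from wsgiref.handlers import format_date_time

      def cache_headers(last_changed, max_age=3600):
          now = time.time()
          return [
              ("Last-Modified", format_date_time(last_changed)),
              ("Expires",       format_date_time(now + max_age)),
              ("Cache-Control", "public, max-age=%d" % max_age),
          ]

      def handle(environ, start_response, last_changed, body):
          ims = environ.get("HTTP_IF_MODIFIED_SINCE")
          if ims == format_date_time(last_changed):    # naive exact-match comparison
              start_response("304 Not Modified", cache_headers(last_changed))
              return [b""]
          start_response("200 OK",
                         [("Content-Type", "text/html")] + cache_headers(last_changed))
          return [body]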
    • ``Quite frankly it's appalling that even the big common web apps used today like most forum or blog scripts don't generate sensible Last-Modified, Vary, Expires, Cache-Control headers.''

      The problem is that things don't usually break if you don't use these headers effectively. In other words, you don't notice that something could be improved.
    • by hacker ( 14635 )

      Woops!

      Not Found
      The requested URL /cgi-web-caching/cacheability.py was not found on this server.

      Apache/1.3.31 Server at www.web-caching.com Port 80
  • FTFA:

    ``Neither IE nor Firefox ship with HTTP pipelining enabled by default.''

    Huh? So all these web servers implement keep-alive connections and browsers don't use it?
    • Re: (Score:2, Informative)

      by smurfsurf ( 892933 )
      Pipelining is not the same as keep-alive, although pipelining needs a keep-alive connection.
      Pipelining means "multiple requests can be sent before any responses are received."
    • Re:Pipelining (Score:4, Informative)

      by TheThiefMaster ( 992038 ) on Monday October 30, 2006 @07:26AM (#16640333)
      Pipelining is not keep-alive. Keep-alive means sending multiple requests down one connection, waiting for the response to each request before sending the next. Pipelining sends all the requests at once without waiting.

      Keep-alive no:
      Open connection
      -Request
      -Response
      Close Connection
      Open connection
      -Request
      -Response
      Close Connection
      -Repeat-

      Keep-alive yes:
      Open connection
      -Request
      -Response
      -Request
      -Response
      -Repeat-
      Close Connection

      Pipe-lining yes:
      Open connection
      -Request
      -Request
      -Repeat-
      -Response
      -Response
      -Repeat-
      Close Connection
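      A minimal sketch of the pipelined case above, using nothing but a raw socket (Python standard library). The hostname is a placeholder, and plenty of servers and intermediaries will accept the requests without actually overlapping the work, which is part of why browsers ship with pipelining off:

        import socket

        HOST = "www.example.com"   # placeholder; any HTTP/1.1 server with keep-alive
        requests = (
            "GET / HTTP/1.1\r\nHost: {h}\r\n\r\n"
            "GET /style.css HTTP/1.1\r\nHost: {h}\r\nConnection: close\r\n\r\n"
        ).format(h=HOST)

        with socket.create_connection((HOST, 80)) as s:
            s.sendall(requests.encode("ascii"))   # both requests on the wire before any response
            reply = b""
            while True:                           # read until the server closes the connection
                chunk = s.recv(4096)
                if not chunk:
                    break
                reply += chunk

        # Both responses come back, in order, on the single connection.
        print(reply.decode("latin-1", "replace")[:400])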
    • Re: (Score:3, Informative)

      by x2A ( 858210 )
      Keep-alive sends the next request after the first has completed, but on the same connection (this requires the server to send a Content-Length: header, so the browser knows after how many bytes the object has finished loading. Without this, the server must close the connection so the browser knows it's done).

      Pipelining sends requests out without having to wait for the previous to complete (this does also require a Content-length: header. This is fine for static files, such as images, but many scripts where output is sen
      • by Nevyn ( 5505 ) *
        Pipelining sends requests out without having to wait for the previous to complete (this does also require a Content-length: header

        Not true, chunked encoding is fine. You just can't use connection close as end of entity marker ... but that's bad anyway.
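        For reference, a rough illustration of the chunked framing being referred to (the sizes are hexadecimal byte counts and the body text is made up): each chunk is prefixed with its length, and a zero-length chunk marks the end of the entity, so no Content-Length is needed up front.

          HTTP/1.1 200 OK
          Content-Type: text/html
          Transfer-Encoding: chunked

          1c
          <p>first piece of output</p>
          10
          <p>more HTML</p>
          0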

  • Connection Limits (Score:3, Interesting)

    by RAMMS+EIN ( 578166 ) on Monday October 30, 2006 @07:04AM (#16640253) Homepage Journal
    ``By default, IE allows only two outstanding connections per hostname when talking to HTTP/1.1 servers or eight-ish outstanding connections total. Firefox has similar limits.''

    Anybody know why? This seems pretty dumb to me. Request a page with several linked objects (images, stylesheets, scripts, ...) in it (i.e., most web pages), and lots of these objects are going to be requested sequentially, costing you lots of round trip times.
    • Re: (Score:3, Informative)

      by MathFox ( 686808 )
      The "max two connections per webserver" limit is to keep resource usage in the webserver down; a single apache thread can use 16 or 32 Mbyte of RAM for dynamicly generated webpages. If you get 5 page requests a second and it takes (on average) 10 seconds to handle the request and send back the results you need 1 Gb RAM in the webserver, if you can ignore Slashdot. (2-4 Gb to handle peaks)

      If you have a second webserver for all static data, that can be a simpler HTTP daemon with 1 Mb/connection or less. You

      • ``The "max two connections per webserver" limit is to keep resource usage in the webserver down''

        I understand that, but why write it into the standard? Couldn't servers be made to handle this? If you don't have the resources right now, just hold off on retrieving/handling the request for a while. If you can handle the load, you will be able to service clients quicker. Now, even if the server can handle the load, the clients will slow themselves down.

        ``If you get 5 page requests a second and it takes (on ave
        • by MathFox ( 686808 )

          If you don't have the resources right now, just hold off on retrieving/handling the request for a while.

          And make yourself extra vulnerable to DoS attacks... I know that it is hard to find the right balance of priorities when your site is slashdotted, been there :-(.

          10 seconds to process a request is a very long time. If it takes that long, a few extra round trip times don't matter much.

          Generating the megabyte of html is easily done within a second, there are few users that have a fast enough connect

          • ``

            If you don't have the resources right now, just hold off on retrieving/handling the request for a while.

            And make yourself extra vulnerable to DoS attacks...''

            No, actually. At least, having no 2-connection-per-hostname limit in the standard doesn't make you any more vulnerable than having such a limit, because there's no way to force clients to respect that limitation. If one is going to perform a DoS attack, why respect the standard?

            Of course, I see the point that when you get _many_ clients connecting,

      • by Raphael ( 18701 )

        This is not only for keeping the resource usage in the server down, but also for improving the overall performance of the whole network by avoiding congestion and packet losses. Note that the "whole network" includes not only the last mile (cable or DSL link between your home or office and your ISP), but also all routers at your ISP, in the backbone, etc.

        Here is the general idea: if all clients use only one or two TCP connections and they use HTTP pipelining, then the traffic will be less bursty on thes

    • by Malc ( 1751 )
      It's per the HTTP spec.
  • Requests Too Large (Score:3, Interesting)

    by RAMMS+EIN ( 578166 ) on Monday October 30, 2006 @07:08AM (#16640273) Homepage Journal
    FTFA:

    ``Most DSL or cable Internet connections have asymmetric bandwidth, at rates like 1.5Mbit down/128Kbit up, 6Mbit down/512Kbit up, etc. Ratios of download to upload bandwidth are commonly in the 5:1 to 20:1 range. This means that for your users, a request takes the same amount of time to send as it takes to receive an object of 5 to 20 times the request size. Requests are commonly around 500 bytes, so this should significantly impact objects that are smaller than maybe 2.5k to 10k. This means that serving small objects might mean the page load is bottlenecked on the users' upload bandwidth, as strange as that may sound.''

    I've said for years that HTTP requests are larger than they should be. It's good to hear it confirmed by someone who's taken seriously. This is even more of an issue when doing things like AJAX, where you send HTTP requests and receive HTTP responses + XML verbosity for what should be small and quick user interface actions.
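    To make the quoted figures concrete, a rough back-of-the-envelope sketch (the 1.5Mbit/128Kbit link and the 500-byte request are the article's example numbers; real header sizes vary):

      # How big does an object have to be before downloading it takes longer
      # than just sending its request upstream?
      UP_BPS   = 128000 / 8      # 128 Kbit/s upload  -> 16,000 bytes/s
      DOWN_BPS = 1500000 / 8     # 1.5 Mbit/s download -> 187,500 bytes/s
      REQUEST  = 500             # typical request size in bytes (from the article)

      t_up = REQUEST / UP_BPS            # ~0.031 s just to get the request out
      breakeven = t_up * DOWN_BPS        # ~5.7 KB downloads in that same time

      print("request upload time: %.0f ms" % (t_up * 1000))
      print("objects under ~%.1f KB are upload-bound" % (breakeven / 1024.0))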
    • if you think AJAX requests are bad on asymmetric bandwidth, take a look at the viewstate explosion going on in the MS VisualStudio.NET world.
  • Latency matters a lot. My connection does up to 512 KB/s downstream, meaning a 10 KB object would take about 0.02 seconds to receive. However, before I start receiving the bytes, my request has to travel to the server, and the response has to travel back to me. When the site is in the US (I'm in Europe) the round trip time to the server can easily be 100 to 200 ms. If the TCP connection is already open, this time gets added once. However, if the connection still has to be established, this will result in an
    • ``If you read the article, you will see that the default behavior for Firefox and MSIE is to use only up to two connections per hostname (resulting in many objects being received sequentially - add one round trip time for each), and that they don't use HTTP pipelining, meaning a new connection is set up for each object (add one round trip time for each).''

      Whoops. I somehow got confused into thinking that pipelining == keep-alive (despite clicking on the provided link). HTTP pipelining means that multiple re
    • by fruey ( 563914 )
      _every_ packet you receive has to be ACKed, and so latency can affect your download speed no matter how long your connection stays open.
      • ``_every_ packet you receive has to be ACKed, and so latency can affect your download speed no matter how long your connection stays open.''

        Larger sliding windows for TCP can significantly reduce that problem.
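        A quick sketch of the bandwidth-delay product behind that remark (the 512 KB/s rate is the grandparent's connection; the 150 ms round trip is just an example figure for a distant server):

          # The TCP window needed to keep a link busy is bandwidth x round-trip time.
          RATE_BPS = 512 * 1024     # 512 KB/s downstream
          RTT_S    = 0.150          # 150 ms round trip

          bdp = RATE_BPS * RTT_S    # ~77 KB must be in flight to fill the pipe
          print("window needed: %.0f KB (vs. 64 KB max without TCP window scaling)" % (bdp / 1024.0))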
  • Gmail (Score:3, Insightful)

    by protomala ( 551662 ) on Monday October 30, 2006 @07:39AM (#16640377) Homepage
    I hope they apply this study on Gmail. Using it on a non-broadband connection (plain 56k modem) is a pain unless you use the pure HTML view that is crap compared to other HTML webmails.
    The fun part is that newer AJAX products from google (like goffice) don't suffer from this behavior; they have much cleaner code (just use View Source in your favorite browser and see). Probably Gmail's HTML/JavaScript is already showing its age, and paying the price for being the first of Google's AJAX apps.
    • Yes, I have the problem also, it's very painful - my upload speed at home is severely limited, and GMail gets timeout messages occasionally - and if something like Bittorrent is running, then it's impossible to send anything with attachments - for example, a small 50 kb document is impossible to send.
      • YMMV, but I find that throttling Bittorrent to 90% of its maximum upload speed makes the difference between "internet connection almost unusable" and "internet connection working almost normally".
  • There is a paper about this in SIGCOMM 1997 (!) by Nielsen, Gettys, et al that goes into far more detail of the "whys" and "wherefores". I'm not sure this shows ANYTHING new. In fact, what this gentleman demonstrates is the way that TCP windows work. By spreading requests over four hosts you are in effect getting four times the window size, arguably more than your fair share. Without looking at the aggregate impact, one cannot really judge what's going on.

    Also, the reason pipelining is turned off by def
  • There are a lot of posts here asking "why is this important" and saying that pages already load fast enough on their broadband Internet connection. That may be true for you, but I'm frequently in a position where I am designing a site that needs to load over a slow satellite connection in rural Africa, say, or into a remote village in Nepal. They have a fairly recent computer, OS and browser on the receiving end, but their Internet connection is dog slow; anything I can do to speed it up will be greatly a

    • This won't help your pet poor people. This paper assumes relatively high bandwidth, and examines latency introduced by things like pipelining and script execution. If you have a slow pipe, whether you open 2 TCP sessions to 1 server, or 10 to 5 servers, you're limited by your slow pipe. You need localized caching and prefetching logic to keep your pipe full all the time, so that when people need the data, it's likely already there.
  • "Also, by spreading static content across four different hostnames, site operators can achieve dramatic improvements in perceived performance."

    How ironic that a google engineer would say this, since doing this will also pretty well kill your google pagerank rankings. Google is great, yes, but among its many, many problems are the ridiculous ways that it forces people to do web design if they want a decent pagerank. Another is how it "helpfully" directs you to "geographically relevant" searches - meaning

  • Nice trick with 4 hostnames, but this means 4 security contexts for your content, which may make a lot of development hard (especially client-side work with JavaScript).
    Not to mention the management issues of having to link to content on 4 different domains in an efficient enough manner.

    This leaves us with pipelining on the client, which could result in much worse load peaks on the servers though.

    In the end: let the page load a little slower, the workarounds are not worth it.
    • Re: (Score:3, Interesting)

      by mrsbrisby ( 60242 )

      Nice trick with 4 hostnames, but this means 4 security contexts for your content, which may make a lot of development hard (especially client based with JavaScript).

      Why? Doesn't your javascript explicitly set document.domain to the common root?

      Not to mention the management issues of having to link to content on 4 different domains in an efficient enough manner.

      You mean creating four hostnames for the same address? Or do you mean changing a few src="" attributes?

      This leaves us with pipelining on the client

  • Has anyone played around with multipart/mixed or such replies? These could reduce the number of requests but is there any support for them in browsers?
  • by Animats ( 122034 ) on Monday October 30, 2006 @01:39PM (#16644227) Homepage

    This is an excellent argument for ad blocking. The article never mentions the basic truth - almost all offsite content on web pages is ads. (Of course, this is someone from Google talking, and Google, after all, is an ad-delivery service which runs a search engine to boost their hits.) Web page load is choking on ads. I noted previously that some sites load ads from as many as six different sources. This saturates the number of connections the browser supports. Page load then bottlenecks on the slowest ad server.

    So install AdBlock and FlashBlock in Firefox, and watch your browsing speed up.

    Web-based advertising looks like a saturated market. Watch for some big bankruptcies among advertising-supported services.

  • So MythTV is still guessing, like the ad-skipping VCRs of twenty years ago.

    There's better data available. Broadcast TV signals contain considerable metadata. The AMOL data in the VBI and the SID data in the audio clearly identify the program content and source. Here's an encoder [norpak.ca] for that information, which is inserted to make Nielsen ratings and advertising payments work.

    See U.S. patent #5,699,124 for some details of how the data is encoded.

    So far, the PVR community doesn't seem to have figured this

  • I just spent several minutes - three times - waiting for the front page to load because the browser was sitting there waiting on "runner.splunk.com" to get off its ass and do something.

    Guys, splunk does not apparently have the server power or bandwidth to service Slashdot. Get a clue and dump their ads or tell them to buy another server box.

    Ninety percent of the time when I'm waiting on a page to load, it's because some ad server is overloaded. The rest of the time it's because the site server itself is ov
