
Comment Re:Detailed info on SPDY (Score 1) 310

You should read the whitepaper at [] - it's quite interesting.

It's really not that interesting. They make a lot of claims without sources or any support whatsoever, and they won't answer questions about it. See, for instance, the threads here and on reddit where Google employees post about SPDY (i.e., are reading the thread) but won't answer questions about methods and sources or defend their theories.

Sorry that you feel this way - we've been very explicit about how to contact us, and many many people have found us without trouble. If you've got a question that you want answered, the discussion is on, and we've always been very open about that fact.

The problem that SPDY is supposed to solve in its overly complicated way has a simple solution: tweak HTTP pipelining so that the server can respond in any order it wants to.

Well, you should go implement that and come back with some data. If you're right, I'd love to see it. But before you do, you might want to ask yourself: why does none of IE, Firefox, or Chrome ship with pipelining turned on, even though we all want to be fast?

The answer is that pipelining has serious deployment problems, which you can read more about in the IETF HTTP working group if you wish. Here is a quick list:
    * client-side proxies and intermediaries sometimes just fail to process the requests correctly
    * the responses must come back in the order which they are sent by the client
    * you can't start pipelining until you finish receiving at least one HTTP/1.1 response from a server
    * server farms sometimes load-balance requests across HTTP/1.1 and HTTP/1.0 servers
    * pipelining requests behind a hanging GET (or any high-latency request) completely breaks.

These might be surmountable - but I assure you that the complexity of implementing pipelining in a way that works on the web today is pretty much on par with the complexity of SPDY. Up until just a few years ago, major sites in the top-100 could not handle pipelining. And when it fails, the user is left with a hung browser, or worse, garbled data. That is why it isn't implemented.
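To make the head-of-line-blocking problem concrete, here is a toy model (my own sketch, with made-up timing values, not anything from the spec): pipelined HTTP/1.1 responses must come back in request order, so a hanging GET at the front of the pipeline delays every cheap response behind it, while a multiplexed protocol lets each stream finish as soon as it is ready.

```python
# Toy model of HTTP/1.1 pipelining head-of-line blocking versus
# SPDY-style multiplexing. Times (in seconds) are hypothetical.

def pipelined_completion(ready_times):
    """HTTP/1.1 pipelining: responses must arrive in request order,
    so each response completes no earlier than all responses ahead of it."""
    done, elapsed = [], 0.0
    for t in ready_times:
        elapsed = max(elapsed, t)  # stuck behind any slower response ahead
        done.append(elapsed)
    return done

def multiplexed_completion(ready_times):
    """Multiplexed streams: each response completes as soon as it is ready."""
    return list(ready_times)

# A hanging GET (5s) followed by three cheap requests (0.1s each).
times = [5.0, 0.1, 0.1, 0.1]
print(pipelined_completion(times))    # [5.0, 5.0, 5.0, 5.0]
print(multiplexed_completion(times))  # [5.0, 0.1, 0.1, 0.1]
```

The cheap responses are done on the server almost immediately, but the pipeline can't deliver any of them until the slow one ahead is flushed.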

Then servers can send resources when they are available, in smallest-first order, or whatever, and the pipeline doesn't block on a slow response (it's just sent last, when the server is finally finished profiling you).

This is just incorrect information - you should re-read the pipelining specification.

That's all there is to it. The HTTP designers didn't do it because they didn't want to go far enough with changes, instead just tacking pipelining on almost as an afterthought. Maybe Google invented SPDY because they are afraid of tweaking HTTP or think standards move too slowly? I don't know, all I know is that SPDY is bad news.

If you've got data to back that up, I'm listening.

Comment Re:BAD (Score 5, Informative) 310

Actually, you should read the spec as to how it is implemented. The TLS/NPN mechanism for switching to SPDY has no "fallback".
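The point about "no fallback" is that NPN negotiation happens inside the TLS handshake, before any HTTP bytes flow: the server advertises the protocols it speaks and the client simply picks one, so there is never a failed SPDY attempt that has to be retried over HTTP. A rough sketch of that selection logic (my own illustration, not the real NSS/OpenSSL API; the protocol tokens are the ones SPDY used):

```python
# Sketch of NPN-style protocol selection during the TLS handshake.
# The client ranks the protocols it supports; the server advertises
# what it speaks; the choice is made before any application data.

CLIENT_PREFERENCE = ["spdy/2", "http/1.1"]

def negotiate(server_advertised):
    """Pick the client's most-preferred protocol the server supports."""
    for proto in CLIENT_PREFERENCE:
        if proto in server_advertised:
            return proto
    return "http/1.1"  # default when nothing matches

print(negotiate(["spdy/2", "http/1.1"]))  # spdy/2
print(negotiate(["http/1.1"]))            # http/1.1
```

Because the decision is made up front, a server that doesn't speak SPDY just gets ordinary HTTP/1.1 with no extra round trip and nothing to "fall back" from.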

And there is no intent to rush - heck - we've been working on it for over a year. You think that's rushed? If you're an engineer, I hope you'll appreciate that protocol changes are hard. You can't use pure lab data (although we started out with lab data, of course). Now we need real world data to really figure it out. We changed it in a way which *nobody noticed* for about 4 months. So, I don't think we hurt the web at all, but we are accomplishing the goals of learning how to build a better protocol.

Seriously, if you have a better way to figure out new protocols, we'd love to hear about it at, and if you want to lend a hand implementing, that is even better!

Comment SPDY clarifications (Score 5, Informative) 310

Thanks for all the kind words on SPDY; I wish the magazine authors would ask before putting their own results in the titles!

Regarding standards, we're still experimenting (sorry that protocol changes take so long!). You can't build a new protocol without measuring, and we're doing just that - measuring very carefully.

Note that we aren't ignoring the standards bodies. We have presented this information to the IETF, and we got a lot of great feedback during the process. When the protocol is ready for an RFC, we'll submit one - but it's not ready yet.

Here are the IETF presentations on SPDY:

I've also answered a few similar questions to this here:

We love help - if you're passionate about protocols and want to lend implementation help, please hop onto. Several independent implementations have already cropped up, and the feedback continues to be really great.

Comment More explanation from Chrome (Score 1) 505

[disclaimer: I work for Google on Chrome]

First off, Firefox is definitely great at keeping memory usage low - better than Chrome. Some on this thread say that Firefox has memory leaks and bugs. I don't know about that; I find Firefox is pretty solid. Nonetheless, one advantage Chrome has is that tabs are in separate processes. So, as you close tabs, you get to completely flush out all memory from those processes. This adds a level of resiliency to Chrome you can't match in single-process browsers.
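Here's a toy sketch (my own simplification, not Chrome's actual code) of why per-tab processes make reclamation trivial: each tab's allocations live in their own "process", and closing the tab kills the process, so every byte it owned goes back to the OS at once, fragmentation and all.

```python
# Toy model: one process per tab. Closing a tab exits its process,
# which returns all of its memory to the OS unconditionally.

class Tab:
    def __init__(self, allocations_kb):
        # Blocks owned by this tab's renderer "process" (sizes in KB).
        self.allocations_kb = allocations_kb

class Browser:
    def __init__(self):
        self.tabs = {}

    def open_tab(self, tab_id, allocations_kb):
        self.tabs[tab_id] = Tab(allocations_kb)

    def close_tab(self, tab_id):
        del self.tabs[tab_id]  # process exit: everything freed at once

    def total_kb(self):
        return sum(sum(t.allocations_kb) for t in self.tabs.values())

b = Browser()
b.open_tab("news", [1024, 512])
b.open_tab("mail", [2048])
b.close_tab("news")
print(b.total_kb())  # 2048 - the closed tab left nothing behind
```

A single-process browser instead has to free each allocation individually from one shared heap, and anything it misses (or any fragmentation it creates) lingers for the life of the browser.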

Second - thanks to dotnetperls for posting their methodology and their exact test source code! The only question I have is "which memory metric was used?" There is a big difference between "working set private", "working set total", "private bytes", etc.

What is the right metric to use? I use the same metric Vista uses: private working set. You might ask, "why not use private bytes?" I agree it seems like a reasonable metric, and it's not a bad one. But it doesn't reflect the user experience.

Why? Because the working set is the amount of memory *not available to other apps*. If other apps can have the memory, then those allocated bytes are inconsequential. Private bytes does reflect bytes allocated by the process at some point, but the OS is not backing those pages with physical RAM right now.

For most applications, there isn't much difference between "private bytes" and "working set private bytes". However, because of Chrome's multi-process architecture, there is a big difference. The reason is that Chrome intentionally gives memory back to the OS. For instance, on my current instance of Chrome, I have 16 tabs open. The sum of the private bytes is 514408. The sum of the private working set bytes is 275040, nearly half of the private bytes number. This is on a machine with 8GB of RAM, so there is plenty of memory to go around. But if some other app wants the memory, Chrome gave it back to the OS and will suffer the page faults to get it back. Since Chrome has given it back to the OS (and has volunteered to take the performance consequences of getting it back), I don't think it should be counted as Chrome usage. Others may disagree. But Windows uses working set as the primary metric for all applications; the OS folks agree that working set is the right way to account for memory usage.
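Running the arithmetic on the numbers above makes the gap explicit - the difference between the two metrics is exactly the memory Chrome has handed back to the OS:

```python
# The two sums reported above for a 16-tab Chrome instance.
private_bytes = 514408
private_working_set = 275040

# The gap is memory Chrome allocated at some point but has returned
# to the OS; it is not occupying physical RAM right now.
returned_to_os = private_bytes - private_working_set
print(returned_to_os)                                 # 239368
print(round(private_working_set / private_bytes, 2))  # 0.53
```

So measuring Chrome by private bytes counts roughly twice the memory it is actually holding away from other applications.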

Single process browsers have a hard time giving memory back, because they can't differentiate which pages are accounted to unused, background tabs and which pages are accounted to the active, in-use tabs.

One last note: if you have a version of Chromium, you can run it using --single-process. I ran the dotnetperls test in this mode, and then Chrome and Firefox are pretty close in memory usage. Firefox still wins. But most of this memory use is due to the explicit tradeoff of using multiple processes rather than minimizing memory.
