Comment Re:SPDY clarifications (Score 1) 310
Yes, it is enabled in production for Chrome for any site that advertises SPDY.
Google definitely does advertise SPDY compatibility, thus Chrome may speak SPDY when talking to Google pages.
Heya-- one of the SPDY developers here.
It doesn't cut page load time in half (it could, but you'd have to have a truly *terrible* site design). It does provide some pretty good latency decreases, however. I wish the OP had quoted more real numbers...
You've got it right, essentially. Server push puts a file into the cache which will be referenced by the page that is loading. I don't recall if Chrome still supports it, but at one point it most certainly did.
A single IP != a single server.
HTTP pipelining doesn't play well with proxies, and since you're never quite sure (on port 80) whether or not you're using a proxy, well, you end up with lots of fun heuristics about whether or not you can use it.
If you think that you're only limited by bandwidth, you should try taking the open-source SPDY implementation, modifying it, and then running an analysis of its behaviors over a dataset similar to the one we used (the Alexa 500). Data speaks volumes! Conjecture on its own isn't all that useful.
SCTP doesn't have enough flow control to make proxies safe or to eliminate head-of-line blocking, good implementations don't exist on all platforms, and, most damaging, it doesn't play well with IPv4 NAT. Not playing well with IPv4 NAT is a killer.
SCTP was certainly one of the evaluated choices-- it had a lot of theoretically nice things going for it.
The protocol is documented externally, and the implementation is open source. We've begun talking with people at the IETF about it, and we have a public mailing list. I don't know how it could be more open.
I'm one of the other people who works on SPDY.
server push: We have some debates about this internally, but it seems the market is deciding that push is important-- e.g. inlining images into the HTML. Server push accomplishes the same thing, but with the benefit that each pushed item remains an individual resource with its own name, and is thus cacheable. I believe it may be particularly beneficial for people on high-RTT links like mobile. If you look at data just about anywhere, you can see that RTT is the real killer. 100ms RTTs are fairly normal for mobile devices. I'd much rather have the server push that data to me than have to wait 100ms between each round of requests-- it can make literally seconds of difference for complex pages.
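The RTT arithmetic above is easy to sketch. Note the round counts below are illustrative assumptions, not measurements from the SPDY whitepaper:

```python
# Back-of-the-envelope RTT cost of discovering resources round by round.
# The round counts and the 100ms figure are illustrative assumptions.

RTT = 0.100  # seconds; a fairly normal mobile round-trip time

def load_time(rounds, rtt=RTT):
    # Each round of dependent requests costs at least one RTT:
    # fetch the HTML, discover the CSS/JS, discover the images, ...
    return rounds * rtt

without_push = load_time(4)  # client discovers sub-resources round by round
with_push = load_time(1)     # server pushes the sub-resources it already knows

print(f"without push: {without_push:.1f}s  with push: {with_push:.1f}s")
```

Even this toy version shows hundreds of milliseconds saved per page; with more dependency rounds it really does add up to seconds.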
Why "SPDY": The name we first chose would have made money for the lawyers... and so we picked another.
performance measurement: In the whitepaper, as I recall, Chrome was the client for all of the measurements we did.
"supposed" benefit+pipelining: HTTPs fault handling is terrible, actually. When you're sending a request and you don't receive the response, you don't know if the request was processed or not. This is particularly fun for non-idempotent transactions like say.. charging your credit card. SPDY includes mechanisms for telling the client (assuming the connection wasn't broken) that the server rejected the request, and thus disambiguating the situation. In such cases, the client knows that it is safe to retry. In any case, you're talking about doing http-pipelining plus response reordering.
How would you handle the following scenario: a user opens a video in one tab, then creates another tab in which he or she looks up the Dow Jones index for the day. The video is still downloading in the first tab. You have a head-of-line blocking issue. How do you deal with it? Canceling the video request is a poor choice-- the user will likely come back to it later. Waiting for the video to finish is a poor choice-- the user probably wants to see the Dow right now instead of 15 minutes later. Opening a new connection incurs additional cost in the network (NAT), on the servers, and, worse yet, incurs the latency penalty of a new connection setup plus whatever other protocols you're negotiating. I don't want to wait 2 RTTs before I get my content. I'm impatient. I want it now!
I'd suggest taking the open sourced code that we've provided and implementing your solution. You can then run the same battery of tests against your solution that we've done (in the lab) for SPDY. Data is extremely convincing when collected properly. If your solution worked better, then we'd have a basis for re-analysis.
Then, if you don't want an SMS, you install the application on your phone, which requires zero access to the 'net.
Well, you can always choose to not do it. You get increased convenience that way, with the expected tradeoff...
In that case you install the application on your phone instead. The app requires no net access at all-- it just generates a code.
I love slashdot!!
Unfortunately, nothing I can publish without permission.
I can say that I'm in charge of maintaining the software that terminates all HTTP traffic for Google. Draw your own conclusions.
This may be true for Java.
It isn't true for C/C++.
With C/C++ and NPTL, the many-thread blocking-IO style yields slightly lower latency at low IO rates, but exhibits significant latency variability and sharply decreased throughput at higher IO rates.
It seems that the Linux scheduler is largely to blame for this-- the number of times a thread is scheduled on a different CPU increases dramatically with more threads, and this thrashes the caches.
I've seen order-of-magnitude decreases in performance and order-of-magnitude increases in latency as a result of what appears to be this cache thrashing.
So long as ONE character set is required, then it works.
It was the Latin charset; it may as well have stayed that way.
Now we'll have places where you simply cannot type in a domain name. Hurrah for giving China's censors yet another easy way to cut off access to anything else!
Note that this isn't always true.
There are extensions to TLS (implemented by many browsers) which allow the client to tell the server, during the handshake, which host the connection is intended for.
I don't believe that IE implements this-- or perhaps it's only IE6 that doesn't; I forget.
This is important, since we're running out of IPs and people do want to use virtual hosting in the same manner that they do for HTTP-- symmetry is easy to maintain!
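For the curious, a minimal sketch of a client supplying that hostname using Python's standard library; the `server_hostname` argument is what puts the name into the handshake:

```python
# Sketch: a TLS client naming its intended host in the handshake (SNI),
# which is what lets one IP serve certificates for many virtual hosts.
import socket
import ssl

def peer_cert_subject(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        # server_hostname places the name in the ClientHello, so the
        # server can choose the matching certificate before the
        # handshake completes.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]
```

Calling `peer_cert_subject("example.com")` needs network access; the point is just that the hostname travels in the handshake itself, not in the HTTP request, so many hosts can share one address.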
Many people are unenthusiastic about their work.