Comment Re:Detailed info on SPDY (Score 1) 310

A single IP != a single server.
HTTP pipelining doesn't play well with proxies, and since you're never quite sure (on port 80) whether or not you're talking through a proxy, you end up with lots of fun heuristics about whether or not you can use it at all.

If you think that you're only limited by bandwidth, you should try taking the open-source SPDY implementation, modifying it, and then running an analysis of its behavior over a dataset similar to the one we used (alexa-500). Data speaks volumes! Conjecture on its own isn't all that useful.
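The in-order constraint that makes pipelining fragile is easy to see with a toy experiment. This is a minimal stdlib-only sketch (the handler and paths are made up for illustration): two HTTP/1.1 requests are pipelined on one connection, and the responses necessarily come back in request order -- which is exactly why a slow first response stalls everything queued behind it.

```python
# Sketch: pipelining two HTTP/1.1 requests on one connection (stdlib only).
# Responses must come back in request order, so a slow first response
# stalls everything behind it (head-of-line blocking).
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so pipelining is possible

    def do_GET(self):
        body = ("hello from %s" % self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)
# Send both requests back-to-back before reading anything.
sock.sendall(b"GET /first HTTP/1.1\r\nHost: localhost\r\n\r\n"
             b"GET /second HTTP/1.1\r\nHost: localhost\r\n\r\n")

data = b""
while data.count(b"hello from") < 2:
    data += sock.recv(4096)
sock.close()
server.shutdown()

# The responses arrive strictly in request order:
assert data.index(b"/first") < data.index(b"/second")
```

Against a transparent proxy you can't even assume this much works, which is where the heuristics come in.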

Comment Re:Detailed info on SPDY (Score 1) 310

SCTP doesn't have enough flow control to make proxies safe or to eliminate head-of-line blocking, good implementations don't exist on all platforms, and, most damagingly, it doesn't play well with IPv4 NAT. Not playing well with IPv4 NAT is a killer.

SCTP was certainly one of the evaluated choices-- it had a lot of theoretically nice things going for it.

Comment Re:SPDY clarifications (Score 2) 310

I'm one of the other people who works on SPDY.

server push: We have some debates about this internally, but it seems the market is deciding that push is important-- witness how often images get inlined into the HTML today. Server push accomplishes the same thing, but with the benefit that each pushed item remains an individual resource with its own name, and is thus cacheable. I believe it may be particularly beneficial for people on high-RTT links, like mobile. If you look at data just about anywhere, you can see that RTT is the real killer-- 100ms RTTs are fairly normal for mobile devices. I'd much rather have the server push that data to me than have to wait 100ms between each round of requests; it can make literally seconds of difference for complex pages.
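As a back-of-envelope sketch of why rounds of requests dominate on a 100ms link (the round counts below are illustrative assumptions, not measurements -- resources are assumed to be discovered in dependency "rounds": HTML, then CSS, then fonts/images it references, and so on):

```python
# Illustrative arithmetic, not measured data: each discovery round costs
# at least one round trip before the next round of requests can begin.
RTT = 0.100  # seconds; a fairly normal mobile round trip

def fetch_time(rounds, rtt=RTT):
    """Lower bound on fetch time when resources are found in `rounds` waves."""
    return rounds * rtt

without_push = fetch_time(rounds=10)  # a deep dependency chain (assumed)
with_push = fetch_time(rounds=1)      # server pushes what it knows you need
print("without push: %.1fs, with push: %.1fs" % (without_push, with_push))
```

Ten discovery rounds on a 100ms link cost a full second in RTTs alone, before any bytes of payload are counted -- hence "literally seconds of difference" on complex pages.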

Why "SPDY": The name we first chose would have made money for the lawyers... and so we picked another.

performance measurement: In the whitepaper, as I recall, Chrome was the client for all of the measurements we did.

"supposed" benefit+pipelining: HTTP's fault handling is actually terrible. When you send a request and don't receive the response, you don't know whether the request was processed or not. This is particularly fun for non-idempotent transactions like, say, charging your credit card. SPDY includes mechanisms for telling the client (assuming the connection wasn't broken) that the server rejected the request, disambiguating the situation; in such cases the client knows it is safe to retry. In any case, what you're describing amounts to HTTP pipelining plus response reordering.
How would you handle the following scenario: a user opens a video in one tab, then opens another tab to look up the Dow Jones index for the day, while the video is still playing in the first tab. You have a head-of-line blocking issue. How do you deal with it? Canceling the video request is a poor choice-- the user will likely come back to it later. Waiting for the video to finish is a poor choice-- the user probably wants to see the Dow right now, not 15 minutes from now. Opening a new connection incurs additional cost in the network (NAT) and on the servers, and, worse yet, incurs the latency penalty of a new connection setup plus whatever other protocols you're negotiating. I don't want to wait 2 RTTs before I get my content. I'm impatient. I want it now! :)
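The multiplexing answer to that scenario can be sketched in a few lines. This is a toy -- the framing below is invented for illustration and is NOT the real SPDY wire format -- but it shows the key idea: every frame carries a stream ID, so a long video stream and a short stock quote share one connection without the video blocking the quote.

```python
# Toy sketch of SPDY-style stream multiplexing (invented framing, not the
# actual SPDY wire format). Frames are (stream_id, chunk) pairs.
from itertools import zip_longest

FRAME_SIZE = 4

def to_frames(stream_id, payload, size=FRAME_SIZE):
    """Split one stream's payload into tagged frames."""
    return [(stream_id, payload[i:i + size]) for i in range(0, len(payload), size)]

def interleave(*streams):
    """Round-robin frames from each stream onto one 'connection'."""
    wire = []
    for group in zip_longest(*streams):
        wire.extend(f for f in group if f is not None)
    return wire

def demux(wire):
    """Reassemble each stream's payload on the receiving side."""
    out = {}
    for stream_id, chunk in wire:
        out[stream_id] = out.get(stream_id, b"") + chunk
    return out

video = to_frames(1, b"lots and lots of video bytes")
quote = to_frames(3, b"DJIA")
wire = interleave(video, quote)

# The short quote's only frame goes out on the first round-robin pass,
# even though the video started first -- no head-of-line blocking.
assert wire[1] == (3, b"DJIA")
assert demux(wire) == {1: b"lots and lots of video bytes", 3: b"DJIA"}
```

One connection, so only one NAT binding and one connection-setup latency hit; the per-stream flow control that makes this safe is the part pipelining can't give you.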

I'd suggest taking the open-source code that we've provided and implementing your solution. You can then run the same battery of tests against your solution that we've run (in the lab) for SPDY. Data is extremely convincing when collected properly. If your solution worked better, then we'd have a basis for re-analysis.

Comment True for JAVA, but not generally true... (Score 4, Interesting) 270

This may be true for Java.
It isn't true for C/C++.

With C/C++ and NPTL, the many-thread blocking-IO style yields slightly lower latency at low IO rates, but shows significant latency variability and sharply decreased throughput at higher IO rates.
The Linux scheduler seems to be much to blame for this-- the number of times a thread is scheduled onto a different CPU increases dramatically with more threads, and this thrashes the caches.
I've seen order-of-magnitude decreases in performance and order-of-magnitude increases in latency as a result of what appears to be this cache thrashing.
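One mitigation is pinning threads to CPUs so the scheduler stops migrating them. A minimal Linux-only sketch (in C/C++ with NPTL you'd use pthread_setaffinity_np; Python exposes the equivalent Linux syscall as os.sched_setaffinity, and this does nothing useful on other platforms):

```python
# Linux-specific sketch: pin the current process to CPU 0 so the scheduler
# stops migrating it between cores -- each migration abandons a warm cache.
import os

if hasattr(os, "sched_setaffinity"):   # Linux only
    os.sched_setaffinity(0, {0})       # pid 0 == this process; CPU set {0}
    print("pinned to CPUs:", os.sched_getaffinity(0))
else:
    print("sched_setaffinity not available on this platform")
```

Pinning trades load-balancing flexibility for cache locality, which is usually the right trade once you're running one worker per core anyway.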

Comment Re:Duh (Score 1) 269

Note that this isn't always true.
There is an extension to TLS (Server Name Indication, implemented by many browsers) which allows the client to tell the server, during the handshake, which host the connection is intended for.
I don't believe that IE implements this, though it may only be IE6 that doesn't. I forget.

This is important, since we're running out of IPs and people do want to use virtual hosting in the same manner that they do for HTTP-- symmetry is easy to maintain!
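You can see the extension from Python's stdlib ssl module: the server_hostname argument is what gets carried in the ClientHello's SNI field when the handshake eventually runs. A minimal sketch (the hostname is just an example; no network traffic happens here, since the handshake is deferred until connect):

```python
# Sketch: TLS Server Name Indication (SNI), the extension that carries the
# intended hostname in the ClientHello so one IP can serve many certificates.
import socket
import ssl

assert ssl.HAS_SNI  # modern OpenSSL builds support SNI

ctx = ssl.create_default_context()
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# server_hostname is sent in the SNI extension when the handshake runs
# (on connect); wrapping the socket alone sends nothing on the wire.
tls = ctx.wrap_socket(raw, server_hostname="www.example.com")
print("SNI name to be sent:", tls.server_hostname)
tls.close()
```

The server side picks its certificate based on that name, which is what makes HTTPS virtual hosting on a single address work.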
