Comment Re:Slabs with LCDs on them similar! News at 11! (Score 2) 495

That is the biggest load of bullshit I've ever heard-- MOST creative people do it for the joy of creating. A minority of them are in it only for the money.
And today that is what we have: a bunch of people who are only in it for the money (many of them creative enough only to purchase patents) suing people who are creative and who likely just thought that whatever they built was so obvious it didn't cross their minds that it should be patentable.

Comment Re:bad analogy? (Score 1) 74

How is it different from having a politician's random falsehoods enter my brain against my will?
You can carry these analogies to ridiculous extremes without trying.

Using the above (access to my brain), the obvious choice is to ensure that the politicians cannot contact my brain by turning off the communications device.

Comment Re:SPDY clarifications (Score 1) 310

performance measurement: In the whitepaper, as I recall, Chrome was the client for all of the measurements we did.

Since top sites have more resources than most sites (on average more than 6 per host), and since Chrome has a low connection limit and had blocking problems preventing parallel loads (since there's no data on the metrics, there's no way to know which WebKit bugs were present), the results are far less impressive. In fact, these performance numbers are pretty much meaningless, wouldn't you agree?

They are perfectly meaningful. If you don't like our findings, the most productive thing to do is to create an experiment that shows something better!

HTTP's fault handling is terrible, actually. When you're sending a request and you don't receive the response, you don't know if the request was processed or not. This is particularly fun for non-idempotent transactions like, say, charging your credit card. SPDY includes mechanisms for telling the client (assuming the connection wasn't broken) that the server rejected the request.

What? When do you "not receive a response" for a request and it isn't a broken connection? If the server rejected a request then you get back an error status right? In any case charging your card twice is not a failing in HTTP, so I'm not sure what you are trying to say here. I have a hard time taking this point seriously, but maybe I don't understand HTTP well enough to understand your point.

I'm one of the people who maintain the servers which terminate all of these HTTP connections, and yes, when rebooting servers, when loadbalancing switches for whatever reason, etc. the request gets lost. It is preferable for a server to signal to the rest of the network (including any connected clients with idle connections) that it is going away. Since HTTP offers no mechanism to push any notification to the client, a server can either close the connection (and possibly thus swallow a request), or attempt to serve all requests until it goes away later (and the connection is closed). It ends up being the same-- the HTTP server has no mechanism to tell a client that it is going away and resolve the race.
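The shutdown race described above can be sketched roughly like this (a hypothetical, minimal model: a SPDY-style go-away notification carries the last request id the server accepted, which is the assumed mechanism -- plain HTTP's close() gives the client no equivalent):

```python
# Hypothetical sketch of resolving the shutdown race. With plain HTTP,
# a closed connection leaves the client unable to tell whether an
# in-flight request was processed. With a go-away signal that names
# the last accepted request id, the client can partition its in-flight
# requests into "possibly processed" and "safe to retry".

def split_on_goaway(in_flight_ids, last_accepted_id):
    """Partition in-flight request ids using goaway(last_accepted_id)."""
    maybe_processed = [i for i in in_flight_ids if i <= last_accepted_id]
    safe_to_retry = [i for i in in_flight_ids if i > last_accepted_id]
    return maybe_processed, safe_to_retry

# Client had requests 1, 3, 5, 7 outstanding; server says it accepted up to 3.
maybe_processed, retry = split_on_goaway([1, 3, 5, 7], 3)
# maybe_processed == [1, 3]: blindly retrying these could double-charge.
# retry == [5, 7]: the server never saw these, so retrying is safe even
# for non-idempotent requests.
```

The point of the sketch is only that the server-side signal makes the retry decision unambiguous; HTTP alone offers no frame in which to send it.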

In any case, you're talking about doing http-pipelining plus response reordering.

That's right, and you didn't respond to the fact that a connection problem would leave only one resource partially transferred instead of several, so I assume you accept that.

I think you're oversimplifying. Certainly that is one of many possible scenarios. Another is that you successfully transfer zero items using pipelining because the first element was large (or the server had significant think time), whereas SPDY successfully transfers N-1 items out of N. Making an experiment and testing against real-world behaviors is the best way to say whether or not it works. The possible state space is very large.
In any case, if you decide to use HTTP-pipelining-like semantics with SPDY, you can. If you decide that there are higher-priority items you'd like to receive, you signal that to the server and it responds appropriately by preempting the low-priority streams and/or interleaving the responses as per its heuristics.

How would you handle the following scenario: User opens video in one tab, creates another in which he or she looks up the Dow Jones index for the day. The video is still being displayed in the other tab. You have a head-of-line blocking issue. How do you deal with it?

That's incredibly contrived. You almost certainly wouldn't be serving videos from the same host as stock data, so it would be a separate connection. Problem solved. You also probably wouldn't want the video streamed over HTTPS because, why would you? You can tell the client not to reuse the streaming connection so that it can open a new one (and not take up one of the per-host keep-alive slots). I mean, I understand that Google Chrome has had problems with per-host connection limits exacerbated by things like Gmail that keep connections open, and that they WontFix... but since it doesn't seem to affect other browsers, creating a new protocol doesn't seem the right way to fix it.

To turn the tables, how would you handle this situation in SPDY: a user requests a GiB of data and there are several megabytes floating around in the network. Then they make a request for a 1k resource, but it can't be received until everything already sent is read in, and if there are dropped packets this can add several round trips before the 1k finally arrives. With plain HTTP, the 1k request goes through another connection and is unaffected... it can take a different route and won't be held up by the already-sent data.

As for your first few questions, the answer is, with a dollop of sarcasm: Proxies are wonderful.
Clients don't generally get to decide when to make a connection, even with HTTP; the browser does. WebSockets, or a similar API for SPDY, would give clients (and transitively page authors) some ability to choose.

You're assuming that you know how loss occurs on the network. We hypothesize that most of the time when loss occurs on a network, it is correlated to a path, irrespective of the number of connections. If you had 6 connections and they were all in use, they'd all see loss, and so you'd still get your effective BW cut in half. If you use 1 connection and there is loss like this, you're more likely to trigger the appropriate TCP behaviors (e.g. fast retransmit), causing bandwidth to recover faster.

On the request side, with SPDY if the 1k request is of equal priority, the other (1GiB) request may be fragmented (it is suggested that everything be fragmented into ~4k or smaller chunks), and so the 1k response is sent after 4k of the other request has been forwarded. You have significantly more control over the server->client path because you actually control the BW used, as opposed to a distributed collaboration/control model that you must assume with more connections since each connection is likely to end up at another server (yes, even if they have the same IP-- this is typical loadbalancing).
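The chunk-and-interleave behavior described above can be sketched like this (an illustrative model only -- the 4 KiB frame size and simple round-robin policy are assumptions, not the actual SPDY framing code):

```python
# Illustrative sketch: fragment equal-priority responses into ~4 KiB
# data frames and interleave them round-robin, so a small response is
# not stuck behind a huge one on the same connection.
FRAME_SIZE = 4096

def interleave(streams):
    """streams: dict of stream_id -> bytes remaining to send.
    Yields (stream_id, chunk) frames in round-robin order."""
    pending = {sid: body for sid, body in streams.items() if body}
    while pending:
        for sid in list(pending):
            body = pending[sid]
            yield sid, body[:FRAME_SIZE]   # send one frame of this stream
            rest = body[FRAME_SIZE:]
            if rest:
                pending[sid] = rest
            else:
                del pending[sid]           # stream fully sent

# Stream 1 is a 10 KB response, stream 2 a 1 KB response.
frames = list(interleave({1: b"x" * 10_000, 2: b"y" * 1_000}))
# The 1 KB response (stream 2) goes out right after the first 4 KiB
# frame of stream 1, instead of waiting behind all 10 KB.
```

With more connections you'd have to hope the network schedules things sensibly; here the server decides the order directly, which is the control-over-the-path point made above.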

I don't want to wait 2 RTTs before I get my content. I'm impatient. I want it now! :)

Perhaps you should use Firefox 4 or IE 9 then? /jk...

lol. :)

I'd suggest taking the open sourced code that we've provided and implementing your solution. You can then run the same battery of tests against your solution that we've done (in the lab) for SPDY.

I see. So it sounds like basically you didn't test an HTTP pipeline with reordering. This seems like a pretty big omission in doing basic research for creating a new protocol like this.

There is external research on this topic. Feel free to look it up as we did.

Why "SPDY": The name we first chose would have made money for the lawyers... and so we picked another.

Google seems to have a problem coming up with good tech names. Just an observation.

I don't know, I kind of like it :). It's all a matter of preference, I suppose.

I'd much rather have the server push that data to me than wait 100ms between each round of requests-- it can make literally seconds of difference for complex pages.

How does the server know if the client already has that data? What if you are browsing with images disabled, or javascript disabled, or style disabled? Thank you for taking the time to explain the reasoning behind it. It still seems like a complication for little gain, though.

There are a lot of ways that you can attempt to figure out that the client has the info already. All the ones that we tried seemed to cause more latency than sending the data and having the client cancel that stream if it already had it. It is theoretically possible for the client to cancel such streams before they are sent due to the way the server push is implemented-- the server advertises to the client that it will be pushing the resource before the client sees a reference to it in what it has already downloaded... The implications are twofold. First, this prevents a race on the client whereby it might attempt to request the resource when the server is pushing it. Second, the client can cancel the push possibly before any bytes have been sent, and certainly after at most rtt*BW bytes have been sent.
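A rough sketch of that advertise-then-cancel flow from the client's side (PushSession, on_push_promise, and the cancel bookkeeping are hypothetical names for illustration, not the real implementation):

```python
# Hedged sketch of the advertise-then-cancel flow described above.
# The server advertises a push before the client has seen a reference
# to the resource; the client can cancel immediately if the resource
# is already in its cache, wasting at most ~rtt*BW bytes in flight.
class PushSession:
    def __init__(self, cached_urls):
        self.cached = set(cached_urls)
        self.cancelled = []   # pushed streams we refused
        self.accepted = []    # pushed streams we let through

    def on_push_promise(self, stream_id, url):
        if url in self.cached:
            # Cancel the stream (RST_STREAM-style), possibly before
            # any data bytes arrive.
            self.cancelled.append(stream_id)
        else:
            # Accept the push; knowing it is coming also prevents the
            # race where the client requests the same resource itself.
            self.accepted.append(stream_id)

s = PushSession(cached_urls={"/logo.png"})
s.on_push_promise(2, "/logo.png")   # already cached -> cancel
s.on_push_promise(4, "/app.js")     # not cached -> accept the push
```

This captures the two implications above: the advertisement suppresses a duplicate client request, and a cached resource costs at most one frame-cancel round trip rather than a full transfer.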

Comment Re:Detailed info on SPDY (Score 1) 310

A single IP != a single server.
HTTP pipelining doesn't play well with proxies, and since you're never quite sure (on port 80) whether or not you're using a proxy, well, you end up with lots of fun heuristics about whether or not you can use it.

If you think that you're only limited by bandwidth, you should try taking the open-source SPDY implementation, modifying it, and then run an analysis of its behaviors over a similar dataset to the one we used (alexa-500). Data speaks volumes! Conjecture on its own isn't all that useful.

Comment Re:Detailed info on SPDY (Score 1) 310

SCTP doesn't have enough flow control to make proxies safe or to eliminate head-of-line blocking, good implementations don't exist on all platforms, and, most damaging, it doesn't play well with IPv4 NAT. Not playing well with IPv4 NAT is a killer.

SCTP was certainly one of the evaluated choices-- it had a lot of theoretically nice things going for it.

Comment Re:SPDY clarifications (Score 2) 310

I'm one of the other people who works on SPDY.

server push: We have some debates about this internally, but it seems the market is deciding that push is important-- e.g. image inlining into the HTML. Server push allows you to accomplish the same thing, but gives the benefit of having the resources known individually by name, and thus cacheable. I believe it may be particularly beneficial for people on high-RTT devices like mobile. If you look at data just about anywhere, you can see that RTT is the real killer. 100ms RTTs are fairly normal for mobile devices. I'd much rather have the server push that data to me than wait 100ms between each round of requests-- it can make literally seconds of difference for complex pages.

Why "SPDY": The name we first chose would have made money for the lawyers... and so we picked another.

performance measurement: In the whitepaper, as I recall, Chrome was the client for all of the measurements we did.

"supposed" benefit+pipelining: HTTP's fault handling is terrible, actually. When you're sending a request and you don't receive the response, you don't know if the request was processed or not. This is particularly fun for non-idempotent transactions like, say, charging your credit card. SPDY includes mechanisms for telling the client (assuming the connection wasn't broken) that the server rejected the request, thus disambiguating the situation. In such cases, the client knows that it is safe to retry. In any case, you're talking about doing http-pipelining plus response reordering.
How would you handle the following scenario: User opens video in one tab, creates another in which he or she looks up the Dow Jones index for the day. The video is still being displayed in the other tab. You have a head-of-line blocking issue. How do you deal with it? Canceling the video request is a poor choice-- the user likely will come back to it later. Waiting for the video to finish is a poor choice-- the user probably wants to see the Dow right now instead of 15 minutes later. Opening a new connection incurs additional cost in the network (NAT), on the servers and worse yet, incurs the latency penalty of a new connection setup plus whatever other protocols you're negotiating. I don't want to wait 2 RTTs before I get my content. I'm impatient. I want it now! :)

I'd suggest taking the open sourced code that we've provided and implementing your solution. You can then run the same battery of tests against your solution that we've done (in the lab) for SPDY. Data is extremely convincing when collected properly. If your solution worked better, then we'd have a basis for re-analysis.
