Submission: ~1200 signatures on the petition to limit copyright (whitehouse.gov)
Uh, we have more Slashdotters than that who agree.
mod this one up please.
The Constitution is a wonderful document, but it doesn't guarantee anything. It specifies something.
The Gov't is supposed to follow the specification, at which point we have a guarantee.
Wish it were so.
That is the biggest load of bullshit I've ever heard-- MOST creative people do it for the joy of creating. A minority of them are in it only for the money.
And today that is what we have: a bunch of people who are only in it for the money (many of them creative enough only to purchase patents) suing people who are creative and who likely just thought whatever the heck it was was so obvious that it never crossed their minds that it could be patentable.
The Judge's test is a poor one.
He should have held up some random combinations of iPads and Galaxy Tabs and asked if he was holding up two of the same thing.
The important part is that they're different, not that you can pick which one is the tab.
That was a horribly shitty experiment.
How is it different from having politicians' random falsehoods enter my brain against my will?
You can carry these analogies to ridiculous extremes without trying.
Using the above (access to my brain), the obvious choice is to ensure that the politicians cannot contact my brain by turning off the communications device.
FYI, it has been done before. The computer did better than the GPs (and this was decades ago); however, no one wanted to be the liable party, and so it never saw real use outside the study.
Score some more "benefit" for lawyers and the people who litigate.
This.
Damn, that is one of the funniest comments I've seen in a LONG time!
increment/decrement is *extremely* cheap.
performance measurement: In the whitepaper, as I recall, Chrome was the client for all of the measurements that we did.
Since top sites have more resources than most sites (on average more than 6 per host), since Chrome has a low per-host connection limit, and since it had blocking problems preventing parallel loads (with no data on the metrics, there's no way to know which WebKit bugs were present), the results are far less impressive. In fact, these performance numbers are pretty much meaningless, wouldn't you agree?
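The arithmetic behind the connection-limit complaint can be sketched. This is an idealized model (assuming all requests in a round complete together, and the 6-per-host limit mentioned above):

```python
import math

def request_rounds(resources_per_host: int, connection_limit: int = 6) -> int:
    """Rounds of serial fetching needed when a host serves more
    resources than the browser's per-host connection limit allows
    in parallel. Each extra round costs roughly one more RTT."""
    return math.ceil(resources_per_host / connection_limit)

# With more than 6 resources per host, at least two serial rounds
# are needed, so the connection limit directly adds latency.
```

Under this model, a host serving 7 resources already needs two rounds, which is the basis of the "more than 6 per host" objection.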
They are perfectly meaningful. If you don't like our findings, the most productive thing to do is to create an experiment that shows something better!
HTTP's fault handling is terrible, actually. When you send a request and don't receive the response, you don't know whether the request was processed. This is particularly fun for non-idempotent transactions, like, say, charging your credit card. SPDY includes mechanisms for telling the client (assuming the connection wasn't broken) that the server rejected the request.
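The ambiguity described above can be made concrete. One common application-level mitigation (not part of HTTP itself, and not what SPDY does) is an idempotency key, sketched here with hypothetical names; the server is assumed to deduplicate on the key:

```python
import uuid

class PaymentClient:
    """Sketch: retrying a charge when the response was lost.
    With plain HTTP, a lost response leaves the client unsure
    whether the charge happened; reusing one idempotency key
    across retries lets a deduplicating server make retry safe."""

    def __init__(self, transport):
        # transport: anything with send(path, body, headers)
        self.transport = transport

    def charge(self, amount_cents: int, retries: int = 3):
        key = str(uuid.uuid4())  # same key on every retry attempt
        for _attempt in range(retries):
            try:
                return self.transport.send(
                    "/charge",
                    {"amount": amount_cents},
                    headers={"Idempotency-Key": key},
                )
            except ConnectionError:
                continue  # safe only because the key dedupes retries
        raise RuntimeError("charge outcome unknown")
```

Without something like this, the client's only honest options after a dropped connection are "maybe charged" or "ask the user."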
What? When do you "not receive a response" for a request and it isn't a broken connection? If the server rejected a request then you get back an error status right? In any case charging your card twice is not a failing in HTTP, so I'm not sure what you are trying to say here. I have a hard time taking this point seriously, but maybe I don't understand HTTP well enough to understand your point.
I'm one of the people who maintain the servers that terminate all of these HTTP connections, and yes: when servers reboot, when load balancing switches for whatever reason, etc., requests get lost. It is preferable for a server to signal to the rest of the network (including any connected clients with idle connections) that it is going away. Since HTTP offers no mechanism to push a notification to the client, a server can either close the connection (possibly swallowing a request in the process) or attempt to serve all requests until it goes away later (when the connection is closed anyway). It ends up the same either way: the HTTP server has no mechanism to tell a client that it is going away and resolve the race.
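SPDY's answer to this is a GOAWAY frame carrying the ID of the last stream the server accepted, so the client knows exactly which in-flight requests were never processed and may be retried. A minimal sketch of that decision (function and variable names are illustrative, not from the spec):

```python
def retryable_after_goaway(sent_stream_ids, last_good_stream_id):
    """Streams with IDs above last_good_stream_id were never
    processed by the departing server, so the client may safely
    retry them on a new connection, even non-idempotent ones."""
    return [sid for sid in sent_stream_ids if sid > last_good_stream_id]

# Client had streams 1, 3, 5, 7 in flight; the server's GOAWAY
# says it accepted streams up through ID 3, so 5 and 7 are safe
# to retry.
assert retryable_after_goaway([1, 3, 5, 7], 3) == [5, 7]
```

This is the mechanism plain HTTP lacks: the race isn't eliminated, but the server gets to tell the client which side of it each request landed on.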
In any case, you're talking about doing http-pipelining plus response reordering.
That's right, and you didn't respond to the fact that a connection problem would leave only one resource partially transferred instead of several, so I assume you accept that.
I think you're over simplifying. Certainly that is one of many possible scenarios. Another possible scenario is that it is also possible that you successfully transferred zero items using pipelining since the first element was large (or the server had significant think time), whereas SPDY successfully transfers N-1 items out of N. Making an experiment and testing against real-world behaviors is the best way to say whether or not it works. The possible state space is very large.
In any case if you decide to use HTTP-pipelining like semantics with SPDY, you can. If you decide that there are higher priority items you'd like to receive, you signal the server that and it responds appropriately by preempting the low priority streams and/or interleaving the responses as per its heuristics.
How would you handle the following scenario: User opens video in one tab, creates another in which he or she looks up the Dow Jones index for the day. The video is still being displayed in the other tab. You have a head-of-line blocking issue. How do you deal with it?
That's incredibly contrived. You almost certainly wouldn't be serving videos from the same host as stock data, so it would be a separate connection. Problem solved. You also probably wouldn't want the video streamed over HTTPS, because why would you? You can tell the client not to reuse the streaming connection so that it opens a new one (and doesn't take up one of the per-host keep-alive slots). I understand that Google Chrome has had problems with per-host connection limits, exacerbated by things like Gmail that keep connections open, and that they WontFix'd it... but since it doesn't seem to affect other browsers, creating a new protocol doesn't seem the right way to fix it.
To turn the tables, how would you handle the situation in SPDY of a user requesting a GiB of data, with several megabytes already in flight on the network? Then they make a request for a 1k resource, but it can't be received until everything already sent has been read in, and if there are dropped packets this can add several round trips before the 1k finally arrives. With plain HTTP, the 1k request goes through another connection and is unaffected... it can take a different route and won't be held up by the already-sent data.
As for your first few questions, the answer is, with a dollop of sarcasm: Proxies are wonderful.
Clients don't generally get to decide when to make a connection or not, even with HTTP, rather the browser does. Websockets, or making an API to SPDY to do something similar would give some potential for clients (and transitively page authors) the ability to choose.
You're assuming that you know how loss occurs on the network. We hypothesize that most of the time when loss occurs on a network, it is correlated to a path, irrespective of the number of connections. If you had 6 connections and they were all in use, they'd all see loss, and so you'd still get your effective BW cut in half. If you use 1 connection and there is loss like this, you're more likely to trigger the appropriate TCP behaviors (e.g. fast retransmit), causing bandwidth to recover faster.
On the request side, with SPDY, if the 1k request is of equal priority, the other (1GiB) response may be fragmented (it is suggested that everything be fragmented into ~4k or smaller chunks), and so the 1k response is sent after at most 4k of the other response has been forwarded. You have significantly more control over the server->client path because you actually control the BW used, as opposed to the distributed collaboration/control model you must assume with more connections, since each connection is likely to end up at a different server (yes, even if they have the same IP-- this is typical loadbalancing).
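The ~4k fragmentation described above can be sketched as a simple round-robin frame scheduler (the chunk size and queue structure here are illustrative, not taken from the SPDY implementation):

```python
from collections import deque

CHUNK = 4096  # suggested upper bound on a data frame's payload

def interleave(streams):
    """streams: dict of stream_id -> bytes payload, equal priority.
    Yields (stream_id, chunk) round-robin, so a small response is
    never stuck behind a large one on the shared connection for
    more than one chunk's worth of bytes."""
    queues = deque(
        (sid, deque(data[i:i + CHUNK] for i in range(0, len(data), CHUNK)))
        for sid, data in streams.items()
    )
    while queues:
        sid, chunks = queues.popleft()
        yield sid, chunks.popleft()
        if chunks:
            queues.append((sid, chunks))  # re-queue unfinished streams
```

With a huge stream and a 1k stream in the same dict, the 1k response goes out complete after at most one 4k chunk of the large one, which is the claim in the comment above.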
I don't want to wait 2 RTTs before I get my content. I'm impatient. I want it now!
Perhaps you should use Firefox 4 or IE 9 then?
lol.
I'd suggest taking the open sourced code that we've provided and implementing your solution. You can then run the same battery of tests against your solution that we've done (in the lab) for SPDY.
I see. So it sounds like basically you didn't test an HTTP pipeline with reordering. This seems like a pretty big omission in doing basic research for creating a new protocol like this.
There is external research on this topic. Feel free to look it up as we did.
Why "SPDY": The name we first chose would have made money for the lawyers... and so we picked another.
Google seems to have a problem coming up with good tech names. Just an observation.
I don't know, I kind of like it.
I'd much rather have the server push that data to me rather than having to wait 100ms between each round of requests-- it can make literally seconds of difference for complex pages.
How does the server know whether the client already has that data? What if you are browsing with images disabled, or JavaScript disabled, or styles disabled? Thank you for taking the time to explain the reason behind it. It still seems like a complication for little gain, though.
There are a lot of ways you can attempt to figure out whether the client already has the data. All the ones we tried seemed to cause more latency than simply sending the data and having the client cancel the stream if it already had it. It is possible for the client to cancel such streams before they are even sent, due to the way server push is implemented: the server advertises to the client that it will be pushing the resource before the client sees a reference to it in what it has already downloaded. The implications are twofold. First, this prevents a race on the client, whereby it might request a resource the server is already pushing. Second, the client can cancel the push, possibly before any bytes have been sent, and certainly after at most rtt*BW bytes have been sent.
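The rtt*BW bound mentioned above is just the data already in flight when the client's cancel reaches the server. A worked example (the numbers are illustrative):

```python
def max_wasted_push_bytes(rtt_seconds: float, bandwidth_bps: float) -> float:
    """Upper bound on pushed bytes the server can emit before the
    client's cancel for an already-cached resource takes effect:
    everything transmitted during one round trip."""
    return rtt_seconds * bandwidth_bps / 8  # bits -> bytes

# 100 ms RTT on a 10 Mbit/s link: at most ~125 KB pushed needlessly,
# and usually far less if the cancel goes out before the push starts.
```

So the worst case for a redundant push is bounded by the bandwidth-delay product, while the best case (cancel before any bytes are sent) costs nothing.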
Yes, it is enabled in production for Chrome for any site that advertises SPDY.
Google definitely does advertise SPDY compatibility, thus Chrome may speak SPDY when talking to Google pages.
Heya-- one of the SPDY developers here.
It doesn't cut page load time in half (it could, but you'd have to have a truly *terrible* site design). It does provide some pretty good latency decreases, however. I wish the OP had quoted more real numbers...
You've got it right, essentially. Server push puts a file into the cache which will be referenced by the page that is loading. I don't recall if Chrome still supports it, but at one point it most certainly did.
A single IP != a single server.
HTTP pipelining doesn't play well with proxies, and since you're never quite sure (on port 80) whether or not you're using a proxy, well, you end up with lots of fun heuristics about whether or not you can use it.
If you think that you're only limited by bandwidth, you should try taking the open-source SPDY implementation, modifying it, and then run an analysis of its behaviors over a similar dataset to the one we used (alexa-500). Data speaks volumes! Conjecture on its own isn't all that useful.
HOLY MACRO!