With all this discussion about different types of detail and complexity, and the mass of code needed to properly handle every form of HTTP header, I'm reminded of a discussion I once had with a back-end designer.
He had built a system designed to work via HTTP. When I asked why, and mentioned all the overhead and everything, he had a real simple response.
Essentially: the application wasn't a full-powered web server. In its simplest form, an HTTP request is "open a socket, write a string, read a string, close the socket". By designing it to look like HTTP, and work over port 80, it fits into modern corporate systems, gets past most firewalls, etc.
The result? Sure, you could send crazy HTTP thingies to this server; it would just return an error. But it handled the expected use case -- send data to the server, get a response back. Plain text. Easy to work with.
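To make that concrete, here is a minimal sketch of that whole exchange in Python (example.com is a placeholder host; error handling omitted):

    # A complete HTTP/1.1 exchange: open a socket, write a string,
    # read a string, close the socket.
    import socket

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(b"GET / HTTP/1.1\r\n"
                     b"Host: example.com\r\n"
                     b"Connection: close\r\n\r\n")
        response = b""
        while chunk := sock.recv(4096):   # read until the server closes
            response += chunk

    print(response.split(b"\r\n")[0])     # e.g. b'HTTP/1.1 200 OK'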
What happens to such a system once everything becomes binary? No longer so simple.
Does this help the typical user? If the concern is the per-file TCP overhead/startup, there is already a way to reuse a single TCP connection for multiple file transfers: HTTP/1.1 persistent connections (keep-alive). If the concern is the size of the headers, there is a much better way to ... you know, remove all the junk, cookies, extra headers, etc, than just trying to shrink text labels down (which is about all you can do if you want to keep the data content the same). If the concern is "perceived browser speed", well, let me remind you of Larry Wall's "rn" versus the previously dominant news reader -- "rn" would parse and display on the fly, instead of having to read the entire thing in before displaying. Or compare Safari to Firefox -- again, Firefox starts displaying sooner, so I can start reading the page sooner. Or compare ....
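For reference, here is a sketch of that existing reuse mechanism: several transfers over one persistent HTTP/1.1 connection. It is simplified -- the paths are placeholders, and it assumes the server sends Content-Length (real clients must also handle chunked transfer coding):

    import socket

    def read_response(sock):
        # Read the headers, then exactly Content-Length body bytes.
        buf = b""
        while b"\r\n\r\n" not in buf:
            buf += sock.recv(4096)
        head, _, body = buf.partition(b"\r\n\r\n")
        length = next(int(line.split(b":", 1)[1])
                      for line in head.split(b"\r\n")
                      if line.lower().startswith(b"content-length:"))
        while len(body) < length:
            body += sock.recv(4096)
        return body

    with socket.create_connection(("example.com", 80)) as sock:
        for path in (b"/", b"/about"):       # two transfers, one TCP handshake
            sock.sendall(b"GET " + path + b" HTTP/1.1\r\n"
                         b"Host: example.com\r\n\r\n")  # no "Connection: close"
            print(path, len(read_response(sock)), "body bytes")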
Do not assume that "perceived slowness" is caused by a bad protocol.
It may be caused by a bad user agent.
If the page has to be fully loaded before being displayed, that's one thing.
If you can load the entire base text of a page, put it up, and then go back and start loading the images, or sounds, or Flash thingies, that's another.
If you don't like all the screen jumping, well, guess what? You can open a bunch of separate TCP/HTTP connections, read just enough from each to determine size, and then stop reading -- let the socket data stop coming -- and put up the text of the page with placeholders for what you need. Then, once the base information is up, go back and read the rest without the "jumping" and resizing.
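Here is a sketch of that trick, assuming the image happens to be a PNG (whose width and height live in bytes 16..24 of the file, inside the IHDR chunk); host and path are placeholders:

    import socket, struct

    def png_dimensions(host, path):
        # Fetch only enough of the image to learn its size, then stop.
        with socket.create_connection((host, 80)) as sock:
            sock.sendall(b"GET " + path + b" HTTP/1.1\r\n"
                         b"Host: " + host.encode() + b"\r\n\r\n")
            buf = b""
            while b"\r\n\r\n" not in buf:      # read past the HTTP headers
                buf += sock.recv(4096)
            body = buf.partition(b"\r\n\r\n")[2]
            while len(body) < 24:              # PNG signature + start of IHDR
                body += sock.recv(4096)
            # Closing the socket here abandons the rest of the transfer.
            return struct.unpack(">II", body[16:24])   # (width, height)

    print(png_dimensions("example.com", b"/logo.png"))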
Does any of this require binary?
The goals of HTTP/2.0:
1. Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.
That's a user agent behavior problem.
3. Not require multiple connections to a server to enable parallelism, thus improving its use of TCP, especially regarding congestion control.
That requires a way to say "Here is control data" and "Here is content data". Sure, that would help a great deal: as a user agent, while reading the text blob for a web page, I may see that I need to fetch an image to determine its size. Today I have to either wait for the web page text to finish, or open a new TCP channel to get the image. What can avoid this? If I am now sending ACKs and NACKs back to the server during my read anyway, I can also send a "Put that on hold -- now give me Y instead". So this is a "win" for this behavior -- I can reuse the same TCP channel, get the beginning of another data stream, determine its size, etc., and then get back to the first file. So clearly, this goal is a good goal, right? There are no flaws in using a single TCP channel to send multiple data streams, right?
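For illustration only -- a toy framing scheme, not the actual HTTP/2.0 wire format -- "multiple data streams over one TCP channel" means roughly this: every frame carries a stream id, so control and content for different transfers can interleave:

    import struct

    HEADER = ">IBI"          # stream id, frame kind, payload length (9 bytes)
    CONTROL, DATA = 0, 1     # kind 0 = control data, kind 1 = content data

    def make_frame(stream_id, kind, payload):
        return struct.pack(HEADER, stream_id, kind, len(payload)) + payload

    def parse_frames(buf):
        while buf:
            stream_id, kind, length = struct.unpack(HEADER, buf[:9])
            yield stream_id, kind, buf[9:9 + length]
            buf = buf[9 + length:]

    # One connection, two interleaved streams: page text (1) and an image (2).
    wire = (make_frame(1, DATA, b"<html>...")        # some page text
            + make_frame(2, CONTROL, b"GET /a.png")  # ask for the image mid-stream
            + make_frame(2, DATA, b"\x89PNG...")     # just enough image to size it
            + make_frame(1, DATA, b"...</html>"))    # back to the page
    for sid, kind, payload in parse_frames(wire):
        print(sid, kind, payload)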
No one could possibly see any flaws with transmission speed by saying that the sender is going to stop sending, and come to a complete halt until the receiver has synchronized, right?
It's a tradeoff. The old way kept the TCP transmission window full of sent-but-unacknowledged data. That doesn't slow down the total transmission time (better throughput), but the stream you actually want is slower (it competes with the unwanted stuff). The new way has worse throughput (the communication has to stop and drain out repeatedly), but the stuff you want doesn't compete with the stuff you don't want.
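A toy model of that tradeoff, with assumed (not measured) numbers:

    # All numbers are assumptions: a 1 MB/s link, a 50 ms round trip,
    # and a sender that halts 10 times to switch streams. Each halt
    # drains the pipe and costs roughly one round trip before refilling.
    bandwidth = 1_000_000        # bytes/second
    rtt = 0.050                  # seconds
    halts = 10
    data = 5 * 1_000_000         # 5 MB total

    pipelined = data / bandwidth               # window kept full:  5.0 s
    stop_and_drain = pipelined + halts * rtt   # plus drain costs:  5.5 s
    print(pipelined, stop_and_drain)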
What's the real problem? The TCP protocol lacks key, needed features -- including any real application-level control over what's going on / how the data is transmitted. TCP is a bad protocol today; consider how long ago it was designed, and the network pipes it was designed for. The better solution would be to make a better TCP.
Instead, the approach being taken is: "Make everything that uses TCP re-implement better stuff on top of it".
4. Retain the semantics of HTTP/1.1, leveraging existing documentation (see above), including (but not limited to) HTTP methods, status codes, URIs, and where appropriate, header fields.
... Retain the semantics, while introducing new features? Gee, how about backwards compatibility?
5. Clearly define how HTTP/2.0 interacts with HTTP/1.x, especially in intermediaries (both 2->1 and 1->2).
... Err, you want a 1.1 and a 2.0 to talk to each other, with different framing systems, different abilities expected by the 2.0 side, and still somehow maintain the semantics???
... Yeah, let me know how that AI project works out for you.