What I have experienced is that SPDY (and therefore also HTTP/2) will only offer more speed if you are Google or operate at Google's scale.
This is so worn out it's even in the FAQ.
Multiplexing doesn't offer as much of a speed increase as some people would like you to believe.
It's good for every high-latency connection, like all the small sites have. I run a server on a VM on a generic Xeon box behind a DSL line, four hops from a backbone, for security and privacy reasons. It'll be great for my use case - I can't really compete with the big guys for responsiveness the way it is now.
Often, the content of a website is located on multiple systems (pictures, advertisements, etc), which still requires the browser to use more than one connection, even with HTTP/2.
So, every one of those requests will be faster.
Also, HTTP/1.1 already allows a browser to send multiple requests without waiting for the response to the previous one. This is called request pipelining.
Pipelining and multiplexing are different. Pipelined responses must come back in request order, so one slow response blocks everything queued behind it (head-of-line blocking); multiplexed streams complete independently. Pipelining is also disabled by default in every major browser because too many proxies and servers mishandle it.
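To make the difference concrete, here's a toy timing model (hypothetical service times, not a real protocol trace) of when each response finishes under the two schemes:

```python
# Toy model, not real HTTP: compare when each response is fully
# delivered. Request 0 is slow, the rest are fast (made-up numbers).
service = [5.0, 1.0, 1.0, 1.0]  # seconds the server needs per response

# HTTP/1.1 pipelining: responses must return in request order,
# so the slow response at the front delays everything behind it.
pipelined, t = [], 0.0
for s in service:
    t += s
    pipelined.append(t)

# HTTP/2 multiplexing: streams are independent; each response
# finishes as soon as its own work is done (idealized model).
multiplexed = list(service)

print("pipelined  :", pipelined)    # [5.0, 6.0, 7.0, 8.0]
print("multiplexed:", multiplexed)  # [5.0, 1.0, 1.0, 1.0]
```

In this sketch the last fast response waits 8 seconds when pipelined but only 1 second when multiplexed; real connections add bandwidth contention, so the gap is smaller in practice.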
So, to me HTTP/2 adds a lot of complexity with almost no benefits in return.
So, don't add HTTP/2 support to your server - if nobody leaves then nobody wanted it. HTTP/1.1 will be supported for the next two decades.
Then why do we have HTTP/2? Well, because it's good for Google. They have all the content for their websites on their own servers.
Install Status4Evar - you'll see Google sites constantly jumping across all their domains, stalling on many of them.
Because the IETF failed to come up with an HTTP/2 proposal, a commercial company (Google in this case) seized the opportunity to take control. HTTP/2 is in fact a protocol by Google, for Google.
That just happens to help everybody else. Your critique of IETF's failure of leadership is quite valid, though. People have been hacking around HTTP/1.1's problems for the past 17 years and if Google hadn't done SPDY, we'd still be in that situation.
In my experience, you are far better off with smart caching. With that, you can get far bigger speed improvements than HTTP/2 will ever offer.
Agreed wholeheartedly! Nobody is arguing for not optimizing each layer of the stack. Proper caching can yield whole-number multipliers for responsiveness, not just mere percentages like HTTP/2!
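For illustration, here is a minimal sketch of one caching mechanism, ETag revalidation: a repeat fetch whose cached validator still matches gets a 304 with no body at all. The `respond` helper is hypothetical, not any real server's API:

```python
import hashlib

def respond(body, if_none_match=None):
    # Derive a validator from the content; real servers use mtime/size
    # or a content hash much like this.
    etag = '"' + hashlib.sha256(body).hexdigest()[:16] + '"'
    if if_none_match == etag:
        return 304, etag, b""   # revalidation hit: headers only, no body
    return 200, etag, body      # first fetch: full payload

status, etag, payload = respond(b"<html>...</html>")
status2, _, payload2 = respond(b"<html>...</html>", if_none_match=etag)
print(status, status2, len(payload2))  # 200 304 0
```

Every revalidated resource costs a round trip instead of a full transfer, and a fresh `Cache-Control: max-age` hit costs nothing at all; that's where the whole-number multipliers come from.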
In the last 20 years, a lot has changed. HTTP/1 worked fine for those years.
Read up on the many tricks browsers and servers (especially CDN operators) use to make HTTP/1.1 usable: domain sharding, image spriting, script concatenation, inlining. These hacks are what inspired some of the SPDY changes, which were designed by people knee-deep in those hacks.
But for where the internet is headed, we need something new. Something completely new, not an HTTP/1 patch.
That would be great too. Do you have a proposal? Incremental approaches are usually easier to find acceptance for because they're more clearly understood and carry less risk of unexpected consequences.