Furthermore, how much more efficient would HTTP/2's binary framing be over pipelined, gzip-9 content? If headers are that big of an overhead, why not just apply standard compression to the headers?
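As a rough sketch of that premise (hostnames, paths, and header values below are invented): ordinary gzip does shrink request headers a lot, especially since a page load repeats nearly identical headers across many requests — which is essentially what SPDY did with zlib before HTTP/2 moved to HPACK.

```python
import gzip

# A page load sends many requests with nearly identical headers.
# Concatenate three of them to mimic a compressor that shares state
# across requests on one connection. (All values here are made up.)
def request_headers(path: str) -> str:
    return (
        f"GET {path} HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/29.0\r\n"
        "Accept: */*\r\n"
        "Accept-Encoding: gzip, deflate\r\n"
        "Cookie: session=abc123def456; prefs=dark-theme\r\n\r\n"
    )

blob = "".join(request_headers(p) for p in ["/", "/style.css", "/app.js"]).encode()
raw, packed = len(blob), len(gzip.compress(blob, compresslevel=9))
print(f"{raw} bytes raw -> {packed} bytes gzipped")
```

The repetition is what makes this work so well; a stateful scheme like HPACK exploits the same redundancy without running a general-purpose compressor over secret header values.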
With pipelining, the server still has to return the responses in the order they were requested, which means a larger or slower resource holds everything else up. More importantly, pipelining doesn't work well in the real world. Various servers and proxies claim to support HTTP/1.1 but actually don't. So browsers have to detect whether the server they are talking to is broken, or whether they are connecting through a broken proxy, and fall back to not pipelining if necessary. Trying to pipeline and then falling back is slow, so browsers resorted to things like maintaining blacklists of servers known to break pipelining. And a browser on a laptop might be behind a broken proxy only some of the time. That is why pipelining-by-default never made it out of beta in either Chrome or Firefox, despite efforts over the past couple of years.
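The in-order constraint can be sketched with a toy model (all resource names and timings here are invented):

```python
# Toy model of pipelining's head-of-line blocking.
# 'ready' maps each resource to when the server could first send it.
ready = {"/slow.png": 5.0, "/a.css": 1.0, "/b.js": 1.5}
order = ["/slow.png", "/a.css", "/b.js"]  # order the requests were sent

# HTTP/1.1 pipelining: responses must go out in request order, so each
# one waits for both its own readiness and everything queued before it.
pipelined, clock = {}, 0.0
for path in order:
    clock = max(clock, ready[path])
    pipelined[path] = clock

# HTTP/2-style multiplexing: each response ships as soon as it's ready.
multiplexed = dict(ready)

print(pipelined)    # {'/slow.png': 5.0, '/a.css': 5.0, '/b.js': 5.0}
print(multiplexed)  # {'/slow.png': 5.0, '/a.css': 1.0, '/b.js': 1.5}
```

One slow image delays every pipelined response behind it, while multiplexed streams are independent.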
Browsers can be reasonably sure that HTTP/2 connections actually support HTTP/2, with no need to work around, or fix, millions of broken servers and proxies. And they get things like multiplexing and header compression while they're at it.