I disagree. To me there's at least one really compelling reason: To push universal encryption.
Too bad the goal was not pushing universal security instead.
One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP/2. I don't think that will happen with QUIC.
What we need are systems that are actually secure, not ones that pretend to be. It's hard to get excited about a "feature" that is worthless against most threats.
True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least.
Is there a technical barrier with respect to TCP and/or TLS that makes addressing these issues unrealistic? The Linux kernel has had TFO support for years, and it didn't take decades for SYN cookies or SACK to reach all but the long tail. There must be a dozen or two TLS extensions by now. How hard can it be, from a technical perspective, to tweak session tickets or whatever versus throwing everything out and reinventing wheels?
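For context, TFO is already reachable from ordinary sockets code on Linux. A minimal sketch, assuming a Linux kernel with Fast Open enabled via sysctl; the queue length of 5 and the `"ping"` payload are arbitrary illustrations, and a real client would fall back to a plain `connect()` when TFO is unavailable:

```python
import socket

# Server side: request TFO on the listening socket (Linux-specific option).
# The option value is the maximum queue of pending Fast Open requests.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if hasattr(socket, "TCP_FASTOPEN"):
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 5)
srv.bind(("127.0.0.1", 0))
srv.listen()

# Client side: MSG_FASTOPEN asks the kernel to carry data in the SYN itself,
# eliminating one round trip on repeat connections to a known server.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    if hasattr(socket, "MSG_FASTOPEN"):
        cli.sendto(b"ping", socket.MSG_FASTOPEN, srv.getsockname())
    else:
        cli.connect(srv.getsockname())  # non-Linux fallback: normal 3-way handshake
except OSError:
    pass  # TFO may be disabled by sysctl; real code falls back to connect()

cli.close()
srv.close()
```

The point is that this shipped as an incremental socket option, not a new protocol.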
QUIC, on the other hand, will be rolled out in applications, and it doesn't have to be backward compatible with anything other than previous versions of itself.
Not sure retrofitting IP stacks into applications qualifies as a feature. More of the same thing duplicated everywhere usually means a bigger headache for all.
Then you can figure out how to integrate your new ideas carefully into the old protocols without breaking compatibility, and then you can fight your way through the standards bodies, closely scrutinized by every player that has an existing TLS or TCP implementation. To make this possible, you'll need to keep your changes small and incremental, and well-justified at every increment. Oh, but they'll also have to be compelling enough to get implementers to bother. With hard work you can succeed at this, but your timescale will be measured in decades.
These are political, not technical, arguments. Heads of state routinely whine about not being "king," yet there are advantages to not so easily having your way.
I totally understand why Google is doing what they're doing. If you own the net stack, the browser, and the servers, and you're a big enough player, this is a completely understandable and logical approach to accomplishing your goal.
As for using TCP without encryption so you don't have to pay for something you don't need, I think you're both overestimating the cost of encryption and underestimating its value.
I'll assume it can cost nothing in terms of CPU, I/O, or time.
A decision that a particular data stream doesn't have enough value to warrant encryption is guaranteed to be wrong if your application/protocol is successful.
Not everyone has the same problems or constraints. Be cautious about mapping your own experience onto the rest of the world and drawing conclusions from it.
I may need to use a specialized security layer.
I may care about the ability for agents to passively monitor a telemetry stream.
I may care only about integrity of a message.
I may be prohibited by law or corporate rules from encrypting data.
I may have a mission critical system where any use of encryption is an unnecessary potential for failure.
Only for restarts. For new connections you still have all the TCP three-way handshake overhead, followed by all of the TLS session establishment. QUIC does it in one round trip, in the worst case, and zero in most cases.
Yes, any TCP + TLS solution requires at minimum two separate unavoidable round trips, one for TCP and a second for TLS, regardless of construction, if state / cookies / tickets / whatever were not available for TCP and/or TLS. So what?
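The round-trip arithmetic in this exchange can be tallied explicitly. A back-of-envelope sketch, assuming the classic figures: one round trip for the TCP handshake, two for a full TLS handshake (one when a session is resumed), and QUIC's combined handshake at one round trip for new connections and zero on resumption:

```python
# Round trips before the first byte of application data can flow.
# These are the worst-case figures debated above; extensions such as
# TFO, session tickets, and False Start can shave them down further.

def handshake_rtts(transport: str, resumed: bool = False) -> int:
    if transport == "tcp+tls":
        tcp = 1                      # SYN / SYN-ACK
        tls = 1 if resumed else 2    # abbreviated vs. full TLS handshake
        return tcp + tls
    if transport == "quic":
        return 0 if resumed else 1   # transport + crypto combined in one exchange
    raise ValueError(transport)

print(handshake_rtts("tcp+tls"))                # 3
print(handshake_rtts("tcp+tls", resumed=True))  # 2
print(handshake_rtts("quic"))                   # 1
print(handshake_rtts("quic", resumed=True))     # 0
```

On a 100 ms path, that gap is the difference between roughly 300 ms and 0-100 ms before any payload moves, which is the whole dispute in one number.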