Yes, and if the batteries are a significant part of the price of the car (true today), this is potentially moving significant expense to the car's owner.
Ignore the rural parts which account for most of the area and just focus on the metro areas, and you'll find that the US *STILL* is way behind.
And sorry if I sound frustrated about it... I am *really* frustrated by the current state of the world w.r.t. parallel connections. It makes my life such a pain in the butt!
TCP implementations are very mature. As implementors, we've fixed most of the bugs, both correctness- and performance-related. TCP offers reliable delivery, and, except in some particular cases of tail-loss/tail-drop, knows the difference between packets that are received but not delivered and packets that are neither received nor delivered.
TCP has congestion control in a variety of different flavors.
TCP has various cool extensions, e.g. MPTCP, TCP-secure (not an RFC, but a working implementation), TFO, etc. etc.
You said streams. I agree that HOL blocking is solved by multiplexing over something, whether that be streams or connections, or messages.
That being said...
HOL blocking is NOT NOT NOT NOT NOT NOT NOT NOT solved with concurrent *connections*: while multiple connections do work around the HOL blocking problem, they open up a greater number of cans of worms.
If one creates many connections:
I don't want to see more congestion in the connection startup phase, since we're creating 60 connections (not an exaggeration), each with between 1-10 packets. I don't want to see poorer congestion avoidance because of the multiple connections. I'm tired of having each of these connections land on a different server, and losing all ability to optimize which resources are sent vs. inlined, because of the complexity inherent in attempting to rectify this. I don't want to have to expand the congestion window on X connections with short flows. I don't want to have to deal with tail-drop on X flows, etc. etc.
Where was that claimed?!
In any case:
TCP's implementations are almost without fail doing per-flow congestion control, instead of per-session (per-IP-pair) congestion control. This implies that, if loss on the path is mostly independent (and that is what the data seems to show), per-flow congestion control backs off more slowly, by a constant factor (N, where N == the number of parallel connections), than a single stream between the same endpoints would.
So, indeed, sending several files in parallel has the potential to go faster on links with independent (uncorrelated) packet loss.
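To make the per-flow vs. per-session point concrete, here is a back-of-the-envelope sketch (the window sizes and the `aggregate_backoff` helper are made up for illustration): when one loss event hits one of N parallel flows, only that flow halves its window, so the aggregate rate barely drops.

```python
# Toy arithmetic sketch: fraction of aggregate sending window that
# remains after a single loss event, when only the flow that saw the
# loss halves its cwnd (per-flow congestion control).
# 'cwnd_per_conn' is an arbitrary illustrative window size.

def aggregate_backoff(n_connections, cwnd_per_conn=10):
    """Fraction of the aggregate window remaining after one flow halves."""
    total_before = n_connections * cwnd_per_conn
    # Only the single flow that experienced the loss backs off by half.
    total_after = (n_connections - 1) * cwnd_per_conn + cwnd_per_conn / 2
    return total_after / total_before

print(aggregate_backoff(1))  # 0.5  -- the classic single-connection halving
print(aggregate_backoff(6))  # ~0.917 -- six connections back off far more gently
```

With six connections (the typical browser's per-hostname count), a loss event cuts the aggregate rate by only ~8%, versus 50% for a single connection: the N-times-slower backoff described above.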
This sucks, by the way, because it makes the lives of those folks working on HTTP2 more difficult.
I think that you're forgetting that packet loss on a TCP stream incurs a retransmit.
So, when there is 33% loss, you end up sending rexmits with an overhead of 50%: 33% of the retransmits would themselves be lost, and so on, so the sum of the series p^i, where p == 1/3 and i goes from 1 to infinity, converges to 50%.
In any case, you end up with an overhead of packets/bytes on the wire with rexmits as well.
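The geometric-series claim above is easy to check numerically (the `rexmit_overhead` helper is just for illustration): each retransmission round is itself subject to loss rate p, so expected overhead per delivered packet is p + p^2 + p^3 + ... = p / (1 - p).

```python
# Sketch of the retransmit-overhead arithmetic: with loss rate p,
# retransmits can themselves be lost, so the expected extra packets
# per delivered packet form the geometric series p + p^2 + p^3 + ...

def rexmit_overhead(p, terms=60):
    """Partial sum of the series p^i for i = 1..terms."""
    return sum(p**i for i in range(1, terms + 1))

p = 1 / 3
print(rexmit_overhead(p))  # ~0.5, i.e. 50% extra packets on the wire
print(p / (1 - p))         # closed form of the same series, ~0.5
```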
With XOR-based FEC, it takes one FEC packet at MTU size to recreate any one lost packet in the range of packets covered by the FEC. This means that, so long as your flow is long enough, FEC becomes potentially superior at recovering data, as the FEC can cover a longer range of packets, making multiple-packet recoveries possible. This really depends on the length of the flow, however, and it is certainly true that FEC by itself is never sufficient.
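For the curious, XOR parity is all there is to this flavor of FEC. Here is a minimal toy sketch (packet contents and helper names are invented; real packets would be padded to equal length): the parity packet is the XOR of every data packet in the range, so XORing it with the survivors reproduces whichever single packet was lost.

```python
# Toy sketch of XOR-based FEC: one parity packet, the XOR of all data
# packets in the covered range, can reconstruct any single lost packet.

def make_fec(packets):
    """XOR all (equal-length) packets into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(survivors, parity):
    """XOR the parity with every surviving packet to rebuild the lost one."""
    missing = bytearray(parity)
    for pkt in survivors:
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)

packets = [b"pkt-0...", b"pkt-1...", b"pkt-2..."]
fec = make_fec(packets)
# Suppose packet 1 is lost in transit; packets 0, 2, and the FEC arrive.
assert recover([packets[0], packets[2]], fec) == packets[1]
```

Note the trade-off mentioned above: this recovers *any one* loss in the range, but two losses in the same range defeat a single parity packet, which is why FEC alone is never sufficient.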
Comparing the two:
FEC: is good since it has a probability of removing an RTT before the application can interpret the data. It is not as great when one focuses on bandwidth efficiency in a non-loss case. It can potentially do better in terms of bandwidth efficiency than rexmit at higher loss rates since the FEC packet can deal with any one packet being lost within the range.
Rexmit: is good since it uses no additional bandwidth in the no-loss case. It is potentially a fair bit worse when there is loss (the internet seems to average ~1.5% packet loss) in terms of latency for the application, and it isn't great in terms of bandwidth efficiency when loss is occurring.
Both FEC and rexmit seem like reasonable loss recovery mechanisms, each excels at different parts of the curve.
Part of the focus is on mobile devices, which often achieve fairly poor throughput, with large jitter and moderate to large RTTs.
Surprisingly, QUIC can be more efficient given how it packs stuff together, but that wasn't a primary goal.
Think about second-order effects:
Given current numbers, if FEC is implemented, it would likely reduce the number of bytes actually fed to the network: the retransmitted packets you avoid outnumber the FEC packets you add, since each FEC packet allows any one packet in the range it covers to be reconstructed!
Nah; it is valuable for many people to be doing this benchmarking even with the current state of code.
Concluding that buggy-unfinished-QUIC is slower than TCP is absolutely valid, for instance.
That isn't the same as QUIC being slower than TCP (at least, not yet!)
TCP doesn't suck.
TCP is, however, a bottleneck, and not optimal for all uses.
Part of the issue there is the API-- TCP has all kinds of cool, well-thought-out machinery which simply isn't exposed to the application in a useful way.
As an example, when SPDY or HTTP2 is layered on TCP, a single packet lost near the beginning of the TCP connection will block delivery of all other successfully received packets, even when that lost packet affects only one resource and would not affect the framing of the application-layer protocol.
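A toy model makes the blocking visible (the `deliverable` helper and sequence numbers are invented for illustration): TCP hands bytes to the application strictly in order, so everything after a gap sits in the reassembly buffer until the missing packet is retransmitted.

```python
# Toy model of TCP's in-order delivery: packets received after a gap
# are buffered, not delivered, until the missing packet arrives.

def deliverable(received_seqs, next_expected=0):
    """Sequence numbers the application can read, given what has arrived."""
    delivered = []
    while next_expected in received_seqs:
        delivered.append(next_expected)
        next_expected += 1
    return delivered

# Packet 0 was lost; packets 1-5 arrived fine, yet the app gets nothing.
print(deliverable({1, 2, 3, 4, 5}))     # [] -- all head-of-line blocked
# Once the retransmit of packet 0 lands, the whole buffer drains at once.
print(deliverable({0, 1, 2, 3, 4, 5}))  # [0, 1, 2, 3, 4, 5]
```

Packets 1-5 might carry five complete, independent resources, but the kernel cannot know that; a stream-aware transport can release them as they arrive.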
bprodoehl is absolutely correct-- the code is unfinished, and while that scenario is certainly one we worry about, it isn't the focus of attention at the moment. The focus right now is getting the protocol working reliably and in all corner cases... Some of the bugs here can cause interesting performance degradations, even when the data gets transferred successfully.
I hope to see the benchmarking continue!
AC, don't worry.
TCP is simply a reliable, in-order stream transport.
HTTP on TCP is what was described, and, yes, it is not the best idea in today's web (though keep in mind that most browsers open up 6 connections per hostname), but that is also why HTTP2 is being standardized today.
The benchmark looked well constructed, and as such is a fair test for what it is testing: unfinished-userspace-QUIC vs kernel-TCP.
It will be awesome to follow along with future runs of the benchmark (and further analysis) as the QUIC code improves.
It is awesome to see people playing with it, and working to keep everyone honest!
Right now QUIC is unfinished, so I hesitate to draw conclusions about it.
What I mean by unfinished is that the code does not yet implement the design; the big question right now is how the results will look after correctness has been achieved and a few rounds of correctness/optimization iterations have finished.
That is a complicated question.
Hopefully this mostly answers it:
The goal is to not be worse than TCP in any way. Whether or not we'll achieve that is something we won't know 'till we've achieved correctness and had time to optimize, and then time to analyze where and why it performs badly, then iterate a few times!