Sign the petition about it:
At least it helps it get more noticed.
The low-hanging fruit is wherever the regulations allow them to deploy most quickly to the largest number of customers.
The incentive to create is in the money you make, either from selling a service built on the software you've written or from selling the software itself, or sometimes just in the good feeling you get from having made something that works. That last one is what inspires most really good folks, honestly.
Patents are horrendously ineffective at their intended purpose of incentivizing innovation in a world where non-practicing entities (read: patent trolls) hold vast numbers of patents and exist with the *sole purpose* of extracting money from those patents, NOT actually using them. Often a patent is granted (the application having been secret) years after others have independently gone and done the thing themselves, thinking it was no big deal, probably because *it was no big deal*!
Even worse, many patent holders wait to sue until the idea (or company implementing such) is successful, maximizing the damage.
Worse, most of the patents these days (and there has been an explosion of patents... why are there orders of magnitude more patents when we're arguably no smarter than we were 10 or 30 years ago??) are fricking obvious.
And of course there is the fun bit that NO COMPANY CAN DO A PATENT SEARCH BECAUSE THEN IT WILLFULLY INFRINGES AND MUST PAY TRIPLE DAMAGES. So, no one who might actually use patents looks at them.
Patents, especially in the realm of software, do more harm than good today.
Strike that. They're almost purely harmful.
Yes, and if the batteries are a significant part of the price of the car (true today), this is potentially moving significant expense to the car's owner.
Ignore the rural parts which account for most of the area and just focus on the metro areas, and you'll find that the US *STILL* is way behind.
And sorry if I sound frustrated about it... I am *really* frustrated by the current state of the world w.r.t. parallel connections. It makes my life such a pain in the butt!
TCP implementations are very mature. As implementers, we've fixed most of the bugs, both correctness- and performance-related. TCP offers reliable delivery and, excepting some particular cases of tail loss/tail drop, knows the difference between packets that are received but not delivered and packets that are neither received nor delivered.
TCP has congestion control in a variety of different flavors.
TCP has various cool extensions, e.g. MPTCP, TCP-secure (not an RFC, but a working implementation), TFO, etc. etc.
You said streams. I agree that HOL blocking is solved by multiplexing over something, whether that be streams, connections, or messages.
That being said...
HOL blocking is NOT NOT NOT NOT solved with concurrent *connections*: while they do remove the blocking itself, they open up a greater number of cans of worms.
If one creates many connections:
- I don't want to see more congestion in the connection startup phase because we're creating 60 connections (not an exaggeration), each with between 1-10 packets.
- I don't want to see poorer congestion avoidance because of the multiple connections.
- I'm tired of having each of these connections land on a different server, losing all ability to optimize which resources are sent vs. inlined because of the complexity inherent in attempting to rectify this.
- I don't want to have to expand the congestion window on X connections with short flows.
- I don't want to have to deal with tail drop on X flows, etc. etc.
Where was that claimed?!
In any case:
TCP implementations almost without fail do per-flow congestion control, instead of per-session (per-IP-pair) congestion control. This implies that, if loss on the path is mostly independent across flows (and that is what the data seems to show), per-flow congestion control backs off by a constant factor (N, where N == number of parallel connections) more slowly than a single stream between the same endpoints would.
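The per-flow back-off described above can be sketched with a toy calculation (the window sizes here are hypothetical, chosen only to illustrate the arithmetic): when one loss event hits one of N parallel connections and only that connection halves its window, the aggregate barely slows down.

```python
# Toy model: aggregate multiplicative decrease when a single loss event
# hits one of N parallel TCP connections doing per-flow congestion control.
# Each connection halves its own cwnd on loss; the others are unaffected.

def aggregate_after_loss(n_connections: int, cwnd_per_conn: float = 10.0) -> float:
    """Fraction of the aggregate window kept after one connection halves."""
    total_before = n_connections * cwnd_per_conn
    total_after = (n_connections - 1) * cwnd_per_conn + cwnd_per_conn / 2
    return total_after / total_before

print(aggregate_after_loss(1))  # 0.5    -> a single flow halves its rate
print(aggregate_after_loss(6))  # ~0.917 -> six parallel flows barely back off
```

With six connections (the typical per-hostname browser limit), one loss costs the aggregate only about 8% of its sending rate instead of 50%, which is the "backs off N times more slowly" effect.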
So, indeed, sending several files in parallel has the potential to go faster on links where packet loss is independent (uncorrelated) across flows.
This sucks, by the way, because it makes the lives of those folks working on HTTP2 more difficult.
I think that you're forgetting that packet loss on a TCP stream incurs a retransmit.
So, when there is 33% loss, you end up sending rexmits with an overhead of 50% (33% of those rexmits would also be lost, and so on, so the series sum of p^i for p == 1/3 and i from 1 to infinity converges to 50%).
In any case, rexmits also add overhead in packets/bytes on the wire.
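The geometric-series arithmetic above can be checked directly. This is a sketch under the simplifying assumption that each retransmission is lost independently with the same probability p:

```python
# Expected retransmission overhead at loss rate p, assuming independent
# losses: each packet needs p + p^2 + p^3 + ... = p / (1 - p) extra sends.

def rexmit_overhead(p: float) -> float:
    """Expected extra transmissions per original packet at loss rate p."""
    return p / (1.0 - p)

print(rexmit_overhead(1 / 3))  # 0.5 -> 50% overhead at 33% loss
print(rexmit_overhead(0.015))  # ~0.015 -> tiny overhead at typical ~1.5% loss
```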
With XOR-based FEC, it takes one FEC packet at MTU size to recreate any one lost packet in the range of packets covered by that FEC packet. This means that, so long as your flow is long enough, FEC becomes potentially superior at recovering data, since FEC can cover a longer range of packets, making multiple-packet recoveries possible. This really depends on the length of the flow, however, and it is certainly true that FEC by itself is never sufficient.
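The XOR-parity mechanism is simple enough to demonstrate in a few lines (the packet contents here are made up; real packets would be MTU-sized):

```python
# XOR-parity FEC over a group of equal-length packets: one parity packet
# can reconstruct any single lost packet in the group, because XOR-ing the
# parity with all surviving packets yields the missing one.
from functools import reduce

def xor_parity(packets):
    """XOR all packets together byte-wise (packets must be equal length)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

packets = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_parity(packets)

# Simulate losing packet 1, then recover it from the parity + survivors.
survivors = [packets[0], packets[2]]
recovered = xor_parity(survivors + [parity])
print(recovered == packets[1])  # True
```

Note this also shows the limit mentioned above: two losses within the same parity group cannot be recovered, which is why FEC by itself is never sufficient.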
Comparing the two:
FEC: is good because it has a probability of removing an RTT before the application can interpret the data. It is not as great for bandwidth efficiency in the no-loss case. At higher loss rates it can potentially beat rexmit on bandwidth efficiency, since one FEC packet can deal with any one packet being lost within its range.
Rexmit: is good because it uses no additional bandwidth in the no-loss case. It is potentially a fair bit worse when there is loss (the internet seems to average ~1.5% packet loss) in terms of latency for the application, and it isn't great for bandwidth efficiency when loss is occurring.
Both FEC and rexmit seem like reasonable loss-recovery mechanisms; each excels at a different part of the curve.
Part of the focus is on mobile devices, which often achieve fairly poor throughput, with large jitter and moderate to large RTTs.
Surprisingly, QUIC can be more efficient given how it packs stuff together, but this wasn't a primary goal.
Think about second-order effects:
Given current numbers, if FEC is implemented, it is likely that it would reduce the number of bytes actually fed to the network: the retransmitted packets you avoid sending outnumber the FEC packets you send, since a single FEC packet allows any one packet in the range it covers to be reconstructed!
Nah; it is valuable for many people to be doing this benchmarking even with the current state of code.
Concluding that buggy-unfinished-QUIC is slower than TCP is absolutely valid, for instance.
That isn't the same as QUIC being slower than TCP (at least, not yet!)
TCP doesn't suck.
TCP is, however, a bottleneck, and not optimal for all uses.
Part of the issue there is the API-- TCP has all kinds of cool, well-thought-out machinery which simply isn't exposed to the application in a useful way.
As an example, when SPDY or HTTP2 is layered on TCP and a single packet is lost near the beginning of the TCP connection, that loss blocks delivery of all other successfully received packets, even when the lost packet affects only one resource and would not affect the framing of the application-layer protocol.
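That head-of-line blocking behavior can be sketched with a toy in-order receive buffer (sequence numbers here stand in for TCP segments; this is an illustration, not a real TCP implementation):

```python
# Toy model of an in-order byte stream receiver: packets after a gap are
# buffered but NOT delivered to the application until the gap is filled.

def inorder_deliver(received_seqs, next_expected=1):
    """Deliver contiguous sequence numbers; buffer everything past a gap."""
    delivered = []
    buffered = sorted(received_seqs)
    while buffered and buffered[0] == next_expected:
        delivered.append(buffered.pop(0))
        next_expected += 1
    return delivered, buffered

# Packet 1 lost: packets 2-5 sit in the buffer, the app gets nothing,
# even if 2-5 belong to entirely different resources.
print(inorder_deliver({2, 3, 4, 5}))     # ([], [2, 3, 4, 5])
# Once the retransmit of packet 1 arrives, everything drains at once.
print(inorder_deliver({1, 2, 3, 4, 5}))  # ([1, 2, 3, 4, 5], [])
```

Multiplexing streams above a single in-order byte stream inherits exactly this behavior, which is the argument for moving the stream abstraction into the transport as QUIC does.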
bprodoehl is absolutely correct-- the code is unfinished, and while the scenario is certainly one we worry about, it isn't the focus of attention at the moment. The focus right now is getting the protocol working reliably and in all corner cases... Some of the bugs here can cause interesting performance degradations, even when the data gets transferred successfully.
I hope to see the benchmarking continue!
AC, don't worry.
TCP is simply a reliable, in-order stream transport.
HTTP on TCP is what was described, and, yes, it is not the best idea for today's web (though keep in mind that most browsers open up 6 connections per hostname), but that is also why HTTP2 is being standardized today.