
Comment: Re:Investors? Really? (Score 5, Informative) 243

No, they are NOT investors.
If they were investors, they'd be in trouble with the FTC, which hasn't yet set up regulations allowing such.

People who use Kickstarter are pre-purchasing whatever it is they're being sold. That can act as income for a company, and thus a funding source, but that does not make people who purchase things via Kickstarter investors.

One of these days, we will be able to invest in this manner, but not yet.

Comment: Re:Investors? Really? (Score 5, Informative) 243

Kickstarter doesn't do investing. It is a pre-purchase...
I challenge you to find the word "invest" in the text below (hint: it isn't there, nor is it *anywhere* on the Kickstarter page)

From Kickstarter:

Pledge $35 or more

  22997 backers

You will receive a digital version of the movie within a few days of the movie’s theatrical debut, plus the T-shirt, plus the pdf of the shooting script. Naturally, you will also receive regular updates and behind-the-scenes scoop throughout the fundraising and movie making process. Available to US, Canada, Australia/New Zealand, Mexico, Brazil, and Select EU countries (Now including Norway and Switzerland! See Project Description for full list)

Comment: Re: Abolish software patents (Score 4, Insightful) 204

by grmoc (#45946547) Attached to: Supreme Court Refuses To Hear Newegg Patent Case

The incentive to create is in the money you make from selling a service built on the software you've written, or from selling the software itself, or sometimes just in the good feeling of having made something that works. That last one, honestly, is what motivates most really good folks.

Patents are horrendously ineffective at their intended purpose of incentivizing innovation in a world where non-practicing entities (read: patent trolls) hold vast numbers of patents whose *sole purpose* is to extract money, NOT to actually be used. Often the patent is granted (with the application having been kept secret) years after others have independently gone and done the thing themselves, thinking it was no big deal, probably because *it was no big deal*!
Even worse, many patent holders wait to sue until the idea (or company implementing such) is successful, maximizing the damage.

Worse, most of the patents these days (and there has been an explosion of patents... why orders of magnitude more when we're arguably no smarter than we were 10 or 30 years ago??) are fricking obvious.

And of course there is the fun bit that NO COMPANY CAN DO A PATENT SEARCH, BECAUSE FINDING A PATENT MEANS IT WILLFULLY INFRINGES AND MUST PAY TRIPLE DAMAGES. So no one who might actually use a patent ever looks at them.

Patents, especially in the realm of software, do more harm than good today.
Strike that. They're almost purely harmful.

Comment: Re:Morons (Score 1) 141

by grmoc (#45397553) Attached to: Taking Google's QUIC For a Test Drive

TCP implementations are very mature. As implementors, we've fixed most of the bugs, both correctness- and performance-related. TCP offers reliable delivery and, except in some particular cases of tail loss/tail drop, knows the difference between packets that are received but not delivered and packets that are neither received nor delivered.
TCP has congestion control in a variety of different flavors.
TCP has various cool extensions, e.g. MPTCP, TCP-secure (not an RFC, but a working implementation), TFO, etc. etc.

You said streams. I agree that HOL blocking is solved by multiplexing over something, whether that be streams or connections, or messages.

That being said...
HOL blocking should NOT NOT NOT NOT be solved with concurrent *connections*: while they do eliminate HOL blocking, they open up a greater number of cans of worms.
If one creates many connections:
I don't want to see more congestion in the connection-startup phase because we're creating 60 connections (not an exaggeration), each with between 1-10 packets.
I don't want to see poorer congestion avoidance because of the multiple connections.
I'm tired of each of these connections landing on a different server, and of losing all ability to optimize which resources are sent vs. inlined because of the complexity inherent in attempting to rectify this.
I don't want to have to expand the congestion window on X connections with short flows.
I don't want to have to deal with tail drop on X flows, etc. etc.
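A back-of-envelope sketch of the startup-congestion point above. The 60 connections and 1-10 packets each come from the comment; the initial congestion window of 10 and the doubling-per-RTT slow-start model are my assumptions for illustration, not measurements:

```python
# Rough comparison: 60 short connections burst their packets into the
# network with no shared pacing, while one multiplexed connection
# delivers the same data under a single congestion controller.

def slow_start_rounds(total_packets: int, initial_cwnd: int = 10) -> int:
    """RTTs needed assuming cwnd doubles each round, no loss (assumed model)."""
    rounds, cwnd, sent = 0, initial_cwnd, 0
    while sent < total_packets:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds

# 60 connections x ~5 packets each: ~300 packets, all injected by
# independent controllers that know nothing about each other.
many_conns_burst = 60 * 5
print(many_conns_burst)           # 300 uncoordinated packets

# One multiplexed connection carrying the same 300 packets:
print(slow_start_rounds(300))     # 5 RTTs of growth, one controller
```

The point is not the exact RTT count but that the multiplexed case keeps all of the traffic under one congestion-control loop.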

Comment: Re:Thank you (Score 1) 141

by grmoc (#45397485) Attached to: Taking Google's QUIC For a Test Drive

Wait, what? :)
Where was that claimed?!

In any case:
TCP implementations almost without fail do per-flow congestion control, instead of per-session or per-host-pair congestion control. This implies that, if loss on the path is mostly independent (and that is what the data seems to show), per-flow congestion control backs off more slowly, by a constant factor of N (where N == the number of parallel connections), than a single stream between the same endpoints would.
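The per-flow backoff effect can be sketched numerically. Assuming standard multiplicative decrease (each flow halves its window on a loss), one loss event hitting one of N equal flows shrinks the aggregate far less than it would shrink a single flow (the helper name and equal-window assumption are mine):

```python
def aggregate_after_loss(n_flows: int, per_flow_cwnd: float = 10.0) -> float:
    """Fraction of the aggregate window retained after ONE loss event
    hits one of n equal flows, each halving independently (assumed model)."""
    total = n_flows * per_flow_cwnd
    total_after = (n_flows - 1) * per_flow_cwnd + per_flow_cwnd / 2
    return total_after / total

print(aggregate_after_loss(1))  # 0.5    -> a single flow halves
print(aggregate_after_loss(6))  # ~0.917 -> six flows barely slow down
```

In other words, N parallel flows only give back 1/(2N) of their aggregate window per loss, versus 1/2 for a single flow, which is the constant-factor-N slower backoff described above.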

So, indeed, sending several files in parallel can go faster on links with independent packet loss.

This sucks, by the way, because it makes the lives of those folks working on HTTP2 more difficult.

Comment: Re:It's not broken... (Score 1) 141

by grmoc (#45397393) Attached to: Taking Google's QUIC For a Test Drive

I think that you're forgetting that packet loss on a TCP stream incurs a retransmit.
So, when there is 33% loss, you end up sending rexmits with an overhead of 50%: 33% of the retransmits are themselves lost and must be resent, and so on, so the series sum of p^i (where p == 1/3 and i goes from 1 to infinity) converges to 50%.
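The geometric series above is easy to check numerically; the closed form is p/(1-p) (function name is mine):

```python
# With loss rate p, a fraction p of originals needs a retransmit, a
# fraction p of those retransmits is lost again, etc.
# Extra packets per original = p + p^2 + p^3 + ... = p / (1 - p).

def rexmit_overhead(p: float, terms: int = 60) -> float:
    """Partial sum of the retransmission series (converges for p < 1)."""
    return sum(p ** i for i in range(1, terms + 1))

p = 1.0 / 3.0
print(rexmit_overhead(p))  # ~0.5, i.e. 50% overhead
print(p / (1 - p))         # closed form: 0.5
```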

In any case, you end up with an overhead of packets/bytes on the wire with rexmits as well.

With XOR-based FEC, it takes one FEC packet at MTU size to recreate any one lost packet in the range of packets covered by the FEC. This means that, so long as your flow is long enough, FEC becomes potentially superior at recovering data, as the FEC can cover a longer range of packets, making multiple-packet recoveries possible. This really depends on the length of the flow, however, and it is certainly true that FEC by itself is never sufficient.
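A minimal sketch of the XOR-parity idea: one parity packet per group lets the receiver rebuild any single lost packet in that group. This is toy framing of my own, not QUIC's actual wire format:

```python
def xor_parity(packets: list[bytes]) -> bytes:
    """XOR equal-length packets together; the result is the FEC packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

group = [b"pkt-one!", b"pkt-two!", b"pkt-3333"]
fec = xor_parity(group)

# Simulate losing the middle packet; XORing the survivors with the FEC
# packet cancels them out, leaving exactly the lost packet.
received = [group[0], group[2]]
recovered = xor_parity(received + [fec])
print(recovered)  # b"pkt-two!"
```

Two losses in the same group defeat a single XOR parity, which is one reason FEC alone is never sufficient.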

Comparing the two:
FEC: is good since it has a probability of removing an RTT before the application can interpret the data. It is not as great when one focuses on bandwidth efficiency in a non-loss case. It can potentially do better in terms of bandwidth efficiency than rexmit at higher loss rates since the FEC packet can deal with any one packet being lost within the range.

Rexmit: is good since it uses no additional bandwidth in the no-loss case. It is potentially a fair bit worse in terms of latency for the application when there is loss (the internet seems to average ~1.5% packet loss), and it isn't great in terms of bandwidth efficiency when loss is occurring.

Both FEC and rexmit seem like reasonable loss-recovery mechanisms; each excels at different parts of the curve.
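The "different parts of the curve" claim can be made concrete with a first-order bandwidth comparison under the assumptions above: XOR FEC costs one extra packet per group of k regardless of loss, while retransmission costs about p/(1-p) extra packets per packet. The group size k=10 is my illustrative choice, and this ignores the double-loss-per-group case that FEC alone cannot repair:

```python
def fec_overhead(group_size: int) -> float:
    """One parity packet per group, paid whether or not loss occurs."""
    return 1.0 / group_size

def rexmit_overhead(p: float) -> float:
    """Expected extra packets per packet from the retransmission series."""
    return p / (1.0 - p)

for p in (0.0, 0.015, 0.10, 1.0 / 3.0):
    print(f"p={p:.3f}  rexmit={rexmit_overhead(p):.3f}  "
          f"fec(k=10)={fec_overhead(10):.3f}")
```

At ~1.5% loss, retransmission is cheaper in bytes; somewhere above ~10% loss the k=10 FEC group wins on bytes, and FEC can also shave an RTT off recovery latency at any loss rate.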

Comment: Re:It's not broken... (Score 1) 141

by grmoc (#45372995) Attached to: Taking Google's QUIC For a Test Drive

Part of the focus is on mobile devices, which often achieve fairly poor throughput, with large jitter and moderate-to-large RTTs... so, yes, there is attention to low-bandwidth scenarios.
Surprisingly, QUIC can be more efficient given how it packs stuff together, but this wasn't a primary goal.
Think about second-order effects:
Given current numbers, if FEC is implemented, it would likely reduce the number of bytes actually fed to the network: the FEC packets you send are fewer than the retransmissions they replace, since each FEC packet allows any one packet in the range it covers to be reconstructed!
