I had wondered if I was just ignorant, since the Petrov story sounded like something I should have heard about somewhere. I suppose that's the risk of making a show with a target audience that includes a lot of trivia nerds - it's hard to come up with surprising true stories that they haven't already heard.
(I am familiar with a very similar story from the Cuban Missile Crisis: the crew of a Soviet submarine armed with nuclear torpedoes decided not to fire at an American destroyer that was dropping depth charges on them, because launching required the unanimous consent of three particular officers, and only two agreed.)
I watched the whole thing and thought they did a pretty decent job. Some of the camera work could have been better, but I'm willing to overlook that given their constraints. I like the characters. None of the roles seems forced in just to advance the plot (though the interactions between the DHS guy and his assistant seemed rather cliche). The "nuclear battery" thing is real technology (called a radioisotope thermoelectric generator), and Stanislav Petrov appears to have been a real person (the Wikipedia article is probably too old to have been made up by the creators of the show). For me, it's too early to tell if the show is likely to be a good one, but the first episode is promising.
One viewer that really loves Firefly and will buy the DVDs is worth more revenue than a viewer who kills an evening watching Dancing with the Stars because they're bored and then forgets about it forever.
I don't know that that's really true. The point of network TV isn't to sell DVDs, it's to sell commercials. If Ford runs a commercial, and viewers go out and buy Fords, the show is a success, regardless of whether the viewers were really enjoying the show.
It may be that if a person really likes a show, they're more likely to think highly of its advertisers, but I think the networks are really more interested in attracting the maximum number of eyeballs -- and the more gullible the viewers behind them, the better.
-jim
I gave up in disgust after looking at the first question. "Legitimate" psychological tests don't ask you to self-diagnose; they ask a large number of concrete questions that can be used to infer psychology.
I had a similar reaction to the test. (I didn't bother to read the article; it sounds like that was a good plan on my part.) The test didn't seem like it measured empathy so much as whether I self-identify as an empathetic person. I don't think my own system of empathy works at the level the test was trying to measure. I really don't think very much about what circumstances are like for other people. Rather, I've adopted a set of rules for what constitutes fair and decent behavior between human beings, and it bothers me when those rules are violated. Similarly, it bothers me when it becomes popular to think of some group of people as sub-human in some way (whether they are Mexicans, Palestinians, Republicans, whatever...). Maybe I've lost some of my empathy by reducing it to a set of axioms, or maybe my empathy is still there in full force and just does its work subconsciously. It might even be possible to test this in some way, but I'm pretty sure the test linked from the article doesn't measure anything meaningful whatsoever.
A potential $100,000 fine just for filing a bad patent would stop most small businesses or independent inventors from filing at all, even if the patent was valid -- it's way too much risk. I certainly don't have $100,000 lying around.
Ultimately, it's the patent office's job to determine whether patents are valid or not. The applicant ought to do due diligence to check whether their idea is original, but realistically it's not possible to know every invention that has ever been thought of in the whole history of the human race. Also, infringement is fairly subjective -- it isn't always possible to predict which way the courts are going to rule.
If you want there to be fewer bad patents, there is an easier solution: simply make it cheaper and easier to challenge an existing patent. When the patent office grants a patent, it is effectively saying "this patent represents a unique idea significantly different from anything that has been invented before". This is the claim they make after you pay them a couple hundred dollars. It should be even simpler to ask them, "does such-and-such invention infringe on such-and-such patent?", and they should be able to come up with an answer for a cost similar to that of filing a patent. As far as I understand it, such a simple determination currently requires a very expensive lawsuit.
I don't know if I agree; Haskell programmers tend to be demographically more experienced (it's the only language I know of where it seems that the median programmer has, or is working toward, a PhD), but I would also trust a relatively inexperienced programmer to write fairly good code in Haskell, especially if they used an existing web framework like HappStack. Static typing and well-defined libraries go a long way towards making it hard to do the wrong thing. This is one of the things I find compelling about Haskell - you don't have to be some kind of awesome programmer to be able to write solid applications.
I feel like web security is like a bunch of switches that you have to know to turn off - training can help, but it's better if those switches are not just off by default, but hard to turn on. For instance, I used HappStack to write a simple web app, and never worried about escaping strings. Once things were working properly, I thought, "ok, now let's fix all the security holes I've left lying around and do some proper string escaping". I discovered, when I tried to enter html tags in one of the input forms, that the html library was escaping the strings for me, and I didn't even know it.
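The "switches off by default" idea can be sketched in any language; here's a minimal Python illustration (not HappStack itself, and the function names are my own) of the difference between a renderer that makes the caller remember to escape and one that escapes everything unless explicitly told a value is trusted:

```python
import html

class Raw:
    """Marker for strings the caller explicitly asserts are safe HTML."""
    def __init__(self, content):
        self.content = content

def render_unsafe(template, **values):
    # Naive interpolation: the dangerous switch is on by default,
    # and the caller has to remember to turn it off for every value.
    return template.format(**values)

def render(template, **values):
    # Escape every value unless it is explicitly wrapped as Raw.
    escaped = {k: (v.content if isinstance(v, Raw) else html.escape(str(v)))
               for k, v in values.items()}
    return template.format(**escaped)

page = "<p>Hello, {name}!</p>"
print(render_unsafe(page, name="<script>alert(1)</script>"))  # injectable
print(render(page, name="<script>alert(1)</script>"))         # escaped
```

The point is that with the second interface, forgetting about escaping leaves you safe, and turning the switch on (`Raw`) has to be a deliberate act -- which is roughly what the HTML library did for me without my knowing.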
There is, of course, a steeper learning curve, and you won't find a pile of books to tell you how to use HappStack, like you would if you were using a more mainstream framework. This contributes to sampling bias; the mediocre or poorly motivated programmers are likely to give up, so it would be difficult to test my hypothesis.
Secondly - unless you are
... calculating eigen values of huge matrices ...
Actually, that is what I've been using my home computer for recently. I think you're right, though, that few "normal" applications can use many cores effectively. However, keep in mind that much of the software that will run on current hardware hasn't been written yet. The computers being bought new now that are still in regular use four or eight years from now (why buy a new computer if the one we have is fast enough?) will probably be running much more heavily threaded applications.
Even if the vast majority of applications in the not-too-distant future remain single-threaded, it is likely that the few that are CPU-bound will be optimized and properly threaded. You only have to fix the 1% of the code that takes 99% of the time - the rest doesn't matter.
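That 1%/99% intuition is just Amdahl's law; a quick sketch of the arithmetic (the fractions and core counts here are illustrative, not from the thread):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only parallel_fraction of the runtime
    benefits from the extra cores (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If the hot 99% of the runtime is threaded, 8 cores capture most of the win:
print(round(amdahl_speedup(0.99, 8), 2))        # 7.48
# And even infinitely many cores can't beat the serial 1%:
print(round(amdahl_speedup(0.99, 10**9), 1))    # 100.0
```

So threading only the CPU-bound hot spots really is almost as good as threading everything.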
I was thinking more in terms of durability than performance. Traditional hard drives are still recommended for swap because they don't wear out quite so easily in write-heavy workloads.
Performance may be better as well, though; just because it's possible to saturate a SATA link with multiple SSDs on one particular workload doesn't mean they're fast for every workload. For instance, you can't overwrite flash in place -- the whole surrounding erase block has to be erased and rewritten. It might be handy to have a place on the drive that can handle small writes without that extra overhead.
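The erase-block overhead is easy to quantify with a back-of-the-envelope calculation (the sizes below are typical round numbers, not figures for any particular drive):

```python
def worst_case_write_amplification(write_bytes, erase_block_bytes):
    """Worst case: a small in-place update forces a full
    read-erase-rewrite cycle of the surrounding erase block."""
    return erase_block_bytes / write_bytes

# A 4 KiB update landing in a 512 KiB erase block:
print(worst_case_write_amplification(4 * 1024, 512 * 1024))  # 128.0
```

Real drives mitigate this with wear leveling and remapping, but the worst case is why small random writes are hard on flash, both for performance and for durability.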
"It's the best thing since professional golfers on 'ludes." -- Rick Obidiah