You forgot to divide by sqrt(2) in your erfc expression. The actual probability that a random human's IQ is over 197 is about 5e-11, which means about 0.35 humans should have it.
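As a quick sanity check (assuming the conventional IQ calibration of mean 100, standard deviation 15, and a world population of roughly 7 billion), the tail probability can be computed with `erfc`:

```python
import math

MEAN, SD = 100.0, 15.0      # conventional IQ calibration
POPULATION = 7e9            # assumed world population, order of magnitude

z = (197 - MEAN) / SD                    # about 6.47 standard deviations
p = 0.5 * math.erfc(z / math.sqrt(2))    # upper-tail probability, ~5e-11

expected = POPULATION * p                # ~0.35 people
```

Note the `z / math.sqrt(2)` inside `erfc`: dropping that divisor is exactly the error being pointed out, since the normal tail is erfc(z/√2)/2, not erfc(z)/2.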
While that is indeed the solution, it is too easy to forget. Perhaps one could modify all commands to require the "--" separator, or to warn when it's absent, at least when some environment variable is set. That could be very helpful for people trying to write more secure code.
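For readers unfamiliar with the idiom, here is a minimal sketch of what the separator protects against (the filename `-rf` stands in for hypothetical hostile input):

```shell
# '--' marks the end of options: everything after it is treated as an
# operand, even if it begins with a dash.
dir="$(mktemp -d)" && cd "$dir"
touch -- '-rf'      # a file whose name looks like rm flags
test -e ./-rf       # it was created, despite its hostile name
rm -- '-rf'         # without '--', rm would try to parse '-rf' as options
```

Without the `--`, `rm -rf` would complain about missing operands (or worse, in scripts that interpolate other arguments), which is exactly the class of bug being discussed.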
Please compare the supermarket shelves in the USA with those in Venezuela or North Korea and then come back here and tell me why big government controlling the means of production and distribution is a good idea, compared to the free market, with people providing each other with services in return for a token of exchange (currency).
I'm not saying that there isn't an element of truth in what you are saying, but you have to pick comparable countries or the comparison will mean nothing. So looking at North Korea versus South Korea is fine, as is comparing Venezuela to Colombia, or Cuba to the Dominican Republic. If you want to compare the U.S. to anyone, perhaps Sweden would do. But Sweden is pretty darn nice.
This idea that in order to achieve intelligence you need to understand how the brain works is preposterous.
We don't understand how grandmasters play chess, and yet we can build machines that play chess better than any grandmaster. The same thing will happen with more and more skills, and we'll get to a point where it will be clear that machines are more intelligent than humans.
2029 sounds optimistic to me, but the arguments in TFA are very weak:
* "What exactly does as-smart-as-humans mean?" It means "as good as humans at most tasks". The precise definitions won't matter when you actually see the machine in action.
* "Human intelligence is embodied." But artificial intelligence need not be embodied. If we can make a machine as smart as Stephen Hawking, I think we have done OK. I don't think his embodiment is a key part of his intelligence.
* "As-smart-as-humans probably doesn’t mean as-smart-as newborn babies, or even two year old infants." Of course not, but there is no reason a machine would have to learn at the same pace we do, or from the same sources, or in a similar fashion. Going back to the computer chess analogy, a grandmaster requires years of experience to learn how to play well, while a program can parse a large database of games and learn from them in a matter of hours or days.
* "Moore’s Law will not help." This is nonsense. The paragraph goes on to acknowledge that it will help, but computer power is not the whole story. Of course it's not the whole story! But it will certainly help.
* "The hard problem of learning and the even harder problem of consciousness." Machine Learning is a very active discipline, with many recent successes. I don't think learning is a serious obstacle. I don't see a problem of consciousness anywhere. "Consciousness" sounds like a new name for "the soul" to me: It's likely to be an attribute that we assign to people as part of the theory of mind, not an actual thing we need to produce and insert into our machines. In any case, it has very little to do with intelligence.
It won't matter if we know what makes humans intelligent, or what intelligence is, or what consciousness is: The proof will be in the pudding. When you see machines that surpass humans at most tasks we think of as requiring intelligence, we'll have intelligent machines. And philosophers can continue to argue about definitions all they want.
This is the best I've found so far: http://www.youtube.com/watch?v...
I am not sure how I feel about that measure. If we were to use the median absolute error and try to be consistent, we would have to use as the central measure whatever minimizes the median absolute error. That would be a point somewhere between the 25th and 75th percentile, in the "flatter" part of the distribution, in some sense. I don't know if that central measure has a name, but I suspect it's not very relevant in practice.
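A quick numerical sketch of what that minimizer looks like in practice (a deliberately skewed exponential sample and a brute-force search over candidate centers; the sample and grid sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(size=5001)    # deliberately skewed sample

# Brute-force the constant c that minimizes median(|x - c|).
cands = np.linspace(x.min(), x.max(), 2001)
scores = np.array([np.median(np.abs(x - c)) for c in cands])
c_star = cands[np.argmin(scores)]

q25, q75 = np.percentile(x, [25, 75])
# For this sample, c_star falls between the quartiles and is well below
# both the median (~0.69) and the mean (~1.0) of the data.
```

For this distribution the minimizer sits near the midpoint of the shortest interval containing half the data, which is indeed a quantity without much of a following in practice.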
Perhaps non-mathematicians don't have a problem with this, but it rubs me the wrong way.
What makes the mean an interesting quantity is that it is the constant that best approximates the data, where the measure of goodness of the approximation is precisely the way I like it: As the sum of the squares of the differences.
I understand that not everybody is an "L2" kind of guy, like I am. "L1" people prefer to measure the distance between things as the sum of the absolute values of the differences. But in that case, what makes the mean important? The constant that minimizes the sum of absolute values of the differences is the median, not the mean.
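Both optimality claims are easy to verify numerically; a small grid search over candidate constants (the data and grid are just illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
grid = np.linspace(0.0, 12.0, 120001)    # candidate constants, step 1e-4

# L2: the constant minimizing the sum of squared differences.
c_l2 = grid[np.argmin(((x[:, None] - grid) ** 2).sum(axis=0))]

# L1: the constant minimizing the sum of absolute differences.
c_l1 = grid[np.argmin(np.abs(x[:, None] - grid).sum(axis=0))]

# c_l2 matches the mean (3.6); c_l1 matches the median (2.0).
```

The outlier at 10 is there on purpose: it drags the L2 minimizer (the mean) toward it, while the L1 minimizer (the median) stays put.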
So you either use the mean and the standard deviation, or you use the median and the mean absolute deviation from the median. But this notion of measuring the mean absolute deviation from the mean mixes the two, and that is what I find strange.
Anyway, his proposal is preposterous: I use the standard deviation daily and I don't care if others lack the sophistication to understand what it means.
They don't claim to have a realistic model of the situation: They showed a very simple model in which the rational behavior contains an apparent violation of transitivity. And they didn't need to introduce a variety of nutrients to obtain it. This makes their model better, in the sense that it is simpler.
[Sorry, I posted as AC earlier.]