5. (or 6. including rtaylors point): Journal editors and referees frequently don't read past the abstract of submitted articles, so scientists often say something attention-grabbing in the abstract simply to induce the editor to read further.
I guess the authors hope that by the time the editor realizes the abstract distorts the results, he or she will have found something else in the article to like.

I mentioned that Wikipedia math pages are a "dick measuring contest for experts on the subject"

Please check out the comment above by exploder (should be easy to find - it is rated +5 Insightful). In particular:

the articles are written in a way that makes them most useful to the people who donate their time to produce them

I just want to briefly provide an example as to why this is a good thing. I'm a math/stats guy. For me, the free and easily accessible Wikipedia pages are always my first port of call when looking into a new topic/method.

On the other side of the coin is my best mate. He is a med science guy. He avoids Wikipedia like it has the plague and instead uses a resource that is behind a paywall. Why? Because the Wikipedia med science topics are not written for guys like him. They're written to be more accessible. Unfortunately, this makes them of little use to researchers in the field, so they don't bother contributing.

So, what would you prefer? Personally, I think it is better to put up with a little jargon if it ensures a free and open resource that is constantly being peer-reviewed and updated by the top players in a given field. Surely this is preferable to a system where those top players instead choose to contribute to a resource that is behind a paywall?

I don't see how superannuation could possibly be better than (gasp!) allowing ME to prepare for my retirement

That may be true, but can you also vouch for the rest of Australia? Let's be honest, if the government didn't force people to put money aside for their retirement, half the population would drop it in the pokies without a second thought (there was a point in my life where I certainly would have).

A term deposit of course would perform much better.

see eg http://www.rest.com.au/Performance-Investments/InvestmentsPerformance.aspx. In summary, the Core Strategy option returned 6.95% p.a. on average over the last 10 years. This is definitely better than a term deposit. Is it riskier? Probably not in the long term. As you have noted, the real issue here is the ratio of fees to the amount you have under management. For most super funds, the fees are small fixed costs, so yes, you will get screwed if you only have $1000 under management. But with a couple of hundred thousand earning 6.95%, the fees are barely noticeable.
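A quick back-of-envelope sketch of the fixed-fee point. The 6.95% p.a. figure is the reported 10-year average from the link above; the $78 annual fee is a made-up illustrative number, not REST's actual fee schedule - check your own fund's PDS.

```python
# Fee drag on a super balance, compounding annually.
# 6.95% p.a. is from the link above; the $78 fixed annual fee is a
# hypothetical example, not any particular fund's real fee.

def final_balance(start, rate, fixed_fee, years):
    """Compound a balance annually, subtracting a fixed dollar fee each year."""
    balance = start
    for _ in range(years):
        balance = balance * (1 + rate) - fixed_fee
    return balance

small = final_balance(1_000, 0.0695, 78, 10)
large = final_balance(200_000, 0.0695, 78, 10)

# On $1,000 the fixed fee eats more than the return; on $200,000 it is noise.
print(f"$1,000 after 10 years:   ${small:,.2f}")
print(f"$200,000 after 10 years: ${large:,.2f}")
```

With these numbers, the $1000 balance actually shrinks over the decade, while the $200,000 balance barely notices the fee - which is exactly the "ratio of fees to amount under management" point.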

Is there even a mandate that these funds have to show a decent return?

No, such a mandate would be impossible to guarantee. For anyone. Period. However, they do have *very* strict rules about what they can and cannot invest in. Your money is not being used to speculate on exotic derivatives markets, for example. Also, part of the mandate prevents directors from receiving remuneration in relation to their duties as trustees (see eg http://www.rest.com.au/About-REST.aspx). This is important.

but there's a reason they keep posting such spectacular profits

Agreed, Aussie bank profits have been a little too large for comfort over the last few years. But this doesn't have much to do with the way super funds are run.

because that kid will probably never claim the super from that job they had in high school

That is the kid's fault. The system to roll over super is in place and easy to use.

Then your superannuation fund is ripping you off. Shop around, it is a competitive market in Australia. Here's a good place to start the research: http://www.superguide.com.au/comparing-super-funds/what-super-fund-is-best-performing.
In general the industry super-funds are usually the least corrupt and best performing (eg REST or SunSuper etc). I'm young and thus only have a very small amount with REST at the moment (hence management fees are a large proportion of my super), yet mine still goes up every year without fail (with the exception of 2008-9 for obvious reasons - and even in that year it only dropped by a small amount).
DISCLAIMER: The above is not financial advice, just some observations on the (generally good) system in Australia - the reason for this disclaimer is that I'm pretty sure it is illegal to give financial advice in Australia without a licence.

I've read a lot of interesting comments here about phenomena that contributed to the GFC, but I don't think anyone has really nailed down the set of necessary conditions (if they have and I'm repeating someone else's post, I apologize). Clearly, the article's assertion that Black-Scholes caused the GFC is rubbish - many commenters have already clearly explained why this is so. But there seems to be a trend to place all the blame squarely on the shoulders of complex derivatives such as credit default swaps. IMHO this is only a third of the story. I think the three necessary conditions are:
1) the existence of complex derivatives, especially credit default swaps,
2) the repeal of the Glass-Steagall act, and
3) the privatization of Freddie Mac and Fannie Mae.
3) essentially created a private institution with monopoly power whose profit was linked to the quantity of mortgages it created. The incentive structure of Freddie Mac and Fannie Mae virtually guaranteed that a large number of loans of dubious quality would be made (similarities to the current incentive structure for the US patent office, anyone?). Combine 3) with 1) and the inherent risk in these mortgages could be disguised from the vast majority of investors. Throw 2) into the mix and all of a sudden the institutions that hold the deposits of mum and dad investors (ie the ones that should be taking as few risks as possible) are taking enormous positions in incredibly risky assets.
Finally, when the shit hits the fan, none of the banks know how exposed the other banks are to the bad loans, and so they all get very shy about lending to each other. Anyone who knows a bit of economics will understand that when banks stop lending to each other in the overnight market, the system will collapse.
So, my position is that without all 3 conditions - complex derivatives, repeal of Glass-Steagall, privatization of Fannie Mae and Freddie Mac - the GFC could not have happened. This is not to say that there would have been no financial turmoil at all, but the scale of the problems would have (IMO) been orders of magnitude smaller.
By the way, slashdot, how about adding a FAQ page on how to format comments? How do I indicate a new line, for example? Standard LaTeX or reddit-style formats don't appear to work, nor does a newline indicator like \n???

Good Wikipedia link! I wasn't familiar with the terminology, but I understand your point now. As you no doubt guessed, I mistakenly thought you were trying to assert that the random error was unable to be diversified away.

I do have some concern over one of your assertions though: "you need to substantiate why you expect the size of the systematic error when giving one person one test is smaller than a point or so" ---> For the paper in question, wouldn't it be okay for the systematic error to be larger than a point, as long as the difference in systematic error between the two sample groups (with gene and without gene) is much less than one point? That is, if the two groups are given the test under similar conditions, even if those conditions cause a systematic error of, say, +5 for each individual in group 1 and +5.2 for each individual in group 2, this is fine as far as the test is concerned, because the bias in the sample means will be approximately equal for both groups (a difference of 0.2, given large N).

Also, you challenge me to support a belief that the (difference in) systematic error is much less than a point between the two groups. I can't :-) (certainly not without actually being there and observing the conditions for myself). However, if all individuals (from both groups) are sitting an IQ test for the first time, in the same room, at very similar desks, taking the same test etc, then my gut feeling is that the difference in systematic error between the two groups will be small. As you point out, it would be very difficult to prove this gut feeling right or wrong.
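A quick simulation of that point, with all numbers made up for illustration: give everyone in group 1 a systematic error of +5 and everyone in group 2 a systematic error of +5.2, and the difference in sample means reflects only the 0.2 difference in bias, not the +5-ish level of the bias itself.

```python
# Two groups with large but nearly equal systematic biases: the bias
# cancels in the difference of sample means (all parameters made up).
import random

random.seed(1)
N = 100_000

def mean_score(n, bias):
    # true score ~ N(100, 15), random error ~ N(0, 5), plus systematic bias
    return sum(random.gauss(100, 15) + random.gauss(0, 5) + bias
               for _ in range(n)) / n

diff = mean_score(N, 5.2) - mean_score(N, 5.0)
print(f"difference in sample means: {diff:.3f}")  # close to 0.2
```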

"As soon as you assume that the measurement error is zero-mean and uncorrelated, you are for all intents and purposes assuming a Gaussian distribution, by the Central Limit Theorem [wikipedia.org]." --> Yes. Zero-mean errors that are an independent sequence and uncorrelated with X_n will imply that a sample mean (ie a scaled sum of random variables) converges on Gaussianity via the Central Limit Theorem (CLT) (easy to prove using characteristic functions). You say this like it is a bad thing??? Sure, you can make a different assumption regarding the errors (like non-zero mean and dependence), but why? Such an assumption would make no sense in this context. Why would the measurement error in a test be biased in one particular direction, or correlated across different people doing the test?
"Increasing sample size increases your real confidence only to the point where your error ceases to be dominated by statistical fluctuations and becomes dominated by systematics." --> I have no idea what this sentence means. If you could phrase this using mathematics that would probably help. Do you mean that the error term will become dominated by it's law of large number properties? If so then that is exactly the point of my argument. If it is zero mean, then it will be averaged out with large numbers. I'm really taking a stab in the dark here about what you mean.

If the measurement error of the test has an expected value of zero, finite variance and is uncorrelated with the "true" IQ, surely it can be averaged out with large numbers. To avoid confusion, perhaps it would help to phrase the point mathematically:
Let X_n denote the IQ of a randomly selected individual. The individual sits a test which measures IQ with some error, resulting in IQ estimate Y_n = X_n + e_n.
Assume E(e_n) = 0, cov(e_n, X_n) = 0, V(e_n) = s^2, V(X_n) = v^2, with s^2 and v^2 both finite.
The sample mean of N independent individual test scores is:
\bar(Y) = N^(-1) \sum_n^N Y_n (I'm using LaTeX here, so \sum_n^N should be read sum over n = 1 to N)
Thus V(\bar(Y)) = N^(-2) \sum_n^N V(Y_n) = N^(-2) \sum_n [V(X_n) + V(e_n)] = N^(-2) * N * (v^2 + s^2) = N^(-1) (v^2 + s^2)
So lim_{N \rightarrow \infty} V(\bar(Y)) = 0.
The implication of this is that for large N, E(Y_n) can be estimated with increasing accuracy. And by construction, E(Y_n) = E(X_n), since E(e_n) = 0 (by assumption). It seems to me then, that the expected value of the true IQ (if there is such a thing - different argument altogether) can be estimated with arbitrary accuracy as N increases. The point is that by assuming the measurement error has expected value of zero (and is not correlated with true IQ) we're able to average it out with large N. Two different populations (differing by, say, the presence or absence of a gene) can then be tested using a Chow test for the difference of two means, where confidence will be increasing in N.
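The derivation above is easy to check numerically (symbols as defined there: X_n is the true IQ with variance v^2, e_n the zero-mean error with variance s^2; the specific parameter values below are just illustrative).

```python
# Numerical check that V(Ybar) = (v^2 + s^2)/N, as derived above.
import random

random.seed(3)
v, s = 15.0, 5.0       # sd of true IQ and of measurement error (illustrative)
N, reps = 400, 4_000   # sample size per mean, number of simulated means

def ybar():
    return sum(random.gauss(100, v) + random.gauss(0, s) for _ in range(N)) / N

means = [ybar() for _ in range(reps)]
mu = sum(means) / reps
var = sum((m - mu) ** 2 for m in means) / reps

print(f"simulated V(Ybar):  {var:.3f}")
print(f"theory (v^2+s^2)/N: {(v**2 + s**2) / N:.3f}")  # 0.625
```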
I'm not trolling here, I'm genuinely curious as to what you think is incorrect in the above working. Also, regarding your point further down about measuring the width of an atom with a 1mm ruler, the same argument applies. If the measurement error has an expected value of zero, then given enough measurements we could home in on the width of the atom: almost all our measurements will be zero, except for the very rare 1, and when we divide the sum of all these zeros and ones by the number of measurements (given appropriate assumptions regarding the measurement error, which are clearly ridiculous in this example, admittedly), the sample mean will converge on the width of the atom.
None of this is "shit statistics". It is the law of large numbers (Lindeberg variant given the independence assumption I've made), and it is kind of a big deal.
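The ruler-and-atom example is also easy to simulate, under the (admittedly ridiculous) measurement model described above: each reading is 0 mm except for a rare 1 mm, with the 1s occurring at a rate equal to the true width. The true width used below is a made-up number.

```python
# Sample mean of many 0-or-1 mm readings homes in on the true width.
import random

random.seed(4)
true_width = 3e-4   # hypothetical width in mm
n = 2_000_000       # number of measurements

ones = sum(1 for _ in range(n) if random.random() < true_width)
estimate = ones / n
print(f"estimated width: {estimate:.6f} mm")  # close to 0.0003
```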

Well, not completely eradicated I hope! See eg, http://science.slashdot.org/story/11/08/11/1458205/cancer-cured-by-hiv

Plenty of people play a lottery, even knowing they can expect a loss (my mum is a maths teacher, yet still enjoys playing the lottery on occasion). Expectation is not always the best metric to use when thinking about games of chance (for example, see the St. Petersburg Paradox: http://en.wikipedia.org/wiki/St._Petersburg_paradox).
I'm perhaps playing devil's advocate a bit here, but consider a person who earns a low wage and has no possibility of ever increasing this wage except by winning a low probability game of chance that costs a very small nominal amount to play and entails an expected loss with each play (ie a lottery). If the cost of play is a small proportion of weekly earnings, it seems reasonable (to me at least) that from a utility perspective, the person would choose to play the lottery. The key point is that if the payout is large enough, then the person will stop playing if they win (hopefully), so the concept of expectation (ie, number of plays going to infinity) is not necessarily the right metric.
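For anyone who hasn't met the St. Petersburg game mentioned above: a fair coin is flipped until the first head, and if that happens on flip k the payout is 2^k. Each term of the expectation sum contributes exactly $1, so the expected payout is unbounded, yet nobody would pay a fortune to play - a tidy demonstration of why expectation can mislead.

```python
# St. Petersburg game: P(first head on flip k) * payout = (1/2^k) * 2^k = 1,
# so the expected payout grows linearly in the number of flips allowed.

def expected_payout(max_flips):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

print(expected_payout(10))   # 10.0
print(expected_payout(100))  # 100.0 -- grows without bound as max_flips grows
```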

Actually, "other countries" could easily solve the patent problem in the US by banding together and refusing to adhere to the Agreement on Trade-Related Aspects of Intellectual Property Rights (see http://en.wikipedia.org/wiki/Agreement_on_Trade-Related_Aspects_of_Intellectual_Property_Rights). The US would be forced to overhaul the system, or else watch half their economy relocate overseas.

From the article, it looks like the "Other" category accounts for almost half the downloads. I'm a bit dubious about any conclusions drawn from a dataset with this many unidentified points.

Neutrinos are into physicists.