Anyone seriously citing the Seralini et al. study immediately loses all credibility in my eyes.
That article was retracted chiefly because many professional statisticians (I am one) pointed out that, from the point of view of basic statistical methodology, the study was a complete joke. In no significant way did it establish any correlation between GMOs and rat tumors (which is not to say such a correlation can't exist; just that the data collected in this particular study does not prove anything).
It is laughable that the piece you link to suggests a big conspiracy because the paper was retracted despite its original publication having undergone "rigorous peer review". The fact of the matter is that peer review can fail big time (given the number of submitted scientific papers, that is hardly a surprise), and journals should definitely retract papers when it turns out after publication that they are a methodological disgrace.
Exposing questionable scientific practices, undisclosed conflicts of interest, and biased studies, and questioning established truths -- I am all in favor of it. But using bogus (and in this case sensationalist) studies to do so is self-defeating. Bad science should be countered by good science, not by wishful thinking and vague conspiracy theories.
I know this comment comes way too late for anybody to read it... Anyway, I still feel it is relevant to point out, given the context, that in France, legislation making browsing a "terrorist website" a felony (punishable by 2 years in prison and/or a 30,000 EUR fine) already exists (as of this past June 4).
It is telling (of the depths to which Europe is de facto sinking politically) that France, with a "socialist" government, is actually already ahead of Newt Gingrich.
Last time I looked, there was no application of ANNs that couldn't be solved more efficiently by other algorithms.
Is it possible that the last time you checked was a long time ago? Deep neural networks are all the rage again now (e.g. huge teams are working with them at Facebook and Google) because they now outperform alternative approaches on tasks such as speech and image recognition.
I hold a faculty position in statistics (that's for the AC above who called me a "passer-by sitting at home in their boxers munching on Hot Pockets", so I guess I have to pull credentials, though in his defense my post sounded more dismissive than I'd intended).
Yes, the p-value threshold of 0.05 is considered "standard" in many applied sciences, in particular medicine. It is convenient for many of the reasons that were outlined by other posters (cost, number of persons required for an experiment, ethics). That does not mean it is intellectually satisfactory. The joke among statisticians is that this value was introduced about 100 years ago by R.A. Fisher (one of the founding fathers of statistics), who once wrote something akin to "if we decide on a value of alpha such that the probability of falsely claiming a discovery when the null hypothesis holds seems reasonably low, say for instance alpha=5%...", and this has somehow been taken as gospel ever since.
The truth is, this threshold of 5% is now considered very lax by modern statisticians, essentially because of the very large number of published papers reporting significant values compared to Fisher's time. The posts of penguinoid and ras above explained it very professionally; one can also refer to Ioannidis's "Why Most Published Research Findings Are False" (note: this was published in PLOS Medicine, hardly an obscure journal).
In conclusion, my post was certainly not a defense of soda pop (there is already sufficient evidence that it is extremely damaging to your health, for very clearly identified reasons), but a reminder that the specific results of this study (the effect on telomeres), though certainly not to be dismissed, should not be considered established truth at this point; rather, they point in a direction which should be investigated further for confirmation. That, by the way, is the actual meaning of "being skeptical"; unfortunately, this tends to be conflated with "being in obtuse denial" nowadays.
For all I know they might have been looking at a lot of different nutrition factors and only reported those which appeared significant after the experiment (obligatory xkcd reference: http://xkcd.com/882/ )
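The multiple-comparisons worry above is easy to quantify. Here is a back-of-the-envelope sketch (my own illustration, not from the study): if you test m independent factors that all have no real effect, each at alpha = 0.05, the chance of at least one "significant" finding grows quickly with m.

```python
import random

alpha = 0.05
m = 20  # e.g. 20 different nutrition factors, all with no true effect

# Analytic: probability that at least one of m independent null tests
# comes out "significant" at level alpha.
p_at_least_one = 1 - (1 - alpha) ** m
print(f"P(at least one false positive) = {p_at_least_one:.3f}")  # ~0.642

# Monte Carlo check: under the null, a p-value is uniform on [0, 1],
# so draw m of them and see how often at least one falls below alpha.
random.seed(0)
runs = 100_000
hits = sum(
    any(random.random() < alpha for _ in range(m))
    for _ in range(runs)
)
print(f"simulated: {hits / runs:.3f}")  # close to the analytic value
```

So with 20 factors examined, a study that reports only the "significant" ones has roughly a 64% chance of reporting at least one pure fluke, exactly the jelly-bean scenario in the xkcd strip.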
I think everyone but the media recognizes at this point that the Quinn scandal is about corruption in journalism.
Except it's not. If it were, where is the outrage on an even remotely comparable scale concerning the truly powerful forces of corruption in this industry -- financial pressure being exerted by publishers on gaming outlets, the flow of free perks being offered to journalists, secretly sponsored Let's Play videos, etc.? Even if it were actually true that Quinn's admittedly particularly shitty personal behavior was fueled only by a desire for personal gain and media exposure (for which the evidence is nonexistent, if you ask me), how would that even compare, in terms of leverage gained and scale, to the corrupting power of money, which is pervasive in this industry?
It's not about the corruption. It's about the sex, it's about the hate of anything that says "feminism", it's about the desperate quest to find a negative poster child justifying that hate towards anyone else expressing a related opinion. Using the fight against corruption as a justification is a total delusion, yet one in which all the haters have to believe, for otherwise they could not stand to face their own cognitive dissonance.
You can solve this using Excel, but a dedicated app to track the scenario mentioned in the original piece could be very useful to some.
As a matter of fact, it already exists: http://www.kittysplit.com/ This is a free webapp developed by some people I know. Also, probably prior art or something.
So only 1/3 of fourth graders were able to devise an experimental setup that finds the best fertilizer level out of nine when you are only allowed to try six of them.
The correct strategy proceeds in two steps: first try spaced-out levels, e.g. 2-4-6-8, then "refine" with the two remaining tries around the approximate maximum. This requires implicitly/intuitively modeling plant growth as a unimodal (increasing, then decreasing) function of the fertilizer level, thinking ahead under the limited-tries constraint, and mentally planning for the different outcomes of the first step.
I'd go against the flow and say that 33% of fourth graders solving an assignment of this difficulty is pretty darn awesome.
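The two-step strategy above can be sketched in a few lines (the level numbering and the growth function here are made up for illustration): with nine fertilizer levels but only six tries, probe the spaced-out levels first, then spend the remaining tries on the neighbors of the coarse optimum.

```python
def find_best_level(growth, levels=range(1, 10), budget=6):
    """growth(level) -> plant growth, assumed unimodal in the level."""
    tried = {}

    # Step 1: coarse pass over spaced-out levels (4 tries).
    for lvl in (2, 4, 6, 8):
        tried[lvl] = growth(lvl)
    best = max(tried, key=tried.get)

    # Step 2: spend the remaining tries on the untried neighbors
    # of the coarse optimum.
    remaining = budget - len(tried)
    for lvl in (best - 1, best + 1):
        if remaining == 0:
            break
        if lvl in levels and lvl not in tried:
            tried[lvl] = growth(lvl)
            remaining -= 1

    return max(tried, key=tried.get)

# Example with a made-up unimodal growth curve peaking at level 7:
print(find_best_level(lambda x: -(x - 7) ** 2))  # -> 7
```

Because the growth curve is unimodal, the coarse pass always lands within one level of the true peak, so the refinement step is guaranteed to find it within the six-try budget, whichever of the nine levels it happens to be.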
There is a difference between randomly sprinkling a paper with references in a superficial effort to make it look "serious" and conform to the usual academic mold, and actively researching, citing, and discussing earlier relevant references in comparison with your own work in a balanced way. The latter is how good-quality academic writing should be done. The former tends to give rise to papers with pointless laundry lists of citations. I hope your friends were suggesting the latter. Even if they were not able to point to specific references because they are not specialists in the issue you are addressing, they probably know from experience that it is quite unusual for no previous relevant references to exist on a given academic topic. The fact is, nothing annoys a reviewer or an editor more than someone reinventing the wheel and giving the impression of ignoring previous work out of intellectual laziness.
Where there is a clear problem is when an editor or reviewer imposes an obviously irrelevant citation for self-serving reasons.
We will have solar energy as soon as the utility companies solve one technical problem -- how to run a sunbeam through a meter.