At least in bioinformatics, correcting p-values for multiple comparisons ("q-values") has been standard practice for quite a while now.
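For anyone who hasn't run into it, here's a minimal sketch of the Benjamini-Hochberg step-up procedure behind those q-values (the p-values below are made up for illustration):

    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.05):
        # Step-up BH: find the largest k with p_(k) <= alpha * k / m,
        # then reject the k smallest p-values.
        pvals = np.asarray(pvals)
        m = len(pvals)
        order = np.argsort(pvals)                 # sort p-values ascending
        thresholds = alpha * np.arange(1, m + 1) / m
        below = pvals[order] <= thresholds
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.where(below)[0].max()
            reject[order[:k + 1]] = True
        return reject

    # Made-up p-values from, say, 8 differential-expression tests:
    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
    print(benjamini_hochberg(pvals))  # only the first two survive at FDR 0.05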
But then your beta error goes through the roof and you won't find anything. Wouldn't it be far more efficient to repeat the significant experiments?
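To put a rough number on that beta-error inflation, assume one-sided z-tests tuned for 80% power at alpha = 0.05 before correction, and apply the blunt Bonferroni adjustment (just an illustration; FDR methods are less brutal):

    from scipy.stats import norm

    alpha, target_power = 0.05, 0.80
    # Effect size (in standard-error units) giving 80% power
    # for a single uncorrected one-sided z-test:
    delta = norm.ppf(1 - alpha) + norm.ppf(target_power)
    for m in (1, 10, 100, 1000):                # number of tests corrected for
        z_crit = norm.ppf(1 - alpha / m)        # Bonferroni-adjusted critical value
        power = 1 - norm.cdf(z_crit - delta)    # power for the same true effect
        print(f"m = {m:4d}   power = {power:.2f}   beta = {1 - power:.2f}")

With these assumed numbers, power for the same effect falls from 0.80 with one test to under 0.10 with a thousand.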
"A p-value of 0.05 means there's a 5% chance that your paper is wrong. In other words, 1 in 20 papers is bullshit."
This is complete bullshit. If you study something where H1 is true, then there is a 0% chance of being wrong when you report significant findings.
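To see why: P(H0 | significant) depends on how often the hypotheses you test are actually true, not just on the 0.05 cutoff. A quick simulation with assumed numbers (80% power, varying fraction of true H1s):

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, power, n = 0.05, 0.80, 100_000
    for frac_h1 in (1.0, 0.5, 0.1):            # assumed fraction of studies where H1 is true
        h1 = rng.random(n) < frac_h1
        # True effects reach significance at rate `power`, nulls at rate `alpha`:
        sig = np.where(h1, rng.random(n) < power, rng.random(n) < alpha)
        print(f"P(H1) = {frac_h1:.1f}  ->  P(H0 | significant) = {np.mean(~h1[sig]):.2f}")

If every tested H1 is true, none of the significant findings are wrong; the fewer true H1s you test, the more of your "discoveries" are noise.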
To me, the battle doesn't even look cool. The ships are all mashed on top of one another, pointing in random directions, and it's almost impossible for an observer to see what's actually going on.
As beings raised on a mostly 2-dimensional plane, it's natural for a truly 3-dimensional, no-gravity-bias, large-scale interaction to bewilder us. I think this might be one of the things EVE got right.
But space is an incredibly boring tactical 3D environment.
Probably won't be able to disable SecureBoot. That's what makes it better!
If it is an x86 machine certified for Windows 8, then it MUST be possible to disable SecureBoot. But you probably already knew that.
It's not Linux's fault that the developers of Final Cut Pro and Lightroom specifically chose *not* to support Linux. Nor is it Linux's fault that Apple and Adobe keep their programs' source code secret, so nobody else can compile them for any operating system beyond the ones those two companies choose to target themselves.
Why would I care whose "fault" it is?
It should be P(H0|significant) != 1 - P(significant|H0)
If you have a sample without real differences, you have P(H0|significant) = 1, which is the statistic used in the article.
This would result in:
"Yes, I agree. If a p-value of 0.05 actually "means" 1 when evaluated, then any sane frequentist will tell you that things are fucked, since the limiting probability does not match the nominal probability (this is the definition of frequentism)."
Johnson found that a P value of 0.05 or less — commonly considered evidence in support of a hypothesis in many fields including social science — still meant that as many as 17–25% of such findings are probably false (PDF).
Found? Was he unaware that using a threshold of 0.05 means a 20% probability that a finding is a chance result, by definition?
More interesting, IMO, is that statistical significance doesn't tell you what the scale of an effect is. There can be a trivial difference between A and B even if the difference is statistically significant. People publish it anyway.
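Quick made-up illustration: with a million samples per group, even a ~0.02-standard-deviation difference, which nobody would care about in practice, comes out wildly significant:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    n = 1_000_000
    a = rng.normal(0.00, 1.0, size=n)   # group A
    b = rng.normal(0.02, 1.0, size=n)   # group B: a tiny true shift
    stat, p = ttest_ind(a, b)
    print(f"difference = {b.mean() - a.mean():.3f} SD, p = {p:.1e}")  # trivial effect, minuscule p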
Of course it was found. The 20% is not by definition but a function of the percentage of studies done on correct vs. incorrect H1s. You could have 0% if you only did studies on correct H1s.
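To make that concrete with Bayes' rule: P(H0 | significant) = (1 - P(H1)) * alpha / ((1 - P(H1)) * alpha + P(H1) * power). Plugging in assumed numbers (alpha = 0.05, 80% power), roughly 20% true H1s lands you in the article's 17-25% ballpark; this is illustrative arithmetic only, not Johnson's actual calculation:

    alpha, power = 0.05, 0.80   # assumed threshold and power
    for p_h1 in (1.0, 0.5, 0.2, 0.1):
        # Bayes' rule: false positives / all significant results
        fdr = (1 - p_h1) * alpha / ((1 - p_h1) * alpha + p_h1 * power)
        print(f"P(H1) = {p_h1:.1f}  ->  P(H0 | significant) = {fdr:.2f}")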