Comment Re:Overly broad? (Score 1) 422
The p-value does not measure the confidence people have that the hypothesis is true. The p-value is the probability of obtaining results at least as extreme as the observed ones purely by random chance, assuming the hypothesis is actually false. This means that every sane person should consider the probability of the hypothesis being false to be higher than the p-value, sometimes several times higher, depending on the circumstances.
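To see why the probability of a false hypothesis can exceed the p-value, here is a rough back-of-the-envelope calculation. All three numbers (base rate of true hypotheses, statistical power, significance threshold) are illustrative assumptions, not figures from the post:

```python
# Hypothetical numbers for illustration only.
base_rate = 0.10   # assumed fraction of tested hypotheses that are actually true
power = 0.80       # assumed P(significant result | hypothesis true)
alpha = 0.05       # significance threshold: P(significant result | hypothesis false)

true_positives = base_rate * power          # 0.08 of all tests
false_positives = (1 - base_rate) * alpha   # 0.045 of all tests

# Among results that cleared the 5% bar, what fraction are actually false?
p_false_given_significant = false_positives / (true_positives + false_positives)
print(f"P(hypothesis false | p < 0.05) = {p_false_given_significant:.2f}")  # 0.36
```

So under these assumed numbers, a "significant" result is false about a third of the time, far above the 5% the p-value naively suggests.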
For example, if someone tested 20 hypotheses on the same dataset and reported only the one with a p-value of 4%, it is almost certain that the result is meaningless, as per the linked comic. When you factor in the bias toward reporting interesting experiments with statistically significant results, on the part of both the researcher and the publishing journal, some would consider it generous to place even 20% confidence in an "interesting" result with a 4% p-value.
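The 20-hypotheses scenario is easy to check numerically. Even when every one of the 20 null hypotheses is true (so every "discovery" is spurious), a sub-5% p-value somewhere in the batch is more likely than not:

```python
import random

# Analytic: chance that at least one of 20 independent tests of a true null
# comes in under p = 0.05, since p-values are uniform under the null.
p_any = 1 - (1 - 0.05) ** 20
print(f"Analytic chance of a spurious 'hit': {p_any:.2f}")  # 0.64

# Quick simulation of the same thing for confirmation.
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))
    for _ in range(trials)
)
print(f"Simulated chance of a spurious 'hit': {hits / trials:.2f}")
```

A roughly 64% chance of a bogus significant result is exactly why reporting only the one test that "worked" is meaningless.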
A p-value threshold of 5% is the most lenient allowed in most scientific fields, and several require an even smaller one. The only reason p-values that large are allowed at all is the economics (and sometimes ethics) of performing experiments. E.g., it is often more worthwhile to do 10 studies with sample size 100, and then 1 study with sample size 1000 to verify the most interesting result, than to do a single study with sample size 2000.
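The screen-then-confirm economics can be sketched with a standard power calculation. The setup below (a one-sided z-test for a mean shift of 0.2 standard deviations, normal approximation) is an assumed example, not anything from the post:

```python
from math import erf, sqrt

def power_one_sided_z(effect, n):
    """Approximate power of a one-sided z-test at alpha = 0.05 for a mean
    shift of `effect` standard deviations with sample size `n`."""
    z_alpha = 1.645  # one-sided 5% critical value of the standard normal
    z = effect * sqrt(n) - z_alpha
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

for n in (100, 1000, 2000):
    print(f"n = {n:4d}: power ≈ {power_one_sided_z(0.2, n):.2f}")
```

Under these assumptions, a cheap n=100 pilot already has decent power (~0.64) to flag a real effect for follow-up, while the n=1000 confirmation is essentially certain to detect it, so ten pilots plus one confirmation (2000 samples total) can screen ten ideas for the price of one big study.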