I am a biostatistician.
I have not read the details of this study, but consider the following example, with R code included so you can replicate it. In this hypothetical study, 6 subjects are randomly assigned to treatment and 6 to placebo. All 6 in the treatment arm are cured of blindness; none of the 6 in the placebo arm are. Fisher's exact test, which is a *conservative* test (i.e., its actual size is below the nominal alpha level), yields a p-value of ~0.002, a highly significant finding. Granted, N = 12, not 6, in my example, but only 6 were given treatment.
Your claims about this not demonstrating safety are valid, as this study was not powered to detect safety issues. But a follow-up study surely will be.
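To give a rough sense of why safety takes far bigger numbers, here is a sketch using the standard "rule of three": if an adverse event occurs with per-subject probability p, you need roughly 3/p subjects for a ~95% chance of seeing it even once. The 1% rate below is a hypothetical number for illustration, not anything from the study.

```r
# Rule-of-three sketch: chance of observing ZERO adverse events in n subjects
# when the true per-subject event rate is p (p = 0.01 is hypothetical).
p <- 0.01                    # hypothetical 1-in-100 adverse-event rate
n <- 6                       # subjects actually treated in my example
(1 - p)^n                    # ~0.94: a 1% harm would most likely go unseen
n_needed <- ceiling(3 / p)   # rule of three: ~300 subjects
(1 - p)^n_needed             # ~0.049: now a 1% harm is seen with ~95% probability
```

So a 6-subject arm can nail down a dramatic benefit while saying almost nothing about a 1-in-100 harm, which is exactly why the safety follow-up needs to be much larger.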
My point is that you don't need a large sample size to prove something causes an event when the odds of the event happening spontaneously are practically nil, as my example shows. And as another poster pointed out, this is how medical research progresses, and your screaming on Slashdot about what every third-rate scientist in the medical profession already knows is pointless.
R code:
# Six treated subjects, all cured; six placebo subjects, none cured
trial <- data.frame(trt = rep(c("Treatment", "Placebo"), each = 6),
                    out = rep(c("Cured", "Not Cured"), each = 6))
tbl <- table(trial$trt, trial$out)  # 2x2 table of arm vs. outcome
fisher.test(tbl)
Output:

        Fisher's Exact Test for Count Data

data:  tbl
p-value = 0.002165
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.0000000 0.2837803
sample estimates:
odds ratio
         0
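You can check that p-value by hand. Under the null, the 6 cures are a random draw of 6 of the 12 subjects, so each arrangement has probability 1/choose(12, 6); only one arrangement puts all 6 cures in the treatment arm, and the two-sided test adds the equally extreme mirror-image table:

```r
# Hand calculation behind fisher.test's p-value for the 6-vs-6 perfect split.
n_arrangements <- choose(12, 6)   # 924 equally likely assignments under the null
p_one_sided <- 1 / n_arrangements # all 6 cures land in the treatment arm
p_two_sided <- 2 * p_one_sided    # add the mirror-image table, equally extreme
p_two_sided                       # ~0.002165, matching fisher.test above
```

Equivalently, dhyper(6, 6, 6, 6) gives the same 1/924 for the one-sided tail.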