There are, however, many quality degree programs in Statistics. As someone who went through one of them, I can say you can largely choose your own mix of theory and practice. I wonder if this isn't just statistics rebranded? I hope it doesn't concentrate too heavily on particular proprietary software packages. Statistics is like anything else: you can easily produce a bunch of numbers and compile massive books of tables and graphics, but if you don't know the assumptions behind each of your methods, and consequently their shortcomings in each situation, you can draw some fairly bad conclusions rather quickly. I just hope this program gives a solid background in theoretical statistical inference, experimental design, and regression analysis, so students understand the 'why'.
I must say I was literally lol'ing the whole time at this post too.
In my experience, what you describe, accounting, is a separate discipline under the Business school, whereas Economics is usually its own department. I imagine an Econ major could get through without taking a single accounting class, if they chose not to.
I am a biostatistician.
I have not read the details of this study, but consider the following example, with included R code so you can replicate it. It is a hypothetical study where 6 subjects are randomly assigned to receive treatment, and 6 subjects are randomly given placebo. All 6 in the treatment arm are cured of blindness. None of the 6 in the placebo arm are. Fisher's exact test, which is a *conservative* test (i.e., its actual size is lower than the nominal alpha level), yields a p-value of ~0.002, a highly significant finding. Granted, N = 12, not 6, in my study, but only 6 were given treatment.
Your claims about this not demonstrating safety are valid, as this study was not powered to detect safety issues. But a follow-up study surely will be.
My point is that you don't need a large sample size to demonstrate that something causes an event when the odds of the event happening spontaneously are practically nil, as my example shows. And as another poster pointed out, this is how medical research progresses; screaming on Slashdot about what every third-rate scientist in the medical profession already knows is pointless.
R code:
trial = data.frame(trt = rep(c("Treatment", "Placebo"), each = 6),
                   out = rep(c("Cured", "Not Cured"), each = 6))
tbl = table(trial$trt, trial$out)
fisher.test(tbl)
Fisher's Exact Test for Count Data
data: tbl
p-value = 0.002165
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.0000000 0.2837803
sample estimates:
odds ratio
0
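For anyone who wants to see where that p-value comes from without trusting fisher.test as a black box, here is a sketch of the same calculation done from first principles (my own addition, not part of the trial write-up above). Conditioning on the margins (6 treated, 6 placebo; 6 cured, 6 not cured), there are choose(12, 6) = 924 equally likely ways to pick which 6 subjects end up cured under the null, and only 2 of them are as extreme as the observed perfect split (all treated cured, or all placebo cured):

```r
# Number of equally likely assignments of the 6 "Cured" outcomes
# to the 12 subjects, conditional on the table margins.
n_tables <- choose(12, 6)   # 924

# Only 2 assignments are as extreme as what we observed:
# all 6 cures in the treatment arm, or all 6 in the placebo arm.
p_two_sided <- 2 / n_tables

p_two_sided   # agrees with fisher.test's p-value of 0.002165
```

This is why the tiny sample still gives a decisive result: a perfect 6/6 split is simply too improbable under "the treatment does nothing."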
One of the chief duties of the mathematician in acting as an advisor... is to discourage... from expecting too much from mathematics. -- N. Wiener