Probably because you're not willing to lie to get what you want. And you're likely not hot.
Usually it's called reckless endangerment, or perhaps criminal negligence. If you take an action that you know will cause death, it usually qualifies as murder though, whether it's telling a lie or pulling a trigger.
If you think there are nasty surprises in human testing, you should see what happens in the preclinical animal testing. Nothing gets put into a human until we're as sure as we can be that it works as desired and isn't harmful in animals.
Even if it were visible (it's not) there's no way you can see or be affected by an LED modulated at 100 MHz. It would look around half as bright as a regular light of the same size though (and use half the power).
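The brightness point is just duty-cycle arithmetic. A back-of-the-envelope sketch, assuming simple 50% on-off keying (the power figure is made up for illustration):

```python
# Average output of an LED under 50% duty-cycle on-off keying.
# peak_power_w is a hypothetical drive level, not from the article.
peak_power_w = 1.0   # power while the LED is "on"
duty_cycle = 0.5     # on half the time, off half the time

avg_power_w = peak_power_w * duty_cycle   # 0.5 W

# The eye averages over the ~10 ns switching period, so the LED just
# looks about half as bright as the same LED driven continuously.
print(avg_power_w)
```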
The point is that it's line of sight. You have to be directly under the emitter. So, for example, you could stick one over every chair in the airport. Everybody gets their own bandwidth, no interference.
The trick in the article seems to be a system where you can switch between the optical units and regular wifi if you lose contact.
Nah, he was only wounded by astronomy. Some dude with a pencil and paper who signed onto a navy ship as a "naturalist" did the actual deed.
Remember how when Steve Jobs stood up and said "hey, we've got an awesome solution! Webapps!" everyone said it would suck and it did? Remember when Google said the same thing?
I assume you're getting at multiple comparisons because you said "he measures many things."
You're right, the researcher should correct his p-value for the multiple comparisons. Unfortunately, alternatives to p-values ALSO give misleading results if not corrected and, in general, are more difficult to correct quantitatively.
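The simplest such correction is Bonferroni: divide your significance threshold by the number of tests. A minimal sketch, with made-up p-values from five hypothetical tests:

```python
# Hypothetical p-values from m = 5 comparisons (illustrative numbers)
p_values = [0.003, 0.020, 0.040, 0.300, 0.800]
alpha = 0.05
m = len(p_values)

# Bonferroni: a result counts as significant only if p < alpha / m
threshold = alpha / m                                # 0.01
significant = [p for p in p_values if p < threshold]

# Only 0.003 survives; 0.020 and 0.040 would have passed uncorrected.
print(significant)
```

Bonferroni is conservative; less blunt procedures (Holm, Benjamini-Hochberg) exist, but the quantitative fix is equally mechanical for p-values, which is the point above.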
A 95% confidence interval (roughly*) is an interval containing 95% of the probability. The p-value indicates how much probability lies beyond a cutoff. What most people do with a 95% CI is look to see whether it overlaps the null value (zero, or the mean of the other group, for example). The p-value gives the same information, but quantitatively.
* yes, Bayesians, technically the 95% credible interval, from a Bayesian analysis, contains 95% of the probability. The confidence interval, technically, isn't quite the same thing. Practically, in the vast majority of cases, the two are either mathematically equivalent or agree to many decimal places.
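The CI/p-value duality is easy to check: a 95% CI excludes zero exactly when the two-sided p-value is below 0.05. A sketch using a one-sample z-test with made-up summary statistics (stdlib only, normal approximation):

```python
import math

# Made-up summary statistics: an estimate and its standard error
mean, se = 1.2, 0.5

# Two-sided p-value for the null hypothesis that the true value is 0
z = mean / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 95% confidence interval: estimate +/- 1.96 standard errors
lo, hi = mean - 1.96 * se, mean + 1.96 * se
excludes_zero = lo > 0 or hi < 0

# Same decision either way: p < 0.05 iff the CI excludes zero
print(p < 0.05, excludes_zero)
```

Moving `mean` closer to zero flips both answers together; the two presentations never disagree at the matching threshold.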
You know that in a lot of statistical testing the null hypothesis is the output of a theory, right? Just because you didn't ever advance beyond the most basic t-test doesn't mean nobody else did.
Actually, no. TFA doesn't like Bayesian techniques either. They want to use purely descriptive statistics.
So basically, they're replacing something that a lot of people misinterpret with something else that essentially cannot be interpreted properly due to lack of information.
There really aren't any good ways to measure those other effects. If you knew how your experiment was biased, you'd try and fix it.
Criticisms of p-values usually fall into two groups. Some people believe that p-values are bad because some people interpret them as the false positive rate. Personally, I think that's a problem with some people, and not with p-values.

The other criticism, which is particularly prevalent in the social sciences, epidemiology and some of the squishier medical-type areas, is that if you get a non-significant p-value you discard potentially useful results. The usual proposal (which is probably the situation in this case) is to use confidence intervals. That way you can see all the area where your confidence interval is not overlapping zero!

I have two objections to that. First, CIs are simply calculated from p-values and vice versa - they're really the same thing presented differently. Second, the reason you discard your result (or save it for a meta-analysis) if you get an insignificant p-value is that your data has been ruled insufficient evidence. Looking at CIs and marvelling at all the potentially meaningful area between them is just softening the p < 0.05 rule of thumb.

Incidentally, the false-positive-rate people suggest doing the opposite - using p < 0.01 or p < 0.001 as the threshold for significance.
That's a good way of putting it. It might actually be a decent tool for the cops to use. The difference being that the police and courts are (supposed to be) knowledgeable about the law, trained in its enforcement, and accountable for their actions. The university offices in charge of these things, not so much.
On the other hand, if you're not even willing to walk down to the campus police station and file a report, any prosecution probably isn't going to go very far anyway.
"The reason Boeing went for this was to reduce weight, power consumption and complexity."
No, it's not. They most certainly are not running the entertainment system on the same wires as the avionics. The avionics system is a real-time network that is different at a very low level. The FAA exception allowed Boeing to connect the two networks at a single point, using a "network extension device."