Comment Ioannidis gives the formula for the probability a result is correct, the PPV (Score 1) 137

The exact probability that a field's (e.g., a journal's) published finding is true is given in John Ioannidis's PLoS Medicine article "Why Most Published Research Findings Are False". He lets R be the ratio of true relationships to no relationships among those tested in a field. It's equivalent to a background probability (a prior, though perhaps unknown). The positive predictive value (PPV), a probability, is PPV = (1 - beta) * R / (R - beta * R + alpha). A coarse bound for this, when alpha = 0.05, is PPV < 20 * R. This bound becomes useful when it's less than 1, i.e., when R < 0.05. R is small in, say, cancer research: when at most 30 genes out of 30,000 affect a given cancer, R = 30 / 30000 = 0.001, so PPV < 20 * R = 0.02. That is, in this genetic research, THE PROBABILITY A PUBLISHED PAPER DECLARING 0.05 SIGNIFICANCE IS CORRECT IS NOT 0.95 -- IT'S AT MOST 0.02! Some have decided that all their research is statistically significant; e.g., the journal Basic and Applied Social Psychology banned the p-value. In some research fields, the tested relationships are truly meaningful 25 percent of the time -- research there becomes a child's game unworthy of most researchers. But when the research is difficult, as when truly meaningful relationships occur 0.001 of the time, the p-value becomes a "deceiver of fools" (quote from the symphonic metal band Epica).
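A minimal Python sketch of this calculation (the function name ppv is mine; the numbers come from the cancer example above):

def ppv(R, alpha=0.05, power=1.0):
    # Ioannidis's formula: PPV = (1 - beta) R / (R - beta R + alpha),
    # written with power = 1 - beta, so (1 - beta) R = power * R.
    return power * R / (power * R + alpha)

R = 30 / 30000            # 30 relevant genes out of 30,000: R = 0.001
print(ppv(R))             # ~0.0196, i.e., about 0.02
print(20 * R)             # the coarse bound 20 * R = 0.02 (useful when R < 0.05)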

Comment ASA "Statement on p-Values" -- 2016 (Score 1) 331

The American Statistical Association Board of Directors published in 2016 the article "ASA Statement on Statistical Significance and P-Values". They said things like "P-values do not measure the probability that the studied hypothesis is true." Most people think a p-value means something different from the typical hypothesis test conclusion, which must be stated OBSCURELY as "In the long run of such data collection, when the null hypothesis is true, only 5 percent of resulting tests reject the null hypothesis as being unlikely".

The MOST IMPORTANT article on classical hypothesis testing is "Why Most Published Research Findings Are False" by John Ioannidis, at PlosMedicine.org, 2005: http://journals.plos.org/plosm... This Ioannidis paper gives a succinct formula for the probability a published relationship/effect is correct [not wrong], using the elsewhere-used statistical term Positive Predictive Value (PPV): "After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV."

PPV = (1 - beta) R / (R - beta * R + alpha)
    = 1 / [1 + alpha / ((1 - beta) R)]
   <= 1 / [1 + alpha / R], since 1 - beta is at most 1
    < R / alpha = 20 * R, since R / (R + alpha) < R / alpha, taking alpha = 0.05

where

alpha = .05 usually -- the probability of a Type I error
beta is the probability of a Type II error (1 - beta is the power)
R is the ratio of true relationships to no [false] relationships in that FIELD of tests

You can also call R the pre-study odds. Then R / (1 + R) is the pre-study probability the relationship is true. You can call this the "Background Probability" of a true relationship. You can see that the PPV is small when a field's true relationships are even moderately unlikely. Here's a table showing the maximum probability a published paper detects a true relationship (a short sketch after the table reproduces these numbers):

R       PPV maximum
-----   -----------
0.5     0.91
0.2     0.80
0.1     0.67
0.05    0.50
0.01    0.16
0.001   0.02
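As promised, a sketch reproducing the table: with alpha = 0.05 and best-case power 1, the maximum PPV is R / (R + alpha). (Printed to three decimals; the table above rounds.)

alpha = 0.05
for R in (0.5, 0.2, 0.1, 0.05, 0.01, 0.001):
    ppv_max = R / (R + alpha)     # PPV at power (1 - beta) = 1
    print(f"R = {R:<6} PPV max = {ppv_max:.3f}")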

Even when half a field's relationships are true, at most 91 percent of published results are true. When one-tenth of a field's relationships are true, at most 67 percent of published results are true. This is abysmal. Besides, why even investigate a topic where true relationships are common? Hypothesis testing then becomes a petty activity. What the statistician can't set, and what is never mentioned -- the Background Probability -- is most important in most research! "PPV depends a lot on the pre-study odds (R). Thus, research findings are more likely true in confirmatory designs ... than in hypothesis-generating experiments." The problem becomes obvious when research seeks, from 30,000 genes, the (at most 30) genes influencing a genetic disease, for which R = 30/30000 = 0.001, with a PPV of about 0.02!

When the Background Probability (so too R) is moderate, a design with moderate power (1 - beta) can get a good PPV. But research often works in a field of previously unseen results, or uses data mining software (a good generator of false results), where R does equal 0.01 or even 0.001. In these many fields, the Background Probability (so too R) swamps any statistical design's alpha and beta. "Most research findings are false for most research designs and for most fields... A PPV EXCEEDING 50% IS QUITE DIFFICULT TO GET." Indeed, a look at the PPV formula shows that, whatever alpha, even a power of 1 (a little thought reveals why more power hardly helps here) produces mostly false results if the pre-study odds R itself is less than alpha!

"Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between 'null fields' [fields with no true relationships], the fields that claim stronger effects ... are simply those that have sustained the worst biases." "This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and TOO HIGHLY SIGNIFICANT EFFECTS may actually be more likely to be SIGNS OF LARGE BIAS in most fields of modern research."

This article can solve the p-value problem by letting researchers continue standard hypothesis testing but with smaller alpha levels. Each journal could assign an appropriate alpha level for rejecting the null hypothesis -- a large alpha for the social sciences (say, 0.1) and a smaller alpha (say, 0.0001) for genetic research. The sketch below illustrates how such an alpha could be chosen.
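To make the journal idea concrete, here is a hedged sketch that inverts the PPV formula to find the largest alpha achieving a target post-study probability; the field R values and the power of 0.8 are illustrative assumptions, not numbers from Ioannidis:

def alpha_for_target(R, target_ppv=0.5, power=0.8):
    # PPV = power R / (power R + alpha)  =>  alpha = power R (1 - PPV) / PPV
    return power * R * (1 - target_ppv) / target_ppv

print(alpha_for_target(R=0.25))    # a social-science-like field: alpha ~ 0.2
print(alpha_for_target(R=0.001))   # a genetics-like field: alpha ~ 0.0008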

Comment Google and YouTube feed us ourselves -- tribalized (Score 2) 108

Some applications earn our admiration because they feed us our previous interests and predispositions. These reinforcements of ourselves corral us into like-minded groups, which then receive similar, non-conflicting information. We become tribalized. That can further understanding within a tribe like astronomy's. But in internet social arenas, we can become like Belfast, Northern Ireland, where Protestants go to Protestant schools, Catholics go to Catholic schools, and the two tribes learn to hate each other. Tribalizing science is efficient. Tribalizing politics, religion, and empathy is destructive.

Comment Subaru already has this in my car (Score 4, Interesting) 229

The 2016 Subaru Outback calls this EyeSight, with stereo cameras near the rear-view mirror. The Outback decelerates and eventually brakes to keep a fixed distance (I choose about 160 feet, but it's selectable) from any car ahead. When no car is ahead, the Outback accelerates back to the set speed, e.g., 60 mph. If I stray across the road lines, the car beeps and tugs back some. I presume other manufacturers do similarly -- the technology has arrived, not just Volvo.

Comment Sikh is NOT Muslim (Score 1) 954

The Sikhs geographically live at the interface of other religions in northwestern India, having moderated their lives for hundreds of years to avoid getting murdered. Social researchers observe that the above teacher and police reactions are like those of toddlers: when slightly stressed, they seek what comforts them in unrelated aspects of their lives -- stuffed animals for toddlers, bigotry for too many Texans. Man up.

Comment Probability a paper is correct in a "FIELD" = PPV (Score 1) 174

These results merely estimate the proportion of true results in the psychology FIELD, presuming the papers did good research.
See PLoS's most-viewed paper,
"Why Most Published Research Findings are False"
by John Ioannidis
August 30, 2005
at PlosMedicine.org

Ioannidis's paper derives the result that
"After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the Positive Predictive Value,"
PPV = (1 - beta) R / (R - beta * R + alpha)
    = 1 / [1 + alpha / ((1 - beta) R)]
where

alpha = .05 usually -- the probability of a Type I error
beta is the probability of a Type II error (1 - beta is the power)
R is the ratio of true relationships to no [false] relationships in that field

Here, for psychology, with alpha = 0.05,
PPV = 0.39 = 1 / [1 + .05 / ((1 - beta) R)]
so

R =
0.03 if 1-beta = 1 [the power for a very large sample]
0.06 if 1-beta = 0.5
0.16 if 1-beta = 0.2 [the power for a moderate sample].

That is, these psychology papers operate in a field with a true-to-false relationship ratio of around R = 0.16.
Germany's pharmaceutical company Bayer found only 30 percent (PPV = 0.30) of the pharmaceutical papers it checked verifiable, corresponding to R = 0.11 at that same moderate power. The sketch below checks these back-calculations.
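A quick sketch checking the numbers by inverting the PPV formula (the helper name implied_R is mine):

def implied_R(ppv, alpha=0.05, power=1.0):
    # PPV = power R / (power R + alpha)  =>  R = alpha PPV / (power (1 - PPV))
    return alpha * ppv / (power * (1 - ppv))

for power in (1.0, 0.5, 0.2):
    print(implied_R(0.39, power=power))   # psychology: 0.03, 0.06, 0.16
print(implied_R(0.30, power=0.2))         # Bayer: R = 0.11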

You can convert the odds R to
R / (1 + R),
the pre-study probability the relationship is true. Call this the "Background Probability" of a true relationship.

In the extreme though not uncommon genetics field, research seeks, from 30,000 genes, the (at most) 30 genes that influence a genetic disease, for which
R = 30/30000 = 0.001,
and at this small R, the PPV is at most about 0.02.

Don't lose track. There are three fractions mentioned here:
(1) R (the odds of true relationships to false relationships in the field, before any experiment)
(2) Background Probability = R / (1 + R)
(3) PPV (after an experiment and publication, the probability that a result declared significant is true)

While researchers/statisticians can set alpha = 0.05 and can design for power (1 - beta) = 0.80, these probabilities' meaning is clouded by their frequentist interpretation. What the statistician can't set, and what is never mentioned -- the Background Probability -- differs between research fields and is important in each one!

When the Background Probability is moderate, a design with moderate power (1 - beta) can get a good PPV. But research often works in a field of previously unseen results, or uses data mining software (a good generator of false results and tool of charlatans), where R does equal 0.01 or even 0.001. In these many fields, the Background Probability swamps any statistical design's alpha and beta. "Most research findings are false for most research designs and for most fields... a PPV exceeding 50% is quite difficult to get." Indeed, a look at the PPV formula shows that, whatever alpha, even a power of 1 (a little thought reveals why more power hardly helps here) produces mostly false results if the pre-study odds R itself is less than alpha! A small comparison below makes the point.
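A small comparison, assuming a hard field with R = 0.01 (below alpha = 0.05), shows why even perfect power hardly helps:

def ppv(R, alpha=0.05, power=1.0):
    return power * R / (power * R + alpha)

print(ppv(0.01, power=1.0))   # ~0.17: mostly false even at perfect power
print(ppv(0.01, power=0.8))   # ~0.14: the extra power bought almost nothing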

If R must be relatively large in a "field" for published results to represent true relationships, then a large proportion of the relationships considered in that field are true. Such a research field should be exceedingly boring. At the other extreme, in a "field" with relatively few true relationships, research produces mostly false conclusions. However, in followup studies of published results (e.g., pharmaceutical companies check results with further studies), R becomes large (note the conditioning). When you see that the probability published research represents a true relationship is smaller than the chance a fair coin flips heads, you quickly see the need for more followup research.

It is important to refine these ideas by bounding the term "field" -- not to all research, or even to all biological research, but maybe to research on cancer -- a careful choice of bounds. This is another case revealing the importance of conditioning, if not the Conditionality Principle itself. Here, the choice of "field" affects the Background Probability, equivalently R. Since each journal represents a "field", each journal could require its own level of evidence; e.g., genetics could require alpha = 0.001, and psychology could require alpha = 0.01. A short sketch below shows the effect.
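A sketch of the effect of such journal-specific alpha levels; the field R values (0.001 for genetics, 0.16 for psychology, from the estimates above) and the power of 0.8 are assumptions:

def ppv(R, alpha, power=0.8):
    return power * R / (power * R + alpha)

print(ppv(R=0.001, alpha=0.001))   # genetics: PPV rises to ~0.44 (vs ~0.02 at alpha = 0.05)
print(ppv(R=0.16, alpha=0.01))     # psychology: PPV rises to ~0.93 (vs ~0.72 at alpha = 0.05)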

The proud fool echoes that psychology is effete, full of the innumerate and of pompous head cases. Almost everyone else looks at psychology's 0.39 reproducibility from a Classical (frequentist) perspective, a view less than 100 years old. On the other hand, Bayes' Theorem has been in use for 250 years. Moreover, whatever you do, you should not violate Bayes' Theorem. The august Bayes' Theorem has been mathematically proven and confirmed over the centuries. The above PPV takes the background probability (the prior) into account, a probability that is relevant though not exactly known. When you reduce Bayes' Theorem to an arena with two states -- true relationships and false relationships -- the results greatly simplify and wondrously clarify.
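To see that reduction concretely, here is a sketch showing that the PPV formula is exactly Bayes' Theorem with a two-state prior (true/false relationship); the numbers are illustrative:

def posterior_true(R, alpha=0.05, power=0.8):
    prior = R / (1 + R)                            # Background Probability
    joint_true = prior * power                     # P(true and significant)
    p_signif = joint_true + (1 - prior) * alpha    # P(significant)
    return joint_true / p_signif                   # P(true | significant)

R, alpha, power = 0.1, 0.05, 0.8
print(posterior_true(R, alpha, power))        # Bayes posterior: ~0.615
print(power * R / (power * R + alpha))        # Ioannidis's PPV: identical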
