Stats

1 In 4 Statisticians Say They Were Asked To Commit Scientific Fraud (acsh.org) 95

As the saying goes, "There are three kinds of lies: lies, damned lies, and statistics." We know that's true because statisticians themselves just said so. From a report: A stunning report published in the Annals of Internal Medicine concludes that researchers often make "inappropriate requests" of their statisticians. And by "inappropriate," the authors aren't referring to accidental requests for incorrect statistical analyses; instead, they're referring to requests for unscrupulous data manipulation or even fraud. The authors surveyed 522 consulting biostatisticians and received sufficient responses from 390. They then constructed a table that ranks requests by level of inappropriateness. For instance, at the very top is "falsify the statistical significance to support a desired result," which is outright fraud. At the bottom is "do not show plot because it did not show as strong an effect as you had hoped," which is only slightly naughty.
This discussion has been archived. No new comments can be posted.

  • As long as there is the incentive to get the results the sponsor wants, there will be fraud.
  • Only 1 in 4? (Score:5, Interesting)

    by HornWumpus ( 783565 ) on Friday November 02, 2018 @05:57PM (#57583200)

    1 in 4 biostatisticians...

    Dollars to donuts it's much worse in the soft 'sciences'. Slightly remediated by the fact they're too stupid to realize what they were asking was wrong.

    • In the soft sciences there are also things like: small sample sizes, ill-defined terms, and using overly complex statistical methodology to extract meaningless conclusions. And there is no remediation via stupidity: large swaths of the social sciences are just breeding grounds for career-hungry paper pushers whose motivation has nothing to do with the furthering of human knowledge.

      The good news is that there is still good research going on. We could weed out the bad if we changed the promotional model of researchers, but that won't happen easily because those at the top are there because of the current methodology.

      • by Bongo ( 13261 )

        In the soft sciences there are also things like: small sample sizes, ill-defined terms, and using overly complex statistical methodology to extract meaningless conclusions. And there is no remediation via stupidity: large swaths of the social sciences are just breeding grounds for career-hungry paper pushers whose motivation has nothing to do with the furthering of human knowledge.

        The good news is that there is still good research going on. We could weed out the bad if we changed the promotional model of researchers, but that won't happen easily because those at the top are there because of the current methodology.

        Science has an aura of responsibility and objectivity and truth, but as a social institution it is prone to corruption as is any other social institution, like the police, government, the churches, and big corporations, and even charities. To what degree is an open question.

        But the statement "it's science!" is a persuasion device in rhetoric, whereas the scientific method is to be able to check and test and verify and repeat and sure, you have to have the intellectual integrity to know if you are too uninformed

    • Re:Only 1 in 4? (Score:5, Insightful)

      by hey! ( 33014 ) on Friday November 02, 2018 @06:38PM (#57583384) Homepage Journal

      This study is talking about biostatisticians. Most of those guys are bound to be working for pharmaceutical companies.

      As for the social sciences, Brian Wansink was recently stripped of his Cornell professorship when he and his lab were caught doing extensive "p-hacking". Interestingly, the research they were doing was essentially psychological in nature, but Wansink has no academic training in psychology; he has a BA in business administration, an MA in journalism and a PhD in marketing, and his lab was operated out of Cornell's business school.

      • by dcw3 ( 649211 )

        Brian Wansink was recently stripped of his Cornell professorship when he and his lab were caught doing extensive "p-hacking"

        But p-hacking is a thing in pretty much every field, and it needs to be stopped.
        https://fivethirtyeight.com/fe... [fivethirtyeight.com]
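For anyone unfamiliar with the term: p-hacking means testing many outcomes and reporting only the ones that cross p < 0.05. A minimal stdlib-only simulation (the 20-outcome setup, sample sizes, and the large-sample z-test are illustrative assumptions, not anything from the study) shows why that inflates false positives even when no real effect exists:

```python
import math
import random
import statistics

random.seed(0)

def z_test_p(sample_a, sample_b):
    """Two-sided p-value for a two-sample z-test on means
    (large-sample normal approximation)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # P(|Z| > |z|) via the normal CDF expressed with math.erf.
    return 1 - math.erf(abs(z) / math.sqrt(2))

# Null world: no real effect anywhere. A "researcher" tries up to
# 20 outcomes and stops as soon as one looks significant.
trials, hits = 1000, 0
for _ in range(trials):
    for _outcome in range(20):
        a = [random.gauss(0, 1) for _ in range(50)]
        b = [random.gauss(0, 1) for _ in range(50)]
        if z_test_p(a, b) < 0.05:
            hits += 1
            break

print(f"chance of at least one 'significant' finding: {hits / trials:.0%}")
```

With 20 independent null tests the expected rate is about 1 - 0.95**20 ≈ 64%, despite zero true effects, which is the whole mechanism the linked article describes.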

    • by Z80a ( 971949 )

      Well, the true question is: can we have the opinion of another 3 statisticians? If one in four is wrong, this one could be as well.

    • >> 1 In 4 Statisticians Say They Were Asked To Commit Scientific Fraud
      One chance in 4 that this statistic is rigged. Who did the statistics about corrupting the statisticians?

      • The authors surveyed 522 consulting biostatisticians and received sufficient responses from 390

        My guess is the ones that didn't respond are guilty of doing it, putting the number a lot higher than they claim.

    • 1 In 4 Statisticians Say They Were Asked To Commit Scientific Fraud

      Actually, it was 25% ± 3%, 4 times out of 5.
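That joke roughly checks out, as it happens. A back-of-the-envelope sketch, assuming a standard normal-approximation interval for a proportion on the reported 390 responses ("4 times out of 5" = 80% confidence):

```python
import math

# Normal-approximation margin of error for a proportion:
#   margin = z * sqrt(p * (1 - p) / n)
p, n = 0.25, 390   # ~1 in 4, out of 390 survey responses
z80 = 1.2816       # two-sided z-score for 80% confidence

margin = z80 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} +/- {margin:.1%} at 80% confidence")
```

The margin comes out just under 3%, so "25% ± 3%, 4 times out of 5" is about right for a sample of that size.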

    • by imidan ( 559239 )

      I experienced this, in a small way. I was a consulting statistician at the university when a professor from "food sciences" came in and wanted me to "give her the p-value" for her study. I spent some time talking with her about the project that she'd done. It was incredible... tiny sample size, entirely self-selected sample, data based solely on self-evaluation, unaccounted-for dependence between responses, likely acquiescence bias, no attempt at control, poorly designed survey instrument, ambiguously written

  • Not a shocker (Score:4, Interesting)

    by SirAstral ( 1349985 ) on Friday November 02, 2018 @05:58PM (#57583212)

    1 out of 4 are asked to commit fraud.
    2 out of 4 are "expected" to commit fraud without being asked.
    1 out of 4 are actually trying to get at some form of truth.

    Statistics are always biased by their sample sizes, and criteria.

    There are lies, damn lies, and then there are statistics.

    • by Anonymous Coward

      1 in 4 ADMIT to being asked to do it. That means there is a non-negligible number who did it and never admitted to it.

      There are lies, damn lies, and then there are statistics.
      Apparently it is more true than we know.

      Also, the irony of using "1 in 4" here is semi-amusing!

    • by Anonymous Coward

      There are quotes, damn quotes, and a bunch of jackoffs trying to be clever by posting the most predictable quote, which was already in the fucking summary.

    • by Anonymous Coward

      Tobacco Industry - Completely harmless and may even be good for you - even with a few real doctors on board.
      The sugar Industry - Or the Diabetes/Corn Syrup brigade.
      Mobile phone has harmless radiation - unless you have braces, implants and titanium antenna installed.
      Artificial sweeteners
      Toothpaste recommended by dentists
      Laundry powder - whiter than white

      The list goes on. Most statistical lying is now the ambit of politicians. Strangely, voters sense BS and are voting for 'other' and Lassie.

    • 1 out of 4 are asked to commit fraud.
      2 out of 4 are "expected" to commit fraud without being asked.
      1 out of 4 are actually trying to get at some form of truth.

      Statistics are always biased by their sample sizes, and criteria.

      There are lies, damn lies, and then there are statistics.

      Unless it is LIGO, then not having Faith in the statistics is a sign of being an anti-science Luddite. :)

    • 1 out of 4 are asked to commit fraud.

      The real number is actually 1 out of 3, but they massaged the numbers a bit.

    • by nasch ( 598556 )

      Statistics are always biased by their sample sizes, and criteria.

      What do you mean by this?

  • by petes_PoV ( 912422 ) on Friday November 02, 2018 @06:07PM (#57583252)
    And when the experiment is repeated - many times, by different teams in different labs using different statistical techniques to analyse the results, the truth will come out.

    But if an experiment is only performed once, never scrutinised, never checked, never tested, then there can be little or no confidence in its conclusions.

    • by ClickOnThis ( 137803 ) on Friday November 02, 2018 @06:29PM (#57583358) Journal

      And when the experiment is repeated - many times, by different teams in different labs using different statistical techniques to analyse the results, the truth will come out.

      But if an experiment is only performed once, never scrutinised, never checked, never tested then there can be little or no confidence in its conclusions.

      Even if an experiment is not repeated exactly, its results still provide a way-point that can be scrutinized in future studies. Other scientists will try to build on previous results, and if something subsequently does not make sense, they will back-trace to find the problem. This is often how science evolves.

      Experiments are often repeated, at least implicitly, if some process that previous experimenters followed must be followed again to pick up where they left off. And often it is worthwhile to repeat an experiment with improved equipment, to see whether additional insights can be found.

      In short, don't dwell on whether there is a cadre of scientists who make it their mission to repeat other scientists' experiments. That's impractical, and frankly silly. Scientific studies do get scrutinized and repeated (at least implicitly) -- just not in the narrow way you suggest.

      • by Megol ( 3135005 ) on Friday November 02, 2018 @06:55PM (#57583456)

        https://retractionwatch.com/ [retractionwatch.com]

        Notice that papers that have been used to direct research and as supporting data have often been exposed as frauds long after publication. That means the falsified data had already escaped most scrutiny and already wasted time, money, and effort.

        • Fair enough. But I would say that these are examples of the scientific method working as it should. Mistakes or fraud may take time to detect, but sooner or later they are corrected.

          And keep in mind that science is not the only human endeavour that has occasionally wasted time, money, and effort. Science progresses most efficiently when honest actors work together in good faith, scrutinizing each other's work but also building mutual trust. The waste from occasional bad actors is eclipsed by the benefit from the good ones.

          • Fair enough. But I would say that these are examples of the scientific method working as it should. Mistakes or fraud may take time to detect, but sooner or later they are corrected.

            And keep in mind that science is not the only human endeavour that has occasionally wasted time, money, and effort. Science progresses most efficiently when honest actors work together in good faith, scrutinizing each other's work but also building mutual trust. The waste from occasional bad actors is eclipsed by the benefit from the good ones.

            That's all very well when science is relegated to its own little corner of the universe, interested only in unimportant issues like when the universe began or whether there is life on some distant planet which will be unreachable for the next big bite of eternity.

            It's quite another thing when fraudulent science is used to direct public policy which has an immediate and negative effect on people's actual lives. It is even more a problem when fraudulent science is used to precipitate cultural change that affects

      • by Megol ( 3135005 )

        This is a good example of the problem: https://www.statnews.com/2018/... [statnews.com]

      • Experiments are often repeated, at least implicitly, if some process that previous experimenters followed must be followed again to pick up where they left off. And often it is worthwhile to repeat an experiment with improved equipment, to see whether additional insights can be found.

        This is an extremely inefficient way of doing it, and it can take decades for the error to be corrected, even in a hard field like physics (Feynman gives the example of the oil drop experiment [caltech.edu]; it was also an example of incorrect previous studies leading newer studies astray).

        In short, not double-checking studies can lead to wrong results for decades, or longer. Think of the confusion in nutritional science.

    • Well, they can also hide their data and not publish it, just the "processed" results...
    • And when the experiment is repeated - many times, by different teams in different labs using different statistical techniques to analyse the results, the truth will come out.

      What about like with LIGO where the people using different statistical techniques get different answers, but the original researchers insist you have to use their techniques, and that it is too hard for other scientists to just have a go at the data without their careful guidance?

      Surely in that case, slashdot would be 100% behind the Scientific Certainty of Statistics because the problem is so hard that only the very very Top People are working on it, and so they must know. Because anybody as smart as we th

  • "How to Lie with Statistics" lol

    Just my 2 cents ;)
  • by Anonymous Coward

    My own experience as a statistician is that at least seven in four of us have produced dubious numbers.

  • by careysub ( 976506 ) on Friday November 02, 2018 @06:19PM (#57583314)

    Statistics about statistical fraud. Down the rabbit hole we go.

  • by neoRUR ( 674398 )

    That's 25% for all you math majors out there.

  • 3 out of 4 didn't have to be asked.

  • Were they offered bribes?
  • Anyway, better in IT than in stats

  • This reminds me of a story an acquaintance of mine once told me. She has a Ph.D. in statistics and has put in a couple of decades with a major biotech firm. A friend of hers was doing a Ph.D. in engineering and had some data to analyze, but he wanted to make sure his conclusions were statistically sound. He asked her to check his work and let him know if he had made any big errors. I'm sure nobody will be shocked to learn that he made some basic errors that non-statisticians make all the time (I think it ha

  • We not only rely heavily on statistics and proxies (because real data is hard to get); we rely on models of statistics and proxies. No wonder the results are all over the map. Being skeptical is not only a good idea in this case, it really is demanded by the process.

  • Did they specify the percent who requested they lie were women researchers,
    or raised under single mother,
    or #NoFaultDivorce'd muted fathers?

    What percent of Statisticians agreed to (so perhaps lied) & the percentage of them women,
    or raised by single mother,
    or ...

    1/3 of mothers #ParentalFraud (stat. sample of All women) to those they 'love'.
    Most would sell your&Children's soul to damnation just for the pleasure of watching them Suffer/suicide/die.
    No Vote, no legal contracts, no testimony, no place of

  • Can we truly trust this statistic...? I mean look who came up with it.
