Science

Scientists Don't Read the Papers They Cite

WatertonMan writes "Very interesting and sure-to-be-controversial study suggesting that most scientists don't read the papers they cite. This means that if one paper misreads a work, the misreading propagates. It's a very interesting study and has big implications for science, in my opinion. New Scientist has a good overview of the work. Given that most attention has gone to sloppiness on the experimental side (poor methodology or outright fraud), this suggests a whole other problem. A lot of the ultimate problem is that many in research are more concerned with publishing than with solving the issues they investigate. Ideally the point, both in science and in academics in general, is to understand the ideas. Yet those of you who've looked up footnotes realize that actually engaging the ideas of other researchers typically falls by the wayside. Often footnotes are there simply because references are needed; engaging others' works is secondary. I've always thought that the hard sciences were more immune to that effect than the humanities. I guess not."
  • Well duh (Score:5, Funny)

    by SteweyGriffin ( 634046 ) on Saturday December 14, 2002 @02:16PM (#4887739)
    I wouldn't either -- those things are boring! ;-)
    • lol (Score:2, Interesting)

      by SHEENmaster ( 581283 )
      As shown earlier [slashdot.org], the Wright brothers were not the first in flight and had not done their research.
    • by orange7 ( 237458 ) on Saturday December 14, 2002 @06:10PM (#4888717)
      Look, as someone who's written scientific papers, the claims in the article are not only false, but indicative of poor science themselves. They're making the classic experimental stats mistake. Namely, copying and pasting citations from other sources is *absolutely uncorrelated* with whether those papers have been read by the author.

      Formatting citations is fussy, tedious, and annoying. You have to look up the page numbers in the journal (which you may not even have, in these days of online papers), figure out who the publisher was, and find the issue or journal number.

      I read every single one of the papers I've ever cited. But it was rare that I ever typed in a citation from scratch. Usually you get them either from an on-line citation database, from the bibtex entry helpfully supplied on the cited author's web page (scientists like being cited!) or, yes, by typing out a citation from a printed paper.

      In any given field, usually some kind-hearted soul starts collecting a database of citations for others to use. For instance, here's one here:

      http://www.helios32.com/resources.htm#Bibliographies

      Have a look; you'll soon twig to why people don't type these in from scratch.

      Creating the citation all over from scratch when it's right there in front of you is about as pointless as adding a link to a web page by retyping some monstrous 200-character URL. Just because you copied and pasted a link doesn't mean you didn't read the article, did it? (I guess Slashdot is the wrong place for that particular piece of rhetoric.)

      I'm disappointed in New Scientist. The pissy little diatribe about science in the story submission is par for the course. Please, leave the pontificating to people who have a clue.

      In fact, how about a retraction? (Ha ha ha ha!)

      A.
      • I agree. I have never cited a paper without having read it. But I have shared my citations file with many students who have passed through my lab. And I have also, when I couldn't remember the exact citation of a paper that I'd read, looked it up from another published paper that cited the same reference.

        Another problem is that they seem to be estimating the frequency with which people read papers they cite from the frequency of repeating errors. The underlying assumption of this analysis is that scientists who read all the papers they cite and those who don't are equally likely to make errors. But it seems quite likely to me that scientists who are careful not to cite papers they have not read are also extra careful to get their citations correct. So sloppy scientists will tend to be overrepresented in the authors' statistics.
      • by Anonymous Coward
        The man says it like it is. I can think of a situation where you might not read a paper, though: when the paper is really old and really famous. Take this situation: you develop a new widget for processing DNA samples for sequencing or PCR or whatever, fields with thousands of workers and billions of dollars of turnover. You write a paper for a cheerful process-oriented journal like "Biotechniques". Your opening line is: "Since the discovery of DNA[1], high-throughput process tools have become increasingly important..." Blah blah. [1] refers to the original Watson and Crick paper. Now everyone should have read that classic one-pager, but if you haven't, it doesn't really call into question the existence of DNA. The same could happen quoting Binnig and Rohrer's 1986 paper on the scanning tunnelling microscope. In fact I haven't read it, and I built a working STM which I published.

        So it's not the famous papers you have to worry about; it's the low-profile papers covering recent developments or still poorly understood areas of nature. I'd like to see those crackers at New Scientist research that, because everyone I know reads those ones very carefully.

  • first post? (Score:5, Funny)

    by SHEENmaster ( 581283 ) <travis@uUUUtk.edu minus threevowels> on Saturday December 14, 2002 @02:17PM (#4887741) Homepage Journal
    Most of my classmates don't read the papers they write. Do we hold others to a higher standard?
    • Sadly, neither do the professors for the most part. They have too much to do and too little time to do it.

      People know how to bend the rules and make it seem valid. These principles carry on (apparently) into the rest of their careers.
    • Re:first post? (Score:5, Informative)

      by bcrowell ( 177657 ) on Saturday December 14, 2002 @03:22PM (#4888032) Homepage
      OK, +5, Funny. But actually you're uncomfortably close to the truth.

      When I was a postdoc at Argonne National Lab, my group's policy was that every group member got his name on every paper. On this [arxiv.org] paper, one of my coauthors refused to read the paper before publication. He said he was busy, and it was a long paper. I wanted to take his name off, but he insisted on having it on. This is not uncommon at all. I'm just quoting an example that I know of from personal experience.

      There was a recent scandal where a group at Berkeley claimed to have discovered a new element. Later on, it turned out that the evidence had been fabricated. However, the group claimed it was only one guy who was a loose cannon who had invented the data. In other words, if it was right, they were ready to take credit. But if it turned out to be a fraud, they had plausible deniability.

      There is huge pressure on young people in the sciences to establish a long list of publications, because permanent jobs are so hard to get. Another common phenomenon is where you have a senior administrator at a lab who "blesses" every paper done at the lab, and gets his name on every single one of them, even though he never actually makes any scientific contribution.

      • Re:first post? (Score:5, Informative)

        by the gnat ( 153162 ) on Saturday December 14, 2002 @03:37PM (#4888085)
        Yup! I've had the same happen to me. A bunch of people attend meetings and get added to the paper. Perhaps a few of them helped proofread, but that's nothing like making an original scientific contribution. On the one hand, I was added to a number of papers based largely on a) my meeting attendance and b) my technical expertise. This helped me immeasurably in furthering my career, except for the difficulty of explaining what I did on some projects (ranging from "absolutely zilch" to "constructed the web page"). On the other hand, when it was my turn to write a paper, about five people ended up as co-authors with minimal contributions.

        I'm now in a position where I don't let that happen any more; I've asked to be taken off of papers because my contribution was minimal, and the last time I wrote something I stated up front who would be on the author list. I do not foresee that this will be a permanent solution, however; I'm still too junior a scientist to have much sway. I've seen other cases where someone asked to be added to a project because he "needed more publications", or where a senior investigator did not realize he was a co-author until well after the paper had been published.
        • Re:first post? (Score:3, Insightful)

          by quintessent ( 197518 )
          What it all boils down to is partial dishonesty. I wish people would take credit for what they actually do. Perhaps the list of authors should be annotated, indicating very honestly the degree and type of participation. Then you might have more people choosing not to be named.

          Watched presentation; corrected spelling in three places.
          • The worst case was when I made a huge effort to improve a technical resource that was central to a paper written by several coworkers. I was added to the paper right before it was submitted and never even had a chance to read it; I was far too busy with school and protested loudly at the time. By the time I actually got a chance to read it, months later, I was familiar enough with the project and the data to realize that they'd done many things wrong. Had I been included from the beginning, I would never have let my name go on the final project. Unfortunately, it was too late for me to fix this or to take my name off the paper (the paper had been accepted). I'm much more careful about this kind of thing now.
      • Re:first post? (Score:3, Insightful)

        by Ed Avis ( 5917 )
        I am not a scientist; I am just a humble student (or rather an ex-student; I graduated this year). When I wrote my project report I was asked by my supervisor to get in references to papers X, Y and Z. So I ended up putting in a few fairly meaningless or irrelevant sentences just to cite the correct paper. Of course, here the aim is to get marks, not to get kudos or whatever else real researchers write for, so it's not really a problem.
      • Re:first post? (Score:3, Informative)

        by Zeinfeld ( 263942 )
        When I was a postdoc at Argonne National Lab, my group's policy was that every group member got his name on every paper. On this [arxiv.org] paper, one of my coauthors refused to read the paper before publication. He said he was busy, and it was a long paper.

        The people who insist on such models tend to have a very parochial view of science.

        If I spend 5 years designing an experimental apparatus and gathering data, and then a colleague (or more likely his grad students) takes that data and produces an analysis, I have the right to have my name on the paper; the analysis is only a part of the work.

        Science is based on trust. Consider the case where a physicist wants to measure some effect but does not know how to build the apparatus to test the theory. I might well design the apparatus to test his theory even though I don't fully understand the theory he is testing; ultimately I have to trust him on that point. Equally, if I provide him with a bunch of experimental data, he trusts me not to have fabricated it.

        On the large experiments (500+ authors) I have worked on, there has been a review committee of 30 or so people that checked over the paper in detail.

  • um... (Score:2, Insightful)

    by jeffy124 ( 453342 )
    This means that if one paper misreads a work the misreading propagates.

    You're assuming the paper which mis-cites another gets read when it gets cited.
  • by sczimme ( 603413 ) on Saturday December 14, 2002 @02:19PM (#4887748)

    ...where no one reads the articles they cite. We are in good company! :-)
  • I have to do a research paper for next week! ^^

  • Ironic (Score:2, Funny)

    by zachlipton ( 448206 )
    Now wouldn't it be ironic if the people who did this study to prove that scientists don't read the articles they cite didn't read the articles that they cite?
  • What's to say that even if other people write something about the topic, it's right? Plenty of poorly researched ideas move through circles and even end up in others' research.

    Most people twist previous research to fit what they are trying to say anyway (that's the nature of it). AFAIAC it's all bullshit anyway.

    Unless they are showing HARD evidence (which in recent months they have been making up as well) and others have reproduced the same results, it's all about money/greed/profit.

    Yes.

    1. Research
    2. Make up shit.
    3. Lie.
    4. ???
    5. Profit.
    • by DrLudicrous ( 607375 ) on Saturday December 14, 2002 @02:44PM (#4887884) Homepage
      I have to disagree strongly. When one is doing basic science research in an academic setting, 95% of the time there is no chance for profit. If one lies, there is the risk of being caught, as evidenced by the Bell Labs fraud; perhaps this is even more likely to pass in an industrial environment where profit can be a motive behind "[making] up shit and [lying]".

      As far as twisting up evidence, yes, this does happen. But most definitely not 100% of the time. How was the solar neutrino problem ever discovered in the first place? How was a re-evaluation of the cosmological constant initiated? These (and many other ideas) were brought forth not because someone wanted their ideas to be put forth, but because their hypotheses did not match the experimental data! It most definitely is not bullshit. AFAIAC, science is still the most altruistic of professions, not to mention one of the most self-sacrificing.

      • Working in an academic setting is profit. It's called a job. Research is work, and you make a profit from it.
        • It's not profit. Academic scientists have no monetary incentive to publish papers. They will not get paid more money if they publish a paper quickly. The only thing they can take advantage of is being able to land grants, which enables them to do more research.

          Companies can truly profit from research. Their research tends to be more application-oriented, and thus profit-oriented. They are doing research for a market. Academia is doing research for knowledge (for the most part). There is a huge difference.

          You seem to think that there is no difference between academic research and industrial research. Nothing could be farther from the truth. They are very distinct, and have completely different motives. If a scientist wanted to make money based off of productivity and profit, he works for industry, where the pay is substantially higher than in academia. There is also the added pressure of marketing departments, supervisors, human resources, etc. In academia, you are pretty much your own boss. They are completely different worlds. I've seen it with my own eyes at Bell Labs, TRW, and several universities, and also NASA. I think your assessment is a little misguided.

      • AFAIAC, science is still the most altruistic of professions, not to mention one of the most self-sacrificing.

        Being a research professor has its advantages, and prestige is definitely one of those. If you really want to go the altruistic way, do like the Chinese (from the olden days) and publish your work anonymously.

        Would you be willing to publish your work anonymously?

    • by BistroMath ( 69628 ) on Saturday December 14, 2002 @02:53PM (#4887928) Homepage
      Um, OK. I'll try it:

      1. Read /. headline
      2. Form angry, uninformed opinion.
      3. Post
      4. ????
      5. Karma!

      Doing science for the money is like having sex for the exercise. There are many other ways to make considerably more money that require far less work. The raison d'etre of science is the joy of discovery; no one spends 6-8 years in higher education getting a PhD just for the paycheck. People do it because they love it.

      As far as scientists faking results, yes, it happens. However, the beauty of the scientific method is that it is self-policing. Anyone can read the journals; anyone can write the editors of said journals and report anything that's not above board. As for papers not being read in the first place, well, let's hop on the Magic School Bus and take a quick tour of the scientific publishing process.

      First, write the paper. Then, submit it to either a journal or a conference. In either case, the pool of available papers will be divided over the number of people on the review board of the respective journal/conference, so a bunch of people read a few papers. Once here, the aforementioned paper is either rejected or accepted. If accepted, it is published.

      After the paper is published, other scientists read the paper. If it is useful for their work, they may incorporate some of the ideas into their own work, at which point they'll test the idea that they're borrowing to see if it makes sense. If it does make sense, they'll use it. If not, they'll tell the whole world, discrediting the work and embarrassing the original author. Thus there is plenty of pressure to do good science. The people doing legitimate work far outnumber the charlatans just submitting gibberish.

      Matt
  • by Sean Trembath ( 607338 ) on Saturday December 14, 2002 @02:21PM (#4887765)
    We should crush all those who make foolish mistakes, just like that guy Karl Marx says in his "Communist Manifesto" (Marx, 65)
  • by ademko ( 32584 ) on Saturday December 14, 2002 @02:22PM (#4887766)
    I've also seen the case where scientists will constantly refer to their own, or their colleagues', papers. This is an easy way to increase the "cited" count of the referred paper, making one's work look more useful, even when the citation has little or no relevance to the current topic.
    • by Monkelectric ( 546685 ) <[moc.cirtceleknom] [ta] [todhsals]> on Saturday December 14, 2002 @02:41PM (#4887866)
      I agree; citations are the scientific equivalent of "name dropping".

      The root problem is that papers are a form of scientific social capital, and when people think you are well read, your paper is worth more. I worked in a research facility where grad students were literally held hostage so they could produce more papers for the professors to take credit for. One student came to us with his master's and was held *7* years for his PhD. It was getting so bad that the graduate department was *forcing* the director to graduate students by saying, "So-and-so has to leave by the end of the year -- with or without his degree." (And after 7 years, who could blame them?)

      Add this to an already paper obsessed culture, and you have a serious problem.

      • by rgmoore ( 133276 ) <glandauer@charter.net> on Saturday December 14, 2002 @03:08PM (#4887991) Homepage

        This is a serious problem in a lot of disciplines, though I've heard of a rather elegant solution to the problem that's now become common (if informal) practice. The solution is that when a student thinks that he's done enough to justify getting a PhD, he starts applying for jobs that require a PhD. When somebody is willing to offer him one, that's proof that an outsider views his accomplishments as being worth a degree, and his advisor has to let him write up his dissertation. It serves as a very effective independent outside check on the system.

    • Or, you cite colleagues who you suspect would be obvious editorial choices to peer review your paper. You just add a sentence like "This has been previously discussed by a number of eminent scholars (fucking huge list of citations here)".

      I've been in situations where I was basically done writing and then was told "um, okay, you should cite all these papers. just figure something out."
  • Not necessarily... (Score:5, Insightful)

    by Pentagram ( 40862 ) on Saturday December 14, 2002 @02:22PM (#4887767) Homepage
    The study seemed to be checking for typos in citations. Just because a scientist has copied the text of a (wrongly typed) citation does not mean s/he has not read the paper. There is no law that says someone writing a paper has to type up every citation they make from scratch.
    • by doublegauss ( 223543 ) on Saturday December 14, 2002 @02:48PM (#4887899)
      Exactly. Scientific research is my job, so believe me, I know what I am talking about. Sometimes you mis-cite papers you know extremely well just because you copy/paste an incorrect citation. That doesn't imply at all that you have not read the article.

      Proof: on one occasion, I misquoted a paper that I had written myself, just because I copied its title from the preliminary version, forgetting that the title had (slightly) changed in the published version.

      IMHO, this is cheap sensationalism. On the other hand, it is true that the academic profession is too loaded with the "publish or perish" thing, which leads researchers to do (and eventually publish) sloppy research.

      • I agree with you. I've propagated a citation error myself. I was reading article A, which cited article B (but didn't include the subtitle of article B in the reference section). I copied down the details and went to the library to get article B. After reading article B, I wrote article C, which cited article B. In writing the references, I used the citation as it was written in article A (sans subtitle). Thus, I read article B, found it relevant, and cited it. However, I propagated the error.

        These days, there are a number of programs like ProCite and EndNote that manage your citations. If you were to (as I do) type in new references as soon as you hear about them, you could propagate errors inadvertently.

        This doesn't mean that the authors didn't read the papers they cited. It is an interesting finding, as it does show the spread of ideas -- but it doesn't indicate intellectual dishonesty.
    • by daoine ( 123140 ) <`moc.oohay' `ta' `3101hdaurom'> on Saturday December 14, 2002 @02:54PM (#4887929)
      Exactly.

      I think that the article itself is making a huge leap here, and it's not one I'm about to believe.

      They noticed in a citation database that misprints in references are fairly common, and that a lot of the mistakes are identical. This suggests that many scientists take short cuts, simply copying a reference from someone else's paper rather than reading the original source.
      They go even further...
      The pattern suggests that 45 scientists, who might well have read the paper, made an error when they cited it. Then 151 others copied their misprints without reading the original. So for at least 77 per cent of the 196 misprinted citations, no one read the paper.
      Copying the reference format from a paper does not mean that the scientist has not read the original. When writing my papers [research conferences, not just assignments] I'd often grab citations out of papers that were in the proper conference format. I had the paper in my hands -- but sometimes the citation information gets separated from the paper, and you need to rely on someone else's citation. That doesn't mean I didn't read the paper, nor does it mean that I was using another author's interpretation of the original work. It's an absurd leap.

      While I do believe that authors do skimp on what they've read and what they just pretend to have read, I'm not sure that using typos in references is the best way to determine the degree to which this occurs. What it *does* show is that people are clearly lazy when it comes to references, and many will copy a reference without checking to see if it's correct. I'd be interested to see if they've looked into lesser known papers -- a popular paper is more likely to withstand an error, since everyone knows what it is anyway...
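      For what it's worth, the arithmetic behind the quoted figures does at least hang together; a quick sanity check (variable names are mine, numbers taken from the quoted passage):

      ```python
      # Numbers from the New Scientist claim quoted above.
      fresh_misprints = 45    # citers who may well have read the paper but made a new typo
      copied_misprints = 151  # citers who repeated someone else's typo verbatim
      total_misprints = fresh_misprints + copied_misprints

      print(total_misprints)                                  # 196
      print(round(100 * copied_misprints / total_misprints))  # 77 per cent copied
      ```

      Whether "copied the citation" implies "never read the paper" is, of course, exactly the leap being disputed here.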

      • Agreed. For example, I just saw somebody the other day look up some article references from the Published Works section of somebody's CV so they didn't have to type out the bothersome format of a journal article reference. If somebody else already did it, what the heck is wrong with cutting and pasting? In this case the woman I am referring to was one of the authors of the article, so I am quite certain she read it. But any typo or mistake in the source CV would have been propagated through to her CV and other locations where she was plopping it down.
    • I almost always try to copy the citation from one of those citation databases or another paper. I usually do it with a 3-inch-thick pile of printed papers sitting in my lap. Those might not have the page numbers or the journal they were in on them. Even when they do, it's better to be consistent with other people referencing the work unless they are obviously wrong. Plus, it's a pain to format the TeX with umlauts and the like.

      I have had to fix the spelling of my own name in a reference I copied; other than that, it's a good thing. I try to be really careful when I have to type in the reference myself, usually for a recent paper or something from another field. I have to decide whether to list all the authors or put an "et al" after who I think are the main authors, and whether to write out the complete name of the journal/conference or its common short name. I do try to proofread the references, but sometimes I've read a paper and used it but can't find it, and the deadline is quickly approaching; it would be more wrong to leave it out than to use the possibly slightly-off reference, especially if I used that reference to find the paper in the first place. If it had been far enough off to cause me trouble, I would have remembered. Of course you want them to be correct, since the editor will look at your references when looking for people to review your paper...

    • I was co-author on a paper that was rejected because the citations were misquoted. We were accused of not having read our own works.

      Looking into it, what happened was this: she had found some interesting citations at the end of a paper, looked up the articles, and read them. Then, when she was preparing our manuscript, rather than track down the papers in a file drawer, she just copied the citations as they had appeared in the original citing paper. Thus she preserved the exact same typos, word for word.

      I have, however, seen another way that typos propagate. Say I was a graduate student on a series of papers where my advisor often trotted out the same citations to make the same points about the same seminal pieces of work. Other papers I write with him as co-author get the same boilerplate. Eventually I write a paper on which he is not on the author list, but I put in the same boilerplate. Oops: I just cited a paper I never read myself. Believe me, it happens, though it is not necessarily inaccurate.

    • by CheshireCatCO ( 185193 ) on Saturday December 14, 2002 @03:36PM (#4888081) Homepage
      Excellent point. In fact, let me add to it:
      Many citations now are copied from ONLINE SOURCES. We read the papers, but we hate typing in our bibliography from scratch. I can just go to ADS (http://adsabs.harvard.edu) and have it print out the reference in BibTeX format for me. Now, there are quite a few typos in that database. I know because we're finding them while creating a bibliography for the upcoming Jupiter book.

      None of this implies that we're not reading papers we cite. In fact, from experience, people in my field (astronomy) know the papers they reference pretty well, as a rule. There are times when we don't read an entire paper because we're just taking a few numbers or an equation, but as a whole, it pays to know the papers. If you don't, you'll usually find yourself under fire from your colleague who wrote the mis-quoted paper. Quite often, that colleague will be the journal referee who is reviewing the paper before publication. This is *not* a position you want to be in, so people generally work to avoid it.
  • Reasons (Score:3, Interesting)

    by NichG ( 62224 ) on Saturday December 14, 2002 @02:24PM (#4887779)
    I'm almost tempted to say that this is a side-effect of all those teachers who said 'I want at least 10 references and a 5 page paper'. At least, I can't think of any serious reason why, even if someone was just publishing fluff, they'd need to bulk up the references with irrelevant ones. The only other thing I can immediately think of is that a reference becomes somewhat standard, so they use it for something they learned and forgot where they learned it from (you can't exactly say [11], 11. Professor Ragan's Astrophysics 521 class or [12], 12. Two dozen vaguely remembered textbooks). Even then, I suppose it's bad form not to find some reference with the relevant information just to prove you're not making it up (yes, pi IS 3.1415...).
  • Not the only problem (Score:5, Interesting)

    by SteweyGriffin ( 634046 ) on Saturday December 14, 2002 @02:24PM (#4887780)
    Another major problem with research papers is the "disappearance" of sources for those who actually do properly cite them.

    As many of you know, the Internet is a great research tool these days. But unfortunately, it's too dynamic for the research world. "Most URL references [stand] more than a 50 percent chance of not existing after only six months." (from a Cornell study at http://www.news.cornell.edu/chronicle/00/12.14.00/web_citations.html [cornell.edu])

    I don't care as much if some researcher only reads parts and pieces of papers that they cite, but when the entire papers disappear, that's a much bigger problem.

    "The study, using term papers between 1996 and 1999, found that after four years the URL reference cited in a term paper stood an 80 percent chance of no longer existing."
    • by zook ( 34771 )
      This highlights a potential problem for traditional publications as well.

      A research library used to serve a double role, both providing access to resources and, in some sense, backing them up. But with many libraries moving their journal subscriptions from paper to web-based electronic ones, should the journal go away for some reason, these resources have a much greater chance of simply disappearing.

      Electronic papers are great---they allow for better searches, easier distribution, and let me avoid peeling my butt out of my chair to go to the library. However, libraries really must endeavor to keep local copies of as much of their inventory as possible.

  • by grahamlee ( 522375 ) <(moc.geelmai) (ta) (maharg)> on Saturday December 14, 2002 @02:27PM (#4887798) Homepage Journal
    Is Slashdot written to the maxim "no news is new news"?

    Charles Darwin is known to have cited other people's work that he hadn't read (I forget the name of the author involved - not being in the field myself). Then there was the entire field of molecular biology in the 1990s, which suffered more scandals than a dyslexic shoe factory.

    Slightly more relevant (though still stretching back decades) is that some authors don't read the papers they co-author - look at all the people who co-authored papers with Jan Schoen, the team who, with Ninov, "discovered" Ununoctium, etc.

    Next you'll be telling us that (shock! horror!) some scientists pass off other people's work as their own, with a fascinating NEW revelation about Rosalind Franklin's work in the discovery of the DNA structure.
  • That's usual... (Score:5, Interesting)

    by Ektanoor ( 9949 ) on Saturday December 14, 2002 @02:29PM (#4887809) Journal
    Anyone who sees how some articles are written knows perfectly well that the "bibliography" is usually created as a "necessary evil". Most scientific articles are done basically in the light of several "obligatory templates": abstract, main article, citations, bibliography and notes. Frequently, the real authors are not the ones you see first in the header of the article but someone at the end of it. Also, sometimes, certain people commit the most flagrant plagiarism of the work of their students or co-workers.

    What I call "academical science" is full of huge problems, which sometimes reach the level of flagrant falsifications and demagogic manipulation of facts. While not being a scientist per se, I have seen how these things pass the limits of ethics and morals in a field like Mars research. There is one scientist who tragically died in a very strange situation. Apart from the circumstances of the tragedy, there was one big "authority" on Mars who lied with all his teeth about the work of his deceased colleague. Frankly, it was shocking to see how this guy flagrantly and demagogically "reinterpreted" the intentions of the scientific work of his colleague. One should note that both guys were highly regarded in the community. However, they were adversaries. One died, the other became a big scientific authority on Mars. One of the reasons was that he did a lot to dismiss the works that went against his theories.
    • Anyone who sees how some articles are written knows perfectly well that the "bibliography" is usually created as a "necessary evil". Most scientific articles are done basically in the light of several "obligatory templates": abstract, main article, citations, bibliography and notes.

      I'll take this even further. If you are writing a paper on a particular topic and you fail to cite the paper(s) of another researcher who has done similar work (but whose paper you have not read!), then you can run into some embarrassing situations. I'll give an anecdote.

      A fellow grad student was at a conference a couple of years ago. A researcher had just finished giving a talk on his paper when a man sitting in the audience started asking question after question. This is normal, but the questions started taking on a more accusatory tone: "Didn't you know that this has already been done?" "Why didn't you cite XXXX?", etc. It turns out that this guy was either XXXX himself or his close friend. On the surface, it appears that the presenter should have been more thorough in finding papers and articles, but on the other hand, it's quite possible that he just took another path to developing his results.

      There's a paranoia among researchers (and especially grad students) that if they don't cite every single work, no matter how minutely relevant, then this sort of thing will happen to them.

  • The next question is: "How many peer-reviewed papers are actually reviewed?"

    And what about the brothers who were awarded PhDs in physics for what looks like a hoax à la the Social Text incident?

    http://www.thepoorman.net/archives/001517.html [thepoorman.net]

  • in terms of reading what is cited (being an English major and a soon-to-be English teacher, I know somewhat whereof I speak), I'd say the humanities are better on the whole about really reading what they cite. All we have to write about is what we have read. In the sciences one can experiment, test, etc., and write about those results, then go to the published literature for more info. The humanities do not offer that luxury, so to speak.
    • Unfortunately, your characterization of science is flawed. Rarely does a scientist go into a lab and perform an experiment that is 100 percent original. Generally, the origins of the experiment can be traced back to earlier work, that he/she learned about through publications, conferences, etc. Furthermore, scientists try to be somewhat original. Therefore, considerable effort is spent researching the published literature to make sure you're not repeating something someone did 5 years ago. If you repeat it, you want to put your own "spin" on it. E.g., look at new aspects of the problem.
  • by AlphaHelix ( 117420 ) on Saturday December 14, 2002 @02:34PM (#4887825) Homepage
    This paper takes some very simple statistical models and turns them into what seem to be totally unfounded generalizations about the way science is done. Taking their statistical conclusions at face value, we find that 77% of the people who cited the paper didn't read it in its original form. But, they go on to conclude that a) the only source of information about the paper could have come from a single other paper (namely, the paper with the original citation), and b) misunderstandings about the conclusions drawn by a paper will spread "like wildfire." They do not actually demonstrate this latter conclusion, and don't show that any of the papers actually did misconstrue the science in the original paper.

    This is because heavily cited papers become very widely known and understood. Not everybody who's ever cited "The Origin of Species" has read the whole thing, but it certainly does not follow that they took their understanding of its conclusions from a single other citing paper.

    They end their article with a smug admonition to "read before you cite." These guys sound like the guy with a clean desk who never gets anything done complaining about all the clutter on your desk. Smug social scientists criticizing physicists for their lack of citation rigor does not impress me. There are plenty of better reasons to criticize physicists this year (e.g., Ninov and Schoen). This one seems a bit silly.
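    For what it's worth, the underlying model is easy to play with. Here's a minimal Python sketch of my own reading of the Simkin/Roychowdhury copy-propagation idea (the parameter values are made up, not theirs): each citer either copies the reference string verbatim from an earlier citer, or transcribes the original, occasionally introducing a fresh misprint.

    ```python
    import random

    def simulate(n_citers=4300, p_copy=0.78, p_misprint=0.02, seed=1):
        """Each new citer copies the reference string verbatim from a
        random earlier citation with probability p_copy; otherwise they
        transcribe the original, introducing a fresh misprint with
        probability p_misprint."""
        random.seed(seed)
        refs = ["correct"]                # the original, correct reference
        fresh = 0
        for _ in range(n_citers):
            if random.random() < p_copy:
                refs.append(random.choice(refs))       # misprints propagate
            elif random.random() < p_misprint:
                fresh += 1
                refs.append("misprint-%d" % fresh)     # brand-new typo
            else:
                refs.append("correct")
        return refs

    refs = simulate()
    errors = [r for r in refs if r != "correct"]
    print(len(errors), "erroneous citations,", len(set(errors)), "distinct misprints")
    ```

    Run it and you get a handful of distinct misprints, some repeated many times - which is the signature the authors read as evidence of copying, and which is equally consistent with people reading the paper but copying the reference string.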
  • by k98sven ( 324383 ) on Saturday December 14, 2002 @02:35PM (#4887828) Journal
    This doesn't come as news for me..

    As a student starting my PhD studies, I once asked a researcher at the department about a paper. He told me he hadn't read it.
    The next day, I saw that he had indeed quoted that paper in one of his.

    However, it usually isn't such a big problem
    when papers are cited without being read, since it usually only happens with papers peripheral to the subject.
    (For example, to justify a certain method or procedure that is common practice.)

    Also, sometimes the relevant portion of a paper can be summed up in one sentence, or in the abstract.
    • by the gnat ( 153162 ) on Saturday December 14, 2002 @03:55PM (#4888177)
      Also, sometimes the relevant portion of a paper can be summed up in one sentence, or in the abstract.

      Of course, which is why the article sort of misses the point. For instance, if I were to mention offhand in an introduction that protein synthesis by the ribosome is done by catalytic RNA, there is an obvious reference to cite [Nissen et al. (2000) Science etc.]. I know this is correct, it's been extensively covered, and I have a copy lying around somewhere, but I've never actually read it all the way through. You can just look at the abstract and that's plenty for these purposes- if I were extensively discussing the mechanism I'd need to thoroughly read the paper, but for an introduction I just need to mention the proper source.

      Now, I could be making an error- what if they just pulled something out of their ass, or used sloppy methodology? Usually, people will just say "if it's good enough for the editors (and peer reviewers) of Science, who am I to argue?"
  • And that's how it all begins. You have academic staff being rated more for how much they publish than how much they teach; how much time do they have, really, to *teach* their assigned students, much less grade their assignments and papers? When grading their papers, how much time do they really have to pursue all the references etc.?

    I had one professor who would randomly check up on various references in papers submitted to him (the joys of statistical sampling :-). You did NOT want to be one who gets an email saying something along the lines of "I've never heard of this. Show me. "

    Knowing that there's some (realistic) chance of being found out, presumably most people would be careful - but nonetheless, since he cannot possibly check EVERYTHING, some people will be tempted to try their luck. At least some (most?) will get away with it. And for those who know their profs won't be hunting down everything... what's to stop them?

    And like most crimes, you just keep doing it and doing it... until (if) you get caught (I don't think Winona Ryder's *only* "shoplifting experience" was the one she went to trial for). So presumably at least a percentage of those doing scientific research had had undergrad + postgrad experiences of "getting away with it". Could get to be a habit.

    Heck, if even reusing *faked* graphs in multiple papers can be gotten away with...

  • When I wrote my thesis for MS in Computer Science, my advisor strongly suggested that I include several references to his previous work, the work of several of his past students and a Professor at another school that would write reviews of his books (he would review the other Prof's books). All of this occurred during the final chop and two weeks from graduation. If it was up to me, I would not have included any of these references. But I was not the one signing off the last two years of my life as complete.

    The funny part is that I received the largest portion of help from a couple of Sun engineers who were able to get me through some code which my advisor could not and except for the acknowledgement, their contribution was poorly documented (at least in my mind, not the advisor's).

    So, if you read my paper, you would think that I am an idiot because some of the referenced work is so basic and at other times a super genius because the code was assisted by some great programmers (after all, how many people read the acknowledgements).
  • by MyNameIsFred ( 543994 ) on Saturday December 14, 2002 @02:35PM (#4887831)
    This suggests that many scientists take short cuts, simply copying a reference from someone else's paper rather than reading the original source.
    So they copy and paste; that doesn't imply that they didn't read it. I copy and paste references from old reports routinely; it's called saving time. That doesn't mean I didn't read the reference.
  • Idiots. (Score:5, Interesting)

    by cperciva ( 102828 ) on Saturday December 14, 2002 @02:36PM (#4887837) Homepage
    Copying a reference string doesn't mean that you haven't read the paper in question. To take a personal example of what I've done:

    1. Find a reference to a paper which looks interesting.
    2. Walk down to the library, remembering that you're looking for Bob's paper about bars in the Journal of Foo.
    3. Arrive in the library, find the paper, read it, decide it is important.
    4. Walk back to computer, copy out reference string.

    It's quite easy to look up a paper from a slightly-wrong reference, and as long as the reference is close to correct, it's fairly easy to not realize that the reference was wrong in the first place.
    • This is very true. For my thesis, I wrote a perl-script to grab the BibTeX records directly from the NASA Astrophysics Data System [harvard.edu] (cut'n'paste was too hard. Had to be a script). I saw no reason to edit them, never touched them by hand. Of my references, there was just a couple of references I had to write by hand, which was not available from ADS.

      Yet, I don't think it is wise to disregard this. I think it may well be a problem. But then, I didn't RTFA... :-)

  • by Junior J. Junior III ( 192702 ) on Saturday December 14, 2002 @02:36PM (#4887840) Homepage
    Slashdot readers don't even read the articles they cite... What's this world coming to?
  • Anyone responding to posts with 'RTFA' will be considered guilty of recursion without a terminating condition.

  • by Flavio ( 12072 ) on Saturday December 14, 2002 @02:37PM (#4887846)
    It's pretty obvious whenever the authors add tons of references which are mostly irrelevant to the topic in an attempt to make their research look thorough and important. I don't see how this is news to anyone who's gone through college...

    Anyway, this comic [stanford.edu] seems appropriate.

  • I've looked into this topic and found some very interesting quotes regarding the subject. Scientific American probably states that "while a minority of scientists perform bad science, most do go through the process" (Scientific American, 3). Some magazines might go so far as to say "our scientists should be hailed for their rigor and attention to detail in these works" (Popular Science, XVIII). Some detractors have maybe said "hey, these scientists are free-loading off government cheese" (Popular Mechanics, 12XVII424CVV). I hypothesize that most scientists out there read what they write about. As you can see from my rock-solid sources, there is no disputing this fact.
  • Flawed logic (Score:4, Informative)

    by nucal ( 561664 ) on Saturday December 14, 2002 @02:39PM (#4887854)
    They found it had been cited in other papers 4300 times, with 196 citations containing misprints in the volume, page or year. But despite the fact that a billion different versions of erroneous reference are possible, they counted only 45. The most popular mistake appeared 78 times.

    Gee ... most scientists use a program (like Endnote) to format bibliographies, using data downloaded from a database (like PubMed). I suspect that this is more a deficiency in proofreading reference lists and assuming that databases are correct, rather than a lack of reading the original material. Whether people read articles carefully is another matter, of course.

    In fact, a blatant miscitation of a given reference would often get caught during the peer review process. This happened to me once when I rewrote part of a paper and forgot to remove one of the references that no longer applied ...

  • by DirtyJ ( 576100 )
    • Ideally the point both in science and in academics in general is to understand the ideas.
    True, and most scientists go into the profession with this in mind. I think that most hold to this ideal as much as they can throughout their careers, but they also have to face the reality that their job security (achieving tenure, for an academic) and funding are based in part on their publication count. That's counter to the ideal situation. We'd all like to think of scientists locked away in their labs, very nobly trying to understand the world and explain it to their fellow citizens. Scientists would like to think that what they do is that romantic, too. But then they forge through grad school, get a post-doc somewhere, and realize that they damn well better publish a bunch of papers, or their career is going nowhere. Then maybe they get lucky and find a tenure-track job somewhere, and suddenly they have a teaching load to worry about, plus ever-increasing committee work within their department and university, plus smaller, but still significant tasks, like refereeing their peers' papers. It's a lot of work. More hours of work in a week than the majority of the working force has to put in. Research gets squeezed thinner and thinner, and your time becomes more precious. Yet you're still expected to remain productive and publish a bunch. So the romantic ideal of the lone scientist exploring the mysteries of the universe with a complete focus only on the nobility of the pursuit of knowledge for the sake of knowledge is a little far-fetched.

    This, of course, does not justify in any way the falsification of work (which, I think, is extremely uncommon - it's just that we've recently heard about a particularly egregious case of this), nor does it justify propagating misinformation as a result of improper literature citation. I'm just pointing out that the ideal mentioned by the submitter is just that - an ideal.

  • It's not a new thing (Score:5, Interesting)

    by KjetilK ( 186133 ) <kjetil AT kjernsmo DOT net> on Saturday December 14, 2002 @02:40PM (#4887862) Homepage Journal
    Well, you've all heard that the geocentric world system was abandoned because it was required to add more and more epicycles for the model to fit observations, right? That's what many text-books say, that's what Thomas Kuhn says, and that's what Encyclopedia Britannica said up to recently.

    To support the view that observations got better and better, requiring more and more circles, you'll probably find most of these sources citing a book by J.L.E. Dreyer, written in the beginning of the previous century, but it exists in a few editions published later.

    But Dreyer says the opposite:

    [...] One looks in vain [in Alfonso's work] for any improvement over Ptolemy; on the contrary, the low state of astronomy in the Middle Ages is nowhere better illustrated.

    Basically, if these people had actually read Dreyer, we wouldn't have had to struggle with this myth any longer. Of course, there's a lot more to this story than this, but I don't have time to write it now... :-)

  • Peer review (Score:4, Interesting)

    by Eric Smith ( 4379 ) on Saturday December 14, 2002 @02:44PM (#4887883) Homepage Journal
    Doesn't this point to a failure of the peer review process? Aren't the reviewers bothering to check whether the references are relevant, and for the ones that are, whether the paper actually interprets and builds on the prior work in a reasonable manner?
  • You don't necessarily have to read them as long as you make sure to use well-respected, credible resources [improb.com]!
  • by SuicideKingOfHearts ( 267741 ) on Saturday December 14, 2002 @02:47PM (#4887893)
    I am probably one of the few people out there who has ever leafed through academic journals for fun. Still, those things are incredibly boring.

    The issue here is that people expect articles to have a certain shape, form, and style, including a literature review. And a lit review can be a pain. You don't want to read an article more than is required to get the basic gist of its relevance to your work. Sometimes, that can be done by reading just the abstract.

    The suggested rate of non-reading articles is also possibly overstated. That one has mis-cited a work does not necessarily mean that one has not read it. I can, for example, read an article ten years ago and remember the basic meaning I need to take out of it, and include it in my own references upon seeing it in the references of another's work without refreshing my knowledge of the work. Or I could just use another work's references as a reading checklist and not bother to correct it (or be unaware of the mistake if I sent a poor grad student or some other lackey to the library to copy the journal for me).

    I assume the full article by Simkin and Roychowdhury probably identifies the likely sources of commonly copied errors. I'm a tad curious to see whether the authors of those progenitor articles propagated their own mistakes in future articles or if they corrected them.

    While the article claims that "a billion different versions of erroneous reference are possible," in practice that may not be true. With the errors being in volume, page, or year, the most likely errors are transposition of two digits, deletion of a digit, insertion of a digit, or replacement of a digit. In the latter two cases, the error will most likely be the use of a neighboring number on the keyboard. A one is much less likely to be replaced by a nine than by a two. That is unlikely to lower the probable number of copied citations to below 50%, but it is still a possible source of error that may or may not be accounted for.
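    To put a rough number on that, here's a quick sketch (illustrative only - the keyboard-neighbor weighting is ignored, and the sample reference string is made up) that enumerates every string reachable from a volume/page/year triple by one such slip:

    ```python
    def one_keystroke_variants(s):
        """All strings reachable from s by one adjacent transposition,
        one deletion, one digit insertion, or one digit substitution."""
        digits = "0123456789"
        out = set()
        for i in range(len(s) - 1):              # transpose adjacent chars
            out.add(s[:i] + s[i+1] + s[i] + s[i+2:])
        for i in range(len(s)):                  # delete one char
            out.add(s[:i] + s[i+1:])
        for i in range(len(s) + 1):              # insert one digit
            for d in digits:
                out.add(s[:i] + d + s[i:])
        for i in range(len(s)):                  # substitute one digit
            for d in digits:
                out.add(s[:i] + d + s[i+1:])
        out.discard(s)                           # the correct string isn't an error
        return out

    variants = one_keystroke_variants("392 6673 1998")   # volume page year
    print(len(variants))
    ```

    For a 13-character reference string that comes to a couple of hundred plausible variants - nowhere near a billion - so two authors making the *same* misprint independently is less surprising than the paper's phrasing suggests.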

  • by Damned ( 33568 ) on Saturday December 14, 2002 @02:51PM (#4887920) Journal
    I wanted to take up the point in the article that many researchers are more interested in publishing than in solving the issues they investigate. I'm going to preface this by stating that I'm a psych. major and, as such, do not have much knowledge of the specifics of other fields, but I assume their requirements are similar.

    In university settings, it is all about how many papers you have published. When a professor is first accepted to the faculty of a university, he/she must "publish or perish" for the first 5(+[?]) years. If you do not publish often enough in those first years, you are not retained. Things get better after you get tenure; you are not required to publish as often. So, it should not come as too great a surprise if people are more interested in publishing than solving the issues.

    I personally think the requirements of universities should change so that we are not searching through a glut of papers, all saying many of the same things (or close enough). I am more concerned with the falsification of data, which totally throws everything off, than with a tendency to publish papers that don't necessarily solve the issues, which makes finding relevant research difficult but shouldn't substantially hurt the future of the field.
  • I can't speak for others, but I always read the papers I cite in mine. That's because I try to limit myself to citing papers that are actually relevant to what I'm talking about and have exerted some kind of influence on the contents of my paper. Now it's becoming clear to me why my papers always seem to have so many fewer references than the other papers I read.

  • Black people like fried chicken and watermelon, Italian men are as slutty as French women, and white men can't jump.

    Seriously, you should qualify your statements before you go creating new negative stereotypes. I've known my share of publish-aholics, but I've also known several scientists with deep personal integrity who only care about results.

    QUALIFY!
  • Duh! (Score:4, Insightful)

    by Starky ( 236203 ) on Saturday December 14, 2002 @02:59PM (#4887944)
    Of course most scientists don't read the entirety of the papers they cite. Is this news?


    Sometimes all someone wants is a certain result from a paper. Reading and understanding the full reasoning behind a result rather than the result itself may mean the difference between an afternoon of work and 3 weeks of work. Multiply that by the number of citations a paper has, and a hapless but well-meaning scientist would spend all their time digesting their citations rather than publishing papers and would soon be relieved of their position.


    Understanding the details behind cited results is certainly very important, but in the real world there are real tradeoffs that researchers constantly have to evaluate professionally regarding how much time they spend understanding and in how much detail they understand any given result.


    This posting is interesting, certainly, but it is not news.

  • > New Scientist has a good overview of the work.

    LOL. An obvious case of a submitter with ZERO sense of irony.

    "There's a new study which suggests scientists don't read the studies which they cite. ...Rather than read that study, why not glance at this news-edited abbreviation?"

    crib
  • Huh? (Score:3, Insightful)

    by Pedrito ( 94783 ) on Saturday December 14, 2002 @03:01PM (#4887961)
    ...most scientists don't read the papers they cite. This means that if one paper misreads a work the misreading propagates.

    If they're not reading the papers, why would it propagate?

  • With so many similar topics appearing all across the IP landscape, here's the trend I'm seeing:

    The simple capitalistic need to own and be given various forms of credit for ideas has taken precedence over the need to actually solve and understand problems.

    That's not to say that capitalism is at all bad, but this aspect of our modern version of it appears able to lead to eventual deadlock in societies' and individuals' ability to get anything done. Scientists need to work on something they can own, so many ignore many otherwise important topics. Inventors need to avoid anything in the commercial market, so many find their ability to improve things is greatly hampered. Writers and archivists must carefully avoid sometimes broad concepts that are claimed by powerful interests, so must limit their imagination as important ideas rot en masse.

    Completely new ideas are a powerful thing, and should of course be encouraged - but the ideas that are actually useful to people are not often completely new. Our encouragement of new ideas should not be at the cost of the very usefulness of ideas in general! Exploitation of ideas is the overall idea behind copyright and the like, but one does not have to own the very core concepts themselves to exploit the ideas - to own the core concepts themselves ends up exploiting people rather than exploiting ideas, keeping everyone else from being able to bring many new ideas to society. New ideas don't often just spring from nowhere - people have to be able to combine concepts, using existing ideas.

    Ryan Fenton
  • I have lost over 100 lbs on a high-fat, low-carb, moderate-protein diet, while at the same time achieving good blood-sugar control, lowering my cholesterol, improving my HDL/LDL, dropping my triglycerides from the high 300's to the teens, and numerous other health benefits.


    I've grown tired of hearing members of the so-called 'medical' profession lecture me on how 'risky' my 'high-protein' diet is (seems most doctors are functionally deaf and/or immune to learning anything at all from a non-doctor). I gotta wonder how much more 'risky' my MODERATE-protein diet is than being more than 100 lbs overweight. Seems doctors only read the conclusions of studies, and not the actual studies. I have come to the conclusion (based on my personal experience, and comparing notes with several dozen others in the same situation) that the typical 'research' paper follows these steps:

    1: Write down a conclusion

    2: Write a paper supporting that conclusion

    3: Do some 'research', carefully structured to support that conclusion

    4: Discount or discard any data that doesn't support that conclusion

    5: Get the paper reviewed by a group of associates that agree with your conclusion

    6: Publish the paper in some mutual-admiration society journal

    My favorite along these lines is one entitled "Type 2 Diabetics Benefit From Reducing Intake Of Animal Protein" [pslgroup.com]. If you read the summary very carefully, you will see that the 'researchers' removed the SUGAR from the diet, and then concluded, from the resulting health improvements, that animal protein causes type II diabetes. (!!) This is, unfortunately, typical of what passes for 'science' in the study of diet.

    • Hmm... it's hard to tell exactly what they did from this summary, and they certainly don't claim that animal protein causes type II diabetes. The author is quoted as explaining an indirect link between animal protein and a potential cause of insulin resistance. From the summary it doesn't sound like the author is making any extraordinary claims.

      Of course, we're arguing about a paper which neither of us has read... seems a bit amusing considering the topic of the post.

    • That is too funny...

      When I met my wife, she was a medical wreck. A sufferer of Type II diabetes since she was 8, her blood sugars were always off, she was underweight, listless, etc.

      Now my diet is one that makes most GP's cringe. I live off of cheesesteaks, hot dogs, etc. High fat, med carb, high protein. Add to that that I am very sensitive to Antibiotics, and I can not eat farm raised poultry and fish. So, it's red meat for me.

      Of course, my wife ended up adopting my diet over time, and her cholesterol is down, her blood sugars are normal, and she has achieved her target weight. Her doctor asked what she had been doing, and when she explained "Eating a lot of cheeseburgers" he refused to believe it.

      The problem with modern science is that it is based on too many antique fallacies. Much like the 10th-century monk who never thought about germs because GOD caused disease... when the devil didn't. Most of Einstein's later work was highly speculative musings on extra dimensions and trans-dimensional physics. Most of his conclusions cannot be verified, and those that can have shown anomalies.

      This however, does not mean that super-string theory isn't still the basis for most high level physics research. Indeed, disagreeing with super-string theory is enough to convince many universities that you don't belong in their program. Gee, I always thought it was religion that placed so much importance on blind faith.

      Cut the strings!
      Physics doesn't demand
      Any vibrating band
      Of string.

      I won't step in your noose
      I don't believe in your loops
      Of string.

      Demenchuk "Cut the strings" - From "A 5th dimension of Beethoven"

      ~Hammy
  • In college... (Score:2, Insightful)

    by shylock0 ( 561559 )
    So, honestly... How many of you, in college, fudged footnotes and works cited every once in a while? That totally doesn't make it right, and I'm certainly not advocating doing so, but generally speaking my professors never checked up on stuff like that -- and who can blame them, in a class of 100 or more they certainly don't have the time. But it does foster the same thing down the line, which might be what we're seeing here...
  • Patent and Copyright! The whipping boys of slashdot. Of course, not just because we have problems with these two things, but because they are partially to blame. Scientists, like corporations, are concerned about money. They don't keep their jobs if they don't patent or copyright anything. Publish or perish. So rather than continue researching/experimenting until they arrive at the truth they will just strive to create something patentable or copyrightable whether it works or not. Then wait for someone else to do a 5 year study on the effects of what they've made.

    1. Write paper and cite other papers I haven't read.
    2. Publish paper.
    3. Profit!

    Alternative?
    work for greater good of humanity
    and starve.

    What would you do?
  • this proves little (Score:2, Insightful)

    by ksteddom ( 177014 )
    The article in question was cited 4300 times. That is a lot. This would suggest it must be a fundamental paper for that particular field. How many times has the paper been discussed in classes, discussions, literature clubs, etc.? If so, the scientists are probably very well acquainted with the work, without having the paper on their desk while typing in the reference. You can easily grab another paper that cites the original, without digging through your file cabinet full of papers. Managing references can be a huge task. I have a small collection of papers, but even this is over 250. I know others with over 1,000. And yes, we have read every one of them. That doesn't guarantee I will pull the paper out of my drawer every time I cite it. Many others in this discussion have also mentioned inaccuracies in the databases. It does happen; if you don't agree, talk to your local inter-library loan. Contrary to the media's perspective, you should never believe what you read until you have tested it yourself or others have confirmed the work. Didn't Einstein say "Believe Nothing"?
  • Pervasive Problem (Score:4, Informative)

    by jefu ( 53450 ) on Saturday December 14, 2002 @03:11PM (#4888000) Homepage Journal
    This is a problem that saturates academia. You don't get tenure and promotions for teaching or even for doing good research. You get T. and P. for publishing and getting grants. It doesn't matter how bad the research is, it only matters that it gets published or the grant is awarded.

    Take a look at the ACM or IEEE and the number of journals they support, then toss in folks like Springer Verlag. Figure out how many articles are published in these each year. Just from counting you might determine that many of these are pretty meaningless. Try reading a few at random and see if you change your mind.

    Now remember that the folks on a tenure/promotion committee know nothing about what a researcher might do - they're even more ignorant of the research field of someone else than they are of their own. So, how do they determine how good a researcher might be? They're sure as hell not going to wade through yet another meaningless paper. It's simple. They count. How many publications? How many grants? How many citations from other papers to the researcher's papers?

    And it's an interesting feedback loop: even getting a publication or grant can depend on your publication and grant history. And if you suspect that someone might be reviewing your paper/proposal who works in the same area, you might want to make sure there are a couple of citations (always positive, naturally) of that person's work included.

    So, we know someone wants publications/grants/citations and they need p./g./c. to get p./g./c.. They do some research, it depends heavily on two or three other bits of research. But two or three citations aren't enough. So they might want to use the citations they find in the work they cite. OK. This citation looks good perhaps, but the original article isn't available in the local library and inter-library-loan will take a month to get it and the deadline is next week. Oh well. Cite away - the original author isn't likely to complain (after all this is another citation to his/her work).

    And so it goes.

  • by Jonathan ( 5011 ) on Saturday December 14, 2002 @03:13PM (#4888003) Homepage
    As someone who has written a number of scientific papers (and yes, sometimes, but not often, cited articles that I haven't read), I think there are a couple of reasons contributing to the problem:

    1) Cost of journals -- often there is an article that ought to be cited in your work (because it was published before yours, and is related), but is in a journal unavailable at your university's library. There are thousands of journals, and their high costs (often thousands of dollars a year each) means that no library can have them all. But why not simply ignore an article you haven't read? Read on.

    2) Pride of Reviewers -- When a scientific article is sent to a journal, it is passed on to several researchers who are doing similar work for peer review. While it would be nice to think that reviewers are not so petty, the fact is, if you haven't cited their work, they might get angry and reject the paper. So, authors feel that it is better to be safe than sorry and cite freely.
  • The Real Problem (Score:3, Interesting)

    by kldavis4 ( 585510 ) <kldavis4@g[ ]l.com ['mai' in gap]> on Saturday December 14, 2002 @03:17PM (#4888014)
    The real problem here is inherent in the academic system. Research faculty are in a situation where they are being judged by the number of papers they put out, and not on the quality or the potential of their work. This leads to unscrupulous individuals doing "whatever it takes" to get ahead.

    What needs to be done is to reform the way merit is assigned in academia. Research funding and tenure need to be allocated based not only on the quantity of publications but on other factors which may be harder to measure, factors that would be better indicators of the value of their research.

    A somewhat related issue is that more and more private sector funding is flowing into universities and along with that funding comes the expectation of a quick return on investment. This creates more pressure to pursue short-term goals with little long-term impact on the field of study.

    Taken together, US scientific research is destined to fall behind and stop making new breakthroughs. The only apparent solution to this is to increase the amount of public funding available for basic research. It would seem, though, that this is not likely to happen given the current regime in Washington. A more likely outcome will be that our scientific institutions will all be doing R&D for the big corporations in the near future.
  • by LothDaddy ( 169765 ) on Saturday December 14, 2002 @03:23PM (#4888034)
    Being a recent Ph.D., a current Post Doc., and a future Prof in Plant Pathology I understand this comment like few others:

    A lot of the ultimate problem is that many in research are concerned more about publishing than in solving the issues they investigate.

    The problem is that the higher-ups in the university system essentially mandate a certain number of peer reviewed publications for promotions, hell even to keep your job if you're not tenured. This, I feel, is part of the problem in that we're pushed so hard to get X number of publications per year. In a sense it's necessary to weed out the schmucks (anyone can get a Ph.D. nowadays), but it also can cause the quality of the research to decline. The whole quality vs. quantity argument.

    Just my $0.02.

  • Where does this guy get off? Everybody knows that technical people always read every piece of pertinent information available to them! Case in point: Slashdot's readership reads every article before they think about posting. I think I've proven my point.

    *removing tongue from cheek*... I hope everyone got that.
  • I'm just learning how to be an academic and I've already done this a couple of times. Deadlines were pressing and I was trying to focus on the information I needed for my papers. I read the relevant parts quickly a couple of times, but certainly not thoroughly. Unless I later thought that I truly didn't understand something I didn't bother to look more closely.

    Plus even if you read and really do understand the main part of a paper you might make a simple mistake like missing a key assumption in the introduction (which I don't imagine too many people pay much attention to) and then end up stating a result without that assumption, and then anybody who uses your paper is getting bad information. It probably doesn't happen often, but even just a few times could cause huge problems.

    It's probably excusable when a novice like myself doesn't look too closely. After all, we're still learning. Plus I know it's a bad habit and I'm trying not to continue it. However, if this is happening with more experienced academics, it is quite scary!
  • So [singerco.com] including [eslnetworld.com] information [retroweb.com] for someone else's [liu.edu] benefit, [salvationarmyusa.org] that would [woodmagazine.com] have to be [utah.gov] researched anyway [york.ac.uk] in order [dennys.com] to understand [wired.com] a subject, [powa.org] is unimportant [google.com] now, just [supremecourtus.gov] because I'm [illuminati.org] a busy man [subgenius.com]?

  • When I write a research paper, first I determine what evidence I need, Google for a paper that has that kind of info cited, copy/paste their footnote entry, and voila. It runs completely counter to any intellectually honest attempt to actually figure out what is correct, but it's sure as hell the most efficient way to get an A on a research paper. It would take 10 to 20 times more time to do a research paper if I had to actually read my sources, and NO ONE I know has that kind of time. The system encourages this kind of thing.
  • This is pretty ironic - I'm sitting here in my lab at Stanford writing up a computational biology paper as I'm reading this (I'm a graduate student) and I have to admit it's kinda true. I wanted to reference the ways people have converted evolutionary sequence conservation into probability matrices, and so I found a fairly recent paper that also wanted to reference that, and I more or less copied those references. I did examine the papers, but I certainly did not read them thoroughly. But I would say that I indeed have read the most important references dealing with the center of my work. So I would argue that most references in paper introductions are not thoroughly read, but anything referenced in 'methods' or 'results' sections is most likely well-read and understood by the authors. And yes, there is incredible pressure to publish in science - your graduate school career is more or less completely judged by your publication output. If you only have 1 (or 0) papers, people will wonder what you were doing and are less likely to give you the killer Bioinformatics job you're looking for. :)
  • Personally, and I'm totally serious about this, I'd blame it on the assignments we get in both high school and college wherein the teacher/professor, in a well-meaning attempt to indoctrinate us in the ways of the academic, says "You must include (5/10/30) citations in your final paper!" (And no more than X may come from whatever bad thing students are using... encyclopedias in my day, now the Internet.)

    Totally naturally, we go out, find 1.5*X citations, winnow out the obvious losers, and randomly cite them at the end of our papers, having read maybe one of them. Because we all know the teacher/prof doesn't have time to check even one of them from each of our papers, let alone check them all. How many of us have completely manufactured a citation from whole cloth for one of these things and totally gotten away with it? (I haven't myself, but I certainly thought about it; the only reason I didn't is it was generally easier to just go get likely looking citations on the Internet. Teacher never realizes you "used the Internet" if you cite paper journals....)

    Certainly you don't think this habit is going to go away just because they got a degree, when the stakes are even higher? Everybody else's six-page research papers have 40 citations at the end, if yours don't you'll stick out, and that's bad.

    It would probably be better to require that students cite as appropriate, and require at least a spot check of the citations for at least one random assignment at some point in a student's career.

    I'm writing something in my spare time that might in some sense be considered an academic paper, but I just use footnotes as appropriate. Citations are often overrated when they are used as a cover for "We've known this and endlessly debated this in the field for the past 50 years, but I can squeeze seven pointless, information-free citations out of this" sorts of things.

    Note I'm not saying that citations are unimportant or that they should be abolished; they are legitimately important and useful. I'm just saying that the stupid way they are handled in school has natural consequences in the resulting academics, and their value is unnecessarily diminished as a result.
  • The Hard Sciences (Score:3, Interesting)

    by starseeker ( 141897 ) on Saturday December 14, 2002 @03:49PM (#4888143) Homepage
    I think people forget that the Hard Sciences are made up of people, same as the social sciences, and also have the usual problems associated with using people to try to get stuff done. (Although I'm not sure I'd put not reading all of the papers you cite real high on the list - if all you're after is one point in a long and complex paper, that seems like a fairly inefficient use of time. Some of these papers are HARD to understand.)

    What gives the Hard Sciences the right to that title is that, eventually, someone will root out the bull that someone else has published, brand it as such, other people will check it and agree, and it dies. You can prove someone WRONG. Try that in the social sciences - has anyone ever heard of a huge scandal where someone faked results in the social sciences? They would get in trouble if they didn't do the studies and were found out, but can you prove that they cheated just by taking their conclusions, working with them, and crying foul when something doesn't work? In the Hard Sciences, you can. That's what makes them so strong and practical.

    Not that Social Sciences are worthless, mind you. It's just that BS seems to be a lot easier to get away with there. Sort of like in English class, when we were supposed to get the meaning out of a book. I never got the meaning the author was trying to convey (or at least what they say later he/she was trying to convey), but I wrote down something and got a good grade. Because how could they prove my thinking about the book wrong? I think the social sciences have a little of that problem in them somewhere. Controlled experiments are really tough to do, so you run into problems.
  • If science involved the mere writing of papers, it would be politics, and I would be worried about this.

    But, the Scientific Method is clearly not going anywhere, and reproducibility is a very strict standard.

    Now, I agree that there would be some difference between the experimental sciences and the evidential sciences (archaeology, etc...) in this regard, where the temptation to "promote" an idea is not as tempered by the fear of immediate embarrassment.

    However, certainly in experimental science, the lifespan of any unsupportable idea is inversely proportional to the degree of interest in that idea, which is the perfect governor.

  • Publishing is part of the process, not the result of the process.

    Universities, governments, and corporate science divisions have been paying for raw output without validating the quality of that output. The result is a vast sea of crap masquerading as the truth.

    How often is a scientist given the job of vetting another's work? So how often do you suppose it happens? And how much do you suppose it's worth to a scientist to participate in validating the truth, and how much to participate in publishing over validating?
  • the Humanities (Score:3, Insightful)

    by ferrous oxide ( 208279 ) on Saturday December 14, 2002 @04:22PM (#4888287)
    I'm a PhD student in Literature (I know...) and although there's definitely a bit of a problem in the Humanities with people not responding to others in a useful dialogue at times, and there is certainly the same "publish or perish" imperative, it is really a *huge* faux pas to not have read the entirety of the paper/book you cite. In my field, you can easily be discredited for your entire academic career for that sort of thing.

    Incidentally, it seems to me that the peer review process that exists in both the humanities and the sciences ought to catch these people who are completely misreading their source material. If neither the people writing the papers nor the reviewers are familiar with secondary materials, a real problem exists.
  • by AlecC ( 512609 ) <aleccawley@gmail.com> on Saturday December 14, 2002 @05:43PM (#4888617)
    The logic behind the article is very, very weak. The basis of the article is that misquotes in citations (wrong volume, page number, etc.) propagate from one paper to another. Which shows that the authors cut-and-pasted citations from earlier papers. Sure. But the researchers quoted claim that this means that the researchers didn't read the papers concerned. Rubbish.

    During the research stage of a project, you read the papers. Error in the citation - no sweat; you know authors and title, and a search engine will give it to you in nothing flat.

    Weeks or months later, it is writeup time. Open the first paper to cite it. And there are all the other references you followed (a little trouble in the lookup is long forgotten) and dutifully read. And - get this - it is easier to cut-and-paste the citation than to go back to the paper and assemble - separately - the publication, title, authors and page numbers.

    The only thing the research quoted proves is that papers are overwhelmingly circulated electronically and the dead-tree format is, for scientific papers, obsolete.
  • by Idarubicin ( 579475 ) on Saturday December 14, 2002 @08:42PM (#4889405) Journal
    The ridiculous and sensationalist New Scientist piece suggests that because there are errors in some footnotes, authors must (obviously) not be reading the papers that they cite.

    Yes, that is the tinfoil hat explanation.

    Now try this one: authors are human beings who make typos. They cut and paste erroneous references because they don't want to waste time retyping the reference. They read articles from the online versions of journals, and sometimes the citation info provided online is incorrect or altogether absent.

    One thing that does disgust me is the explosion in the number of footnotes associated with a typical academic paper these days. I recently submitted a paper with a not-particularly-important result to a not-very-important journal, and the paper had forty-one footnotes. (Most were added by my coauthor.) If you visit a mature university library, pull out a copy of an older periodical. Copies of Philosophical Transactions from the nineteenth century are a delight to read. I read a paper by Kelvin from (IIRC) 1807, and it had seven references. Seven!

    The growth of massive, searchable databases of papers (eg Medline) has led to many more footnotes per paper, and many more potential typos. For the record, the paper I mentioned above contained at least three errors in the footnotes that were noted and corrected by the journal publisher. Perhaps New Scientist should be writing a scathing expose on the decline of proofreading and rise of profligate namedropping in footnotes.

  • Lawyers don't either (Score:4, Interesting)

    by Black Copter Control ( 464012 ) <samuel-local@bcgre e n . com> on Saturday December 14, 2002 @09:57PM (#4889738) Homepage Journal
    I once represented myself in the BC Court of Appeals. The lawyer I was up against cited some authorities to support her case. I don't think that she expected me to actually read the authorities because, when I did, I found that the authorities, taken as a whole, supported my position more than they did hers.

    This wasn't a hick lawyer either.. She was senior partner in one of the largest law firms in BC, had a reputation for never losing a case, and became a judge a year or so later (Judgeship is more of a peer-review process in Canada than it appears to be in the US).

    This left me with a feeling that lawyers don't pay as much attention to their authorities as they could. Probably more so than scientists do with their citations.
