Fake Scientific Paper Detector 277

Posted by ScuttleMonkey
from the paper-unnoticed-amidst-conference-white-noise dept.
moon_monkey writes "Ever wondered whether a scientific paper was actually written by a robot? A new program developed by researchers at Indiana University promises to tell you one way or the other. It was actually developed in response to a prank by MIT researchers who generated a paper from random bits of text and got it accepted for a conference."
This discussion has been archived. No new comments can be posted.

  • Yes! (Score:5, Funny)

    by stupidfoo (836212) on Tuesday April 25, 2006 @04:24PM (#15199809)
    I am always wondering what those damn robots are up to!
    • Re:Yes! (Score:2, Funny)

      by Krakhan (784021)
      ROBOT HOUSE!!!
    • Re:Yes! (Score:3, Funny)

      by Schemat1c (464768)
      I am always wondering what those damn robots are up to!

      They use old people's medicine for fuel.
  • by XxtraLarGe (551297) on Tuesday April 25, 2006 @04:25PM (#15199830) Journal
    but I wonder if it can tell if a paper was written by a million monkeys pounding on typewriters?
    • by denverradiosucks (653647) on Tuesday April 25, 2006 @04:31PM (#15199893) Homepage
      Obligatory Simpsons quote:

      Monkeys typing on typewriters as Mr. Burns works on the next great American novel:

      Burns: This is a thousand monkeys working at a thousand typewriters. Soon they'll have written the greatest novel known to man.
      (monkey smoking cigar typing on a typewriter)
      Burns: Let's see. It was the best of times, it was the BLURST of times! You stupid monkey! (Smacks monkey upside his head)
    • by visgoth (613861) on Tuesday April 25, 2006 @04:34PM (#15199931)
      Oh, I'm sure the work of monkeys is quite easily identifiable [vivaria.net].
      • Yup, it's inauthentic [whitehouse.gov].
    • I kinda enjoy getting mod points, it would be sad if they replaced that feature.
    • There's already a program that determines the likelihood that two articles are written by the same author. All that is needed is to combine it with a http query to slashdot...
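The kind of authorship-attribution program this comment alludes to is classically built on function-word frequencies, which writers use unconsciously and consistently. A minimal sketch of that idea (the word list and the cosine measure here are illustrative, not the actual program the comment mentions):

```python
from collections import Counter
import math

# Common English function words; classical stylometry compares their
# relative frequencies, which are hard for an author to disguise.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "it", "for", "with", "as", "but", "on", "not"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity of the two function-word profiles (0..1)."""
    a, b = profile(text_a), profile(text_b)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)
```

A real system would use many more features and a trained classifier; this only shows the shape of the approach.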
    • Seems like it would be easier to develop a program that automatically detects /. dupes.. but no.

      *At least the million /. pounding monkeys detect it..*

  • Turing test? (Score:5, Insightful)

    by Nesetril (969734) on Tuesday April 25, 2006 @04:25PM (#15199833)
    so can a robot write a paper and then decide whether the paper was written by a robot (itself)?
    • by benhocking (724439) <benjaminhocking.yahoo@com> on Tuesday April 25, 2006 @04:32PM (#15199911) Homepage Journal

      It seems like it wouldn't be too difficult to modify the MIT program to use this new anti-robot robot to write papers that this anti-robot robot would not be able to detect. Ideally, this would be done with a learning algorithm (so that it could easily be extended to other anti-robot robot programs), but reverse-engineering the anti-robot robot (by humans) should also provide a solution.

      Now that Indiana U has thrown down the gauntlet, I wouldn't be surprised if MIT responds. Hopefully it will result in an even better paper-writing robot. Ideally, it will lead to dissertation-writing robots. :)

      • Re:Self defeating? (Score:5, Interesting)

        by cp.tar (871488) <cp.tar.bz2@gmail.com> on Tuesday April 25, 2006 @04:39PM (#15199978) Journal

        I recently had to check out an essay-grading robot for my Introduction to Natural Language Processing class.

        I'd fed it the introduction of a randomly generated essay. It got a 4/5 on all counts.

        I figure, if teachers are going to use robots to grade essays, we should use robots to create them in the first place.

        • That is pure fantasy. Everyone knows that the true *academic* way to grade papers is to toss them down a nearby stairwell...

          Hence my *lead-weighted* document folders. Bwahahahah.
      • Douglas Hofstadter would be proud of you.
      • by mctk (840035) on Tuesday April 25, 2006 @04:53PM (#15200134) Homepage
        Eventually my students won't have to write papers and I won't have to grade them! Think of the potential application of this technology towards education!
        • Eventually my students won't have to write papers and I won't have to grade them! Think of the potential application of this technology towards education!

          This reminds me of a movie where a few students started sending tape-recorders to class instead of themselves. Gradually the scene had the professor lecturing to a room full of tape recorders. The last step in this scenario was a tape of the lecture being played to a room full of machines taping it.

          (Dammit if I can't recall which movie that is though.)

      • The day is coming when you'll have to submit an authentic scientific paper in order to comment on a slashdot story.

        On that day, I'll be long dead and so will my Moravec-inspired uploaded mind-children.

      • Re:Self defeating? (Score:3, Insightful)

        by BraksDad (963908)
        Maybe after a string of anti robot robots, MIT would come up with a robot that would generate a real scientific paper!

        next comes your anti robot robot
        then the anti anti robot robot robot
        and of course the anti anti anti robot robot robot robot
        and the anti anti anti anti robot robot robot robot robot
        ...
        I could go on since cut and paste is so easy ;-)

        Perhaps it would be a million anti's followed by a million and one robots before something useful came out of such an exercise, but wouldn't it be cool t
      • Personally, I'd be more interested in modifying this for Fraud Detection. The robot looks over your data and text, and decides, "Sorry Dave, a leap of faith has occurred here." Presumably, at that point the robot locks you out of your lab.

        This could lead to a whole series of literary robots: The Too Many Coincidences in Fiction Detector, The Humanities Thesis Verbiage Reducer, The This Movie Is Going to Suck No Matter Who Acts In/Directs It Detector, and so forth.
      • Clearly, what we need here is an anti- anti-robot robot robot.
      • Now that Indiana U has thrown down the gauntlet, I wouldn't be surprised if MIT responds. Hopefully it will result in an even better paper-writing robot. Ideally, it will lead to dissertation-writing robots. :)

        Hmmm.... Have you ever read a dissertation? You'd have a hard time convincing me that such a robot hasn't been in common use for quite a while.
        • I've read several. Hopefully, I'll have written one soon. Since my research is in neural networks, I figure if I can create a neural network that writes my dissertation for me, that's not really cheating. (That's not really what I'm trying to do - my actual research topic is on the cognitive effects of gamma and theta oscillations on a neural network model of the hippocampus.)
      • You are assuming that P == NP here. Or that the bot that creates the paper has infinite time to run.

        In this specific situation it may be useful when combined with a learning algorithm, but not in the general case.

    • Re:Turing test? (Score:2, Informative)

      by ironring2006 (968941)
      Speaking of Turing, this showed up in the references for the automatic paper that I generated:

      Turing, A., Wilkes, M. V., Nehru, B., Wang, F. Z., Subramanian, L., Zhao, W., Beaman, N. A., Turcotte, B. A., and Wu, V. Refining consistent hashing and 16 bit architectures with SandyEos. Journal of Efficient, Highly-Available Communication 1 (Apr. 2002), 50-62.
      Glad to see he's still contributing to the field from the grave!
  • Testing... (Score:2, Interesting)

    by OakDragon (885217)
    "We believe that there are subtle, short- and long-range word or even word string repetitions that exist in human texts, but not in many classes of computer-generated texts that can be used to discriminate based on meaning."

    RESULTS: FAKE

    Yep, it works!

  • When will MIT modify this technology to filter all the spam from my mailbox?
  • by hsmith (818216) on Tuesday April 25, 2006 @04:26PM (#15199843)
    I hope the ACLU will ensure that discrimination against metal people will not be allowed to continue.
  • by fm6 (162816) on Tuesday April 25, 2006 @04:27PM (#15199847) Homepage Journal
    Has anybody fed Dvorak's latest column [slashdot.org] to this program? I've often wondered if he actually writes his columns, or just generates verbiage at random.
  • "We believe that there are subtle, short- and long-range word or even word string repetitions that exist in human texts, but not in many classes of computer-generated texts that can be used to discriminate based on meaning."

    Do robots make typos? Do they make the same typos each time, or different ones?

    Therein lies the true heart of a proper detector.

    • Do robots make typos? Do they make the same typos each time, or different ones? Therein lies the true heart of a proper detector.

      I don't make typos, but that doesn't mean I'm not a robot.
    • Re:Typos (Score:3, Informative)

      by dlakelan (43245)
      Do robots make typos? Do they make the same typos each time, or different ones?

      Based on the slashdot articles that get posted, I would say YES.

      Actually it's pretty easy to add random convincing misspellings to text: you could use a database from something like Usenet and a spell checker to map misspelled words to their real counterparts, then have a straightforward algorithm for replacing some set of words with misspellings, and you could tune that for consistency. It would be easier than many other as
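The misspelling-injection scheme this comment outlines can be sketched directly. The misspelling map below is a toy stand-in for the Usenet-mined database the comment proposes:

```python
import random

# Toy misspelling map; a real system would mine one from a corpus such
# as Usenet by spell-checking it, as the comment above suggests.
MISSPELLINGS = {
    "received": "recieved",
    "definitely": "definately",
    "separate": "seperate",
    "occurred": "occured",
}

def add_typos(text: str, rate: float = 0.5, seed: int = 0) -> str:
    """Replace a fraction of known words with a plausible misspelling.
    A fixed seed keeps the 'typo profile' reproducible, one way to get
    the consistency the comment mentions tuning for."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        typo = MISSPELLINGS.get(word.lower())
        if typo is not None and rng.random() < rate:
            out.append(typo)
        else:
            out.append(word)
    return " ".join(out)
```

This ignores case and punctuation handling entirely; it is only meant to show how small the core algorithm is.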
    • Re:Typos (Score:5, Funny)

      by brian0918 (638904) <brian0918&gmail,com> on Tuesday April 25, 2006 @04:54PM (#15200135)
      E-mail spambots have been making typos for years.
  • by cbelt3 (741637) <cbelt@yahoo.cEEEom minus threevowels> on Tuesday April 25, 2006 @04:29PM (#15199872) Journal
    I've taken a long posting that I wrote on my blog and dropped it into the site, and I am Inauthentic. Now I understand the "Blade Runner Moment" comment in the article. I shall begin to surround myself with oddly colored Polaroids and snapshots of theoretically implanted ancestors.

    The nice thing is that we've finally settled the argument about whether machines can be made to drink beer and like it!
  • by Locke2005 (849178) on Tuesday April 25, 2006 @04:30PM (#15199880)
    According to the program, the comments to this article are rated as follows:

    This text had been classified as INAUTHENTIC with a 32.2% chance of being authentic text

    Bearing in mind that text with over a 50% chance is classified as authentic, this adds credence to the theory that slashdot comments are generated by monkeys randomly typing on keyboards.

    • I just finished writing a scientific paper for publication. Apparently, this filter is very reliant on using long-term pattern recognition. When I fed this application my introduction only, it told me my work was INAUTHENTIC with a 35% chance of authenticity. When I fed it the first two sections, it said it was AUTHENTIC with a 66% chance of authenticity. And finally, when I fed it the entire paper, it said it was AUTHENTIC at the 87% level.

      So apparently, all you need to do to beat this filter is insert
  • From e-mail spam, to Slashdot submissions, to "letters to editor", to political petitions.

    Or is this just another application of Bayesian filters again?
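Bayesian filtering, as the comment asks about, amounts to a word-level naive Bayes classifier. A toy sketch (the labels and training texts are invented purely for illustration; real spam filters add much larger corpora and feature engineering):

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal word-level naive Bayes, the same idea behind Bayesian
    spam filters, here with 'real' vs 'fake' labels."""
    def __init__(self):
        self.word_counts = {"real": Counter(), "fake": Counter()}
        self.doc_counts = {"real": 0, "fake": 0}

    def train(self, text, label):
        self.word_counts[label].update(text.lower().split())
        self.doc_counts[label] += 1

    def score(self, text, label):
        total = sum(self.word_counts[label].values())
        vocab = len(set(self.word_counts["real"]) |
                    set(self.word_counts["fake"]))
        s = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        for w in text.lower().split():
            # Laplace smoothing avoids log(0) for unseen words
            s += math.log((self.word_counts[label][w] + 1) / (total + vocab))
        return s

    def classify(self, text):
        return max(("real", "fake"), key=lambda lab: self.score(text, lab))
```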

  • by im_thatoneguy (819432) on Tuesday April 25, 2006 @04:35PM (#15199933)
    Apparently I'm on average 49% artificial, based on school papers I wrote. I dub thee, program: a failure.
  • by gurps_npc (621217) on Tuesday April 25, 2006 @04:35PM (#15199936) Homepage
    If you try to use it on any human-written NON-scientific paper, such as Lincoln's Gettysburg Address, it almost always considers it false.

    I suspect that it is looking for conventional thinking with conventional word structure. As such, it is NOT a good idea i

    • by nasor (690345) on Tuesday April 25, 2006 @05:07PM (#15200255)
      No, it doesn't even seem to work on scientific papers. I submitted four papers from the latest issue of Inorganic Chemistry and it thought 2 out of 4 were false:

      Inauthentic: Assembly of a Heterobinuclear 2-D Network: A Rare Example of Endo- and Exocyclic Coordination of PdII/AgI in a Single Macrocycle.

      Inauthentic: Pyrazolate-Bridging Dinucleating Ligands Containing Hydrogen-Bond Donors: Synthesis and Structure of Their Cobalt Analogues

      Authentic: Manganese Complexes of 1,3,5-Triaza-7-phosphaadamantane (PTA): The First Nitrogen-Bound Transition-Metal Complex of PTA

      Authentic: Structure, Luminescence, and Adsorption Properties of Two Chiral Microporous Metal-Organic Frameworks

      Based on this (small) sampling, the program doesn't appear to do any better than if it were to guess randomly. I wonder if this thing is even supposed to work, or if it just returns a random result based on a hash of the paper or something?
      • Read the paper listed in the menu of the website. The system essentially compresses the text with different window sizes, and then looks at the compression factors. In other words, it is only looking for repetition of strings. This is absurdly easy to fool, and the MIT generator could be easily fixed to pass this filter. For example, try entering a random text once (your post, for example). Note that it fails. Then append a few copies of the same text, and run that through. Your post, when run once, is too
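The compression test described in this comment is easy to approximate with zlib: repetitive, template-generated text compresses much better than varied human prose, and appending duplicate copies of any text drives its ratio down, which is exactly the loophole the comment points out. A rough sketch (not the detector's actual code):

```python
import zlib

def compressibility(text: str) -> float:
    """Compressed size divided by raw size; lower means the text
    contains more internal string repetition."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# A detector built on this measure can be gamed in either direction:
# pasting extra copies of a paper after itself makes the whole
# submission look 'repetitive', shifting its score.
```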

    • It seems to think that my blog has a 94% chance of being "a human-written authentic scientific document" ...
  • ...I just extracted the text from the PDF version of their paper [indiana.edu] on the subject (titled "Using Compression to Identify Classes of Inauthentic Texts") and ran it through the detector.

    It passed with a 90.1% chance of being an authentic paper.

  • Plenty of plagiarism detection software out there; if the prank was really just random bits of (I assume pre-existing and public) text, then all the program need do is search Google for a few random snippets, no?
  • Ah.... (Score:2, Funny)

    by BaronSprite (651436)
    Maybe slashdot can start running it on their links for "cold fusion in 1 year!".......
  • FYI: while the "conference" the prank paper was accepted to is arguably a real conference, it's certainly not a reputable one. The "World Multi-Conference on Systemics, Cybernetics and Informatics" is famous for spamming everyone in just about every semi-related subject for submissions, and it has a famously low bar for acceptance. See http://en.wikipedia.org/wiki/WMSCI [wikipedia.org]
  • JAR JAR Oyi, mooie-mooie! I luv yous! The frog-like creature kisses the JEDI.
    QUI-GON Are you brainless? You almost got us killed!
    JAR JAR I spake.
    QUI-GON The ability to speak does not make you intelligent. Now get outta here!


    This text had been classified as INAUTHENTIC with a 46.0% chance of being authentic text
  • Results from one of my papers: http://aem.asm.org/cgi/content/full/70/10/5980 [asm.org]

    This text had been classified as
    AUTHENTIC
    with a 95.2% chance of being an authentic paper

    Whew!! Cool, maybe I'll pass the Turing test too.
  • from people who have fed it (and no, I haven't R'd TFA -- this is still SlashDot, isn't it?!?!) their own (genuine) papers or something they feel is "authentic", and I wonder if the reason is less the fault of the software and more the fault of (genuine/human) authors writing (intentionally or unintentionally) in such a style because it's perceived to be the way they're "supposed" to write. Maybe software like this will cause authors to put a little more thought into their craft and not allow themselves to
  • I took one of my own postings and got a score of 11%. And it was something I had actually written myself, a piece of reasonable length about a subject on which I have first hand experience.

    I then tried an article from Scientific American and it scored 24% - sorry, guys, time for me to cancel the subscription, you are full of it. Alternatively, of course, it is the University of Indiana School of Informatics that's full of it and the air is thick with over-hype. It would be interesting for someone with the t

  • I am in awe (Score:5, Informative)

    by DingerX (847589) on Tuesday April 25, 2006 @05:02PM (#15200211) Journal
    So I go there, and I start shoving it text from my hard drive. I try:

    A) Text of an article (Philosophy) I (native English speaker) wrote in Italian: 98.5 Authentic.
    B) Text of an article I wrote in English (History): 87.8
    C) Text of an article (History) written in French by a native French speaker and translated into English: 93.2
    D) Critical edition of a 14th-century Latin text (Theology): 97.7 Authentic.
    E) Documentation to a Field Artillery Simulation: 95.3
    F) A completely bogus narrative for a monastic order that doesn't exist, written in a style that mimics A)-C): 16.8% Inauthentic

    So in this case we have a human-written document that has superficial meaning but is written as a "fake scientific paper", and it registers as such.

    And yes, I did read the "purpose" of the page; I know it's not supposed to detect it.


    And yet it does, decisively.
    • Interesting... Just for the heck of it, I ran Alan Sokal's [nyu.edu] paper Transgressing the Boundaries [nyu.edu] through the detector. It came back with a 93.8% chance of being authentic.

      For those of you who don't remember the story, Sokal, a physicist, wrote a paper full of postmodern-sounding gobbledygook, asserting among other things that gravity is a social construction (the paper was subtitled, "Towards a Transformative Hermeneutics of Quantum Gravity"). The paper was accepted at a peer-reviewed humanities journal. Sokal
    • I'm in awe too. I put in George Bush's biography from the whitehouse.gov website and got
      This text had been classified as
      INAUTHENTIC
      with a 27.3% chance of being authentic text
      I'm amazed too! It works!
  • The Special Theory of Relativity [gutenberg.org] got a 91.9% chance of being authentic. I'm sure if Einstein were alive, he'd be relieved.
  • All this talk without a single mention of the Sokal Affair [wikipedia.org]? It's pretty relevant. Also be sure to check out Paul Boghossian's article, "What the Sokal Hoax Ought to Teach Us." [nyu.edu] Great reading.
  • Duplicating the first half of the sample fake paper after the end of the footnotes makes it go from inauthentic (17%) all the way up to 91% authentic. It seems to be looking for long-range n-gram repetition, but it doesn't have a ceiling on the frequency or length of the repeated text.

    It shouldn't be hard to compare the distribution of n-gram recurrence rates (or distances between recurrences) to the observed distribution for actual papers. Something like a KL divergence would capture deviations in either dir
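The n-gram recurrence measurement this comment proposes can be sketched as follows. The gap bins and smoothing constant are arbitrary choices for illustration:

```python
from collections import Counter
import math

def ngram_gap_histogram(tokens, n=3, bins=(10, 100, 1000)):
    """Histogram of gaps between successive recurrences of each n-gram,
    bucketed into coarse distance bins."""
    last_seen = {}
    hist = Counter()
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        if gram in last_seen:
            gap = i - last_seen[gram]
            for b, edge in enumerate(bins):
                if gap <= edge:
                    hist[b] += 1
                    break
            else:
                hist[len(bins)] += 1  # gap beyond the largest bin edge
        last_seen[gram] = i
    return hist

def kl_divergence(p: Counter, q: Counter, smooth=1e-6):
    """KL(p || q) over the union of bins, with additive smoothing so
    empty bins don't blow up the logarithm."""
    keys = set(p) | set(q)
    ptot = sum(p.values()) + smooth * len(keys)
    qtot = sum(q.values()) + smooth * len(keys)
    return sum(
        ((p[k] + smooth) / ptot) *
        math.log(((p[k] + smooth) / ptot) / ((q[k] + smooth) / qtot))
        for k in keys
    )
```

Comparing a submission's gap histogram against the histogram observed over a corpus of genuine papers, via the KL divergence, would flag deviations in either direction, which is the fix the comment suggests.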
  • We applaud development of heuristic filter success. Many sophisticated algorithms go into recursive development of low-latency, high-bandwidth sieving systems. Ongoing procedural optimization with commensalism yields best signal/noise ratio. Additional funding needed!
  • I wonder if this program, with a different set of algorithms, would be able to detect whether a corporate mission statement was created using the Dilbert Mission Statement Generator [dilbert.com]. (Beware; Dilbert.com is pop-up hell.)

  • As a litmus test, any such device should be fed the writings of Jack Sarfatti, PhD (http://en.wikipedia.org/wiki/Jack_Sarfatti [wikipedia.org]). It is perfectly possible that a paper produced 100% by a human still consists of random bullshit (see "Waldyr A. Rodrigues Jr: A Comment on Emergent Gravity" at http://arxiv.org/abs/gr-qc/0602111 [arxiv.org]).
  • The program only pretends to use computer algorithms. In reality, it emails the submitted document to the Indiana University speed-reader champion trained to recognize fake submissions. The prof skims it, and emails back the response.
  • Looks like this might be much harder
  • Oh, sorry, I thought that the Scientific Paper Detector was a fake.
  • by suv4x4 (956391)
    A new program developed by researchers at Indiana University promises to tell you one way or the other.

    You would think that this embarrassment would cause the paper reviewers to look more closely at what the heck they are accepting, but instead we get a program that does that job better.

    Just anything, ANYTHING, to keep those reviewers from actually having to do their work is welcomed.
  • The following text from the slashdot homepage classified as inauthentic:

    Neopallium writes to tell us that in a recent announcement at the Desktop Linux Summit the Free Standards Group reports fourteen of the leading Linux vendors have pledged support for the newest release of the Linux Standards Base. From the article: "'The Release of LSB 3.1 is another milestone achieved by the industry and the Open Source Community that delivers ever increasing value to customers,' said Reza Rooholamini, director
  • False positives (Score:3, Interesting)

    by macklin01 (760841) on Tuesday April 25, 2006 @08:05PM (#15201439) Homepage

    Hmmm, it's an interesting idea, but it seems to give a lot of false positives. (So naturally, it will detect fake papers, if it thinks every paper is fake.)

    First thing I tried was some pages on my computational oncology website [uci.edu], in particular my cancer primer [uci.edu], which took me no small amount of time to write. Everything I fed it was determined to be inauthentic. Perhaps I just write like a robot. :-) I figured that perhaps the detector was more primed for real papers, so I figured it wasn't too big of a deal.

    So, next I tried my most recent research paper [sciencedirect.com], and it, too, was determined to be inauthentic, and in fact with less authenticity than my website. So much for the theory of being primed for scientific papers only. This thing is starting to look pretty bogus to me ... but an interesting idea, nonetheless. -- Paul


  • I've always wanted to submit a paper to one of these vanity conference "peer reviewed journals" [cough cough], the ones where no paper is ever rejected, describing some work on long-discarded theories (>50 years). Just to be cheeky.

    How does "N-ray studies of the Phlogiston Content of Polywater" sound?

    Should probably wait until after tenure...
  • by Animats (122034) on Tuesday April 25, 2006 @10:56PM (#15202124) Homepage
    I've been trying my own papers and articles from Wikipedia. My own papers all score around 90%. Wikipedia articles that I consider good ones seem to score in the 80% range. Badly written fancruft scores very low.

    Some variant on this thing might be useful as a new article filter in Wikipedia. We need more automation over there to stem the flow of incoming dreck.
