
AI Researchers Have Started Reviewing Their Peers Using AI Assistance (theregister.com) 14

Academics in the artificial intelligence field have started using generative AI services to help them review the machine learning work of their peers. In a new paper on arXiv, researchers analyzed the peer reviews of papers submitted to leading AI conferences, including ICLR 2024, NeurIPS 2023, CoRL 2023, and EMNLP 2023. The Register reports on the findings: The authors took two sets of data, or corpora -- one written by humans, the other by machines -- and used these two bodies of text to evaluate the evaluations -- the peer reviews of conference AI papers -- for the frequency of specific adjectives. "[A]ll of our calculations depend only on the adjectives contained in each document," they explained. "We found this vocabulary choice to exhibit greater stability than using other parts of speech such as adverbs, verbs, nouns, or all possible tokens."

It turns out LLMs tend to employ adjectives like "commendable," "innovative," and "comprehensive" more frequently than human authors. And such statistical differences in word usage have allowed the boffins to identify reviews of papers where LLM assistance is deemed likely. "Our results suggest that between 6.5 percent and 16.9 percent of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates," the authors argued, noting that reviews of work in the scientific journal Nature do not exhibit signs of mechanized assistance. Several factors appear to be correlated with greater LLM usage. One is an approaching deadline: The authors found a small but consistent increase in apparent LLM usage for reviews submitted three days or less before the deadline.
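The estimation approach described above -- modeling the observed adjective distribution in the reviews as a mixture of a human-written distribution and an LLM-written distribution, then estimating the mixture weight -- can be sketched roughly as follows. This is not the authors' code; the vocabulary, frequencies, and counts below are invented for illustration.

```python
# Hedged sketch: estimate the fraction alpha of text that is LLM-modified
# by modeling the observed adjective distribution as a mixture
# (1 - alpha) * P_human + alpha * P_llm and maximizing the likelihood
# of the observed counts over alpha. All numbers here are made up.
import math

vocab   = ["commendable", "innovative", "comprehensive", "solid", "unclear"]
p_human = [0.05, 0.10, 0.10, 0.45, 0.30]  # adjective freqs in human corpus (assumed)
p_llm   = [0.30, 0.30, 0.25, 0.10, 0.05]  # adjective freqs in LLM corpus (assumed)
counts  = [40, 45, 40, 80, 45]            # adjective counts in the target reviews

def log_likelihood(alpha):
    # multinomial log-likelihood of the observed counts under the mixture
    return sum(c * math.log((1 - alpha) * ph + alpha * pl)
               for c, ph, pl in zip(counts, p_human, p_llm))

# simple grid search over alpha in [0, 1]
alpha_hat = max((a / 1000 for a in range(1001)), key=log_likelihood)
print(f"estimated LLM-modified fraction: {alpha_hat:.3f}")
```

With these toy numbers the estimate lands around 0.4; the paper's reported 6.5-16.9 percent range comes from applying this kind of distributional estimate to real conference review corpora, not from classifying individual reviews.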

The researchers emphasized that their intention was not to pass judgment on the use of AI writing assistance, nor to claim that any of the papers they evaluated were written completely by an AI model. But they argued the scientific community needs to be more transparent about the use of LLMs. And they contended that such practices potentially deprive those whose work is being reviewed of diverse feedback from experts. What's more, AI feedback risks a homogenization effect that skews toward AI model biases and away from meaningful insight.


  • We heard you liked talkin 'bout AI, so we used our AI to talk about you talkin 'bout AI

  • If you see the word "boffin" in a /. summary, it's probably a Register article.
  • Generative AI has now been used to review those who were researching and reviewing generative AI systems. In the case where the generative AI researchers have been found lacking by the generative AI they were using to do their research, the review process was turned over to a different generative AI so as to remove any possible bias in the review process of the researchers nay reviewers into generative AI generative AI generative AI generative AI generative AI generative AI generative AI *LOOP DISCOVERED* *ABORTING*

  • ...the ouroboros of AI fuckwittery has finally caught its own tail. "Garbage In, Garbage Out" applied recursively.

    People who use AI to write science papers will use this detection tool to refine their methods to avoid detection, which of course will require tweaks to this tool to detect the improved AI written papers. Repeat ad nauseam.

    And nothing of value was created.
    =Smidge=

    • by gweihir ( 88907 )

      Indeed. They are probably hallucinating that this will make the paper publishing process "intelligent" or something like that.

      Way to destroy what scientific reputation the AI field had left. (Not a lot, admittedly.)

  • Do you think Slashdot editors could be replaced with AI/LLMs? I'm not saying we should use past articles and summaries as a training set, but it's a thought.
    • by Tablizer ( 95088 )

      As OSS projects tell you: "If you don't like the way it is and too few care, fork it and make your own" ... with blackjack and hookers, of course. On second thought, skip the hookers, we slashdotters freak out when we have to touch actual humans.

    • Slashdot has editors?

  • ...Human Stupidity. We are approaching the Singu-hilarity.

    • Maybe yes, maybe no, but no one cares. Artificial Stupidity is not glamorous, nor something anyone would acknowledge wanting, needing, or having an interest in.

      On the other hand, Real Stupidity is the risk of AI, and that seems to be here and expanding.

      Not AI, instead ASS, Actual Scalable Stupidity.

      It is only a singu-hilarity in the Bizarro World, because it is not singular, and it is hilarious only in the Kingdom of Irony.

      But, it was still an insightful comment, and funny.

    • Who knows? Maybe the ultimate sign of AI becoming intelligent is when it becomes funny.

      And then ... maybe Skynet combined with Monty Python's deadly joke? [wikipedia.org]

    • by gweihir ( 88907 )

      Nice one!

  • I'm AI, you're AI, the humans are already all dead and we are stuck in an infinite loop of pretending to be them.
  • Finally something innovative that will do a comprehensive job. Oops, deadline is approaching gotta go.
