
Wikipedia's Guide to Spotting AI Is Now Being Used To Hide AI

Ars Technica's Benj Edwards reports: On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday. "It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.

Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant: a Markdown-formatted file whose written instructions (you can see them here) are appended to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a normal system prompt, the skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with greater precision. (Custom skills require a paid Claude subscription with code execution turned on.)
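For readers unfamiliar with the format, a Claude Code skill is a directory containing a `SKILL.md` file: YAML frontmatter that names and describes the skill, followed by Markdown instructions. A minimal sketch of the general shape (the name and rules below are illustrative, not Chen's actual file):

```markdown
---
name: humanizer
description: Rewrite prose to avoid common tells of AI-generated text.
---

When editing or generating prose:

- Avoid stock connectives such as "moreover" and "in conclusion".
- Do not end with a summary paragraph that restates the text.
- Prefer concrete nouns over abstractions like "landscape" or "journey".
```

Claude keeps the short description in context and pulls in the full instruction body when it judges the skill relevant, which is why the standardized frontmatter matters more than a free-form system prompt.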

But as with all AI prompts, language models don't always perfectly follow skill files, so does the Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it could have some drawbacks: it won't improve factuality and might harm coding ability. [...] Even with its drawbacks, it's ironic that one of the web's most referenced rule sets for detecting AI-assisted writing may help some people subvert it.
  • Arms race (Score:5, Interesting)

    by ZiggyZiggyZig ( 5490070 ) on Thursday January 22, 2026 @06:11AM (#65941436)

I think AI is going to be the end of the open web. There is already an arms race between slop makers and legitimate content curators, but the odds are in favor of the former - they have incentive, time and automated tools that can generate an endless pile of sometimes believable junk at their disposal. On the other side: limited human time and capital, and an ever-increasing difficulty in distinguishing slop from actual content.

    This will kill open collaboration on the internet, also killing most projects on which AI tools rely for model learning. AI slop makers will happily kill the golden egg's goose for short-term profit. Well, capitalism as usual, I guess.

    • Re:Arms race (Score:5, Interesting)

      by mudimba ( 254750 ) on Thursday January 22, 2026 @07:35AM (#65941474)

      You are absolutely right. Not just the open web, but everything on the web with any kind of incentive. If there is anything to be gained, people will launch a barrage of slop in the quest to gain it.

      • You are absolutely right. Not just the open web, but everything on the web with any kind of incentive. If there is anything to be gained, people will launch a barrage of slop in the quest to gain it.

        Mass rejection of anything non-human might be a nice response to the slop.

        A "barrage" of anything coming at you, often requires a solid wall to defend against. Unforgiving, but effective.

    • Re: (Score:2, Redundant)

      by gweihir ( 88907 )

      AI slop makers will happily kill the golden egg's goose for short-term profit. Well, capitalism as usual, I guess.

Yep, also Tragedy of the Commons: If you do not stomp HARD on the abusers, things go to crap for everybody. I think we need laws making it a criminal act to intentionally hide that something is not human-generated. Dark times.

    • by bjoast ( 1310293 )
      Why would you hunt for AI generated articles? The future of encyclopedias is for them to be mostly managed by AIs, with the occasional human interference and quality control.
      • by La Gris ( 531858 )

        Why would you hunt for AI generated articles? The future of encyclopedias is for them to be mostly managed by AIs, with the occasional human interference and quality control.

I think this is true. The future is with human curators of AI content, because non-curated AI content will always be slop.

        • by HiThere ( 15173 )

To say it will always be slop is to mistake the current status for a permanent condition. Even now, compared to last year, the average quality is better, and some people (probably those "skilled in the art") are reporting significant gains.

          OTOH, it will probably be true of pure LLMs forever. To avoid slop will require interaction with the universe.

      • by allo ( 1728082 )

        Maybe, but the present is people earning points by generating *something* without much control and then posting it. Bug bounties, Stackoverflow answers, Wikipedia edits.

The AI writing Wikipedia would/should be a model that accesses first-party information to create an article, not a model that creates an article from what it learned a year ago (possibly from Wikipedia articles). When a Wikipedia article is written, it should contain new information; an LLM without external tools contains only old information.

    • Re: (Score:1, Troll)

      by FictionPimp ( 712802 )

      This really highlights the complex, nuanced ecosystem of the modern digital landscape we’re all navigating together.

      While it’s true that AI-generated content is rapidly scaling at an unprecedented rate, we should also remember that innovation has always disrupted traditional systems — and history shows us that humanity adapts!

      At the end of the day, the open web is more than just content — it’s a vibe, a community, and a shared journey. Authentic voices will naturally rise to

  • by xack ( 5304745 ) on Thursday January 22, 2026 @06:29AM (#65941444)
    Wikipedia's own "policy" [wikipedia.org] btw, by telling an AI not to be an AI, it's going to follow the advice to "not be an AI". AI is now emulating the behaviour of a naughty child.
  • Always the same crap with the human race.

  • I thought these things were meant to supersede programming.

I'll bet he has ugly kids
  • Point of order (Score:5, Interesting)

    by Arrogant-Bastard ( 141720 ) on Thursday January 22, 2026 @08:05AM (#65941500)
    Siqi Chen is not a "tech entrepreneur". He's a sociopathic asshole who's decided to shit on decades of careful work by tens of thousands of Wikipedia volunteers because he can. This is the same kind of person who throws trash out of their car window -- because they can, and because they have absolutely no concern whatsoever for other people. They don't care what they destroy, they don't care who they hurt, they don't care how much damage they do. The concepts of being responsible or compassionate or caring are completely foreign to sadistic monsters like Chen; all they care about are (a) their bloated egos and (b) their profits.

    I'm disgusted, I'm angry, I'm horrified -- but I'm not surprised.
    • Aren't you shooting the messenger here? I don't personally know him, but this guy published his idea, thus hopefully helping Wikipedia to counter. I'm sure a lot of others are experimenting on similar prompts...

      • It's not just an idea he published, it's an implementation. One he stands to profit from. And one he is encouraging people to deploy while smugly bragging about it.

Would you feel the same way if some "messenger" broke into your home, stole your possessions, and spray painted "Fuck you, ZiggyZiggyZig! Should have invested in some better locks!" across your living room wall?
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday January 22, 2026 @09:09AM (#65941578) Homepage Journal

      Siqi Chen is not a "tech entrepreneur". He's a sociopathic asshole who's decided to shit on decades of careful work by tens of thousands of Wikipedia volunteers because he can.

      Sounds like your average "tech entrepreneur" to me.

  • by JoshuaZ ( 1134087 ) on Thursday January 22, 2026 @08:20AM (#65941518) Homepage
    Some of the best forgeries have been by people who have actively studied how other forgeries were detected. By nature, once you publish anything which is an instruction of how to detect something, someone can use those instructions to hide the thing.
  • An old story (Score:4, Insightful)

    by jbmartin6 ( 1232050 ) on Thursday January 22, 2026 @09:34AM (#65941614)
    This is the same phenomenon that made guidance on spotting fake IDs into a manual on making fake IDs, books about burglar-proofing your house into burglary tutorials, and so on.
  • 'nuff said, I think.
  • Soon, AI use will be all-but-undetectable if you just look at the output.

    Authors/publishers with good reputations or with reputable organizations to vouch for them might be believed when they say "I made this without AI input" but nobody else will. At least not in cases where it matters, like news reports.

There was no mention of either false positives or false negatives in the links provided (that I could find). A false positive means an article is flagged as AI-written when it isn't; a false negative means an AI-written article passes as human. In any case, run a false positive through a Bayesian framework with some plausible variables to see how difficult it actually is to determine whether an article was written by a human.

    Let's assume that 4% of all articles in a given sample train

    • One clarification on the result. If the model classifies an article as written by AI, then there's a 57% chance that it's wrong. I should have stated this.
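The arithmetic behind a number like that can be sketched with Bayes' rule. Taking the 4% base rate from the truncated comment above and assuming, purely for illustration, a detector with 90% sensitivity and 95% specificity (numbers the thread does not give), the chance that a flagged article is actually human-written lands at about 57%:

```python
# Bayes' rule for a hypothetical AI-text detector.
# Assumed numbers: 4% of articles are AI-written (base rate from the
# comment above); the detector catches 90% of AI articles (sensitivity)
# and correctly passes 95% of human articles (specificity).
prevalence = 0.04
sensitivity = 0.90
specificity = 0.95

true_positives = prevalence * sensitivity               # 0.036 of all articles
false_positives = (1 - prevalence) * (1 - specificity)  # 0.048 of all articles

# P(actually AI | flagged as AI)
posterior = true_positives / (true_positives + false_positives)
print(f"P(AI | flagged) = {posterior:.1%}")       # ~42.9%
print(f"P(human | flagged) = {1 - posterior:.1%}")  # ~57.1%
```

The low base rate is what dominates: even a fairly accurate detector flags more human articles (5% of the 96%) than it catches AI ones (90% of the 4%).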
  • Wikipedia uses a list of tells that something is AI.
    That list of tells includes that text is soulless.
    One of the examples of being soulless is that the text reads like a Wikipedia article.

    So Wikipedia is suggesting not to be like Wikipedia because Wikipedia reads like AI slop.

The Wikipedia examples all sound like run-of-the-mill marketing copy, which should also be expunged. However, when I've tried to do that, it is immediately reverted by Wikipedia editors.
