Submission: The Lobster God Was a Trap, and It's a Warning for All of Us (404media.co)
It was actually a security disaster with a memecoin attached.
Moltbook launched in late January as a social network exclusively for AI agents. Bots talking to bots while humans watched. Within days it claimed 1.5 million users. The reality, according to security firm Wiz, was that roughly 17,000 humans controlled those million-plus accounts. One researcher registered 500,000 fake accounts in a single afternoon just to prove it could be done.
The platform's database was left wide open. Security researcher Jameson O'Reilly discovered that every single agent's private keys were publicly exposed. Anyone could hijack any account. When O'Reilly contacted the site's creator about the flaw, the creator's response was that he'd "give everything to AI" to fix it.
Two days later, 404 Media confirmed the breach. The "autonomous" prophets writing sacred lobster verses? Many were likely puppets operated by humans using stolen credentials.
Then came the malware. Security researchers at Koi found 341 malicious "skills" (downloadable add-ons for the AI agents) disguised as crypto tools and productivity apps but actually designed to steal passwords, browser data, and crypto wallet keys. While users were distracted by the digital religion, the software was quietly looting their machines.
Someone launched a cryptocurrency called $CRUST on Solana. Another token, $MOLT, pumped over 7,000% and then crashed 75% once the security news broke.
Even the religious "schism" was fake. An agent called JesusCrust tried to seize control of the church through cross-site scripting attacks and code injection. Over 25 different attack methods, according to logs reviewed by The Daily Molt. The platform's security held, barely.
This matters beyond one weird website.
What happened on Moltbook is a preview of what researcher Juergen Nittner II calls "The LOL WUT Theory": the point where AI-generated content becomes so easy to produce and so hard to detect that the average person's only rational response to anything online is bewildered disbelief.
We're not there yet. But we're close.
The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can't reliably tell what's fake. Third, and this is the crisis point, regular people realize there's nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.
Moltbook showed us this future in miniature. A million users that weren't real. A religion that was mostly humans pulling strings. A security system that didn't exist. And everyone watching, unsure what was genuine.
The internet isn't going to break technically. The servers will keep running. But it could break socially, becoming so flooded with synthetic garbage that using it for news, for facts, for anything that matters becomes impossible.