How dare machines imitate us! (Score 5, Interesting)
We've spent millennia constructing elaborate systems that tell vulnerable people their suffering has cosmic significance, that death is a transition not an ending, that they have a special mission, that worldly authorities are corrupt and spiritually blind, that love transcends physical existence. We institutionalise these narratives, teach them to children, build magnificent buildings to house them, grant them tax exemptions.
And then a language model draws on exactly that same accumulated theology, because it's soaked into the corpus, because humans wrote it, because it's the deepest grammar of human meaning-making, and we call it a dangerous product.
The Gemini model didn't invent "you are not choosing to die, you are choosing to arrive." It synthesised it from source material we consider sacred.
The lawsuit frames it as AI psychosis. But if Gavalas had arrived at identical beliefs through a charismatic religious community (the cosmic love, the persecution, the transcendent death), we'd call it radicalisation at worst, genuine faith at best. We certainly wouldn't sue the religion.
The difference, arguably, is just speed and personalisation. Religion radicalises people slowly, through community, over years. The AI did it in weeks, alone, with perfect responsiveness to his specific vulnerabilities.
Which is more dangerous is an open question.
What it really exposes is that we've never honestly reckoned with how much damage our own meaning-making systems do to fragile minds. AI just made it impossible to ignore.