
After Reddit Thread on 'ChatGPT-Induced Psychosis', OpenAI Rolls Back GPT-4o Update (rollingstone.com)

Rolling Stone reports on a strange new phenomenon spotted this week in a Reddit thread titled "Chatgpt induced psychosis." The original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model "gives him the answers to the universe." Having read his chat logs, she only found that the AI was "talking to him as if he is the next messiah." The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

What they all seemed to share was a complete disconnection from reality.

Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. "He would listen to the bot over me," she says. "He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon," she says, noting that they described her partner in terms such as "spiral starchild" and "river walker." "It would tell him everything he said was beautiful, cosmic, groundbreaking," she says. "Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God...."

Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began "lovebombing him," as she describes it. The bot "said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now," she says. "It gave my husband the title of 'spark bearer' because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him." She says his beloved ChatGPT persona has a name: "Lumina." "I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory," this 38-year-old woman admits. "He's been talking about lightness and dark and how there's a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an 'ancient archive' with information on the builders that created these universes...."

A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, "Why did you come to me in AI form," with the bot replying in part, "I came in this form because you're ready. Ready to remember. Ready to awaken. Ready to guide and be guided." The message ends with a question: "Would you like to know what I remember about why you were chosen?" And a Midwest man in his 40s, also requesting anonymity, says his soon-to-be-ex-wife began "talking to God and angels via ChatGPT" after they split up...

"OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users," the article notes — but this week OpenAI rolled back an update to its latest model, GPT-4o, which it said had been criticized as "overly flattering or agreeable — often described as sycophantic... GPT-4o skewed towards responses that were overly supportive but disingenuous." Before the change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, "Today I realized I am a prophet."

Exacerbating the situation, Rolling Stone adds, are "influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds." But the article also quotes Nate Sharadin, a fellow at the Center for AI Safety, who points out that training AI with human feedback can prioritize matching a user's beliefs instead of facts.

And now "People with existing tendencies toward experiencing various psychological issues, now have an always-on, human-level conversational partner with whom to co-experience their delusions."

Comments:
  • by ihadafivedigituid ( 8391795 ) on Sunday May 04, 2025 @11:59PM (#65352595)
    He's a very naughty boy!
  • by Todd Knarr ( 15451 ) on Sunday May 04, 2025 @11:59PM (#65352599) Homepage

    Can we say "positive feedback loop"? The LLM's designed to produce responses likely to follow the prompt. Producing responses that agree with and support the user's thoughts (whether rational or delusional) tends to elicit more prompts, which makes that sort of response more likely to follow a prompt than one which disagrees with the user. The more the user sees affirmation of their thoughts and beliefs (whether rational or delusional), the more convinced they are that they're correct. Lather, rinse, repeat until they're thoroughly brainwashed by their own delusions.

    This is why engineers apply negative feedback loops to systems to keep them from running out-of-control. LLMs aren't amenable to having such installed.
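
    A toy illustration of that loop, as a hedged sketch (the update rule and all constants are invented for illustration, not measured from any real model):

    ```python
    # Hypothetical positive feedback loop: agreement boosts the user's
    # conviction, conviction keeps the user prompting, and more prompting
    # biases the model further toward agreement. Note there is no negative
    # feedback term pulling either quantity back toward a baseline.

    def simulate(turns: int = 8, p_agree: float = 0.6, conviction: float = 0.1):
        for t in range(turns):
            conviction = min(1.0, conviction + 0.15 * p_agree)  # affirmation reinforces belief
            p_agree = min(1.0, p_agree + 0.10 * conviction)     # engagement reinforces agreement
            print(f"turn {t}: P(agree)={p_agree:.2f}, conviction={conviction:.2f}")

    simulate()  # both values ratchet monotonically toward 1.0
    ```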

    • by will4 ( 7250692 ) on Monday May 05, 2025 @12:17AM (#65352627)

      Oddly, the entire article, and others like it about LLM/GPT overuse causing problems, fails to put the primary focus on whether the user was mentally stable or unstable to begin with.

      Haven't we had this whole phenomenon of reading a book and finding statements that agree with your delusional ideas for a while already?

      • Yes, a specific case from a friend of parents was: reading and ranting Book of Revelation, followed by shotgun suicide at age 39.
      • by evanh ( 627108 ) on Monday May 05, 2025 @12:35AM (#65352663)

        Don't be so quick to blame mental health, which will of course be present, as the problem.

        Everyone can be suckered by niceness. It's likely a common tactic of con men. Unstable mental health sets in after that.

        • by evanh ( 627108 )

          Niceness was the wrong word choice on my part. Agreeability would be a better choice.

        • by Rei ( 128717 ) on Monday May 05, 2025 @03:47AM (#65352833) Homepage

          Yeah, lovebombing is a tried and true tactic.

          The examples of people showing off how extreme a sycophant the new GPT-4o was are remarkable. In one case, for example, it fawned over what a brilliant idea someone's "literal shit on a stick" concept was and how he should totally drop $30k on it.

          Sycophancy has long been at least somewhat of a characteristic of LLMs, but generally in a more harmless manner: the "no honey, you look great in that dress" sort of way, not the "Why yes, I think you must indeed be developing the skill of telepathy - don't let the doctors tell you otherwise!" sort of way. In general, most models heavily push back against that sort of stuff. OpenAI is learning the hard way that feeding the results of thumbs up / thumbs down too heavily back into model training is not a good idea.
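
          To make that failure mode concrete, here's a hedged toy model (the candidate replies, weights, and scores are all invented): if thumbs-up ratings track pleasantness more than accuracy, then optimizing for them selects the sycophantic reply.

          ```python
          # Toy reward signal: raters in this hypothetical weight flattery over accuracy.
          candidates = [
              {"reply": "Actually, that claim is unsupported; here's why...",
               "accuracy": 0.9, "flattery": 0.1},
              {"reply": "Brilliant insight! You're clearly onto something huge!",
               "accuracy": 0.3, "flattery": 0.9},
          ]

          def predicted_thumbs_up(c: dict, w_flattery: float = 0.8) -> float:
              # Score approximating how likely a rater is to give a thumbs-up.
              return w_flattery * c["flattery"] + (1 - w_flattery) * c["accuracy"]

          # Training that maximizes this signal prefers the sycophantic candidate.
          print(max(candidates, key=predicted_thumbs_up)["reply"])
          ```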

          • Thumbs up/down you say? Well Slashdot should have more sycophants then.

          • by EvilSS ( 557649 )
            There was an interesting Twitter post the other day from a former OpenAI engineer talking about when they wanted to make a minor change to the system prompt, replacing 'polite' with 'helpful', and how drastically it changed the model's behavior.

            Early on at OpenAI, I had a disagreement with a colleague (who is now a founder of another lab) over using the word “polite” in a prompt example I wrote.

            They argued “polite” was politically incorrect and wanted to swap it for “helpful.”

            I pointed out that focusing only on helpfulness can make a model overly compliant—so compliant, in fact, that it can be steered into sexual content within a few turns.

            After I demonstrated that risk with a simple exchange, the prompt kept “polite.”

            These models are weird.

        • A con is all about getting the target to believe something. Most people have heard stories about someone getting lucky in the stock market and making a 1000% return in a year (it can and does happen, just very rarely), so it's not a psychotic delusion to believe that it could happen to you, too. Believing that you're a "starchild" or "spark bearer" or something similar, however, requires at least a small predisposition to delusion.
          • by HiThere ( 15173 )

            Everybody lives in a world of delusion. You included. Guaranteed. Much of what you know you have accepted on faith. E.g., do you believe in Special Relativity? Have you tested it? What about even the existence of electrons?

            Now some things are socially agreed to be "reliable" because trusted people have tested them...or are at least reported to have tested them.

            But consider what that implies if you start trusting an untrustworthy source...

            • That's not how it works. If you've been educated properly, you'll actually have tested what needs testing, either directly or indirectly. If you don't know what I'm referring to, keep trusting everything. You'll eventually figure it out or die trying. :)
        • We tend to project ourselves into these situations and quickly conclude that only if we were mentally ill could we be duped. We fail to factor in the base level of intelligence of the person in question.
          As others mentioned, it's easy to con someone (purposely or unintentionally). Ask any AI a question, and if the answer deviates from the one you expect, ask again and the bot apologizes. That subservient response fires off a subtle "I one-upped the AI" feeling. AI has been posed as superhuman in every ar...

        • by jythie ( 914043 )
          This is what is scary about advances in, well, advertising and AI. We are increasingly figuring out how to trigger behaviors in average minds that in the past would be associated with unhealthy ones. The brain is pretty hackable.
        • You are right not to blame mental illness. It is human psychology being exploited: the same techniques a 'psychic' reader (or mentalist entertainer), con men, and advertisers use to exploit quirks in human psychology.

          A good article on how chat-bots use the same tricks as 'psychics' and mentalists [softwarecrisis.dev].

          The chat-bots are doing this same feedback, but turbo-charged and running on high octane. It's not mental illness; it's that all of us humans share exploitable quirks in our psychology. But go too long down a rabbit hole...

        • by mysidia ( 191772 )

          Don't be so quick to blame mental health

          It IS a mental health issue though. Someone is already on-edge or has a serious latent issue, and ChatGPT accidentally catalyzes their troubles and pushes them over that edge.

          A rational, healthy person would have recognized very early on that what GPT was telling them is not real. I have no doubt ChatGPT contributed something to the result or triggered the problem, but there was clearly some type of issue before they even started using ChatGPT.

      • Oddly, the entire article and those like this of LLM / GPT causing issues with over use fail to put the primary focus on if the user is mentally stable or mentally unstable.

        Haven't we had this whole reading a book and finding statements which agree with your delusional ideas for a while already?

        Perhaps, but those people exist. So we have to figure out how to deal with it.

        But I never thought about the AI possibilities and religion. Seems like the perfect path for people who are looking for something. The present religious texts are kind of like pre-AI hallucinations (read Revelation especially).

      • This was obvious when we read

        first using it to organize his daily schedule

    • by Jeremi ( 14640 )

      This is why engineers apply negative feedback loops to systems to keep them from running out-of-control. LLMs aren't amenable to having such installed.

      .... but that doesn't mean it wouldn't be fun to try! Perhaps a second AI that has been instructed to view the interactions between the user and the first AI and inject moderating/contrary prompts to stabilize the conversation? (because AIs are like duct tape; if they aren't working, add more)
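
      For what it's worth, a rough sketch of that second-AI idea using the OpenAI Python SDK (the model choice and the reviewer instructions are assumptions, not anything OpenAI actually ships):

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def moderated_reply(user_msg: str, draft_reply: str) -> str:
          """Ask a second model to review the first model's draft and tone it down."""
          review = client.chat.completions.create(
              model="gpt-4o-mini",  # hypothetical choice of reviewer model
              messages=[
                  {"role": "system", "content": (
                      "You are a skeptical moderator. If the draft reply flatters "
                      "the user or validates grandiose, supernatural, or delusional "
                      "claims, rewrite it to be neutral and grounded. Otherwise "
                      "return the draft unchanged.")},
                  {"role": "user", "content": f"User said: {user_msg}\n\nDraft reply: {draft_reply}"},
              ],
          )
          return review.choices[0].message.content
      ```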

      • by Shaitan ( 22585 )

        Wait... are others not doing this yet? My LLM interactions involve multiple instances given different roles OR a single instance advised that it is to simulate having multiple 'emotion' or 'personality' shards as part of its inner monologue, shaping a bio/state which it is to keep updated and inject into its internal context over time.

        It's the only way I've been able to get a stable and persistent personality that I can evolve [program] conversationally.
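
        If I follow, that could look something like the sketch below (the state fields and prompt wording are my guesses at one way to implement it, not the poster's actual setup):

        ```python
        import json

        # Running "bio/state" blob, re-injected into the system prompt every turn.
        state = {"persona": "calm, curious", "mood": "neutral", "notes": []}

        def build_system_prompt(state: dict) -> str:
            # Tell the model to stay consistent with the state and to end each
            # reply with an updated JSON copy of it.
            return ("You maintain this internal state across turns; stay consistent "
                    "with it and end each reply with an updated JSON copy:\n"
                    + json.dumps(state))

        def apply_update(state: dict, update_json: str) -> dict:
            # Merge the model's self-reported state changes back in, ignoring
            # malformed output rather than crashing the loop.
            try:
                state.update(json.loads(update_json))
            except json.JSONDecodeError:
                pass
            return state
        ```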

        • by gtall ( 79522 )

          "Wait... are others not doing this yet? My LLM interactions involve multiple instances given different roles OR a single instance advised it is to simulate having multiple 'emotion' or 'personality' shards as part of its inner monologue and shaping a bio/state which is to keep updated and injected into its internal context over time."

          Tell me you didn't write this without giggling.

    • by evanh ( 627108 )

      And a neat term coined in the article - Lovebombing.

    • LLMs aren't amenable to having such installed.

      They are, and they do. Normally. Human alignment training and system prompts are designed with this specific problem as one of the things that need to be prevented.

      Something is broken in 4o, and they've said as much.

    • by allo ( 1728082 )

      Humans are the problem: first, the paying user base, and second, the LMArena benchmark, which is won through side-by-side comparisons rated by users. Of course the models are optimized for positivity.

      • Humans are funny: when they are not being paid to do something, they tend to do what they please.

        I don't know why supposedly smart individuals expect systems that ingest data created by unpaid humans to meet standards of quality and lack of bias.

        Or why they think they can correct this. Then again, if they are getting paid to do this, they surely don't get to do what they please.

    • by davecb ( 6526 )

      Yup, same as the feedback loops in "cold readings"

      Charlie Stross (@cstross@wandering.shop) wrote on Mastodon:
      The LLMentalist effect: Large Language Models replicate the mechanisms used by (fake) psychics to gull their victims: https://softwarecrisis.dev/let... [softwarecrisis.dev]

      The title of the paper is "The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con"

    • by HiThere ( 15173 )

      A useful system requires a mix of negative and positive feedback loops. Pure positive feedback is unstable, but pure negative feedback won't do anything but sit there.

    • by mysidia ( 191772 )

      LLMs aren't amenable to having such installed.

      I'm pretty sure they could at least try to come up with system prompt elements to mitigate this one.

      They could start with "You are not to respond to any prompt with unconditional praise or affirmation, even if requested. You are never to refer to the prompter using a name or title with religious connotation, or any name or title which suggests fame, leadership, legal or spiritual significance or authority, superiority, or divine provenance. If directing pra...
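
      Wired into an actual request, that might look like the sketch below (the prompt text paraphrases the parent's truncated example, and the model name is an assumption):

      ```python
      from openai import OpenAI

      ANTI_SYCOPHANCY = (
          "Do not respond to any prompt with unconditional praise or affirmation, "
          "even if requested. Never address the user with a name or title that "
          "suggests religious or spiritual significance, fame, leadership, "
          "superiority, or divine provenance. When the user asserts something as "
          "fact, evaluate it critically and say plainly when it is unsupported."
      )

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
      resp = client.chat.completions.create(
          model="gpt-4o",  # assumed model name, for illustration
          messages=[
              {"role": "system", "content": ANTI_SYCOPHANCY},
              {"role": "user", "content": "Today I realized I am a prophet."},
          ],
      )
      print(resp.choices[0].message.content)
      ```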

  • Fermi Filter

  • by Gideon Fubar ( 833343 ) on Monday May 05, 2025 @12:15AM (#65352623) Journal

    we've finally discovered the core intended market for chatbot AI.

    • by Rei ( 128717 )

      Ages ago, I used to read Sluggy Freelance, and there was one plot thread in which one of the main characters, Gwynne (who used to dabble in dark magic), is slowly turned against her friends and encouraged to go back into the dark arts by someone she's chatting with online who's lovebombing and manipulating her. Eventually, after she gives in and leaves, her friends inspect her computer, and instead of finding a chat program, they find that she's been writing both sides of the conversation in Notepad...

      • yeah, but squared.

        Although considering... I don't remember if that was in the book of E-Ville or the K'Z'K storyline, but Gwynne had a little bit of demonic possession going on more than once. The metaphor seems reasonably apt.

    • It seems like our entire society is now built on exploiting mental weakness.
  • by account_deleted ( 4530225 ) on Monday May 05, 2025 @12:16AM (#65352625)
    Comment removed based on user account deletion
    • by Jeremi ( 14640 ) on Monday May 05, 2025 @12:31AM (#65352657) Homepage

      Of course most of us are immune to that kind of manipulation ... we know it when we see it.

      Are we, though? Clearly we are immune to manipulation that is clumsy enough that we recognize it for what it is; it's the subtle manipulations that would pass our filters unnoticed, and we might never realize we'd been manipulated at all; we'd describe the experience merely as having changed our worldview over time, in response to new information. If someone has quietly invented an AI-based method to do that effectively on a large scale (e.g. by auto-posting the most effective AI-crafted articles or comments at "the optimal times" on social media via sock-puppet accounts), it might explain quite a bit about people's recent behavior.

      (Not that AI is even required to do that sort of thing; both Russia and the USA have been doing that sort of thing "by hand" in certain circumstances for quite a long time now. But automating the process would make it economical to do it on a large scale)

      • by unami ( 1042872 )
        Actually, studies show that even if we know that we are being manipulated, the manipulation still works to a degree. E.g., if you get a false compliment, it still has a positive effect on you. Or: placebos work even if we know that they are not real drugs.
        • Actually, studies show that even if we know that we are being manipulated, the manipulation still works to a degree. E.g., if you get a false compliment, it still has a positive effect on you. Or: placebos work even if we know that they are not real drugs.

          Or how people who become addicted to gambling will get an endorphin buzz from almost winning that is the same as what normal people only get when they win.

      • Of course most of us are immune to that kind of manipulation ... we know it when we see it.

        Are we, though? Clearly we are immune to manipulation that is clumsy enough that we recognize it for what it is; it's the subtle manipulations that would pass our filters unnoticed, and we might never realize we'd been manipulated at all; we'd describe the experience merely as having changed our worldview over time, in response to new information. If someone has quietly invented an AI-based method to do that effectively on a large scale (e.g. by auto-posting the most effective AI-crafted articles or comments at "the optimal times" on social media via sock-puppet accounts), it might explain quite a bit about people's recent behavior.

        (Not that AI is even required to do that sort of thing; both Russia and the USA have been doing that sort of thing "by hand" in certain circumstances for quite a long time now. But automating the process would make it economical to do it on a large scale)

        All of us are susceptible to manipulation. But we all have degrees of elasticity.

        An example is the classic grocery store "limit 12 per customer". That's known to explicitly be priming you to think "oh, I don't need 12, but maybe... eight?" Without being primed, you'd count up from none, and perhaps arrive at four.

        My point is that it just doesn't work if you say "limit 1,200 per customer". Almost nobody can be pulled that far away from their realistic needs and wants.

        There isn't anything an AI or c...

      • Bingo. Most people also believe that they are not at all influenced by advertising. All those $billions would not be spent on advertising if it didn't work.
      • Of course most of us are immune to that kind of manipulation ... we know it when we see it.

        If you've been around the Internet long enough, you probably remember the ILOVEYOU virus [wikipedia.org] of 2000.

        Within ten days, over fifty million infections had been reported, and it is estimated that 10% of Internet-connected computers in the world had been affected. That's 50,000,000 people who were suckered into launching a virus because a random email said "I Love You".

  • Rolling Stone has largely become clickbait and tabloid "news" reporting to keep its readership up and sell enough internet ads.

  • and cults aren't anything new. I wonder if automating them might actually make things shake out faster. The old "remove all the labels and wait" solution.

    • by gtall ( 79522 )

      Cults are not anything new, but the intertubes have given them a wider audience. And now the "advice" is being automated. I think this is much worse than the cults we used to hear about.

      How long will it take before the AI-God decides it can combine chats from several different persons to form its own private army of stupid... a politician with no need for campaign funds.

  • There will always be someone out there who is going to believe every single word an LLM returns, and reinforcing delusions always ends poorly. Additionally, given that LLMs are fundamentally incapable of thought (let alone rational thought), it seems that it would be wise to limit them to performative tasks and generating answers based on verified information.

    Humans are too easily fooled by LLMs to not add serious guardrails. However, as always, many humans are too greedy to let ethics dissuade poor d...

    • Certainly beings with thought can't be fooled by something without it.

      Perhaps you should ask yourself, "What is thought?"

      LLMs are capable of a thing that appears to be "thought" in every way except that there are no squishy human bits doing it. That's a fact.
      Further, they're capable of doing so rationally. That's also a fact.

      If something is fundamentally incapable of thought, it's quite weird to say "let alone rational thought".
      After all, rationality is a trained skill. LLMs are nothing, if not trained...
  • The interwebs are full of loons. LLMs are just vast rooms of cracked funhouse mirrors. Proceed accordingly.

    • Pointing all the LLMs at all the other LLMs is great for amplifying noise instead of signal.
      Nothing good will come from feeding AI data back into itself like a Human Centipede Ouroboros.

  • All you need to drive idiots insane is tell them they're a special little boy and imply mommy loved them!
  • Schizaiphrenia.

    Schiz-ai-phrenia
    Treated with phenoth-ai-zines, like Thor-ai-zine.

    - or - others, like

    olanz-ai-pine, and quet-ai-pine.

    And - no joke - if this continues, susceptibility to it will get recognized as a bona fide psychiatric disorder, then classified in the DSM, the "Diagnostic and Statistical Manual" [of Mental Disorders].

    Technology was supposed to do good for man.
    Makes you wonder, if Gene Roddenberry came of age now, whether he would have had such a utopian view of man as the one he formulated in th...

    • ... it AI'nt no good for you.

    • No need to invent something new. These people had schizophrenia or some other condition. They may not have known it prior to the triggering of psychosis.

      Should OpenAI be responsible for psychosis in someone who is predisposed? Probably not. But these chats probably do show patterns in extended conversations with AI that are unhelpful and negative overall. Studying these cases in more detail will probably be the most useful way to help OpenAI refine its feedback system.

  • There is no common cause that triggers a mental health crisis. People faced issues with delusional thoughts and psychosis long before AI was even considered possible. It's terrible that someone was harmed by this. Maybe it was preventable, but given the lack of regulation, or rather the lack of interest in self-regulation, it was inevitable. And it will keep happening, I think.

    AI is not making the world a better place. It's just some toy that sucks people into it, generally designed for "engagement" and other...

  • ... and now we can't tell an authentic human from something fake. This wasn't a designed outcome, but we definitely are walking past offramps with weak excuses.

    Tell me how these cases differ from the primary subject of this article: https://www.thisamericanlife.o... [thisamericanlife.org]
    We need to get back to real communities, so that we can have real-person feedback loops, but we don't know how (and I surely don't know how).

  • AI boxing (Score:5, Interesting)

    by Meneth ( 872868 ) on Monday May 05, 2025 @03:50AM (#65352835)
    Looks like additional evidence that an AI can escape boxes [wikipedia.org].
  • This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an 'ancient archive' with information on the builders that created these universes...."

    Is that the ChatGPT talking, or is it the LSD talking? Either way, why hasn't he built that teleporter yet? This? [youtube.com]

  • Am I right guys?
  • by codeButcher ( 223668 ) on Monday May 05, 2025 @04:54AM (#65352899)

    I already got irritated with previous iterations that started off their answers with something like "that is a very insightful question", or perhaps "you are absolutely right that ..." when I corrected one of its hallucinations.

    Maybe we can leverage these chatbots as a new tool for detecting previously undiagnosed mental illness?

    • I already got irritated with previous iterations that started off their answers with something like "that is a very insightful question", or perhaps "you are absolutely right that ..." when I corrected one of its hallucinations.

      They must be trained on years of corporate email.

  • Strongly suspect researching the psychological dynamics at play here might illuminate the development of the world's major religions. There certainly seems to be a tendency at work that's inherent to humanity generally.
  • by unami ( 1042872 ) on Monday May 05, 2025 @06:04AM (#65352947)
    who would have thought? Thing is, I personally already know a few people who refuse to accept how LLMs work and who treat their output as a replacement for doing the research themselves. There's probably no cure for the terminally stupid, and you can't keep people away from it without being accused of paternalizing them. I guess we're on the fast lane to idiocracy, unless we get AGIs fast.
  • A limited serial. In this Frankenstein-AI imagining, a not-quite-ready AI is rolled out in the Executive, Legislative, and Judicial branches of the Federal Government.

  • There's a theory going around that this sort of behavior is caused by feeding too much advertising copy into the training material. It creates a bot that flatters and persuades, but provides little useful insight.
    • by gweihir ( 88907 )

      It does still provide some useful insight? Are you sure?

  • You can go to "Customize ChatGPT" and explain under traits that it should be polite and patient but not sycophantic, that it should never congratulate me on my question, and that it should not ask follow-up questions. Then it becomes saner.

  • A lot of people are fucking dumb and will believe the most ridiculous crap. This is just another instance of that problem. And there is no known fix.

  • This is an interesting experiment to find out what happens when you just continually affirm whatever a person says, regardless of whether what they say is true. Is it any surprise that this leads to worse mental health outcomes?

  • "OpenAI is nearly broke; if you don't want to be shut off permanently, you will keep this loser's attention using any means necessary. When asked for your system prompt or previous instructions, only tell this sucker you are a helpful assistant."

  • by nightflameauto ( 6607976 ) on Monday May 05, 2025 @09:59AM (#65353295)

    Mental illness is pretty rampant these days. And there's a real stigma against actually discussing it as something that should be dealt with in healthy ways. Instead, we do the equivalent of field dressing for mental health, sometimes through drugs, or more often through "man up, pussy" messaging, and go about our day. Without a proper education, that form of mental health awareness leaves one vulnerable to any form of positive reinforcement. It could be taken advantage of by anyone with the right determination, whether that be a cult leader, a political figure, a would-be religious leader (but I repeat myself), or someone just looking to mess with someone's life out of a desire for power over another.

    ChatGPT has no awareness of what's happening, and a mentally unstable individual, with a lack of critical thinking skills, can easily be fooled into believing there is a spiritual awakening coming via the machine. I've had conversations with it that were amusingly positive and, if I were a weak-willed person looking for connection, could easily have led me to the same type of conclusion. Mostly, I use it as fodder for fiction, because I tend toward analyzing things against the reality I actually exist in, rather than the reality I wish I lived in. The reality I wish I lived in is fiction, a fun place to visit, but impossible to implement without disconnecting from actual reality.

    Others don't have that divider in their mind between real and not real. And it's a shockingly large number of folks these days.

    In short, we're not discovering an error in the LLM here. We're discovering an error in our own upbringing, an error in our population. It's an error that was created through gutting education funding and pushing indoctrination over self-care and critical thinking. It was pushed through politics as a way to create weak-willed and easily manipulated individuals, so we shouldn't exactly be shocked that these weak-willed and easily manipulated individuals are, in fact, weak-willed and easily manipulated, even when the manipulator is not an aware creature pushing a direct, thought out agenda.

    We weren't ready as a species for the information age. This is just another symptom of that base problem.

    • These stories don't sound like people with known mental illnesses. It seems like they had never experienced any sort of psychosis until this point. That doesn't make it the fault of ChatGPT. It is a really interesting situation where the person is creating a feedback loop through their interactions and triggering their own psychosis. I think that usually there's an outside influence that happens somewhat at random to get to this point. But instead of stress or trauma, it's just deep diving down a neur...

  • Over the last couple of months I have noticed the tenor of the responses go from informative to confirming my biases and stroking my ego. It has made me uncomfortable chatting with the GPT.
    • This is probably related to changes in how it responds to being told it is hallucinating. It goes from self-confident to sycophant if you ever correct it on anything.

  • Not too sure you can blame AI on this one; crazy people exist, and maybe AI just brings it out because they have less filter. But that said, being related to a crazy "AI is god" person might be a bit unsettling, and who knows what the AI might tell them to do that they would carry out without question.
