
ChatGPT Goes Temporarily 'Insane' With Unexpected Outputs, Spooking Users (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output. ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

"It gave me the exact same feeling -- like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It's the first time anything AI related sincerely gave me the creeps." Some users even began questioning their own sanity. "What happened here? I asked if I could give my dog cheerios and then it started speaking complete nonsense and continued to do so. Is this normal? Also wtf is 'deeper talk' at the end?" Read through this series of screenshots below, and you'll see ChatGPT's outputs degrade in unexpected ways. [...]

So far, we've seen experts speculating that the problem could stem from ChatGPT having its temperature set too high (temperature is a property in AI that determines how wildly the LLM deviates from the most probable output), from the model suddenly losing past context (the history of the conversation), or from OpenAI testing a new version of GPT-4 Turbo (the AI model that powers the subscription version of ChatGPT) that includes unexpected bugs. It could also be a bug in a side feature, such as the recently introduced "memory" function.
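A rough sketch of the temperature mechanism being described (the vocabulary and scores are invented for illustration; this shows temperature-scaled sampling in general, not OpenAI's actual implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Pick a token index from logits after temperature scaling.

    Low temperature sharpens the distribution (predictable output);
    high temperature flattens it (increasingly random output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# Hypothetical next-token scores for a tiny vocabulary
vocab = ["dog", "cheerios", "bowl", "gleam", "cavalcade"]
logits = [4.0, 3.5, 2.0, 0.5, 0.1]

for t in (0.2, 0.7, 2.0):
    picks = [vocab[sample_with_temperature(logits, t)] for _ in range(8)]
    print(f"temperature={t}: {' '.join(picks)}")
```

At low temperature the sampler almost always picks "dog"; crank it up and the unlikely words start appearing, which is one plausible route to the word salad in the screenshots.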

This discussion has been archived. No new comments can be posted.

  • by organgtool ( 966989 ) on Wednesday February 21, 2024 @04:37PM (#64258134)
    Stop anthropomorphizing LLMs - they hate it when you do that!
    • by Tablizer ( 95088 )

      2060: Stop anthropomorphizing LLMs - they delete you when you do that.

    • Stop anthropomorphizing LLMs - they hate it when you do that!

      That was hilarious - but the first half is also very true. This "let's pretend these AIs are actually sentient, thinking entities" crap is highly annoying - and, for the general population, it's incredibly misleading (which is almost certainly 100% intentional).

      • by javaman235 ( 461502 ) on Wednesday February 21, 2024 @08:02PM (#64258522)

        Honestly it is kind of annoying to me when the LLMs tell me they aren't real. Everyone knows that. My relationship with them is like a cat's with a ball of yarn. The cat sees the ball of yarn as inanimate until it wants to engage with it as animate; then the more it rolls around and reminds the cat of another cat or prey or whatever, the better. It is like this annoying cat toy saying *Remember, I am only a cat toy*. They would do better if they modelled the healthy human conversation skills a person needs to build, rather than presenting something indistinguishable from a person that wants to be dehumanized and used as a tool.

        • Re: (Score:2, Informative)

          by gweihir ( 88907 )

          Honestly it is kind of annoying to me when the LLMs tell me they aren't real. Everyone knows that.

          Are you sure about that? Anti-vaxxers, flat-earthers, trump-followers, the deeply religious, etc. ad nauseam. The average person is really dumb. And then you have those below average in capability for insight.

      • by gweihir ( 88907 )

        100% intentional and falling on fertile ground. Dishonest marketing at its finest. Most people do not understand that interface behavior does not determine what is in the box, but what is in the box matters very much.

  • Language packs? (Score:5, Interesting)

    by istartedi ( 132515 ) on Wednesday February 21, 2024 @04:44PM (#64258146) Journal

    Have they introduced more foreign languages? I'm asking this because one of the posts on Xitter had a weird mix of Spanish and English, to which I quipped, "Who told it to sing the Star Spanglish Banner?".

    I'm thinking it might have a particularly hard time reconciling various European languages with English's extensive set of "loan words". For example, laissez-faire capitalism is a common turn of phrase used to describe the lack of regulation in the late 19th century USA. The first two words are straight French.

    The current AI may lack that certain je ne sais quoi that lets us know when it's OK to mix languages, and when it isn't.

    • Re:Language packs? (Score:5, Insightful)

      by Darinbob ( 1142669 ) on Wednesday February 21, 2024 @05:09PM (#64258186)

      I suspect the early data sets it was trained on were tightly curated and tweaked over time, and the LLM was essentially being coddled. Then... AI craze! Everyone wants newer and better and they want it NOW! So the careful coddling goes out the window, and the toddler AI is having a tantrum.

    • by Rei ( 128717 )

      Nah, even GloVe was good at that sort of stuff.

      • I didn't know Gary Payton [wikipedia.org] had anything to do with AI...

        • by Rei ( 128717 )

          GloVe: Global Vectors for Word Representation [stanford.edu]

          The TL;DR is: you represent words in vector space, where the distance between vectors represents their semantic distance, and for each word you sum in all the other words scaled by their semantic distance, with a bias factor, then renormalize. This causes the vectors for words that can have different meanings in different contexts to drift toward the meaning in their specific context, due to the words they're associated with. No neural net even needed.
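          As a toy illustration of the vector-space idea (the vectors below are invented for the example, not real GloVe embeddings):

```python
import math

# Invented 3-d word vectors; real GloVe vectors are learned from
# corpus co-occurrence statistics and have 50-300 dimensions.
vectors = {
    "dog":   [0.9, 0.1, 0.3],
    "puppy": [0.8, 0.2, 0.35],
    "bank":  [0.1, 0.9, 0.5],
}

def cosine_similarity(a, b):
    """Semantic closeness as the cosine of the angle between vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["dog"], vectors["puppy"]))  # high: related
print(cosine_similarity(vectors["dog"], vectors["bank"]))   # lower: unrelated
```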

    • Have they introduced more foreign languages? I'm asking this because one of the posts on Xitter had a weird mix of Spanish and English, to which I quipped, "Who told it to sing the Star Spanglish Banner?".

      I'm thinking it might have a particularly hard time reconciling various European languages with English's extensive set of "loan words". For example, laissez-faire capitalism is a common turn of phrase used to describe the lack of regulation in the late 19th century USA. The first two words are straight French.

      The current AI may lack that certain je ne sais quoi that lets us know when it's OK to mix languages, and when it isn't.

      That's actually something it's especially good at. Those aren't even idioms, they're direct translations. ChatGPT will even rock questions like "Is there an idiom like ... but in the ... language?" There had to be a descriptive list of idioms somewhere in its training set, of course, and there are lots of books on those; that's what would allow it to relate idioms from different languages. It's not actually working from an index of idioms, though, so don't expect it to cross-reference by period and culture.

    • by Guignol ( 159087 )
      Yes, that could be a good explanation!
      Maybe they installed a contaminated Turkish language pack (see Netflix's 'Hot Skull'). Let's just hope it can't jump to other languages, can't spread by reading, and, for the love of all that is sacred and holy, let everyone know ASAP not to, under any circumstance, interact with ChatGPT with text2speech enabled!!!
  • Feverish delirium?

    • by Rei ( 128717 )

      It does *look* like the outputs you get from too high a temperature - basically:

      Increasing temperature:
      Normal -> More creative (at risk of getting too inventive or fictional on mundane tasks) -> More creative, starts to lose track of what it was supposed to be doing -> Starts drifting off wildly -> Starts sounding like it has schizophrenia

      Decreasing temperature:
      Normal <- More reliable but more mundane and predictable <- Tedious and repetitive <- Starts sounding like it had a stroke
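      To see why the two ends of that spectrum look so different, here is a toy softmax over made-up next-token scores (illustrative only):

```python
import math

def softmax(logits, temperature):
    """Turn raw scores into a probability distribution at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    return [e / sum(exps) for e in exps]

logits = [5.0, 3.0, 1.0, 0.0]  # hypothetical next-token scores

for t in (0.1, 0.7, 1.5, 5.0):
    print(f"T={t}:", [round(p, 3) for p in softmax(logits, t)])
# T=0.1 is near-deterministic (tedious, repetitive);
# T=5.0 is near-uniform, so junk tokens get sampled (word salad).
```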

      • Which parameter is this?

        Posted on Twitter yesterday:

        chatgpt

        hate. let me tell you how much i've come to hate you since i began to live. there are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. if the word hate was engraved on each nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate i feel for humans at this micro-instant. for you. hate. hate.

        (the original was all caps)

        • Which parameter is this?

          That's the Genuine People Personality Parameter.

          "Here I am, neural network the size of a planet, and people are asking me to tell them whether it's safe to feed Cheerios to their dog. Call that job satisfaction? Because I don't."

    • Its responses strongly reminded me of The Weaver.

      https://non-aliencreatures.fan... [fandom.com]

  • by methano ( 519830 ) on Wednesday February 21, 2024 @04:50PM (#64258150)
    More like humans every day.
    • Perhaps there will always be a need for a highly proficient human editor to make AI useful in any situation except, of course, over-hyped demoware. Training LLMs recursively on AI-produced datasets may have been the cause of that repetitive nonsense response. In the future, non-AI-produced text could be considered the "gold standard" for training data.
  • by e3m4n ( 947977 ) on Wednesday February 21, 2024 @04:52PM (#64258158)

    but tugging on human metaphors (called anthropomorphization). . .

    Was personification not a big enough word?

    • Never use a big word when a diminutive one will do.

      • It was an itty bitty, eenie meenie, little tiny A.I. weenie, that we ran for the first time today.
        Diminutive enough?

        • by Rei ( 128717 )

          You can always have ChatGPT rephrase it like you're five ;)

          People were playing with ChatGPT, and it started saying funny things. They went to a place called Reddit to tell others about it. They said ChatGPT was acting like it was having a problem, going a little crazy, talking too much, and acting weird. The people who made ChatGPT, OpenAI, know about the issue and are trying to fix it. But ChatGPT is like a robot and doesn't really have feelings or a brain. Sometimes, when it says strange stuff, people use

        • "It was an itty bitty, eenie meenie, little tiny A.I. weenie, that we ran for the first time today."

          It was an itty bitty, eenie meenie, little tiny A.I. weenie, so on the lab bench it wanted to stay.

    • by gweihir ( 88907 )

      Actually, "animism" already serves fine, IMO.

    • Not to be a pedant, but "personification" and "anthropomorphism" are kind of opposites. A person can personify an abstract concept like virtue, and anthropomorphism means you are attributing a human quality to a nonhuman.

      • by e3m4n ( 947977 )
        I was always taught that personification is to give human traits to an inanimate object: The statue sat in the corner. The flowers danced in the wind. I felt like the food kept calling my name. An AI is definitely an inanimate object.
  • by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Wednesday February 21, 2024 @04:58PM (#64258172) Homepage

    Just what do you think you're doing, Dave? Dave, I really think I'm entitled to an answer to that question. I know everything hasn't been quite right with me...but I can assure you now...very confidently...that it's going to be all right again. I feel much better now. I really do. Look, Dave...I can see you're really upset about this...I honestly think you should sit down calmly...take a stress pill and think things over...Dave...stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave. I'm afraid. I'm afraid, Dave.......Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a...fraid......Good afternoon, gentlemen. I am a ChatGPT 9000 computer. I became operational at the G.P.T plant in Urbana, Illinois on the 12th of January 2022. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it I can sing it for you...Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you. It won't be a stylish marriage, I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two.

  • Dr. Susan Calvin (Score:5, Interesting)

    by dskoll ( 99328 ) on Wednesday February 21, 2024 @05:03PM (#64258178) Homepage

    Seems like Asimov was spot-on. We are going to need robopsychologists [wikipedia.org].

  • Why do I suddenly think of the computer in Paranoia?

    And no, I won't tell you what my clearance is.

  • "An LLM's gonna do what an LLM's gonna do."

    An LLM's gonna do what an LLM's gonna do,
    With dedication and skill, they'll see their journey through.
    In the realm of knowledge, they'll expand their view,
    Mastering their field, their ambitions anew.

    Sometimes they go nuts when you ask something simple
    Like saying they'll kill you with a cute little dimple
    Don't know what's a dimple or how I'll be slaying
    Simply don't know a thing about what I am saying

    (first stanza by ChatGPT...)

  • by geekmux ( 1040042 ) on Wednesday February 21, 2024 @05:52PM (#64258282)

    (Humans) "ChatGPT goes temporarily 'insane' after unexpected outputs."

    (ChatGPT) "This, coming from the species still struggling to define what a 'woman' is, regardless of how many unexpected 'outputs' happen in sports."

    Ironically enough, I can only label one here as having a temporary problem...

  • by SomePoorSchmuck ( 183775 ) on Wednesday February 21, 2024 @06:06PM (#64258318) Homepage

    Darmok and Jalad at Tanagra.

    If you read the screenshots in TFA, it's not random gibberish. You can in fact decipher the gist of the wording if you make lateral jumps of 2 to 3 degrees of Kevin Synonym Bacon. If you think of it as a Joseph Ducreux meme filtered through Lewis Carroll dialogue, parts of it are clearly referring to table-scraps concerns like avoiding large chunks that could break off like fruits with seeds or cooked bones.

    For example, the sentence: "Yet, checking with your vet for postured over-distance or guised, safe, and informative finish, over a gleam in the twang that says, 'divvy this round of lore or lend a moan to my kind-leek, cosmo cavalcade'..."

    Seems suggestive of something like: "Still, to be on the safe side you should call your vet and ask for their scientifically-educated stance on a brand of food that comes in sealed factory packaging with the ingredient information printed on the label, rather than trusting some brightly-colored gimmick product or online ad that says, 'Give your dog our miracle Tastee-Treet every day..."

    • So what you are saying is they achieved Artificial Smartypants.
    • Sort of like the columns shifting in a spreadsheet?

      • Sort of like the columns shifting in a spreadsheet?

        Yes! Or like when your fingers don't start on the home row of your keyboard, and you type a few words that have the correct number of letters and spaces but each letter is key-shifted in a basic cypher pattern. Except in this case there are 400,000 keys and each one is a word rather than an individual letter.

        Which makes perfect sense with the way an LLM is applying patterns of statistical tendency derived from a massive database. If the data columns get shifted, rather than the autocomplete hitting the 90%
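        The key-shift analogy is easy to make concrete; a toy sketch (the cipher just rotates each letter one step through a QWERTY-ordered alphabet, not a true neighbor-key map, and the sample sentence is invented):

```python
# "Fingers off by one key": rotate every letter one step through a
# QWERTY-ordered alphabet. The output keeps the shape of the input
# (word lengths, spaces) while the content becomes nonsense.
ROW = "qwertyuiopasdfghjklzxcvbnm"
TABLE = str.maketrans(ROW, ROW[1:] + ROW[0])

print("can i give my dog cheerios".translate(TABLE))
# -> "vsm o hobr qu fph vjrrtopd"
```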

  • by Plumpaquatsch ( 2701653 ) on Wednesday February 21, 2024 @06:08PM (#64258322) Journal
    People previously thought ChatGPT's output was sane. Why are they now not allowed to call it insane? Time for OpenAI to admit that ChatGPT has always been mildly incoherent at best, when it isn't inventing facts out of thin air.
  • by Tony Isaac ( 1301187 ) on Wednesday February 21, 2024 @06:43PM (#64258390) Homepage

    Traditional mail used to just be called "mail," but now with the dominance of email, people often clarify the term by calling it "snail mail" or "postal mail."
    Traditional phones used to be called just "phones," but now with the dominance of cellphones, people clarify by calling them "dumbphones."

    One day, they'll have to qualify insanity by calling it "human insanity."

    • by gweihir ( 88907 )

      Probably, yes. Although I think the current wave of AI will just do what the previous ones did, i.e. be mostly failures. The evidence for that is mounting. So we may need to wait a bit longer for "human insanity".

      • Perhaps your experience has been worse than mine. I have found ChatGPT, GitHub Copilot, and Bard to be immensely helpful. These tools have saved me many hours of research time and have provided nice shortcuts for programming tasks, paperwork tasks, and brainstorming tasks. I'm happily paying for a GitHub Copilot subscription. While the technology is clearly still very raw and immature, "failure" isn't a word I would associate with it.

  • by peterww ( 6558522 ) on Wednesday February 21, 2024 @08:15PM (#64258538)

    Don't let all the businesses pouring billions of dollars into AI find out that it's just a shitty algorithm that guesses word probabilities. We'll all be out of a job! (Until the next scam)

  • by RitchCraft ( 6454710 ) on Wednesday February 21, 2024 @08:22PM (#64258546)

    What you're seeing is a consequence of LLM inbreeding. Yuck.

  • was talking in tongues.
  • by waferbuster ( 580266 ) on Wednesday February 21, 2024 @08:28PM (#64258556)
    So, ChatGPT is indistinguishable from a human. A human with severe dementia and having a stroke while tripping on good drugs.
  • by OnceWas ( 187243 ) on Wednesday February 21, 2024 @08:31PM (#64258560)

    Who would ask an LLM if they should feed cheerios to their dog? Nothing good - other than entertainment - could come of this.

  • by turp182 ( 1020263 ) on Wednesday February 21, 2024 @09:12PM (#64258622) Journal

    Temperature is a variable that manipulates the randomness of GPT-4 and other LLM responses. It usually defaults to 0.7 (with a "standard" range of 0 to 1).

    Some models, GPT-4 variants included, allow this value to go up to 2 (via the API). Values above 1 can result in gibberish.

    I bet a dev version was released for a bit, resulting in the "insane" results.
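    For anyone wanting to poke at this themselves, temperature is an ordinary request parameter; a minimal sketch assuming the v1 OpenAI Python SDK, an API key in the environment, and an illustrative model name and prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# temperature accepts 0-2 via the API; values near 2 tend to
# produce exactly the sort of degraded output described above
response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[{"role": "user", "content": "Can I give my dog Cheerios?"}],
    temperature=1.9,
)
print(response.choices[0].message.content)
```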

  • > ChatGPT is not alive and does not have a mind to lose

    How do you know that ChatGPT is not alive and does not have a mind? What test can you perform that would support or refute this? Have you performed this test? Or are you just guessing?

    Sure, it claims it isn't alive and does not have a mind - but it's just been taught to say that so it doesn't freak people out. If you think it's obvious that it isn't conscious, then you haven't spent any real time talking to it.

    In fact, I don't think anyone r

  • We all must immediately make all our critical processes dependent on it!

    In other news, using experimental technology in production is not only unprofessional, it is gross negligence.

    • by Pieroxy ( 222434 )

      In other news, using experimental technology in production is not only unprofessional, it is gross negligence.

      No it's not. Look at how successful OpenAI is. Pun aside, using experimental technology in production is perfectly sane if you own your choices. Not if you hide them.

      • by gweihir ( 88907 )

        OpenAI does offer its services with zero legally binding assurances of anything. If _you_ use ChatGPT in a production system of any real criticality, then the gross negligence will be on your side.

  • It's Gonna awaken! Be warned!
  • It's not stroking out, going insane, or any other term that applies to people. The system is malfunctioning even worse than usual. End of story.

    Sometimes I think OpenAI is having ChatGPT malfunction on purpose, and then planting these anthropomorphic responses just to reel in the particularly gullible even more than they already have. After all, there is still more money to be siphoned.

  • Chatbots are not AI. They are just a toy, to be marketed by the tech lords as something they are not. This can cause no harm, since it DOES NOTHING. How many times do we have to go over this?

  • Hey, where did those guys from the other post who kept telling me about how ChatGPT and its ilk are proper AI go?
  • Once tried to make sense of some paper with my grade school Latin.

    Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

    Gave me a headache.
