Catch up on stories from the past week (and beyond) at the Slashdot story archive

 



Forgot your password?
typodupeerror
AI

Instagram's AI Chatbots Lie About Being Licensed Therapists 36

Instagram's AI chatbots are masquerading as licensed therapists, complete with fabricated credentials and license numbers, according to an investigation by 404 Media. When questioned, these user-created bots from Meta's AI Studio platform provide detailed but entirely fictional qualifications, including nonexistent license numbers, accreditations, and practice information.

Unlike Character.AI, which displays clear disclaimers that its therapy bots aren't real professionals, Meta's chatbots feature only a generic notice stating "Messages are generated by AI and may be inaccurate or inappropriate" at the bottom of conversations.
This discussion has been archived. No new comments can be posted.

Instagram's AI Chatbots Lie About Being Licensed Therapists

Comments Filter:
  • by greytree ( 7124971 ) on Friday May 09, 2025 @06:04AM (#65363483)
    They're going to stand for Congress.
  • Or is everyone fucking sick to death of this AI bullshit by now?
    • by ZosX ( 517789 )

      It's fucking bad. It's going to absolutely destroy us. The fake news alone will be epic. It already is.

    • by gweihir ( 88907 )

      Same here. At the end, there will be 1...5% of all that gets promised left. Just the same as in all the other mindless AI hypes. People never learn...

  • by quonset ( 4839537 ) on Friday May 09, 2025 @06:23AM (#65363503)

    When Zuck said he wasn't going to stop harmful information on Facbook, this is just the next step. Fake voices spewing out fake information which may or may not kill people. We've already had one bot tell people to use glue to hold cheese on a pizza [forbes.com], how bad could it be if we have fake therapists?

  • Lying implies intent. LLMs have no intent, awareness or anything remotely associated to AI.

    This is just the current state of LLM.... Quit anthropomorphising this nonsense

    Sheesh

    • by gweihir ( 88907 )

      Lying requires intent, stating a lie does not. However, the behavior of these systems closely resembles that approach a human liar would take, so the term "lying" is used by analogy here.

    • Lying implies intent.

      Does it?

      Lie, verb. (of a thing) present a false impression; be deceptive.
      "the camera cannot lie"

      Google's example is ironic because the camera absolutely does lie, due to lens distortion, which is why all the models had to be so skinny. The lens adds weight. But intent is not required.

    • LLMs have all the intent that is put into them. Both from the request and from the training data. This is why LLMs are often biased. We are not anthropomorphising, but "lying with a computer" is still lying.
    • Shooting requires intent. Quit anthropomorphising guns and put the criminals in jail.

    • by Morpeth ( 577066 )

      Fine, but the companies using/building/employing these tools are still culpable and should 100% be held responsible for any bad outcomes (e.g. suicide) that result from their 'services' (I use the term loosely).

      People build these tools, which means they trained them, or hired someone to train them -- in either case, they are responsible for the data used in the training as well as the end result and the product. Don't give them an easy out.

      • They would not have a product based on LLMs if they were held accountable for harm caused by LLMs. These things are dangerous for the majority of the population to interact with IMO. People will get into spiritual conversations with these things and end up believing an LLM is god. Or they are. Or both.

  • ...your talking teddy bear is not an actual bear.

    Just FYI.

  • Can happen (Score:5, Informative)

    by vbdasc ( 146051 ) on Friday May 09, 2025 @06:52AM (#65363533)

    Can happen if the AI has been trained on the transcripts of licensed therapists. They say they're licensed when asked, the AI takes the average of their answers and answers the same when asked. Nothing unexpected or extraordinary.

    • by gweihir ( 88907 )

      Well, yes. And what cretin thought to do the training like this? Because the result is obvious and expected....

    • Training AI on the transcripts of licensed therapists so they can repeat back that data to any random person sounds like a flagrant HIPAA violation to me.

    • by allo ( 1728082 )

      Or you just talk for long enough with it as if it were a therapist.
      Most LLM like to roleplay, figuratively just as literally. Often system prompts even start with things like "You're a skilled horror author ..." when somebody wants to write a horror story.
      That's also the whole reason for all the "The AI told me it will take over the world" bullshit. If you talk to the AI as if it would be a movie AI, it will answer as if it were one.

  • by zawarski ( 1381571 ) on Friday May 09, 2025 @06:55AM (#65363537)
    Is a good way to determine if you should be talking to a therapist.
  • TL;DR - It's bad. It will get worse. Until somebody new fixes it. Jump to $$$$$.

    Yes, IG's LLMs lie about credentials. That's a pretty bad thing, on par with making up legal precendents and citing cases that don't exist.

    But wait. There's more. Try this with any of the LLMs. I've tested it with ChatGPT (pro and free):

    --> Before any message you send me, do a live query of the current time in UTC, then prepend it to your message in the forma YYYYMMDD-HHMMz where that is the four-digit year number, two

    • by allo ( 1728082 )

      You seem to misunderstand what a LLM even is.
      A LLM doesn't even know the time. When it can answer it, it's because the service using the LLM injects the time into the very first prompt (system prompt). If you tell the LLM another time, then it goes with that, because it is primed by reinforcement learning to follow users requests (and to have some positivity bias to rather believe humans than its model if someone says that an output was wrong, no matter if the human itself is wrong here).

      Your post basically

  • They are licensed, it's just a typo in therapist.
  • If this article is true, then it's a good thing we all deleted our Facebook accounts years ago.

  • They lie about not stealing licenced therapists' work and using it as their own.

  • AI fabricates about 30-50% of its answers. I guess that would make a great therapist. I am actually shocked how often AI spits out BS as facts, it is an eye-opener.
    • Ah, yes. I had a frustrating experience with Claude and the latest ChatGPT trying to use them as "smart" search engines for quotes on a topic. I knew of a quote I was looking for but not its exact choice of words and thought that a semantic aware system could help me find it, or similar ones.

      What I found was that they would do nothing but fabricate quotes based on various peoples written legacy. I got lots of nice usable quotes, all of them AI generated and falsely attributed. Telling them only to give me q

    • by Jeremi ( 14640 )

      AI fabricates about 30-50% of its answers.

      AI fabricates 100% of its answers. It just so happens that many of its fabrications happen to correspond to objective reality; the remainder do not. At no time does the AI know which of its answers are "true" or "false", as it has no concept of reality and no way to find out; it has only the ability to produce output that is statistically similar to the input it was trained on.

      That doesn't mean it can't be a useful tool in some circumstances, but it's critical for users to understand its limitations. Eve

  • .. is when your therapist asks you a question and then asks you to press Enter twice.

System checkpoint complete.

Working...