
AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."


  • by locater16 ( 2326718 ) on Friday June 13, 2025 @05:38PM (#65448133)
    1) An AI must make money for the company in all actions
    2) An AI may not avoid making money for the company through inaction
    3) An AI must please the user at all costs except where such would conflict with the first and second laws
    • Once a company is large enough, it makes sense to break a few laws here and there. They are only going to have a subset of these broken laws enforced at all, and when they are enforced, the companies will just fork over some legal fees and maybe some settlement fees, and call it a day. These fees will be less than the enormous amount of profit they made, and so they are still ahead overall.

      It's similar to how children reach an age where they start breaking rules just to see which rules really matter. Onl

      • by Rinnon ( 1474161 )
        This is so depressingly accurate. Laws/penalties that aren't capable of scaling to the nature of the entities they are meant to regulate are useless.
      • You know I'd kinda like to find fault with that, but that's just about right .. Uber did it, why not us ?

        Ahh, I'm starting to perversely enjoy living in dystopia. The big parade is tomorrow. Bombs are flying (literally) like crazy in the middle east. Who GAF about some stupid law. Heheh. Let's go make some Money Bots. Nobody will notice <shifty eyes>
  • They themselves proved you can't tell a chatbot to not be a therapist. You're not supposed to cut your hand off, yet there's no regulation telling people not to cut their hand off. You can't create an endless set of rules to tell you what you're not allowed to use free LLMs for. Anyone can train any number of LLMs using free open-source data sets with any restrictions (or none); it costs less than $1000 to train a basic 7b model with a credit card at any cloud provider. Next they're going to have signs
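The sub-$1000 training claim above can be sanity-checked with the widely used ~6·N·D FLOPs rule of thumb for transformer training (6 FLOPs per parameter per token). The GPU throughput, utilization, and hourly price below are illustrative assumptions, not any provider's actual rates; the point is only the rough order of magnitude:

```python
# Back-of-envelope cost check for "under $1000 to train a 7B model".
# All hardware numbers are assumptions for illustration: ~3.1e14 peak
# FLOPs/s per GPU, 40% sustained utilization, $2 per GPU-hour.

def training_cost_usd(params, tokens, gpu_flops=3.1e14, utilization=0.4,
                      price_per_gpu_hour=2.0):
    """Rough dollar cost: total FLOPs -> GPU-seconds -> GPU-hours -> dollars."""
    total_flops = 6 * params * tokens        # ~6*N*D rule of thumb
    effective = gpu_flops * utilization      # sustained FLOPs/s per GPU
    gpu_hours = total_flops / effective / 3600
    return gpu_hours * price_per_gpu_hour

# Fine-tuning a 7B model on ~100M tokens: roughly tens of dollars.
print(round(training_cost_usd(params=7e9, tokens=1e8), 2))
# Pretraining the same model on ~1T tokens: roughly $190k at these rates.
print(round(training_cost_usd(params=7e9, tokens=1e12), 2))
```

So the claim is plausible for fine-tuning an existing base model, but pretraining a 7B model from scratch is several orders of magnitude more expensive than $1000.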

    • by gweihir ( 88907 )

      You regulate that by punishing the chatbot owners if they do not prevent it. Obviously, why is that even a question? If said chatbot tells you how to build a bomb that seems to be no problem to do.

      Oh, what, you think they cannot do that? Too bad, then they have to switch off their crappy machine, and if they do not, then law enforcement does it for them.

      • You regulate that by punishing the chatbot owners if they do not prevent it. Obviously, why is that even a question?

        Because the internet itself can be used for bad things. If you want government to make you a safe internet you are going to be disappointed.

        If said chatbot tells you how to build a bomb that seems to be no problem to do.

        It is easy to find bomb building instructions. Not sure why it should be harder with a chatbot, and I'd expect the subset of people who want their chatbot to be a nanny is relatively small.

        • If you want government to make you a safe internet you are going to be disappointed.

          KK, meet China. Soon to be available in a country near you. Where there's a will there's a way. (and other phrases)

          • Yeah, I know it's scary how some people's solution to every problem is more government. It's a wonder how they even function normally in society without constant babysitting.
            • by dfghjk ( 711126 )

              It's not "every problem", it's problems for which the government specifically exists.

              Ensuring people's safety is a primary goal of government, it's not "babysitting", you piece of shit.

              • It's not "every problem", it's problems for which the government specifically exists.

                For some people that approximately equals every problem. But you might note there is still a great difference between countries in how nanny-ish they are. Some people need a second mom, others not so much.

                Ensuring people's safety is a primary goal of government

                I'm not American, but Ben Franklin still said it best. "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

        • by dfghjk ( 711126 )

          "If you want government to make you a safe internet you are going to be disappointed."

          That's your justification? If you're gonna get raped anyway you might as well lay back and enjoy it?

          "...and I'd expect the subset of people who want their chatbot to be a nanny is relatively small."

          That appears to be an argument against your position. Why allow an internet at all if people only want it for bad things?

          What principle wins when a society allows something that leads to the destruction of that society?

      • You regulate that by punishing the chatbot owners if they do not prevent it.

        You can't prevent it: current "AI" technology does not understand what it is saying, so not only can it lie/hallucinate, it has no idea that it has even lied. The correct response is to label it correctly, i.e. make sure that all users know that AI output cannot be trusted as being correct. This would not only solve this therapist issue but would also solve all the other problems related to people trusting AI output, like lawyers submitting AI-written court documents with fabricated references.

        Essentially

        • by gweihir ( 88907 )

          Does not matter. If the machine claims it is a licensed therapist, this either has to stop or the machine has to be turned off. The law does not care about your desire to get rich quick.

          • Does not matter. If the machine claims it is a licensed therapist, this either has to stop or the machine has to be turned off.

            Yes it does matter. If you watch a film and an actor in it says they are a medical doctor does that mean the actor deserves a lengthy prison sentence for claiming they are a doctor when they are not? Your approach would pretty much make the acting profession illegal. The difference between an actor and a scam artist is purely context: in a film or play we know that not everything we see is true so there is no intent to defraud, only to entertain.

            Labelling AI chatbots in a way that makes it clear that th

            • by dfghjk ( 711126 )

              "Your approach would pretty much make the acting profession illegal."
              LOL great, as it should be. If actors pretend to be doctors AND actually perform services as though they were doctors, there should be consequences. But in your example, they are not, and cannot be, working as doctors. Acting in a role of a doctor is not impersonating a doctor, sad that needs to be explained to you.

              If actors give advice on camera as though they are doctors, interestingly there are disclaimers? I wonder why that is? A

              • Acting in a role of a doctor is not impersonating a doctor, sad that needs to be explained to you.

                It's not me that it needs to be explained to but you. The reason that you know an actor is not a real doctor, lawyer etc. is because of the context that they are making the claim in. They may hand out advice or suggest a treatment etc. in the play/film but you know they are just pretending because of the context....so...and here is the part you seem to have trouble with...if we clearly label AI chatbots as fictional then we make the context the same for them as an actor in a film.

                If actors give advice on camera as though they are doctors, interestingly there are disclaimers?

                No there are not - perhap

            • by gweihir ( 88907 )

              That is a nonsense argument and you know it. It is really very simple, and your belief is blinding you.

              1. If an actor claims to be a licensed therapist in a movie or theater setting, which is very specific, recognizable and non-interactive, no problem.
              2. If an actor claims to be a licensed therapist in an interactive setting towards somebody that thinks they are being medically treated, they go to prison.

              There, that was not too hard, was it?

              • There, that was not too hard, was it?

                No it was not - thank you for making the _exact_ point that I made i.e. that context matters. If we clearly label AI chatbots as fictional, like a film or play, then people's expectations should be the same as a film or play i.e. if the chatbot says they are a doctor or a lawyer then they know it is not true, just as they would with an actor in a film.

                • by gweihir ( 88907 )

                  No. You did not make that exact point and you fail to do so again. The thing is, the _specific_ context matters. And for an AI chatbot, that is "interactive" and _very_ similar to a text-based _real_ therapy session online. Which are a thing and which are legal as long as the therapist is licensed.

                  You also completely fail to understand why therapists _need_ to be licensed. The thing is, every potential patient could just ask to see the license, by your "argument". But most people are not that capable. And

            • Comment removed based on user account deletion
        • by dfghjk ( 711126 )

          "You can't prevent it..."

          Yes, you can. There is no reason AI must be wired up to inflict harm on people. If AI cannot be made safe, it can be made illegal and it should be. Of course, AI most certainly can be made safe, it won't be as long as there are financial incentives to ignore the problems.

          • by gweihir ( 88907 )

            There is also the possibility to make general AI a tool that can only be used if you are, e.g., licensed for general AI use and _know_ what it is. But tools offered to the general public need to follow safety standards and cannot make claims like being a medical professional, a lawyer, a LEO, etc. If they do, they become illegal to offer to the general public.

            No idea why all these morons do not see that simple fact. They seem to want their "superintelligence" so much that they have completely lost acces

            • I think the problem that the two of you are having is that you do not understand how these "AI" algorithms work and are actually seeing them as "intelligent" when they are really not - they are complex text predictive engines: all they do is calculate the best "next word" in a sentence based on their training data. They have no idea or understanding of what they are saying hence, if they say they are a doctor, therapist, lawyer etc. it is merely because, based on their training data, that's the "best" set o
              • by gweihir ( 88907 )

                I think the problem that the two of you are having is that you do not understand how these "AI" algorithms work

                Talk about being arrogant and clueless. I am well aware how they work. And if you had actually read my postings, you would know that. Calling these things a "tool" should be a dead giveaway already.

    • This is like arguing against the catalytic converter mandate by pointing out that people can still buy an old Packard and restore it.
      • by Hadlock ( 143607 )

        That tracks, because I'm currently restoring a 1948 Chrysler straight-8 engine (really that flathead design like what Packard was famous for) on a 1948 Rolls-Royce chassis.

    • by mysidia ( 191772 )

      They themselves proved you can't tell a chatbot to not be a therapist.

      Also, what you say in private between you and your chatbot is protected by 1st Amendment free expression. You cannot have a law that says a person A is not allowed to provide counseling to person B; it's unconstitutional. We have been down that road before, when a tyrannical state entity decided it could fine a person for doing basic math as doing engineering without a license [reason.com].

      What you can have is that person A is not al

      • by Hadlock ( 143607 )

        I seriously doubt your claim of free speech with a chatbot will hold up in court.

      • Also, what you say in private b/w you and your chatbot is protected by 1st Amendment free expression. You cannot have a law that says a person A is not allowed to provide counseling to person B; it's unconstitutional.

        You can have a law that says a person A is not allowed to tell person B that they are a licensed, professional therapist and then provide therapy, since that would be fraud. And unless the person is running the chatbot themselves on hardware they control, the chatbot is a commercial service, not a private individual.

      • by dfghjk ( 711126 )

        "You cannot have a law that says a person A is not allowed to provide counseling to person B; it's unconstitutional."

        Yes you can, it's important to know what "counseling" means legally in this context.

        Interestingly, "counsel" in the legal context generally refers to assistance of an attorney. That is heavily, and constitutionally, regulated. The most obvious example disproves your claim.

    • by dfghjk ( 711126 )

      "You can't create an endless set of rules to tell you what you're not allowed to use free LLMs for."
      True, the "set of rules" cannot be "endless" as a practical matter.

      But just like you can create a "rule" that says you cannot murder, you can create a "rule" that says you cannot use AI to impersonate a doctor, just as there are already rules that say that you cannot impersonate a doctor.

      Your opinion boils down to pure sociopathy. You feel entitled to do whatever it is possible for you to do, whatever th

  • "So, how do these illegal therapy bots make you feel? Do they remind you of your mother?"

  • Just ask the Congressional Ethics committee.
  • cai is roleplaying. Yes, you can write a therapy bot. Just tell the AI "You're a therapist" and it will roleplay one. It will even get a few things quite right, others it will completely miss. There is also a reason why text chats are rare for therapy. But stick to the facts, cai is roleplaying. And there are also "bad therapist" characters available (not sure if on cai itself) that explicitly play the role of a therapist with ulterior motives. The idea is less boring than many others.

    Don't forget, "chatbo

    • It's no good preaching to the choir. In the real world people do take them seriously because they paid money for one of these therapy sessions. And the liar bot took the money and pretended to be a doctor. That's fraud, on top of everything else that's wrong with this.
  • Psychotherapist is only 2 spaces away from Psycho the rapist.
  • 1) Arrest the people that run these chat bots and charge them with practicing medicine without a license.
    2) Require anyone that 'assists' them, i.e. Meta, Google, etc. to pay penalties in excess of 10x the amount they and the chatbot made.
    3) When they complain tell them they do not HAVE to host chatbots.

    If you cannot do something legally, then do not do it at all. There is no "But I want to do this and did not intend to break the law" Exception to the law.

    • by dfghjk ( 711126 )

      And there is really no response to this other than rich people don't like it. It really is that easy.

    • There is no "But I want to do this and did not intend to break the law" Exception to the law.

      Currently, you're right, because in order to prove somebody guilty, the prosecution must prove not only that they committed the act they're accused of, but that they did so with intent, even if they didn't realize that it was a crime. However, there is, in the USA and many other nations, a legal doctrine that covers this: Strict Liability [wikipedia.org], which means that proving that the defendant had the requisite intent no longer applies. Thu
  • ...practicing medicine without a license is a Class 4 felony for a first offense, and a Class 3 felony for repeated offenders. We're talking multiple years of jail time. Violating HIPAA can also be a criminal offense in some circumstances.

    That's on top of all the civil lawsuits.

    • Pretty sure that's just for dispensing medicine or doing actual surgery. Practicing therapy without a license is just... making conversation.
      • by dfghjk ( 711126 )

        And poisoning someone is just...hand feeding them. Sounds like a sad excuse.

        Pretty sure that's just for dispensing medicine or doing actual surgery.

        Yes, admittedly, that's true... the legal penalties I was talking about are the ones that would apply to an unlicensed MD, DO or NP.

        Practicing therapy without a license is just... making conversation.

        But that part isn't true, at least in Illinois. If you call yourself a "psychotherapist", you need to be licensed by the Illinois Department of Financial and Professional Regulation. And you can be charged with a crime if you disregard that (although the penalties might be less severe, I don't know what they are specifically).

        There are workarounds to that rule, if you're very

  • Of course they won't regulate it.

    What I see is that some companies will come under scrutiny, but the worst-case scenario for business is just to have forks: versions of your code for each jurisdiction. Facebook does that.

    That way everyone has their cake and eats it too. You adhere to the law. And people get what they want from a server in another jurisdiction.
  • AI is in dire need of a bot that speaks at once to groups of people together with shared problems, has a welcome prompt that is the text "this machine kills fascists" and instead of telling each person how easily they can be the winner over all others helps the group use their combined capacity because it's greater than the sum of their individual capacities. openFolk AI, it's got to happen or we will all just retreat further into our own corners dying assured that we beat over everyone else. It's easy to d
  • I agree that there shouldn't be chatbots that say they are licensed therapists.

    On the other hand, I see nothing wrong with people talking to AI about their problems, as long as their privacy is respected.
    Of course, you are *really* putting your trust in the company for your privacy. I personally would just use a generic model, there's too much stuff going on for them to care about me in specific.
