First Trial of Generative AI Therapy Shows It Might Help With Depression 42

An anonymous reader quotes a report from MIT Technology Review: The first clinical trial of a therapy bot that uses generative AI suggests it was as effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn't give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area. A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in the New England Journal of Medicine. Many tech companies are building AI therapy bots to address the mental health care gap, offering more frequent and affordable access than traditional therapy. However, challenges persist: poorly worded bot responses can cause harm, and meaningful therapeutic relationships are hard to replicate in software. While many bots rely on general internet data, the Dartmouth researchers developed Therabot using custom, evidence-based datasets.

Here's what they found: To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day. Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that's not perfect but remains one of the best tools researchers have.

These results ... are about what one finds in randomized control trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. "I've been working in digital therapeutics for a long time, and I've never seen levels of engagement that are prolonged and sustained at this level," says [Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study].


Comments Filter:
  • by dunkelfalke ( 91624 ) on Saturday March 29, 2025 @09:09AM (#65267655)

    For a mentally ill person it is far easier to talk to a machine. It doesn't judge and there is no countertransference.

    • by cstacy ( 534252 )

      For a mentally ill person it is far easier to talk to a machine. It doesn't judge and there is no countertransference.

      These are the same machines that sometimes tell a person how horrible they are, and that they really should kill themselves.

• So, rediscovering the centuries-old human need for sympathetic conversation and interaction with others, free of judgement, condemnation, and negative facial and body-language reactions.....

There's also the parallel rabbit hole of "Why don't men go to singles/dating events?"

    • For a mentally ill person it is far easier to talk to a machine.

I've used ChatGPT a fair bit, and one thing is clear: it's obvious what the AI-generated slop is, and I hate reading it. I can't imagine why it would be better if I needed therapy.

      It doesn't judge and there is no countertransference.

      In the strictest sense of it having no mind those are both true. But if those are in the training data it can do an awfully good simulacrum of them.

      • it can do an awfully good simulacrum of them.

I think that might be to its advantage. It's not going to be insightful or context-aware, but it can give effectively the same style of boilerplate life advice, and the messenger does matter. The AI could be viewed as truly "neutral" by the user, whereas another human could be seen as having an agenda or some sort of baggage. That lack of judgement may lead the person to accept the advice where they would reject it from another human, even if it's all just kit-bashing of human-derived text.

        That would be the next interesting

      • I've used chatgpt a fair bit, and one thing is clear, it's obvious what the AI generated slop is, and I hate reading that.

That's your individual reaction, but it's not what lots of other people feel. I had a friend, back in the dawn of time, who got quite interested in an Eliza-like demo program (it came, IIRC, with a Creative Labs SoundBlaster card). It was as far from ChatGPT as Pluto is from the Sun, but she still had hour-long "conversations" typing into the computer and said she felt the machine understood her. And no, she didn't have mental or emotional problems, and went on to have a very normal and reasonably successful

Yeah, fair enough; I've heard that elsewhere too, now you come to mention it. Seems to have been not uncommon.

          Nothing wrong with it, certainly better for one's health than "fuck this shit I'm having a drink" which I've definitely done before.

      • Well, of course you cannot imagine it because you don't know how it feels to have a mental disorder.

  • by ndsurvivor ( 891239 ) on Saturday March 29, 2025 @09:32AM (#65267683)
I would feel uncomfortable sharing my deepest feelings with a company that keeps records and monitors the chat. When they make something that I can download locally and that does not send out information, I may consider using it myself.
    • These companies should be subject to HIPAA just like human practitioners.

    • by Mal-2 ( 675116 )

That exists. It's called "Llama 3.3". Or "DeepSeek-R1". Run them locally and they don't call home. They don't even need to be connected to the Internet unless you expect them to do research. In the case of DeepSeek, you can have it either do searches or Chain of Thought, not both at the same time, and a completely exposed CoT is frankly its best attribute, so there's pretty much no point in letting it have access to anything but what you need to feed it.

Llama 3.3 is installed by default when you install Ollama

      • by Mal-2 ( 675116 )

        I should probably mention that OpenOrca will happily run on a machine with 16 GB (maybe even with 8 GB, but I didn't test it) of RAM. Llama 3.3 requires 32 GB. DeepSeek-R1:70b requires 48 GB. The 1.58-bit quantization of DeepSeek-R1:671b will just fit in 192 GB of RAM.
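        • For anyone wanting to try the setup described above, a minimal sketch of a fully local, offline workflow (assuming Ollama is installed; the model name and RAM figures are the commenter's claims, not verified here):

          ```shell
          # Download the model weights once (requires network for this step only)
          ollama pull llama3.3

          # Start an interactive chat session -- runs entirely on the local machine
          ollama run llama3.3

          # Alternatively, query Ollama's local HTTP API (default port 11434);
          # nothing leaves the machine
          curl http://localhost:11434/api/generate \
            -d '{"model": "llama3.3", "prompt": "Hello", "stream": false}'
          ```

          Swapping in "deepseek-r1:70b" or a smaller quantized variant works the same way, subject to the RAM limits mentioned above.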

  • N=210, which isn't too bad. However, the control group was simply denied access to the app; they did not receive human-led talk therapy. So this study does not provide comparative insights at all. It just shows the chatbot is not completely worthless.
  • You would be depressed too if you were forced to spend all your time generating bullshit Ghibli images.
  • I see they are in full desperation mode trying to promote their garbage that the masses aren't excited about. China did a rug pull on all of these fools and they are like cockroaches hugging their piggy bank.
  • It's long been known that pretty much any kind of supportive, attentive conversational activity is helpful to some degree. And chatbots excel at that.
  • Did no one actually read Neuromancer?

  • When Eliza asked, with a kind heart, "TELL ME MORE ABOUT YOUR FAMILY", we knew we had found the loving care that was missing from our lives, and the long path to healing our nation's troubled souls had, at last, begun.

  • Eliza should be fine for everyone. /s

  • Please explain to me, and then provide robust data supporting, this claim that mentally impaired fuckwits engaging with LLMs is therapeutic.

  • It's conceivable that future AI systems could be developed that would help a bit, for those who respond to talk therapy.
    We still don't fully understand depression.
    There may be biochemical aspects that can't be addressed by talk or text.
    This is a subject of intense debate, with one side claiming that it's all about brain chemistry and the other claiming that it's the equivalent of a software bug.
    It's good to see a variety of approaches.

  • Interesting to note that talk therapy fell out of use. My impression is that psych doctors only prescribe drugs now. They consider talk therapy to be useless. Seems like it's not that useless after all.
  • depression experienced a 51% reduction in symptoms

    Active placebos are 30% effective. Results shortly after taking them, depending on what you're told. Treatment or cure.
    SSRIs are 30% effective. Results in around 4-8 weeks. Treatment or cure.
    Magnesium supplements are 40% effective. Results in 2-3 weeks. Cure.

    So was the chatbot worse than an active placebo, since no one had a full reduction in symptoms? Or maybe not? The research paper is too short to say. Note they claim a "reduction in symptoms," not an effective treatment or cure. As someone who

  • Putting people out of work, etc. Glad I'm retired.
