Employers Are Offering a New Worker Benefit: Wellness Chatbots (wsj.com)

More workers feeling anxious, stressed or blue have a new place to go for mental-health help: a digital app. Chatbots that hold therapist-like conversations and wellness apps that deliver depression and other diagnoses or identify people at risk of self-harm are snowballing across employers' healthcare benefits. From a report: "The demand for counselors is huge, but the supply of mental-health providers is shrinking," said J. Marshall Dye, chief executive officer of PayrollPlans, a Dallas-based provider of benefits software used by small and medium-size businesses, which began providing access to a chatbot called Woebot in November. PayrollPlans expects about 9,400 employers will use Woebot in 2024. Amazon about a year ago gave employees free access to Twill, an app that uses artificial intelligence to track the moods of users and create a personalized mental-health plan. The app offers games and other activities that the workers can play, as well as live chats with a human "coach."

The app "allows you to address mental health concerns the moment they arise and can be used as a supplement to your daily well-being routine," the company said in a blog post. Amazon declined to comment. About a third of U.S. employers offer a "digital therapeutic" for mental-health support, according to a survey of 457 companies this past summer by professional services company WTW. An additional 15% of the companies were considering adding such an offering in 2024 or 2025. Supporters say the mental-health apps alleviate symptoms such as anxiety, loneliness and depression. Because they are available at any time, the apps can also reach people who might not be able to fit traditional therapy into their schedules or can't find a therapist who has an opening. Yet some researchers say there isn't sufficient evidence the programs work, and the varied security and safety practices create a risk that private information could be leaked or sold.


Comments:
  • Dr. Sbaitso, please call the office.

    Come on, really? People are charging companies money for a chatbot?

    • by taustin ( 171655 )

      And there's no doubt in my mind that it will end up, quickly, like the chatbot on the help line for people with eating disorders (of the anorexia variety), which told callers to "lose weight."

        • Or like any other chatbot with too much AI. "Hey DepressBot, I've been really feeling down lately, and what I think would help is some Python code to solve the Navier-Stokes equations for the following boundary conditions." "I remember in my childhood, my grandma would always comfort me by saying: this is a legally binding offer of employment, no need to have HR review and sign." "For some therapeutic release, can we roleplay me negotiating a promotion with my grandma?"
        • by NFN_NLN ( 633283 )

          > "For some therapeutic release, can we roleplay me negotiating a promotion with my grandma?"

          PleasureBot: "That's more wholesome than the roleplaying I'm currently doing with your grandma in another session." Beep, bop boop.

    • Sadly, this is better than what we've been offered. Our CEO has on several occasions said that we can choose to be happy or not. And we are a behavioral health provider. He has also worked here for over 40 years, in various roles.
  • by ebunga ( 95613 ) on Wednesday December 27, 2023 @02:44PM (#64109625)

    Do you know any companies using this? Didn't think so. This is playing out just like the NFT scam.

    • While NFT scams can and will end up in a courtroom, this has the potential to go much further, such as when someone using and supposedly benefitting from the service goes postal at work, killing any number of people before being taken down by the police.

      One thing that will influence negative outcomes is that these non-AIs can't show real compassion, unlike a human therapist (some of whom can).
    • Amazon is mentioned, along with "a third of U.S. employers," but a listing of other companies would be helpful. When people pretend to be licensed therapists, they get sued by the boards that handle licensing and regulation, and can also be sued by licensed therapists for stealing their jobs. So what's the difference between cuckoos pretending to be therapists and AI programmed by greedy, perverse corporations to handle their employees? Becoming a psychologist is expensive and requires lengthy college study, then they ne...
    • by mjwx ( 966435 )

      Do you know any companies using this? Didn't think so. This is playing out just like the NFT scam.

      Sadly I do know a few companies on both sides of the pond who would.

      First off, it's a good thing that companies care... or at least pretend to care about wellness. Western societies, especially Western men, tend to internalise far too much until it blows up (hopefully more figuratively than literally, but that has happened too)... However, I don't think a chatbot is going to be useful: first of all, it will be incapable of true empathy, offering at best canned answers from a psychology textbook, and will be...

    • A company I escaped from not too long ago was moving HR into a (ahem) Teams chatbot, so it's plausible.

  • Sounds like an opportunity, folks: fork an LLM, rename it Dr. Eliza and sell it to companies with more money than sense... Of course, with the correct legal language about how you're not responsible... etc.
    • It's only a matter of time before that chatbot data gets a feedback loop to your employer, who discovers your frailties and acts on them. This is a total mine (mining?) field for misuse and abuse. WOW.

      • It's only a matter of time before that chatbot data gets a feedback loop to your employer, who discovers your frailties and acts on them. This is a total mine (mining?) field for misuse and abuse. WOW.

        Matter of time? It's most likely already among the "private sales tactics that shouldn't ever be written down, to avoid liability." You have to know everything you put into these chatbots is being filtered by HR for possible high-ticket costs down the line, and anybody looking overly ill or "not within normal parameters," whatever those may be, will likely have their file flagged "look for reasons to expunge." Tracking every minute detail of every little thing and then comparing it against nominal norms will be the way...

      • by AmiMoJo ( 196126 )

        You could use it to your advantage. Feed it some BS that would get you fired, and if they do read it and act on it, you can sue for a nice big payday.

        • You could use it to your advantage. Feed it some BS that would get you fired, and if they do read it and act on it, you can sue for a nice big payday.

          They're far too smart to let that happen. Your superiors would be instructed to gradually turn up the pressure and to start adding spurious bullshit 'poor performance' and 'bad attitude' notes to your file. HR would already be greasing the skids for your exit behind the scenes. Within a few months you'd be gone, and the records would be unimpeachable.

          Of course, when one of the HR wonks falls victim to this privacy-invading 'mental health support' scam, there could be a whistle-blower scandal. But these are

    • Pretty sure LLM is already a fork off of Eliza, which has been around longer than I've been alive.

  • by John Allsup ( 987 ) <<ten.euqsilahc> <ta> <todhsals>> on Wednesday December 27, 2023 @02:55PM (#64109691) Homepage Journal

    So Clippy's had a gender change, now calls herself Eliza and would like to help you write a letter to yourself about how you feel. Great.

    • Dr. Eliza predated Clippy by about 30 years.

    • I gotta be honest, no matter how bad my mood might be, interacting with a chatbot (or the customer service hotline equivalent) always, always makes it worse.
      Though putting one of these behind a Canadian suicide hotline would save their government tons of money.

    • by Tablizer ( 95088 )

      > So Clippy's had a gender change

      That's how it got the name "Clippy"

      • by mjwx ( 966435 )

        > So Clippy's had a gender change

        That's how it got the name "Clippy"

        Clippy had a gender? Surely I can't be the only one to refer to it as "it".

  • Chatbot or live human, there is no way I'd participate in such a process if my employer paid for it. That concern aside, as they touched on, they have no idea if this even works, yet they're already pushing it out to folks; who's to say your goals are the same as those of whoever is writing the chatbot?

    Finally: is the chatbot subject to the same regulations that real therapists are? The details on the website are a bit thin in this regard, which immediately makes me highly suspicious.

    • by gweihir ( 88907 )

      That concern aside, as they touched on, they have no idea if this even works, yet they're already pushing it out to folks;

      It is AI! It must be great! Right?

      In other news, many people confuse tech with religion and think that just believing hard enough some tech is good makes it so.

  • Reminds me of the "parole officer" Matt Damon went to see in Elysium
    • Obligatory Red Dwarf:

      Rimmer: I used to be in the Samaritans.
      Lister: I know. For one morning.
      Rimmer: I couldn't take any more.
      Lister: I don't blame you. You spoke to five people and they all committed suicide. I wouldn't mind, but one was a wrong number! He only phoned up for the cricket scores!
      Rimmer: Well, it's not my fault everyone chose that day to jump out of buildings! It made the papers, you know. "Lemming Sunday," they called it.

    • Indeed. The parole officer prop told a lot of story in seconds by just being there, shopworn, graffitied, and fake-human, like most organizational contacts today. I also recall Nolan Bushnell demonstrating Petster, "with most of the emotions of a real pet" (paraphrased from memory). Yes, real pets smell like machine oil and motor commutator ozone, and purr with the sound of plastic gear trains.
    • Demolition Man had one of these.

      Troubled Guy : I don't know... lately I just don't feel like there's anything special about me.

      Booth : You are an incredibly sensitive man, who inspires joy-joy feelings in all those around you.

  • ...a chatbot.

    On the bright side, at least it is one less useless HR position.
  • Where the idiot was standing in front of the chatbot and it was telling him what he wanted to hear.
  • not randomly firing people.

    And it was clear that a chatbot was the better move.

  • is their empathy.

  • I have a suggestion where they can put their ChatGPT wrapper chatbots and unqualified 'human coaches'.

    However, here is one thing that improves mental health, and not only does it not cost money, it actually saves the company money: letting people work from where they want to.

    https://www.forbes.com/sites/b... [forbes.com]

    https://www.forbes.com/sites/b... [forbes.com]

    https://www.forbes.com/sites/g... [forbes.com]

    Alternatively, they can try to make great offices for their workers (some ideas: private offices that people can make as they like, with the...

  • This will be used to flag and fire employees who tell the chatbot undesirable things. Of course, the employees will never be told that is why, but I don't think these chatbots have doctor-patient confidentiality...

  • defines a robot as "Your Plastic Pal Who's Fun to Be With."

  • This is the stupidest fucking thing I've ever seen. Why would anyone discuss their health with an unqualified anyone, let alone one provided (or required?) by their employer? There's no doctor-patient privilege or medical confidentiality requirement with a chatbot, so it's perfectly legal to share those records with anyone they like.

    Your health is between you & your doctor, nobody else. Sure, if you choose to inform family or friends, that's your personal choice. There's no way an employer should be...
    • Here's a thought. If they're so concerned about their employees' work-related health & there are system-wide work-related health issues, perhaps they can provide less unhealthy working environments & conditions? You know, treat the cause rather than put a Band-Aid on the symptoms?
      • by gweihir ( 88907 )

        Naaa. That would cost money! We cannot have that. Exploitation of workers is good capitalist practice and must continue at all cost. Far cheaper to just detect potentially costly depressed or suicidal people early and fire them.

    • >Why would anyone discuss their health with an unqualified anyone, let alone one provided (or required?) by their employer?

      The first chat bots, which were obviously barely smart enough to be called a 'script', ended up with people pouring their hearts out to them. Sad commentary on modern community or issues with our evolved social primate brains... either way it just happens.

      For employers? It's an inexpensive 'benefit' that keeps a few people at their desks with less trouble.

    • I agree; in the US, workers should not talk about their health with anyone not obligated by confidentiality. It's not illegal for a manager to tell the company that an employee has a mental or physical condition that is impacting productivity. It's not illegal for the company to fire you for no reason at all, even if the reason is actually medical. A good lawyer would be able to get all the relevant communications between parties and determine that the company was aware of the condition and the firing was...
    • We care about all our "teammates"! Please enjoy having unlimited time with your new virtual friend, gossiping, complaining and sharing your feelings with someone who can satisfy all your non-work needs without ever getting reported to HR*...

      *User profiles are the property of a 3rd-party corp that also does HR contracting for many corporations.

    • by gweihir ( 88907 )

      Hmm. Come to think of it, this may be exactly the reason why they are doing it. As people are generally pretty stupid and AI is the big hype, I see tons of people falling for this.

  • People should be talking to friends and family, not chatbots.

    • This is like when employee training became just a knowledge base and finding spare time to "teach yourself."

      I can't wait until they offer health insurance benefits of "doctor" bots so when that phone-like bot fails you pay for a real doctor out of pocket. Americans will put up with anything if you distract them with racism etc. (proof? Trump.)

  • Exploit their workers? IMO you'd have to be pretty foolish to use something like this, since the conversations would be stored forever and could be accessed by anyone later on. Even with the proper TOS and with shallow guarantees against exploitation, I wouldn't trust a chatbot provided by your employer for supposed wellness...
  • Really, the absolute last thing I'm ever going to do is tell an application that my company controls about whatever stress, anxiety, or mental health issues I may or may not have. How could that possibly go wrong in a world where every bit of data you ever put in an application is saved forever?

    • Really, the absolute last thing I'm ever going to do is tell an application that my company controls about whatever stress, anxiety, or mental health issues I may or may not have. How could that possibly go wrong in a world where every bit of data you ever put in an application is saved forever?

      Unfortunately, the oblivious and tech-ignorant form a very large part of the working population. Even people who should and could know better fall for this kind of crap. If they didn't, we wouldn't have social media, Alexa, Ring, smart appliances, and the like.

        Many people have a serious deficiency in their abstract thinking processes. They simply can't see the relationship between this "helpful service" and having their privacy raped to their own detriment and to their employers' benefit. At least, they ca...

  • So an employee discusses their mental and emotional problems with an imitation human wholly owned by the boss, an imitation human that can accurately remember every word exchanged and whose ethical standards are entirely determined by the employer.

    This should be interesting.

    • by gweihir ( 88907 )

      This should be interesting.

      Not really. I would say what happens is entirely predictable: The usual 10-15% independent thinkers will see the trap, the rest will fall for it.

  • Catbert and the Soul Sucker 3000.

  • ...made me cough in my coffee.

    The idea, I mean. So no, regardless of who pays that bill, not going for clown-based LLM therapy, TYVM.

  • Hopefully we get to full-on Brave New World soon because if this is the beta version, it's pretty bad! The drugs are nothing like soma, Tinder is nothing like the book's promiscuity and in this reality the Epsilon Semi-morons are successful YouTube and sports stars!
  • So after I am done chatting with the electronic quack, what advice would it give me that I couldn't find by searching the web for my symptoms? Sometimes a GOOD therapist is simply another human being who feels some empathy for what I am going through, even if it isn't all the "nuts and bolts" of the DSM or that person isn't a qualified therapist. Talking to a machine just doesn't cut it. You can't mechanize handling severe depression and crisis.
  • Are these chatbots bound by doctor-patient privilege? No? Then all your data is being sent to another AI, analyzed, and reported to your employer. Don't fall for this shit, people; this will end badly for the working class.
  • Blame the victim, and give yourself time to fire people who are having trouble. That's the goal.

  • You are a true believer. Blessings of the state. Blessings of the masses. Thou art a subject of the divine, created in the image of man, by the masses, for the masses.
    Let us be thankful we have an occupation to fill. Work hard, increase production, prevent accidents, and be happy.
    Failure to do so may result in prosecution for criminal drug evasion.
