Eating Disorder Helpline Fires Staff, Transitions To Chatbot After Unionization (vice.com) 117
An anonymous reader quotes a report from Motherboard: Executives at the National Eating Disorders Association (NEDA) decided to replace hotline workers with a chatbot named Tessa four days after the workers unionized. NEDA, the largest nonprofit organization dedicated to eating disorders, has run a helpline for the last twenty years, providing support to hundreds of thousands of people via chat, phone call, and text. "NEDA claims this was a long-anticipated change and that AI can better serve those with eating disorders. But do not be fooled -- this isn't really about a chatbot. This is about union busting, plain and simple," helpline associate and union member Abbie Harper wrote in a blog post.
According to Harper, the helpline is composed of six paid staffers, a couple of supervisors, and up to 200 volunteers at any given time. A group of four full-time workers at NEDA, including Harper, decided to unionize because they felt overwhelmed and understaffed. "We asked for adequate staffing and ongoing training to keep up with our changing and growing Helpline, and opportunities for promotion to grow within NEDA. We didn't even ask for more money," Harper wrote. "When NEDA refused [to recognize our union], we filed for an election with the National Labor Relations Board and won on March 17. Then, four days after our election results were certified, all four of us were told we were being let go and replaced by a chatbot."
The chatbot, named Tessa, is described as a "wellness chatbot" and has been in operation since February 2022. The Helpline program will end on June 1, and Tessa will become the main support system available through NEDA. Helpline volunteers were also asked to step down from their one-on-one support roles and serve as "testers" for the chatbot. According to NPR, which obtained a recording of the call in which NEDA fired helpline staff and announced a transition to the chatbot, Tessa was created by a team at Washington University's medical school and spearheaded by Dr. Ellen Fitzsimmons-Craft. The chatbot was trained to specifically address body image issues using therapeutic methods and has only a limited number of responses. "Please note that Tessa, the chatbot program, is NOT a replacement for the Helpline; it is a completely different program offering and was borne out of the need to adapt to the changing needs and expectations of our community," a NEDA spokesperson told Motherboard. "Also, Tessa is NOT ChatGBT [sic], this is a rule-based, guided conversation. Tessa does not make decisions or 'grow' with the chatter; the program follows predetermined pathways based upon the researcher's knowledge of individuals and their needs."
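To make the spokesperson's distinction concrete: a rule-based, guided-conversation bot is essentially a hand-authored decision tree, with every prompt and every transition written in advance. Below is a minimal sketch in Python of what such a design can look like; the node names and canned wording are purely illustrative assumptions, not taken from the actual Tessa program.

```python
# Minimal sketch of a rule-based, guided-conversation bot: no learning,
# no free-text generation, just predetermined pathways through a tree.
# All node names and canned text below are hypothetical, for illustration.

PATHWAYS = {
    "start": {
        "prompt": "What would you like to work on today?",
        "options": {
            "1": ("Body image concerns", "body_image"),
            "2": ("Finding treatment resources", "resources"),
        },
    },
    "body_image": {
        "prompt": "Here is a short exercise on challenging negative self-talk.",
        "options": {"1": ("Back to the main menu", "start")},
    },
    "resources": {
        "prompt": "A referral directory can point you to vetted providers.",
        "options": {"1": ("Back to the main menu", "start")},
    },
}

def run() -> None:
    node = "start"
    while True:
        entry = PATHWAYS[node]
        print(entry["prompt"])
        for key, (label, _) in entry["options"].items():
            print(f"  {key}. {label}")
        choice = input("> ").strip()
        if choice.lower() in ("quit", "exit"):
            break
        if choice in entry["options"]:
            node = entry["options"][choice][1]
        # Unrecognized input simply re-displays the current menu: the bot
        # never improvises a response, which is the point of the design.

if __name__ == "__main__":
    run()
```

Because every reply is pre-authored, a bot like this cannot go off-script the way a generative model can; the flip side, as several comments below note, is that it cannot handle anything its authors did not anticipate.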
The NEDA spokesperson also told Motherboard that Tessa was tested on 700 women from November 2021 through 2023, and 375 of them gave Tessa a 100% helpful rating. "As the researchers concluded their evaluation of the study, they found the success of Tessa demonstrates the potential advantages of chatbots as a cost-effective, easily accessible, and non-stigmatizing option for prevention and intervention in eating disorders," they wrote.
Pathetic (Score:5, Informative)
Re: (Score:3, Insightful)
That's kind of what they had before. There was a script, and they had to stick to it. Just like your 1st level technical support when you call them. Describe the issue, they look it up, it tells them what to tell you. This just removes the middle-man.
These help lines are not intended to be therapeutic. They aren't counselling services or psychiatric care. They are basically a highly informed and guided google search to provide vetted resources and answers.
Re: (Score:2)
Also, you can't develop and deploy an AI chatbot on commercial scales in 4 days. This is clearly something that's been in the works for a while.
Re:Pathetic (Score:4, Insightful)
So, without getting into the specific fears, isn't this exactly what we don't want internet-trained chatbots to be doing?
This is a highly specific area I know nothing about, but my human guess is that a real person is what's needed for distress and crisis lines.
And there's nothing said that this was trained on the internet. But, stupid humans do stupid stuff. And who quite honestly is going to call a platitudes line? They know there won't be anyone there. Maybe that's the idea. Did I smell FL?
Re:Pathetic (Score:5, Informative)
This is not a distress or crisis hotline, it's not emergency services. It's an information line with a very specific topic. Kind of like poison control.
Sure, you can google "my kid ate a shitload of vitamin B tablets, what do I do?" and you will get back a lot of information, some factual and some not. But if you call poison control you can be reasonably certain they will understand what your concern is and how to get the relevant information out of you (what kind of vitamin B? what is the dosage per pill? etc.) before providing a recommendation.
Similar to poison control, this service will help identify the person's specific needs and point them towards the most appropriate resources.
And that is actually something an appropriately trained AI chat-bot might be quite good at.
Re: Pathetic (Score:3)
Re: Pathetic (Score:4, Insightful)
Re: (Score:3)
The robot is always there because it doesn't need to sleep and it's not going to feel the need to cut a conversation short bec
Re:Pathetic (Score:4, Insightful)
Before: people that follow the script, which has a tree of query-answer sets that you go through.
Now: machine that follows the script, which has a tree of query-answer sets that you go through.
My guess, machine is going to be superior at this, as it's going to be much more consistent.
Re: (Score:2)
Depends on how it handles accents and pronunciation if it's a verbal bot, or spelling and typos if it's a chat bot.
Not to mention humans can ask more questions to figure out if the caller is stating things accurately or the caller is not making sense and the issue could be something else.
Re: (Score:2)
You're talking about things that only someone at the top level of this sort of call center is allowed to do, if even them, considering what this sort of call center does. Their "last tier" is a real face-to-face encounter with an actual professional.
Re: (Score:2)
This news makes me want to puke. And eat. And puke... and eat....
Re: (Score:1)
At least the robot would be correct once in a while.
Re: (Score:2)
Hispanic guy who hates blacks and gays. You can't even make this stuff up.
Re: (Score:2)
Spanish and Italian are both descended from Latin.
Correlated but... (Score:5, Interesting)
The interesting nuance here is that it's a nonprofit. These chatbots don't spring out of nowhere. They take time to develop. This one has been in place and running for several months, so they've had some time to shake it out and see its limitations. Clearly this eventuality is not a spur-of-the-moment decision based solely on the unionization effort.
Re:Correlated but... (Score:5, Informative)
I spent many years of my career in the non-profit world, and non-profits can pretty much do anything a for-profit does, except distribute profits. There's some difference in budgeting principles, and you have some kind of purpose you've been incorporated under that ethically *should* take precedence over any other goals (although there's nobody to hold you accountable to this), but otherwise you can do everything a for-profit does, including create and market products and services.
One important way in which non-profits are the same as for-profits is that you have to follow the same laws, including labor laws, which make retaliation against unionization legally suspect.
Re: (Score:3)
Deciding to replace union workers with automation is not retaliation against the workers for unionizing if the automation is cheaper. One can consider future costs in business decisions as well, and employees, both union and nonunion, are only going to get more expensive while automation is likely to get cheaper.
Re: (Score:3)
Deciding to replace union workers with automation is not retaliation against the workers for unionizing if the automation is cheaper.
Does it lead to better outcomes?
Re: (Score:2)
Even equivalent outcomes at a slightly lower cost could justify such a shift.
As well, sometimes outcomes will be different but adequate or better.
For example, replacing 80% of your customer service reps with bots would likely allow "bot" service to your customers 24/7/365 instead of just M-F 8A-10P EST, with the remaining 20% of reps providing "human" service during the original hours for complex problems or questions.
Was replacing telephone switchboard operators with automation a "better" outcome? I think i
Unionization isn't spur of the moment (Score:3, Insightful)
That said, it's very possible they were always going to fire the employees. There's a *massive* automation push coming. ChatGPT has put the idea of computers taking jobs in the heads of CEOs and money people. They are positively salivating over the prospect of mass layoffs and the high unemployment rate that follows, leading to lower wages for what jobs are left.
As somebody pointed out, in the future robots will write poems and paint pictures and h
Re:Unionization isn't spur of the moment (Score:4, Interesting)
They are positively salivating over the prospect of mass layoffs and the high unemployment rate that follows, leading to lower wages for what jobs are left.
Which should instantly qualify them for a permanent spot in an insane asylum, with mandatory 24/7 attendance requirements. No modern society that exists today can survive under such conditions. Trying to force one to anyway should be met with the instant hostility and apathy that they show others, to the point of forced removal from power and forfeiture of their assets if need be. Such is the punishment to be handed out to those that abuse capitalism to the detriment of society.
As somebody pointed out, in the future robots will write poems and paint pictures and humans will work minimum wage jobs. Not the future we were supposed to have.
Only if the 99% lets them. The 1% has far outlived their welcome. If they think that the 99% is going to roll over and die so some screens show bigger numbers, they've got another thing coming. Sure that may not look like it right now, but things change when enough people get desperate enough. Doesn't mean it's easy, (freedom has a price, and it isn't free) but it will happen eventually if they keep pushing this nonsense.
Re: (Score:2)
Meanwhile, employee, employer and judge can play a round of golf.
Re: (Score:2)
I'd guess the unionization effort didn't spring out of nowhere in a moment, either.
Very likely the signs of this were long in the making.
Re: (Score:2)
The people doing the job were literally reading from a script they were not allowed to deviate from... the only change is that a chatbot is reading the script now. It was the simplest possible change.
"Tessa was tested on 700 women... (Score:5, Insightful)
And the other 325 - are they still alive? Just asking...
Re:"Tessa was tested on 700 women... (Score:4, Interesting)
All you can conclude is that 325 (under 50%) gave ratings under 100%. You also don't know what the average ratings were for all those who were served by people.
Given that a certain number of people (and a not-insubstantial number at that) will bitch and complain no matter who helped them or how well they did, it doesn't seem like the bot is doing particularly poorly.
Re:"Tessa was tested on 700 women... (Score:4, Insightful)
All you can conclude is that 325 (under 50%) gave ratings under 100%.
You can reasonably conclude significantly more than that. If those other 325 women had given a favorable rating, NEDA would almost certainly have mentioned that at least in passing. So it's reasonable to conclude that 325 people gave the chatbot an unfavorable rating.
Re:"Tessa was tested on 700 women... (Score:5, Informative)
At the call centers I've worked at, they expect the vast majority of ratings you receive to be perfect 10s. It was a decade ago now, but IIRC a 10 rating was positive toward your quota, a 9 was neutral, and anything 0-8 counted negatively toward your quota.
Maybe people are more willing to give a chat bot an imperfect rating, but at the places I worked, half your ratings being less-than-perfect would be grounds for being put on a performance plan.
Re:"Tessa was tested on 700 women... (Score:4, Insightful)
It’ll be interesting to watch as these unscientific perfect 10 rating systems are all suddenly replaced by “Better than 5 more than half the time” now that there’s no near minimum wage scrub to kick around.
Re: (Score:2)
This is a mental health hotline. The rules for other call centres are not really applicable. There will be some people you cannot make happy.
Re: (Score:3)
It's also a completely meaningless figure. When you go and see a psychologist or counsellor you almost always come away feeling better, even if you've talked to a complete charlatan, simply because someone sat and listened to your problems, didn't interrupt or tell you what to do, and was sympathetic to you, which is something you're often unlikely to get anywhere else.
What they should be measuring is who, six months after talking to this chatbot, was actually helped by it. My guess is the figure will be in the single di
Re: (Score:2)
It's a study; assuming the journalist is being honest, I assume that the missing ratings are from people who decided not to take part in the study??? It would be a little strange for everyone to rate the experience perfectly, but I guess a bot can have some very elegant preprogrammed responses psychologically proven to fill the readers with whatever emotions they want.
Re: (Score:2)
53% of the subjects gave the chatbot a 100% score. That may be an entirely legitimate result, but since we don't know what the study population is and what the full range of outcomes from the study are, it tells us nothing about the suitability of using this system with people who may be facing a psychological or medical crisis. If 53% of the users gave the system full marks and 10% immediately committed suicide after using it, that would not be a good result.
Re: (Score:3)
Re: (Score:2)
Literally: survivor bias?
Re: (Score:2)
Pre-Recorded Voice (Score:1, Flamebait)
Stop Eating!
Next problem....
Re: (Score:2)
Chatbot: Listen, honey, an eating disorder is obviously the LEAST of your problems!
Re:Pre-Recorded Voice (Score:4, Insightful)
Re: (Score:2)
Bulimia is about eating too much and then getting feelings of shame and fucking around with your throat to activate the gag reflex to the extent where you throw up.
Microsoft will build the bot for free (Score:2)
Pigs Is Pigs (Score:2)
non profits are some of if not the sketchiest (Score:1, Flamebait)
Re: (Score:3, Insightful)
To be fair, there are decent finds at Goodwill. I consider it to be a for-profit neighborhood "free" bin. "Almost free". It wouldn't survive long-term if it operated in the red. Each region is administered somewhat separately.
Re: (Score:2)
Many people mistake Goodwill stores for the "charity" part of Goodwill. They aren't. Goodwill stores are the tool Goodwill uses to raise money for its career training classes.
https://www.goodwillhouston.or... [goodwillhouston.org]
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re:non profits are some of if not the sketchiest (Score:4, Informative)
As charities go, Goodwill is one of the good guys. Give.org rates charities based on 20 criteria. See their rating here: https://give.org/charity-revie... [give.org]
Goodwill stores are NOT a charity. They are the fundraising arm of Goodwill Industries. The money raised by the stores funds job training programs for disadvantaged people.
Hack the bot (Score:2)
Where will the "savings" go? (Score:5, Informative)
There was a thread on Slashdot not that long ago about whether LLM-style AI would do harm. This is an example of how AI already is hurting people. Whether it is LLM chatbots or rule-based AI is not the point; it's the ability of AI to replace human interaction.
When large numbers of workers are laid off and replaced by AI, the impact will be severe. Besides the people who will become unemployed, all sorts of services will become unusable and unreliable. Medical, financial, law enforcement, government, and retail services will all have user interaction that is incompetent and unhelpful when things go wrong.
Note that the number of test subjects was 700. Saying that 375 of the test group (just over half) were 100% satisfied is a clear sign that the real results are being spun. Not comparing the positive rates for human interaction with those for AI is a dead giveaway. Additionally, the old system interacted with hundreds of thousands of people, so how can such a small sample be used to justify a replacement?
This is about greed, pure and simple. AI is a technobabble excuse to screw people and siphon resources to those at the top of the heap.
Re: (Score:1)
And if the union lawyers can prove it was retaliatory, the owners can go to jail instead of collecting more money. Actually, the entire thing is sketchy in light of labor law; I expect to see a few more news items about this. Completely canning everyone after they organize has been illegal for almost a hundred years now.
Re: (Score:2)
it's the ability of AI to replace human interaction.
Not everything requires human interaction, and the loss of these jobs is not always a net negative to society. The best example of this is banking. The very same people who bitched and moaned about talking to a computer when they called their bank, or about having to type in the purpose for a visit when going to a branch and getting a little barcode and letter-number combination to wait with until they were called, praised the fundamental move to online banking and deposit ATMs.
The only relevant metric for "hurt" is
Re: (Score:2)
Why blame the machine, and not the executive? Same old, guns don't kill people, people kill people.
Welp (Score:3, Interesting)
Time to overload the bot with requests and then spam unhelpful ratings.
Re: (Score:2)
Why? No, really, what do you hope to achieve by this? Management won't care and will write it off as the silly thing it is. All you'll be doing is taking a time slot away from someone who needs it for a legitimate purpose.
Re: (Score:2)
The bot is not legitimate, and no one needs it. I'm sure these helpline people needed the job more than this useless bot ever needed to see the light of day.
For a nonprofit (Score:2)
Re: (Score:3)
Nobody pays to use this service. Who pays their bills?
Jokes on You for Expecting to Find Human Empathy (Score:4, Insightful)
Oh... you thought another human cared about you? And was willing to listen to your problems?
Haha fck you, here's a bot you can talk to.
What a great charity.
Re: (Score:3, Interesting)
These phone helplines are rarely about any of that. They're usually about directing you to where help, like treatment, can be obtained. Empathy for a bulimic is in many ways a counter-productive thing for said bulimic. She'll just go binge eating and throwing up, and then call to cry about it, and then keep doing it until she dies.
This isn't a joke. Look up how bulimics usually die. It's things like stomach acid from her ruptured stomach melting down her insides while in the locked toilet tr
Re: (Score:2)
Huh? Empathy is only an understanding, not a fix, for anything. Although it will likely help in making a diagnosis.
Re: (Score:2)
Ask anorexia and bulimia specialists who're treating the illness (as opposed to vocal internet activists who enable it) what they think about it.
They'll tell you that one of the biggest problems with treating those two is the activists who, like you, think that empathy is so important, and who end up enabling these girls to kill themselves by acting out their psychiatric issues.
Treating those two illnesses is about being consistently stern in telling the girls that no, you're not in fact fat. You're sickening
Re: (Score:2)
Now I see you just don't understand the difference between empathy and enabling. You care about people with eating disorders, and you sense their pain. That is empathy you have. You can be empathetic and still do the right things, psychologically, to help them get better. Empathy is not the same as enabling.
Once again I thought someone was insane when they just had a warped view of what a word means. But dictionaries DO still exist.
Re: (Score:2)
And then you apply this pedantry to real life, and look what's happening in big cities in the US right now. That is what happens with unchecked empathy: most people are not able to be cruel and callous enough to be both empathetic and also apply the harsh methods needed to give the insane wandering the streets, trying to medicate their abject misery away with drugs, something that would give them a shot at a cure.
In case you don't remember, the main argument for closing down asylums was empathy toward the insane
Re: Jokes on You for Expecting to Find Human Empat (Score:5, Interesting)
Oh, ChatGPT, you generate the craziest comments.
Re: (Score:2)
Re: (Score:2)
For healthy people, yes.
For people with psychiatric illnesses, this approach fails when psychiatric illness itself is about incorrectly calibrated feelings. In those cases, the opposite approach is often necessary. A way to ignore, override or bypass feelings and find an anchor to reality that goes against the feelings. In those cases, empathy is not only counter-productive, but can cause relapses in those who are on the way to being cured.
This is why to a normal person, asylums looks cruel and counterprodu
Re: (Score:2)
Re: (Score:2)
You got it right.
Re: (Score:2)
I talk about the empathy that matters in this context. I find irrelevant edge cases, rich in number and poor in relevancy, to be as counterproductive as empathy is counterproductive in saving the insane, whose very conception of reality and feelings about it are warped to the point of being the opposite of reality.
Again, I point toward the objective reality of the nightmare that happened in the US after asylums were closed due to misguided empathy toward the insane. And the horrific outcomes of this that are visible on th
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You're again thinking from the perspective of a person that is sane and well-regulated about a person that is like you.
While the subject is people who are insane and dysregulated.
This is exactly what I describe when I'm talking about misguided empathy that comes from a sane and regulated mind attempting to project the golden rule upon an insane and dysregulated one. Solutions fit for one do not function for the other. But those are the solutions toward which empathy leads us, because the primary element of empathy is
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You can describe 133769,42 types of empathy if you want.
I describe the one that matters. The one that led to the current homeless crisis in the US.
Re: (Score:2)
Re: (Score:2)
Except that, of course, it is. As it is a direct consequence of the misguided empathy of a sane mind projecting the golden rule upon the treatment necessary to give the insane a chance to get a grip on reality again.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Didn't know they made functional LLMs four decades ago. TIL.
Re: Jokes on You for Expecting to Find Human Empa (Score:2)
Here's hoping tomorrow you learn about jokes.
Re: (Score:2)
>This isn't a joke. Look up how bulimiacs usually die.
No, really, look it up. This is the stuff that gives nightmares to experienced first responders who're used to "guts and gore" shit in real life.
Re: Jokes on You for Expecting to Find Human Empa (Score:2)
No really... look online for the update on this story. On VICE's website: "Eating Disorder Helpline Disables Chatbot for 'Harmful' Responses After Firing Human Staff"
Lede: "Every single thing Tessa suggested were things that led to the development of my eating disorder."
Guess you were wrong huh.
Re: (Score:2)
Wait, did you just cite VICE as a source?
The same VICE that is known to spin, lie, and twist everything to maximize outrage clicks?
For the record, I 100% expect rage bait sites to intentionally set up a fake account, call in a few times and then be outraged about outcome and write a few stories about it. Remember the recent "white Karen assaults innocent Black men over bike rental and should be fired for it" case for example? And I 100% expect this sort of thing to get cancelled as a result, because these h
Re: Jokes on You for Expecting to Find Human Empa (Score:2)
Wow you REALLY have a hard time admitting you were wrong. Ok go to Google News, type in "eating disorder chatbot", sort by date, and read it from whatever website your feeble mind is able to believe isn't trying to trick you.
Let's see... wow NPR, The Guardian, Engadget, Fortune, Gizmodo, The Register, New York Post, NBC... they ALL seem to be in on this same scam that VICE instigated to trick you into believing this chatbot dispensed horrible harmful advice, and real EmPaThEtIc humans did a better job. What
Re: (Score:2)
Actually, if you look at my posting history, I'm one of the really rare posters on slashdot to actually post mea culpas when I get something wrong.
This isn't one of those cases. Because this is exactly how this kind of activist sabotages these sorts of rollouts. It's a well-oiled process in these circles. And these NGOs are very poor at weathering attacks from this sort of activism.
Vice is on the record actually doing this exact kind of activism in the past. The "everyone else reposts" on the other hand is not a
Earlier You Said Something About Your Dinner (Score:2)
Please go on.
Bots? (Score:4, Insightful)
Re: (Score:2)
The AI probably beats some call center humans (Score:2)
We all love calling call centers these days, talking to somebody who can barely speak English and reads from a script. If you ask a question that's not on their script, they have no idea how to answer. This AI probably wouldn't do any worse!
Re: (Score:2)
No, it's worse than that. They DO answer, namely give an answer that is in their script, even though it has nothing to do with your question.
Hand everything to 4chan (Score:1)
Yee-up (Score:3)
The NEDA spokesperson also told Motherboard that Tessa was tested on 700 women from November 2021 through 2023, and 375 of them gave Tessa a 100% helpful rating.
54% of the time, it works every time. [youtube.com]
New name (Score:2)
The advice giver's new name: FatbotGPT.
I suggest a name change (Score:2)
They can rename it the LMGTFY help line.
Illustrates an inherent problem with unions.... (Score:2)
Unions always, and usually by design, raise the cost of employing people, ultimately reducing the overall standard of living by eliminating jobs and businesses while increasing costs and prices for everyone.
Technology, on the other hand, relentlessly makes it possible to do more with the same number of people, or, equivalently, to do the same thing with fewer of them. This greatly increases the general standard of living. Jobs still can be eliminated in the short term, but the businesses themselves surviv
Re: (Score:3)
Not historically, just recently. As others may point out, unions have done a lot of good over the past 150 years or so. Just not recently.
Re: (Score:2)