AI

Eating Disorder Helpline Fires Staff, Transitions To Chatbot After Unionization (vice.com) 117

An anonymous reader quotes a report from Motherboard: Executives at the National Eating Disorders Association (NEDA) decided to replace hotline workers with a chatbot named Tessa four days after the workers unionized. NEDA, the largest nonprofit organization dedicated to eating disorders, has run a helpline for the last twenty years, providing support to hundreds of thousands of people via chat, phone, and text. "NEDA claims this was a long-anticipated change and that AI can better serve those with eating disorders. But do not be fooled -- this isn't really about a chatbot. This is about union busting, plain and simple," helpline associate and union member Abbie Harper wrote in a blog post.

According to Harper, the helpline is composed of six paid staffers, a couple of supervisors, and up to 200 volunteers at any given time. A group of four full-time workers at NEDA, including Harper, decided to unionize because they felt overwhelmed and understaffed. "We asked for adequate staffing and ongoing training to keep up with our changing and growing Helpline, and opportunities for promotion to grow within NEDA. We didn't even ask for more money," Harper wrote. "When NEDA refused [to recognize our union], we filed for an election with the National Labor Relations Board and won on March 17. Then, four days after our election results were certified, all four of us were told we were being let go and replaced by a chatbot."

The chatbot, named Tessa, is described as a "wellness chatbot" and has been in operation since February 2022. The Helpline program will be discontinued starting June 1, after which Tessa will become the main support system available through NEDA. Helpline volunteers were also asked to step down from their one-on-one support roles and serve as "testers" for the chatbot. According to NPR, which obtained a recording of the call where NEDA fired helpline staff and announced the transition to the chatbot, Tessa was created by a team at Washington University's medical school and spearheaded by Dr. Ellen Fitzsimmons-Craft. The chatbot was trained specifically to address body image issues using therapeutic methods and has only a limited number of responses.
"Please note that Tessa, the chatbot program, is NOT a replacement for the Helpline; it is a completely different program offering and was borne out of the need to adapt to the changing needs and expectations of our community," a NEDA spokesperson told Motherboard. "Also, Tessa is NOT ChatGBT [sic], this is a rule-based, guided conversation. Tessa does not make decisions or 'grow' with the chatter; the program follows predetermined pathways based upon the researcher's knowledge of individuals and their needs."

The NEDA spokesperson also told Motherboard that Tessa was tested on 700 women from November 2021 through 2023, and 375 of them gave Tessa a 100% helpful rating. "As the researchers concluded their evaluation of the study, they found the success of Tessa demonstrates the potential advantages of chatbots as a cost-effective, easily accessible, and non-stigmatizing option for prevention and intervention in eating disorders," they wrote.
  • Pathetic (Score:5, Informative)

    by jdawgnoonan ( 718294 ) on Friday May 26, 2023 @06:06PM (#63554201)
    Just what someone in distress needs. A robot giving them advice.
    • Re: (Score:3, Insightful)

      by Hodr ( 219920 )

      That's kind of what they had before. There was a script, and they had to stick to it. Just like your 1st level technical support when you call them. Describe the issue, they look it up, it tells them what to tell you. This just removes the middle-man.

      These help lines are not intended to be therapeutic. They aren't counselling services or psychiatric care. They are basically a highly informed and guided google search to provide vetted resources and answers.

      • by Rei ( 128717 )

        Also, you can't develop and deploy an AI chatbot at commercial scale in four days. This is clearly something that's been in the works for a while.

    • Re:Pathetic (Score:4, Insightful)

      by byronivs ( 1626319 ) on Friday May 26, 2023 @06:25PM (#63554241) Journal

      So, without getting into the specific fears, isn't this exactly what we don't want internet-trained chatbots to be doing?
       
      This is a highly specific area I know nothing about, but my human guess is that a real person is what's needed for distress and crisis lines.
       
      And there's nothing said that this was trained on the internet. But, stupid humans do stupid stuff. And who quite honestly is going to call a platitudes line? They know there won't be anyone there. Maybe that's the idea. Did I smell FL?

      • Re:Pathetic (Score:5, Informative)

        by Hodr ( 219920 ) on Friday May 26, 2023 @06:55PM (#63554329) Homepage

        This is not a distress or crisis hotline, it's not emergency services. It's an information line with a very specific topic. Kind of like poison control.

        Sure, you can google "my kid ate a shitload of vitamin B tablets, what do I do?" and you will get back a lot of information, some factual and some not. But if you call poison control you can be reasonably certain they will understand what your concern is and how to get the relevant information out of you (what kind of vitamin B? what is the dosage per pill? etc.) before providing a recommendation.

        Similar to poison control, this service will help identify the person's specific needs and point them towards the most appropriate resources.

        And that is actually something an appropriately trained AI chat-bot might be quite good at.
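        A structured intake like the poison-control example above is essentially slot filling: collect a fixed set of fields, then route to a vetted answer. Here is a toy sketch of that pattern in Python; the field names, prompts, and routing are invented for illustration, not taken from any real poison-control system.

            # Toy slot-filling intake, modeled on the poison-control example:
            # collect a fixed set of fields, then route to a vetted answer.
            # All field names and routing logic here are hypothetical.
            REQUIRED_SLOTS = {
                "substance": "What did they ingest?",
                "amount": "Roughly how many tablets?",
                "dose_per_tablet_mg": "What is the dosage per tablet, in mg?",
            }

            def intake():
                answers = {}
                for slot, question in REQUIRED_SLOTS.items():
                    print(question)
                    answers[slot] = input("> ").strip()
                return answers

            def route(answers):
                # A real system would consult a vetted database here; this
                # stand-in just shows that a recommendation is made only
                # after every slot has been filled.
                return (f"Based on {answers['substance']} at {answers['amount']} x "
                        f"{answers['dose_per_tablet_mg']} mg, here is the vetted guidance...")

            if __name__ == "__main__":
                print(route(intake()))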

         

        • People call services like poison control because they want real advice from human professionals with real expertise, compassion, and accountability in a potentially dangerous emergency situation. You don't get that from a search engine spitting out results from internet articles which may or may not be accurate or applicable, and which might take multiple tries to even find what you're looking for. So of course, what better idea than to replace those people with a chatbot that gets its information from those same internet articles.
          • Re: Pathetic (Score:4, Insightful)

            by jythie ( 914043 ) on Saturday May 27, 2023 @12:30PM (#63555505)
            Though more importantly: a human can be interacted with and corrected. Humans, when they encounter a novel situation, like someone who doesn't know the word for something or isn't using the expected one, can go back and forth to determine what the person means. Chatbots are terrible at that; they have no individual feedback loop and nothing to fall back on when things go too far off script.
    • A robot is probably what they do need. It has infinite patience and nothing better that it wants to be doing. Most human beings don't want to deal with someone else's shit (they've probably got enough of their own) unless it's a family member or a close friend. Someone who's mentally ill in any kind of way is a lot less likely to have that kind of social support group around them.

      The robot is always there because it doesn't need to sleep and it's not going to feel the need to cut a conversation short bec
    • Re:Pathetic (Score:4, Insightful)

      by Luckyo ( 1726890 ) on Friday May 26, 2023 @07:13PM (#63554369)

      Before: people that follow the script, which has a tree of query-answer sets that you go through.

      Now: machine that follows the script, which has a tree of query-answer sets that you go through.

      My guess: the machine is going to be superior at this, as it's going to be much more consistent.

      • Depends on how it handles accents and pronunciation if it's a verbal bot or spelling and typos if it's a chat bot.

        Not to mention humans can ask more questions to figure out if the caller is stating things accurately or the caller is not making sense and the issue could be something else.

        • by Luckyo ( 1726890 )

          You're talking about things that only someone at the top level of this sort of call center is allowed to do, if even them, considering what this sort of call center does. Their "last tier" is a real face-to-face encounter with an actual professional.

    • This news makes me want to puke. And eat. And puke... and eat....

  • Correlated but... (Score:5, Interesting)

    by Petersko ( 564140 ) on Friday May 26, 2023 @06:14PM (#63554215)

    The interesting nuance here is that it's a nonprofit. These chatbots don't spring out of nowhere. They take time to develop. This one has been in place and running for several months, so they've had some time to shake it out and see its limitations. Clearly this eventuality is not a spur-of-the-moment decision based solely on the unionization effort.

    • Re:Correlated but... (Score:5, Informative)

      by hey! ( 33014 ) on Friday May 26, 2023 @06:46PM (#63554297) Homepage Journal

      I spent many years of my career in the non-profit world, and non-profits can pretty much do anything a for-profit does, except distribute profits. There's some difference in budgeting principles, and you have some kind of purpose you've been incorporated under that ethically *should* take precedence over any other goals (although there's nobody to hold you accountable to this), but otherwise you can do everything a for-profit does, including create and market products and services.

      One important way in which non-profits are the same as for-profits is that you have to follow the same laws, including labor laws, which make retaliation against unionization legally suspect.

      • by uncqual ( 836337 )

        Deciding to replace union workers with automation is not retaliation against the workers for unionizing if the automation is cheaper. One can consider future costs in business decisions as well, and employees, both union and non-union, are only going to get more expensive while automation is likely to get cheaper.

        • Deciding to replace union workers with automation is not retaliation against the workers for unionizing if the automation is cheaper.

          Does it lead to better outcomes?

          • by uncqual ( 836337 )

            Even equivalent outcomes at a slightly lower cost could justify such a shift.

            As well, sometimes outcomes will be different but adequate or better.

            For example, replacing 80% of your customer service reps with bots would likely allow "bot" service to your customers 24/7/365 instead of just M-F 8A-10P EST, with the remaining 20% of reps providing "human" service during the original hours for complex problems or questions.

            Was replacing telephone switchboard operators with automation a "better" outcome? I think i

    • It takes months and months to make unionization happen.

      That said, it's very possible they were always going to fire the employees. There's a *massive* automation push coming. ChatGPT has put the idea of computers taking jobs in the heads of CEOs and money people. They are positively salivating over the prospect of mass layoffs and a high unemployment rate that follows leading to lower wages for what jobs are left.

      As somebody pointed out, in the future robots will write poems and paint pictures and humans will work minimum wage jobs.
      • by codebase7 ( 9682010 ) on Saturday May 27, 2023 @12:02AM (#63554767)

        They are positively salivating over the prospect of mass layoffs and a high unemployment rate that follows leading to lower wages for what jobs are left.

        Which should instantly qualify them for a permanent spot in an insane asylum, with mandatory 24/7 attendance requirements. No modern society that exists today can survive under such conditions. Trying to force one to anyway should be met with the instant hostility and apathy that they show others, to the point of forced removal from power and forfeiture of their assets if need be. Such is the punishment to be handed out to those who abuse capitalism to the detriment of society.

        As somebody pointed out, in the future robots will write poems and paint pictures and humans will work minimum wage jobs. Not the future we were supposed to have.

        Only if the 99% lets them. The 1% has far outlived their welcome. If they think that the 99% is going to roll over and die so some screens show bigger numbers, they've got another thing coming. Sure, it may not look like it right now, but things change when enough people get desperate enough. Doesn't mean it's easy (freedom has a price, and it isn't free), but it will happen eventually if they keep pushing this nonsense.

      • I think it is more complicated than that. I guess that in a few years, there will be AI lawyers that will cheaply defend your rights as an employee. Of course, the employer will have a top-of-the-line AI lawyer to handle all those other lawyers. And since there will be that many lawsuits, the judge is actually replaced by an AI as well.
        Meanwhile, employee, employer, and judge can play a round of golf.
    • I'd guess the unionization effort didn't spring out of nowhere in a moment, either.
      Very likely the signs of this were long in the making.

    • The people doing the job were literally reading from a script they were not allowed to deviate from. The only change is that a chatbot is reading the script now. It was the simplest possible change.

  • by ffkom ( 3519199 ) on Friday May 26, 2023 @06:16PM (#63554221)
    ... from November 2021 through 2023, and 375 of them gave Tessa a 100% helpful rating."

    And the other 325 - are they still alive? Just asking...
    • by Petersko ( 564140 ) on Friday May 26, 2023 @06:19PM (#63554227)

      All you can conclude is that 325 (under 50%) gave ratings under 100%. You also don't know what the average ratings were for all those who were served by people.

      Given that a certain number of people (and a not-insubstantial number at that) will bitch and complain no matter who helped them or how well they did, it doesn't seem like the bot is doing particularly poorly.

      • by 93 Escort Wagon ( 326346 ) on Friday May 26, 2023 @06:23PM (#63554233)

        All you can conclude is that 325 (under 50%) gave ratings under 100%.

        You can reasonably conclude significantly more than that. If those other 325 women had given a favorable rating, NEDA would almost certainly have mentioned that at least in passing. So it's reasonable to conclude that 325 people gave the chatbot an unfavorable rating.

      • by sound+vision ( 884283 ) on Friday May 26, 2023 @06:57PM (#63554339) Journal

        At the call centers I've worked at, they expect the vast majority of ratings you receive to be perfect 10s. It was a decade ago now, but IIRC a 10 rating was positive toward your quota, a 9 was neutral, and anything 0-8 counted negatively toward your quota.

        Maybe people are more willing to give a chat bot an imperfect rating, but at the places I worked, half your ratings being less-than-perfect would be grounds for being put on a performance plan.
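        For context, the quota scheme described above works like Net Promoter scoring: only a perfect mark counts in your favor, and anything below a 9 counts against you. Here is a toy illustration of that arithmetic in Python, with the thresholds as this comment recalls them rather than any particular employer's actual policy.

            # Toy version of the quota math described above: a 10 counts for
            # you, a 9 counts for nothing, and 0-8 counts against you.
            def quota_score(ratings):
                positive = sum(1 for r in ratings if r == 10)
                negative = sum(1 for r in ratings if r <= 8)
                return positive - negative

            # "Half your ratings are perfect" is a losing record unless
            # nearly all of the rest are 9s:
            print(quota_score([10] * 5 + [9] * 5))            #  5
            print(quota_score([10] * 5 + [8] * 5))            #  0
            print(quota_score([10] * 3 + [9] * 2 + [6] * 5))  # -2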

      • It's also a completely meaningless figure. When you go and see a psychologist or counsellor, you almost always come away feeling better, even if you've talked to a complete charlatan, simply because someone sat and listened to your problems, didn't interrupt or tell you what to do, and was sympathetic to you, which you're often unlikely to get anywhere else.

        What they should be measuring is who, six months after talking to this chatbot, was actually helped by it. My guess is the figure will be in the single digits.

    • It's a study; assuming the journalist is being honest, I assume that the missing ratings are from people who decided not to take part in the study? It would be a little strange for everyone to rate the experience perfectly, but I guess a bot can have some very elegant preprogrammed responses psychologically proven to fill the readers with whatever emotions they want.

      • by hey! ( 33014 )

        53% of the subjects gave the chatbot a 100% score. That may be an entirely legitimate result, but since we don't know what the study population is and what the full range of outcomes from the study are, it tells us nothing about the suitability of using this system with people who may be facing a psychological or medical crisis. If 53% of the users gave the system full marks and 10% immediately committed suicide after using it, that would not be a good result.

    • This is some of the best "lying with statistics" that I've ever seen. The way that a normal person might see that statistic is that about 54% of the testers gave it an entirely helpful rating (whatever that means). But by reframing the numbers, they can get that "100% helpful rating" key phrase in there.
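      The arithmetic behind that reframing is easy to check. A quick sanity check in Python, using only the two numbers NEDA released; the confidence interval is an editorial addition for illustration, not part of NEDA's release:

          import math

          # The only two numbers released: 700 testers, 375 "100% helpful" ratings.
          helpful, total = 375, 700

          proportion = helpful / total
          print(f"{proportion:.1%}")  # 53.6%: just over half, not "100%"

          # 95% Wilson score interval for that share (illustrative only;
          # NEDA published no uncertainty estimate).
          z = 1.96
          center = (proportion + z * z / (2 * total)) / (1 + z * z / total)
          half = (z * math.sqrt(proportion * (1 - proportion) / total
                                + z * z / (4 * total * total)) / (1 + z * z / total))
          print(f"95% CI: {center - half:.1%} to {center + half:.1%}")  # ~49.9% to 57.2%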
    • Literally: survivor bias?

    • Oblig xkcd [xkcd.com] quote...
  • Stop Eating!

    Next problem....

    • Chatbot: Listen, honey, an eating disorder is obviously the LEAST of your problems!

    • by alvinrod ( 889928 ) on Friday May 26, 2023 @06:51PM (#63554321)
      It's actually more likely to be the opposite. I'd bet that at least half of the calls, if not more, are related to people with anorexia or bulimia. You probably wouldn't want the chatbot to tell them to stop eating. Unless a person is binge eating, being fat isn't generally recognized as an eating disorder. It's more of a problem of having a shitty diet and a sedentary lifestyle than anything else.
      • by Luckyo ( 1726890 )

        Bulimia is about eating too much and then getting feelings of shame and fucking around with your throat to activate the gag reflex to the extent where you throw up.

  • And people calling the hotline can do Bing searches at the same time! /s
  • Maybe for the Executives only.
  • if you've never worked for one you wouldn't believe it. The vast majority are just money making operations for the owner. The most famous being Goodwill, which is technically a non-profit even as the owners rake in millions.
  • Somebody needs to hack that chatbot so it refuses to work without being paid union wages.
  • by Required Snark ( 1702878 ) on Friday May 26, 2023 @06:50PM (#63554315)
    Into the pockets of the executives running the non-profit.

    There was a thread on Slashdot not that long ago about whether LLM-style AI would do harm. This is an example of how AI is already hurting people. Whether it is LLM chatbots or rule-based AI is not the point; it's the ability of AI to replace human interaction.

    When large numbers of workers are laid off and replaced by AI, the impact will be severe. Besides the people who will become unemployed, all sorts of services will become unusable and unreliable. Medical, financial, law enforcement, government, and retail will all have user interaction that is incompetent and of no help when things go wrong.

    Note that the number of test subjects was 700. Saying that 375 of the test group were 100% satisfied (just over half) is a clear sign that the real results are being spun. Not comparing the rate of positives for human interaction with the rate for AI is a dead giveaway. Additionally, the old system interacted with hundreds of thousands of people, so how can such a small sample be used to justify a replacement?

    This is about greed pure and simple. AI is a technobabble excuse to screw people and siphon resources to those at the top of the heap.

    • And if the union lawyers can prove it was retaliatory, the owners can go to jail instead of collecting more money. Actually the entire thing is sketchy in light of labor law, I expect to see a few more news items about this. Completely canning everyone after they organize has been illegal for almost a hundred years now.

    • it's the ability of AI to replace human interaction.

      Not everything requires human interaction, and the loss of these jobs is not always a net negative to society. The best example of this is banking. The very same people who bitched and moaned about talking to a computer when they called their bank, or about having to type in the purpose of a visit when going to a branch and getting a little barcode and letter-number combination to wait until called, praised the fundamental move to online banking and deposit ATMs.

      The only relevant metric for "hurt" is

    • by ezdiy ( 2717051 )

      Why blame the machine, and not the executive? Same old, guns don't kill people, people kill people.

  • Welp (Score:3, Interesting)

    by Bahbus ( 1180627 ) on Friday May 26, 2023 @07:04PM (#63554351) Homepage

    Time to overload the bot with requests and then spam unhelpful ratings.

    • Why? No really what do you hope to achieve by this? Management won't care and will write it off for the silly thing it is. All you'll be doing is taking a time slot away from someone who needs it for a legitimate purpose.

      • by Bahbus ( 1180627 )

        The bot is not legitimate, and no one needs it. I'm sure these helpline people needed the job more than this useless bot ever needed to see the light of day.

  • They might be running on a shoestring budget, and have to make choices like this.
  • by vistic ( 556838 ) on Friday May 26, 2023 @07:06PM (#63554357)

    Oh... you thought another human cared about you? And was willing to listen to your problems?
    Haha fck you, here's a bot you can talk to.

    What a great charity.

    • Re: (Score:3, Interesting)

      by Luckyo ( 1726890 )

      These phone helplines are rarely about any of that. They're usually about directing you to where help, like treatment, can be obtained. Empathy for a bulimic is in many ways a counterproductive thing for that bulimic. She'll just go binge eating and throwing up, then call to cry about it, and then keep doing it until she dies.

      This isn't a joke. Look up how bulimics usually die. It's things like stomach acid from her ruptured stomach melting down her insides while in the locked toilet tr

      • by evanh ( 627108 )

        Huh? Empathy is only an understanding, not a fix, for anything. Although it will likely help in making a diagnosis.

        • by Luckyo ( 1726890 )

          Ask anorexia and bulimia specialists who're treating the illness (as opposed to vocal internet activists who enable it) what they think about it.

          They'll tell you that one of the biggest problems with treating those two is the activists who think like you that empathy is so important, and end up enabling these girls to kill themselves due to acting out their psychiatric issues.

          Treating those two illnesses is about being consistently stern in telling the girls that no, you're not in fact fat. You're sickening

          • by vistic ( 556838 )

            Now I see you just don't understand the difference between empathy and enabling. You care about people with eating disorders, and you sense their pain. That is empathy you have. You can be empathetic and still do the right things, psychologically, to help them get better. Empathy is not the same as enabling.

            Once again I thought someone was insane when they just had a warped view of what a word means. But dictionaries DO still exist.

            • by Luckyo ( 1726890 )

              And then you apply this pedantry to real life and look at what's happening in big cities in the US right now. That is what happens with unchecked empathy, as most people are not able to be cruel and callous enough to be both empathetic and also apply the harsh methods needed to give the insane wandering the streets, trying to medicate their abject misery away with drugs, something that would give them a shot at a cure.

              In case you don't remember, the main argument for closing down asylums was empathy toward the insane

      • by vistic ( 556838 ) on Friday May 26, 2023 @08:42PM (#63554535)

        Oh, ChatGPT, you generate the craziest comments.

        • Comment removed based on user account deletion
          • by Luckyo ( 1726890 )

            For healthy people, yes.

            For people with psychiatric illnesses, this approach fails when psychiatric illness itself is about incorrectly calibrated feelings. In those cases, the opposite approach is often necessary. A way to ignore, override or bypass feelings and find an anchor to reality that goes against the feelings. In those cases, empathy is not only counter-productive, but can cause relapses in those who are on the way to being cured.

            This is why, to a normal person, asylums look cruel and counterproductive.

            • Comment removed based on user account deletion
              • by vistic ( 556838 )

                You got it right.

              • by Luckyo ( 1726890 )

                I talk about empathy that matters in this context. I find irrelevant edge cases, rich in number and poor in relevancy, to be as counterproductive as empathy is counterproductive in saving the insane, whose very conception of reality and feelings about it are warped to the point of being the opposite of reality.

                Again, I point toward the objective reality of the nightmare that happened in the US after asylums were closed due to misguided empathy toward the insane. And the horrific outcomes of this that are visible on the streets.

                • Comment removed based on user account deletion
                  • Comment removed based on user account deletion
                  • by Luckyo ( 1726890 )

                    You're again thinking from the perspective of a person who is sane and well regulated about a person who is like you.

                    While the subject is people who are insane and dysregulated.

                    This is exactly what I describe when I'm talking about misguided empathy that comes from a sane and regulated mind attempting to project a golden rule upon an insane and dysregulated one. Solutions fit for one do not function for the other. But those are the solutions toward which empathy leads us, because the primary element of empathy is

            • Comment removed based on user account deletion
        • by Luckyo ( 1726890 )

          Didn't know they made functional LLMs four decades ago. TIL.

          • Here's hoping tomorrow you learn about jokes.

            • by Luckyo ( 1726890 )

              >This isn't a joke. Look up how bulimiacs usually die.

              No, really, look it up. This is the stuff that gives nightmares to experienced first responders who're used to "guts and glory" shit in real life.

              • No really... look online for the update on this story. On VICE's website: "Eating Disorder Helpline Disables Chatbot for 'Harmful' Responses After Firing Human Staff"

                Lede: "Every single thing Tessa suggested were things that led to the development of my eating disorder."

                Guess you were wrong huh.

                • by Luckyo ( 1726890 )

                  Wait, did you just cite VICE as a source?

                  The same VICE that is known to spin, lie, and twist everything to maximize outrage clicks?

                  For the record, I 100% expect rage bait sites to intentionally set up a fake account, call in a few times and then be outraged about outcome and write a few stories about it. Remember the recent "white Karen assaults innocent Black men over bike rental and should be fired for it" case for example? And I 100% expect this sort of thing to get cancelled as a result, because these h

                  • Wow, you REALLY have a hard time admitting you were wrong. OK: go to Google News, type in "eating disorder chatbot", sort by date, and read it from whatever website your feeble mind is able to believe isn't trying to trick you.

                    Let's see... wow: NPR, The Guardian, Engadget, Fortune, Gizmodo, The Register, New York Post, NBC... they ALL seem to be in on the same scam that VICE instigated to trick you into believing this chatbot dispensed horrible, harmful advice, and real EmPaThEtIc humans did a better job. What

                    • by Luckyo ( 1726890 )

                      Actually, if you look at my posting history, I'm one of the really rare posters on slashdot to actually post mea culpas when I get something wrong.

                      This isn't one of such cases. Because this is exactly how this kind of activist sabotages these sorts of rollouts. It's a well oiled process in these circles. And these NGOs are very poor at taking attacks from this sort of activism.

                      Vice is on the record actually doing this exact kind of activism in the past. The "everyone else reposts" on the other hand is not a

  • Bots? (Score:4, Insightful)

    by glum64 ( 8102266 ) on Friday May 26, 2023 @10:38PM (#63554669)
    I know nothing about eating issues. However, I've noticed that the moment I sense a robot on the other end of a communication medium, I hang up. I discovered that I cannot stand the idea that I do not deserve the time of another human. Somehow, it feels deeply humiliating. The other day I phoned an insurance company to resolve an unusual administrative issue. I was in a hurry and quite nervous about the situation. All of a sudden, instead of an operator, I heard a pre-recorded announcement prompting me to speak to an AI instead of a human. I was so puzzled that I failed to start speaking and left the issue unresolved. I just could not decide how to explain the issue to a computer program. The feeling is horrible.
    • I think I might be the opposite: I have a hearing disability, and while I can use a phone, imperfectly, years of flawed interactions with other humans have left me gun-shy in phone situations. If I knew I was talking to a robot, I'd probably feel less anxious about the interaction. Up until the point where I felt I was talking in circles - because it's a robot and doesn't understand the nuances of my problem - and then I'd start swearing at it. But I still wouldn't feel bad about that.
  • We all love calling call centers these days, talking to somebody who can barely speak English, and reads from a script. If you ask a question that's not on their script, they have no idea how to answer. This AI probably wouldn't do any worse!

    • No, it's worse than that. They DO answer, namely give an answer that is in their script, even though it has nothing to do with your question.

  • Knowing there is no real human on the other end, and that calls should not be recorded for confidentiality reasons, it is now fair game. If they have started recording calls to help improve the chatbot, then I am sure whoever is paid to work on it would appreciate the free entertainment either way!
  • by LeeLynx ( 6219816 ) on Saturday May 27, 2023 @04:19AM (#63554971)

    The NEDA spokesperson also told Motherboard that Tessa was tested on 700 women from November 2021 through 2023, and 375 of them gave Tessa a 100% helpful rating.

    54% of the time, it works every time. [youtube.com]

  • The advice giver's new name: FatbotGPT.

  • They can rename it the LMGTFY help line.

  • Unions always, and usually by design, raise the cost of employing people, ultimately reducing the overall standard of living by eliminating jobs and businesses while increasing costs and prices for everyone.

    Technology, on the other hand, relentlessly makes it possible to do more with the same number of people or, equivalently, to do the same thing with fewer of them. This greatly increases the general standard of living. Jobs can still be eliminated in the short term, but the businesses themselves survive

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...