Patients Aren't Being Told About the AI Systems Advising Their Care (statnews.com) 42

At a growing number of prominent hospitals and clinics around the country, clinicians are turning to AI-powered decision support tools -- many of them unproven -- to help predict whether hospitalized patients are likely to develop complications or deteriorate, whether they're at risk of readmission, and whether they're likely to die soon. But these patients and their family members are often not informed about or asked to consent to the use of these tools in their care, a STAT examination has found. From a report: The result: Machines that are completely invisible to patients are increasingly guiding decision-making in the clinic. Hospitals and clinicians "are operating under the assumption that you do not disclose, and that's not really something that has been defended or really thought about," Harvard Law School professor Glenn Cohen said. Cohen is the author of one of only a few articles examining the issue, which has received surprisingly scant attention in the medical literature even as research about AI and machine learning proliferates. In some cases, there's little room for harm: Patients may not need to know about an AI system that's nudging their doctor to move up an MRI scan by a day, like the one deployed by M Health Fairview, or to be more thoughtful, such as with algorithms meant to encourage clinicians to broach end-of-life conversations.

But in other cases, lack of disclosure means that patients may never know what happened if an AI model makes a faulty recommendation that is part of the reason they are denied needed care or undergo an unnecessary, costly, or even harmful intervention. That's a real risk, because some of these AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven't yet been shown to improve patient outcomes. Some hospitals don't share data on how well the systems work, justifying the decision on the grounds that they are not conducting research. But that means that patients are not only being denied information about whether the tools are being used in their care, but also about whether the tools are actually helping them. The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects, who see little value -- but plenty of downside -- in raising the subject.


Comments Filter:
  • "AI suggests you should kill yourself to prevent the coronavirus from killing you...."
    • A resident looking up your malady in a textbook and then guessing?

      As long as the AI is used to augment the judgement of an experienced physician, then there is no issue.

        • It's based on statistics. So as long as you're a textbook case, you should be OK. I'm never a textbook case, so I'm a little worried. But then again, I challenge and confirm everything I'm told anyway.

        You know the saying

        There are Lies. Damn Lies. And Statistics.

        • by Cylix ( 55374 )

          Expecting personalized care in the revolving door of hospital staff is not a good bet. You have to be your own advocate.

          No, I cannot take that medication and eat twenty minutes later. I repeated that sentence so much. It was also a fight to get the right care.

          I swear I want to hire a private physician to handle the hospital next time I have a big ordeal. Kinda like how you get a contractor to manage your housing development.

          • True Dat. That's a really good idea, hiring someone like a lawyer but useful. Can be the Uber of medicine. Someone just read this and is going to bank in 3 years....lol
  • Seems fine to me (Score:5, Insightful)

    by reanjr ( 588767 ) on Monday July 20, 2020 @05:09PM (#60312431) Homepage

    As long as they don't take the FB defense (we're not responsible, the algorithms are), this seems fine to me. Doctors don't regularly inform you of the journals they're reading or how a PCR test works. That's their job. It's their concern.

    • by znrt ( 2424692 )

      the key here is "proven". deep learning and statistical analysis can actually outperform humans, who are error prone and subject to bias too.

      the algorithms must be proven to be free of bias and at least as accurate as human diagnosis on average. if they are, there is no problem; even the more the merrier. i don't see why a specific consent should be necessary, you already consent to a great deal when you seek healthcare. if they aren't, they shouldn't be used, plain and simple, not even with consent.

        • One early system analyzed bacterial infections using rule-based inference. People were concerned about the mistakes it made, and it did indeed make mistakes.

        But in trials it made fewer mistakes and produced better results on average than doctors in the field.

        Things do not have to be perfect to be good.

        It never took off because in practice it usually recommended what doctors would do anyway, and entering all the test results and observations into a 1970s teletype was tedious.

          Being a rule based system, it could also provide an explanation of the reasoning behind its recommendations.
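The kind of rule-based approach this comment describes can be sketched in a few lines. Below is a toy forward-chaining engine; the rules and facts are made up for illustration and are not real clinical logic:

```python
# Toy forward-chaining rule engine in the spirit of early medical expert
# systems. Rules and facts are illustrative only, not real clinical logic.

def forward_chain(facts, rules):
    """Fire every rule whose conditions all hold, adding its conclusion
    to the fact set, and repeat until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: (set of conditions, conclusion).
RULES = [
    ({"fever", "elevated_wbc"}, "suspect_infection"),
    ({"suspect_infection", "gram_positive_culture"}, "suspect_strep"),
]

derived = forward_chain({"fever", "elevated_wbc", "gram_positive_culture"}, RULES)
print(sorted(derived))
```

One advantage such systems had, as the comment notes: the sequence of fired rules is itself the explanation of the conclusion.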

        • by znrt ( 2424692 )

          rule based systems are very hard to "train" (you don't, you actually painstakingly write them) and thus are inherently limited in scope, scale and domain. they are still useful at the very specific level. current systems use a mix of techniques including rule based inference.

          anyhow, the process and implementation are not really relevant here; it's the accuracy that matters, and you can test that with any black box. one of the recent shifts in ai is that we got used to stuff that works even if we often don't exactly understand why.
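The "test it as a black box" point can be made concrete: measuring accuracy needs only the model's inputs and known outcomes, never its internals. A minimal sketch with made-up data and a made-up stand-in model:

```python
# Accuracy of a black-box predictor: we only need its inputs and the
# known outcomes, never its internals. Data and model here are made up.

def accuracy(predict, labeled_cases):
    """Fraction of labeled cases the opaque predictor gets right."""
    hits = sum(1 for inputs, label in labeled_cases if predict(inputs) == label)
    return hits / len(labeled_cases)

def black_box(patient):
    # Stand-in for any opaque model (neural net, rule engine, vendor API).
    return "high_risk" if patient["age"] > 70 else "low_risk"

labeled_cases = [
    ({"age": 80}, "high_risk"),
    ({"age": 30}, "low_risk"),
    ({"age": 75}, "low_risk"),   # the model gets this case wrong
    ({"age": 60}, "low_risk"),
]
print(accuracy(black_box, labeled_cases))  # 0.75
```

The same harness works whether `predict` is a hand-written rule set or an inscrutable neural network, which is exactly the comment's point.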

          • People are rightfully wary of black-box results, whether from computers or people. A reasoned argument tells them what has and has not been considered.

            There is more to AI than just plugging everything into a magic artificial neural network and letting it learn. ANNs are only one tool in the box.

            • by znrt ( 2424692 )

              People are rightfully wary of black-box results, whether from computers or people. A reasoned argument tells them what has and has not been considered.

              yet people have been eating all sorts of food for millennia without any reasoned argument for it except "it works". actually, we are still pretty clueless about the finer details of how it works; we just keep eating it because it does.

              There is more to AI than just plugging everything into a magic artificial neural network and letting it learn. ANNs are only one tool in the box.

              nobody said it shouldn't. my point was precisely "proper verification" and "right questions asked". and i didn't even mention neural networks or any other tool; you singled out rule-based inference, implying it is superior because you know exactly how it works. well, it isn't.

              • by znrt ( 2424692 )

                i hope next time you need some diagnostic you can have your choice.

                just realized this could have a pretty nasty interpretation. well, it wasn't meant that way :)

      • Doctors prescribe off-label all the time. That's pretty much the definition of unproven medicine, but we seem to do alright with it.

  • "We didn't recommend anything - it was the computer!"
    • No it isn't. We patients don't actually understand anything about almost any aspect of the care we receive now. 'Why is that surgical implement shaped the way it is?' is something a patient could ask, but never does. That doesn't relieve physicians of liability now, and it won't in the future.
      • by Cylix ( 55374 )

        The trick to hospitals is you need to shank one of the doctors your first day there. Then everyone in the hospital will respect you.

  • Tools (Score:4, Insightful)

    by chill ( 34294 ) on Monday July 20, 2020 @05:25PM (#60312489) Journal

    In other news, patients also weren't asked for their informed consent about doctors using stethoscopes, blood pressure cuffs, small rubber mallets, EKG machines, or any other tool in their arsenal.

    Nor were they informed about the doctor's use of reference material, consultative opinions, or the fax machine.

    • or the fax machine.

      Is anyone else impressed that AI is being used for medical diagnosis in an industry where medical professionals spend 30 minutes two finger typing a short message into a computer, printing it, and then faxing the result into the ether?

  • Overblown (Score:5, Insightful)

    by carlcmc ( 322350 ) on Monday July 20, 2020 @05:30PM (#60312505)
    I also do not disclose to patients that I 1) used a computer monitor, 2) adjusted the window settings (contrast, etc.) on their radiology imaging study, 3) checked UpToDate to make sure I don't miss a guideline, or 4) phoned a friend (got advice from a colleague).

    Why? Because ultimately it is your name on the line making life and death recommendations.
  • "Ok, doctor, give me the good news first."
    "We have an artificial intelligence advising us on your condition, and we have a recommendation on how to proceed."
    "And the bad news? Give it to me straight."
    "The bad news is that I can't give it to you straight. Well, the AI could, but neither you nor I could understand it [youtube.com]. You know, because, AI."

  • Do patients get a complete list of staff, every single person involved in their care and decision making? A complete list of books the doctor might consult (or maybe even a complete list of books they have read)? A complete list of past patients, which obviously affects the doctor's decisions since it comprises the doctor's "work experience"? From doctors who might have consulted on the case over lunch with their primary physician, through all the nurses and lab techs who might have offered an opinion?

  • Ask patients to decide their own care and their own tools.

    Better still let's ask a homeless person in the street, I'm sure they know as much about medicine as anyone else who goes to see a doctor.

    • I just love how people act surprised, and as if they were misdiagnosed, when they still have symptoms after one dose of antibiotics or their fever comes back four hours later when the acetaminophen wears off.
  • They will find out under Trumpcare, when they get blacklisted.

  • Before giving any advice your doctor should be required to list every single research paper he ever read that might have influenced said advice.
  • So we have been doing this for YEARS. We have decision tools and equations that we have been using with pen and paper; now the computer is calculating them for us. What is your HEART score for heart issues? What's your CHADS2 score for afib? We have been using decision equations and other metrics for years; computers are just making it easier. To be honest, it should make care BETTER, if the research behind these equations and risk stratification tools is done correctly.
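The scores this comment names really are just point-tallying a computer can do trivially. CHADS2, for example, awards 1 point each for congestive heart failure, hypertension, age 75 or older, and diabetes, plus 2 points for a prior stroke or TIA. A minimal sketch (argument names are my own, and this is illustration, not clinical software):

```python
# CHADS2 stroke-risk score for atrial fibrillation: 1 point each for
# Congestive heart failure, Hypertension, Age >= 75, Diabetes, and
# 2 points for a prior Stroke/TIA. Argument names are illustrative.

def chads2(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    score = int(chf) + int(hypertension) + int(diabetes)
    if age >= 75:
        score += 1
    if prior_stroke_or_tia:
        score += 2
    return score

# An 80-year-old with hypertension and a prior TIA scores 4.
print(chads2(chf=False, hypertension=True, age=80,
             diabetes=False, prior_stroke_or_tia=True))  # 4
```

Whether tallied on paper or by an EHR pop-up, the arithmetic is identical, which is the comment's point: the computer only automates what clinicians already did by hand.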
  • Because this is how you get the Butlerian Jihad.

    I refer to the one described in the Dune Encyclopedia.

  • If there are known issues, they need to be solved. A patient could sign a waiver and the doctors would be happy, or the patient could decide (without any knowledge to back it up) that human bias is better than AI bias. All that does is reduce doctor liability. It makes things worse, not better, because instead of making sure that the AI does its work well (that is, at least comparably to an average doctor), it makes sure that potentially helpful AI might not be used and bad AI will not be eliminated, because the underlying issues are never actually solved.
