AI

ChatGPT Will See You Now: Doctors Using AI To Answer Patient Questions (wsj.com) 54

Pilot program aims to see if AI will cut time that medical staff spend replying to online inquiries. From a report: Behind every physician's medical advice is a wealth of knowledge, but soon, patients across the country might get advice from a different source: artificial intelligence. In California and Wisconsin, OpenAI's "GPT" generative artificial intelligence is reading patient messages and drafting responses from their doctors. The operation is part of a pilot program in which three health systems test if the AI will cut the time that medical staff spend replying to patients' online inquiries. UC San Diego Health and UW Health began testing the tool in April. Stanford Health Care aims to join the rollout early next week. Altogether, about two dozen healthcare staff are piloting this tool.

Marlene Millen, a primary care physician at UC San Diego Health who is helping lead the AI test, has been testing GPT in her inbox for about a week. Early AI-generated responses needed heavy editing, she said, and her team has been working to improve the replies. They are also adding a kind of bedside manner: If a patient mentioned returning from a trip, the draft could include a line that asked if their travels went well. "It gives the human touch that we would," Dr. Millen said. There is preliminary data that suggests AI could add value. ChatGPT scored better than real doctors at responding to patient queries posted online, according to a study published Friday in the journal JAMA Internal Medicine, in which a panel of doctors did blind evaluations of posts.

This discussion has been archived. No new comments can be posted.

  • Too early for this (Score:4, Insightful)

    by ranton ( 36917 ) on Monday May 01, 2023 @10:53AM (#63488818)

    ChatGPT and similar technologies are nowhere near ready for this. I am very bullish on AI in the medical field, but it must still be relegated to helping medical professionals with their diagnoses, not making them itself. I just attended HIMSS 23 in Chicago, a healthcare IT conference, saw plenty of cutting-edge uses of AI, and listened to lectures on how researchers are finding new ways to explain why an AI is making its recommendations. We are perhaps getting close to the point where AI can make a diagnosis without a doctor present, but we aren't there yet. The cutting-edge researchers don't think we are there yet.

    It doesn't matter if ChatGPT gets diagnoses right more often than doctors on average (even if that's true). What mostly matters is how often it is dangerously wrong compared to a doctor. And these AI systems can still be very dangerously wrong. I sure hope they are just using this to give patients a quick diagnosis and then making sure a doctor reviews it later (and before any drugs or procedures are prescribed or performed).

    • Insurance companies decide what healthcare you get. And this is so cheap they won't sign off on a human to look at your medical records.

      If you've got a few thousand lying around you can pay out of pocket, though. You just need to be rich. Or die. The good ol' healthcare plan: die quickly.
      • by ranton ( 36917 )

        Insurance companies decide what healthcare you get. And this is so cheap they won't sign off on a human to look at your medical records. If you've got a few thousand lying around you can pay out of pocket, though. You just need to be rich. Or die. The good ol' healthcare plan: die quickly.

        There isn't much motive for insurance companies to cut the costs of existing care that patients are used to and expect, and that other insurance companies already cover. It makes customers unhappy. If insurance companies were actively working to cut costs wherever they could find them in the medical industry, either insurance rates wouldn't be so high or their profit margins wouldn't be so low.

        Insurance companies do have a motive to not add new procedures and drugs which are very expensive. They would have to raise their

        • Re:Doesn't matter (Score:4, Insightful)

          by GrumpySteen ( 1250194 ) on Monday May 01, 2023 @03:31PM (#63489492)

          Insurance companies do have a motive to not add new procedures and drugs which are very expensive.

          Yes and no.

          The 80/20 Rule generally requires insurance companies to spend at least 80% of the money they take in from premiums on health care costs and quality-improvement activities. The other 20% can go to administrative, overhead, and marketing costs. For sales to groups larger than 50, the requirement is 85/15.

          This sounds like a good idea at first glance, but it created a perverse incentive and a means for insurance companies to maximize their profits.

          Insurance companies quickly realized that 20% of a million dollars is more than 20% of half a million, so they started approving high-cost procedures, knowing they could pass those costs on to customers along with a 15% or 20% profit margin for themselves.
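
          To make that arithmetic concrete, here is a minimal illustrative sketch of the incentive described above (the dollar figures and the 80/20 split are just the numbers from this comment, not real insurer data):

```python
# Toy illustration of the 80/20 ("Medical Loss Ratio") incentive described above.
# All figures are hypothetical.

def max_overhead_and_profit(total_premiums: float, mlr: float = 0.80) -> float:
    """Largest slice left for administration, marketing, and profit when at
    least `mlr` of premium revenue must be spent on care."""
    return total_premiums * (1 - mlr)

print(max_overhead_and_profit(500_000))    # 100000.0
print(max_overhead_and_profit(1_000_000))  # 200000.0
# The insurer's absolute take grows with total spending, which is the
# perverse incentive the comment describes.
```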

      • If you've got a few thousand lying around you can pay out of pocket, though.

        I do have a few thousand lying around, but I'm told I still need to waste my money on insurance. How does that work?
        • by BranMan ( 29917 )

          By buying a high-deductible, catastrophic-coverage type plan, and keeping the difference, along with the few thousand needed in case something comes up.

          That's what I have - a high-deductible plan paired with a Health Savings Account (HSA). Maxing out the HSA each year covers the maximum out-of-pocket cost possible under the high-deductible plan. It's cheaper than the traditional offerings, less hassle, no deductible payment is needed at time of service, and it's a massive discount if things go well.

          If I don't have any medic

    • HIPAA violations may happen and kill this with fines.

    • by Anonymous Coward

      OpenAI CTO: "AI desperately needs to be regulated!"
      Also OpenAI CTO: "Let's see if we can get into healthcare before they regulate us!"

      It doesn't matter how premature it is, the whole industry is a mad scramble to grab what you can before the other shoe drops.

    • by mysidia ( 191772 )

      but it must still be relegated to helping medical professionals with their diagnosis, not doing it themselves

      ChatGPT is not an AI designed to make diagnoses. ChatGPT is a language model - its purpose, and what it is good at, is creating written responses to prompts in natural language.

      That is, it creates a response, not necessarily an accurate response. Making the result accurate is essentially up to the prompter.

      If you ask ChatGPT to say something without knowing the answer, for example if you ask Chat

    • Quoting ranton [slashdot.org]:

      "It mostly matters how often it is dangerously wrong when compared to a doctor. And these AI systems can still be very dangerously wrong."

      Excellent observation. However, doctors can also be dangerously wrong. It's good to get advice from more than one doctor.

      One answer: Get advice from more than one AI system, and also from more than one doctor.
      • by ffkom ( 3519199 )

        It's good to get advice from more than one doctor. One answer: Get advice from more than one AI system, and also from more than one doctor.

        It's unlikely there will be two entirely independent medical AI systems at your disposal, simply because training only one costs less, meaning more profit. With human doctors, there is a natural limit on how many patients one can see, so there is always more than one for sure. There is no such patient limit for AI, so what corporation would waste money on training more than one? Just look at the many industries where having one single vendor is already the norm, despite all the knowledge that this is a risk...

    • ...but it must still be relegated to helping medical professionals with their diagnosis, not doing it themselves...It doesn't matter if ChatGPT gets a diagnosis right better than doctors on average (even if that's true). It mostly matters how often it is dangerously wrong when compared to a doctor.

      It's even more insidious than this: this is a lawsuit gold mine...
      Doctor and AI agree to do Procedure X, good outcome: no problem.
      Doctor and AI agree to not-do Procedure X, good outcome: no problem.
      Doctor and AI agree to do Procedure X, bad outcome: "these things happen; medicine is imperfect"
      Doctor and AI agree to not-do Procedure X, bad outcome: "these things happen; medicine is imperfect"
      "Why do we need AI / Why do we need doctors?"

      Doctor believes Procedure X is the best solution, AI does not, good outc

    • by dvice ( 6309704 )

      You say: "Chat-GPT and similar technologies are nowhere near ready for this."
      Study says: " ChatGPT scored better than real doctors at responding to patient queries posted online"

      So it is pretty clear that you are wrong.

      You wonder how this can be possible, since the AI makes horrible mistakes all the time. The reason is simple: humans make horrible mistakes all the time too, even more than the AI. It is not a competition to be perfect; it is a competition to be better than a human.

  • Save the cost of his or her income, and remove a filter that may obscure or distort the information in transit between the patient and the GPT.
  • Virtual Doc: You've got: leprosy.

  • So we keep nuclear control out of its hands, but now our health care is...
    With the godfather of AI (Hinton) worried about the state of AI, these are troublesome times.

  • Me: Doctor, I have a painful hangnail.
    Bot: I recommend we cut off your testicles.
    Me: Where did you get your medical training?!!!
    Bot: Anheuser-Busch University College of Veterinary Medicine.

    • by ebunga ( 95613 )

      Bot: No. That's not right. You can't put a nail on your finger to hang a picture. As a lard languish model, I have only been trained on medical advice.

  • God no (Score:3, Insightful)

    by rsilvergun ( 571051 ) on Monday May 01, 2023 @11:08AM (#63488862)
    this is what happens when the almighty dollar reigns supreme in healthcare.

    There have already been instances of "AI" (e.g. iterative algorithms) screwing up massively in radiology. Youtuber Rebecca Watson mentioned one a while back: an AI had a higher rate of success spotting tumors than radiologists... because the training data included a ruler for scale in some images, and the ruler just happened to be present more often in the images that showed tumors.

    AI doesn't seek the "correct" answer. It seeks the most common, easiest answer. Sometimes that's right, but often it's not.
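
    The ruler anecdote is an example of what the machine-learning literature calls shortcut learning. A minimal toy sketch (entirely made-up numbers, not any real radiology dataset) shows how a spurious marker can make a useless model look accurate:

```python
# Toy illustration of the "ruler" shortcut described above; all data is synthetic.
import random

random.seed(0)

# Biased training set: images with tumors usually also contain a ruler.
train = []
for _ in range(1000):
    has_tumor = random.random() < 0.5
    has_ruler = random.random() < (0.9 if has_tumor else 0.1)
    train.append((has_ruler, has_tumor))

# A "model" that ignores the anatomy entirely and just checks for the ruler.
predict = lambda has_ruler: has_ruler

train_acc = sum(predict(r) == t for r, t in train) / len(train)
print(f"accuracy on ruler-biased data: {train_acc:.0%}")   # around 90%

# In deployment the ruler appears independently of tumors and the shortcut collapses.
deploy = [(random.random() < 0.5, random.random() < 0.5) for _ in range(1000)]
deploy_acc = sum(predict(r) == t for r, t in deploy) / len(deploy)
print(f"accuracy when the shortcut is gone: {deploy_acc:.0%}")  # around 50%
```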

    But none of this matters. There's so much money to be made putting radiologists out of work that, short of passing a law, this is going to happen whether you and I like it or not. Your insurance company won't approve a human being to check your records unless you're already dying.

    Unless you're rich. Then you can pay out of pocket the thousands of dollars it'll cost because healthcare is now a luxury service like an Apple watch or something.
    • by CAIMLAS ( 41445 )

      Don't kid yourself. Doctors are horrible at diagnostic work - if not because they don't care, then because they're not good at it.

      I've personally experienced benefits from using ChatGPT for healthcare diagnostics. It's helped identify things that doctors were unwilling, or unable, to address/diagnose.

      • by gweihir ( 88907 )

        With careful plausibility checking in the loop, yes. But AI can give you advice that will reliably maim or kill you and never notice. Real MDs will very rarely do that; AI does it relatively frequently.

        That said, anybody diagnosing themselves on anything a bit more complicated is an idiot. The probability of finding what you are looking for instead of what is actually there is just far, far too high.

        • by CAIMLAS ( 41445 )

          I don't know about that; doctors - medical malpractice and procedural failure - are the #1 cause of death in the US.

          • by gweihir ( 88907 )

            Sure. And doctors diagnosing themselves are _worse_ and they are supposedly experts. Amateurs diagnosing themselves are worse again.

            Do not diagnose yourself with anything worse than a cold. You can and should, of course, help your MD with the diagnosis, but that is it. There are rare cases where self-diagnosis works and frequent cases where it is disastrous.

            • by CAIMLAS ( 41445 )

              Guess I got lucky, then - I was able to accurately diagnose my ailment and the doctors were saying it was absolutely not that until the cultures came back.

              I guess it's going to depend on how much smarter and scientific the patient is than a midwitted doctor.

              • by gweihir ( 88907 )

                True. A smart person that is careful can do this with reasonable chances of getting reliable indicators. But you did it smart: You had the MDs lab-verify your claim and did not start to try to self-treat or self-medicate or go to some other pseudo-MDs (homeopathy and crap like that) instead.

                So yes, there is the occasional success story, but in most cases things like this turn out badly. Most people are not very smart, though, and a fair number are unaware of that, hence the typical situation is that the MD is

          • by tragedy ( 27079 )

            I agree that a lot of doctors are not that great at diagnosis. Do not even get me started on how bad many of them are at math and statistics. We are talking about a profession that almost universally uses BMI, for example, but could not tell you what units it's in. Still, I am a little confused by the claim that "medical malpractice and procedural failure - are the #1 cause of death in the US." My understanding was that the number one cause of death was heart disease and the number two cause of death was cancer
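
            For reference, BMI is weight in kilograms divided by height in metres squared, i.e. its units are kg/m^2. A trivial sketch (the example numbers are arbitrary):

```python
# BMI = weight (kg) / height (m)^2, so its units are kg/m^2.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))  # 22.9
```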

            • by CAIMLAS ( 41445 )

              My mistake, #3 for medical errors -

              https://www.hopkinsmedicine.org/news/media/releases/study_suggests_medical_errors_now_third_leading_cause_of_death_in_the_us

              • by tragedy ( 27079 )

                I'm pretty sure number 3 is still COVID. It dropped from the height of the pandemic, but it's still about 275K per year. Your article is from about 7 years ago, though, so it obviously would not include anything on COVID. I can't find any real numbers in the article you linked, and it does not include any link to the study it references, but searching elsewhere it looks like Dr. Makary is saying medical error kills about 250K people per year, so it would still be 4th behind COVID if accurate.
                As for accuracy, I have my

    • There have already been instances of "AI" (e.g. iterative algorithms) screwing up massively in radiology. Youtuber Rebecca Watson mentioned one a while back: an AI had a higher rate of success spotting tumors than radiologists... because the training data included a ruler for scale in some images, and the ruler just happened to be present more often in the images that showed tumors.

      Link? I couldn't find this, either on her channel or via Google.

      (chatGPT: "There is no specific instance that I am aware of where improper training of radiology AI datasets occurred due to the inclusion of rulers only in some images that are more likely to be cancerous. However, it is important to ensure that AI models are trained on balanced and diverse datasets to minimize potential biases and improve their accuracy. If a dataset were to have an imbalance, such as including rulers only in cancerous images

      • The reason you only get 5 minutes with a "doctor" is that doctors are expensive. And the reason doctors are expensive is that there are not enough of them, and, well, supply and demand. But what is the government doing about it? Well, they are giving more and more people free or at least insured medical care, but doing NOTHING to ensure an increased supply of doctors, while at the same time all those baby boomer docs are retiring. For a long time the AMA lobbied against increasing the number of medical schools or their enrollment. Why?

        • we slashed education funding in the late 90s to make way for tax cuts. Tuition prices skyrocketed as a result and here we are.

          On the plus side, it's a problem that'll solve itself. Boomers will lose access to healthcare, their media choices mean they're skipping vaccination, and, well, their grandkids (and even some of their kids) want the same universal higher education they see working just fine in Europe & Scandinavia.

          And who cares what the AMA lobbies for? Vertical integration (e.g. buyouts)
        • The reason you only get 5 minutes with a "doctor" is that doctors are expensive. And the reason doctors are expensive is that there are not enough of them, and, well, supply and demand. But what is the government doing about it? Well, they are giving more and more people free or at least insured medical care, but doing NOTHING to ensure an increased supply of doctors, while at the same time all those baby boomer docs are retiring. For a long time the AMA lobbied against increasing the number of medical schools or their enrollment. Why? The AMA is an old-fashioned guild.

          I largely agree with this. AMA is definitely an old-fashioned guild and has for many years fought against new MD schools.

          Regarding supply and demand, being a doctor is no longer a hugely prestigious job. The image of doctors today is long years in school and long working hours. If supply is constrained and demand (far) outstrips it, what do you do about it? Pay doctors more. And that doesn't exactly help the cost angle.

          I still think the American system as it is today, while it can be the best in the

    • This problem exists because radiologists make really big bucks. That's primarily because there are not enough radiologists. It's also because of consolidation in the radiology industry -- radiologists here in California practice mainly in large group practices which have regional monopolies. But Lina Khan wants to go after Amazon because politics.

    • by gweihir ( 88907 )

      Indeed. It also reminds me of IBM Watson designing treatment plans. The plans were usually a bit better, but occasionally, and far too often, it killed a patient in a way that no MD ever would have, because it had zero insight and zero understanding. They scrapped the project (the medical data analysis, not Watson).

  • As anyone with an ounce of tech competency can plainly see, the problem isn't with the doctors. It's with billing, scheduling, staffing, and medical supplies. It doesn't matter how fast my doctor goes if the front desk and back office can't reliably schedule an appointment.

    • As anyone with an ounce of tech competency can plainly see, the problem isn't with the doctors. It's with billing, scheduling, staffing, and medical supplies. It doesn't matter how fast my doctor goes if the front desk and back office can't reliably schedule an appointment.

      What incentive do they have to schedule longer appointments?

      Gotta make those quotas. It's a sad reality.

      • What incentive do they have to schedule longer appointments?

        You misunderstand me. It's not that the appointments are unschedulable due to time constraints. It's that the technical competency of the healthcare industry is so atrocious that they have trouble with simple data-entry tasks, like setting up an event on a calendar.

        • I got you, but my point is that there's no incentive for them to fix the problem. You (the patient) are just going to sit there, whether they're late or not. It works just fine for them. Terrible for the customer, just fine for the doctor/company/whatever.

          Unfortunate.

  • by ZipNada ( 10152669 ) on Monday May 01, 2023 @11:29AM (#63488908)

    Paywalled, but the full article can be seen here:
    https://archive.is/KMmbS [archive.is]

    Apparently they don't just let the AI handle the replies without supervision. It can come up with a draft answer and reference the patient's chart, which saves time.
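
    The article doesn't describe how the integration is wired up; purely as a hypothetical sketch of such a draft-then-review flow (the model name, prompts, and example inputs below are assumptions, not the actual UC San Diego/UW/Stanford setup), it might look something like this:

```python
# Hypothetical sketch of a "draft for physician review" flow, not the pilot's
# actual integration. Assumes the openai Python package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def draft_reply(patient_message: str, chart_summary: str) -> str:
    """Ask the model for a draft; a clinician must edit and approve it before sending."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system",
             "content": ("Draft a brief, courteous reply to a patient portal message. "
                         "This is a draft for a physician to review, not medical advice.")},
            {"role": "user",
             "content": f"Chart summary:\n{chart_summary}\n\nPatient message:\n{patient_message}"},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply(
    "Is it safe to take ibuprofen with my new prescription?",
    "58-year-old with hypertension, started lisinopril last month.",
)
print("DRAFT (requires physician review before sending):\n" + draft)
```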

  • Pay attention (Score:4, Informative)

    by nospam007 ( 722110 ) * on Monday May 01, 2023 @12:26PM (#63489024)

    Your doctor has been googling your questions for years now; it's the easiest way to know whether Shingrix needs several doses and how long to wait between them.

  • Like I always say, it's better to get a wrong answer than no answer.

  • I.e. medical malpractice with triple damages and their license removed if the answer is dangerous? Sounds acceptable to me. Whaaat, under these conditions they may not want to do it? Such a surprise.

  • What did ChatGPT do better at? Was it better at handling the volume of replies, was its accuracy better, or both?

    One doctor admits that the responses were poor and needed a fair amount of editing to bring them in line with expectations. Is that what we should expect? ChatGPT is cool, and it's a fun toy that can amuse people for several minutes, or maybe hours, but it's not an expert at anything. ChatGPT is basically customer service that's been automated, and I don't want a customer service agent reviewin
    • For one, it doesn't have a massive ego that you need to massage to get help from it.

      I suspect a lot of people on /. haven't encountered this problem themselves (it's a bigger issue for women), but ask in any disabled community and there'll be no end of stories from people who've had to carefully manipulate a doctor into coming up with a diagnosis "by themselves" because they won't accept hearing it from the patient.

      Some people spend a decade trying to get a correct diagnosis because their doctors just w

      • Ha! Well, as a person with a fair number of different medical problems, I know all too well the runaround that happens between doctors.

        I'm waiting on a call from a specialist to take a look at an issue that happened as a side effect of a medication. The medication is addressing an issue that four of my doctors can't explain and two of them ignored. Here's the fun part: in order to get that explanation, I have to get an MRI, and I don't fit in the MRI at any hospital in my province, literally none of
  • Train AI to handle the red tape so that doctors have time to spend with patients, instead of helping them spend more time away from patients.
  • I have been utterly disappointed in my last slew of doctors; they seem more interested in just getting you out so they can get the next billable patient in. This is just going to make it worse: lip service with fake compassion, oh boy!
