Hippocratic Is Building a Large Language Model For Healthcare

An anonymous reader quotes a report from TechCrunch: AI, specifically generative AI, has the potential to transform healthcare. At least, that's the sales pitch from Hippocratic AI, which emerged from stealth today with a whopping $50 million in seed financing behind it and a valuation in the "triple-digit millions." The tranche, co-led by General Catalyst and Andreessen Horowitz, is a big vote of confidence in Hippocratic's technology, a text-generating model tuned specifically for healthcare applications.

Hippocratic -- hatched out of General Catalyst -- was founded by a group of physicians, hospital administrators, Medicare professionals and AI researchers from organizations including Johns Hopkins, Stanford, Google and Nvidia. After co-founder and CEO Munjal Shah sold his previous company, Like.com, a shopping comparison site, to Google in 2010, he spent the better part of the next decade building Hippocratic. "Hippocratic has created the first safety-focused large language model (LLM) designed specifically for healthcare," Shah told TechCrunch in an email interview. "The company mission is to develop the safest artificial health general intelligence in order to dramatically improve healthcare accessibility and health outcomes."

Shah emphasized that Hippocratic isn't focused on diagnosing. Rather, he says, the tech -- which is consumer-facing -- is aimed at use cases like explaining benefits and billing, providing dietary advice and medication reminders, answering pre-op questions, onboarding patients and delivering "negative" test results that indicate nothing's wrong. [...] Shah claims that Hippocratic's AI outperforms leading language models including GPT-4 and Claude on more than 100 healthcare certifications, including the NCLEX-RN for nursing, the American Board of Urology exam and the registered dietitian exam.
Hippocratic aims to have its LLM detect tone and communicate empathy better than its rivals -- in part by "building in" good bedside manner, says Shah. The company has designed a benchmark to evaluate its model's humanistic qualities, and says the model scored higher than others, including GPT-4.

As for whether it can replace a healthcare worker, Hippocratic argues that its models, trained under the supervision of medical professionals, are highly capable.

"We're only releasing each role -- dietician, billing agent, genetic counselor, etc. -- once the people who actually do that role today in real life agree the model is ready," Shah said. "In the pandemic, labor costs went up 30% for most health systems, but revenue didn't. Hence, most health systems in the country are financially struggling. Language models can help them reduce costs by filling their current large level of vacancies in a more cost-effective way."
  • by AmiMoJo ( 196126 ) on Thursday May 18, 2023 @09:03AM (#63532027) Homepage Journal

    No matter how empathetic the computer appears, people are not going to like getting bad news from a machine.

    • by rsilvergun ( 571051 ) on Thursday May 18, 2023 @10:08AM (#63532203)
      we have a private healthcare system, and the insurance companies will just refuse to let you see a doctor.
      • No, patients will have a choice, but they will be encouraged to go with the AI. It's all about maximizing profits. Want to use the AI system they subscribe to? No copay. Just keep forking over hundreds of dollars every month for your premiums. Want to see a doctor? Sure, that will be a $100 copay, and then additional copays and deductibles for anything else they do. Guess how many people are going to go with the cheaper route? It won't happen tomorrow, or next year, or the year after. But at some point actu…
      • by gweihir ( 88907 )

        Then you are fucked. I have a right to see an actual MD whenever I think it is necessary.

    • you got leprosy

    • No matter how empathetic the computer appears, people are not going to like getting bad news from a machine.

      Well - no, I'm sure they don't. I don't like getting bad news from anyone for that matter. 8^)

    • by micheas ( 231635 )

      No matter how empathetic the computer appears, people are not going to like getting bad news from a machine.

      Doctor: I have good news and bad news

      Patient: Let's get the bad news first

      Doctor: you have operable brain cancer

      Patient: What's the good news?

      Doctor: I'm not going to have to take out a loan for the Ferrari I'm buying next week.

      • Doctor: I have good news and bad news

        Patient: Uh-oh. Let's hear the good news first.

        Doctor: You're going to be dead in 24 hours.

        Patient: My God! That's the GOOD news? What's the bad news?

        Doctor: I should have told you this yesterday.

      • I thought it goes like this:

        Doctor: I have good news and bad news

        Patient: Let's get the bad news first

        Doctor: you have Alzheimer's.

        Patient: Oh my gosh, and what is the good news?

        Doctor: you will have forgotten this conversation by the time you are back home.

    • by wrsi ( 10382619 )

      The article specifically spells out that the AI is intended only to deliver routine negative test results that indicate no problem.

      I used to write guidelines for how hospital systems communicated about certain conditions. A common case would be Pap smear results. A woman who's been sent for a Pap can be anxious about the results, right? Relying on human communication about negative tests can add time -- we identified one hospital where the mean was around 2 extra days! There was variation u…

      • by vux984 ( 928602 )

        An "AI" generating more immediate communications about those negative test results seems like pointless overkill; why not:

        Dear [name],

        We are happy to report that your pap smear results all came back negative. A negative result means that there is nothing to worry about and everything is fine.

        If you have any questions or concerns please contact our office.

        You should schedule your next pap smear in [interval].

        Thanks for coming in,

        For virtually all negative pap smear results that's sufficient.
        And then mail merge / SMS text / robocall…
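
        A minimal Python sketch of that mail-merge idea, assuming hypothetical field names (name, interval, clinic); it illustrates the template approach the comment describes, not anything Hippocratic has built:

        from string import Template

        # Pre-approved letter with placeholders, filled in per patient.
        # No LLM involved; the output is 100% predictable.
        LETTER = Template(
            "Dear $name,\n\n"
            "We are happy to report that your pap smear results all came back "
            "negative. A negative result means that there is nothing to worry "
            "about and everything is fine.\n\n"
            "If you have any questions or concerns please contact our office.\n\n"
            "You should schedule your next pap smear in $interval.\n\n"
            "Thanks for coming in,\n"
            "$clinic"
        )

        def negative_result_letter(name: str, interval: str, clinic: str) -> str:
            # substitute() raises KeyError on a missing field, so a
            # half-filled letter can never go out.
            return LETTER.substitute(name=name, interval=interval, clinic=clinic)

        print(negative_result_letter("Jane Doe", "3 years", "Example Clinic"))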

      • The article specifically spells out that the AI is intended only to deliver routine negative test results that indicate no problem.


        You don't need an "AI", much less a GPT chatbot, to inform a patient of a negative test result, or of a set of results that, taken together, give a prognosis.

        It is easy to do that computation and select appropriate pre-written material that explains it all, including links and FAQs. All of it is 100% reliable.

        I suppose you could glorify that with a (e.g. rule-based) system and call it "AI", but none of that has anything to do with GPT.

        That would all be relatively minor…
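
        For illustration, the rule-based selection described above can be a plain dictionary lookup in Python; the test and result codes here are hypothetical:

        # Every message is pre-written and vetted, so the system can only
        # ever say things a human has approved -- no generative model.
        MESSAGES = {
            ("pap_smear", "negative"):
                "Your pap smear came back negative; no follow-up is needed. "
                "See our FAQ for what a negative result means.",
            ("pap_smear", "inconclusive"):
                "Your sample could not be evaluated; please schedule a repeat test.",
        }

        def select_message(test: str, result: str) -> str:
            # Anything not covered by a rule is routed to a human, never improvised.
            return MESSAGES.get(
                (test, result),
                "Please contact our office to discuss your results.",
            )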

    • I'm far more likely to take medical advice from an AI than a real doctor over the age of 36 or so.

      If you read the education curriculum for medical doctors, and more specifically general practitioners, you'll find that the education dates rapidly and, due to the nature of universities, the amount of retained information is extremely low to begin with. There are medical professors at universities who manage to properly accumulate and maintain their medical educations, but general practitioners who typically…
  • "Shah emphasized that Hippocratic isn't focused on diagnosing. Rather, he says, the tech -- which is consumer-facing -- is aimed at use cases like explaining benefits and billing, providing dietary advice and medication reminders, answering pre-op questions, onboarding patients and delivering "negative" test results that indicate nothing's wrong."

    Benefits and billing are handled by revenue cycle teams at the healthcare institution. The other features -- dietary advice, medication reminders, pre-op and onboarding…

    • And yet having a system to detect medical errors is not going to happen, because it would make said provider look bad, pulling down their metrics and with them the perceived value of the hospital or health network. Better not... Instead, put it somewhere that will help improve revenue by catching billing mistakes in the customer's favor.
      • by HiThere ( 15173 )

        You need to be more explicit as to what you mean.
        As I read it, you're expecting the system to know what the appropriate answer is without testing.

        Perhaps you actually meant something more like "compare the success rates of different doctors on cases that are equally difficult", which is a more reasonable desire, but also requires information that NOBODY has access to. There's no easy way to compare whether two cases are equally difficult. You might be able to come up with a reasonable statistic, but it wou…

  • "I'm sorry, Dave. I cannot permit that form of treatment."
    • What was that line from "Passengers"?

      No treatment can meaningfully extend your life; these pills (they rattle out of the dispenser) may ease your transition.

  • by RogueWarrior65 ( 678876 ) on Thursday May 18, 2023 @10:45AM (#63532339)

    What's already been happening for many years is that people who do not have medical degrees are second-guessing doctors' diagnoses and courses of treatment. Oh, and to add insult to injury, those people usually get paid a lot more than the doctors do. As an added bonus, those people are immune from any sort of malpractice lawsuit. Here's what's going to happen: the same people without a medical degree are going to use this AI system as justification for their second-guessing while doing even less work and getting paid even more. Don't believe me? See the recent slashdot story where a university professor asked AI to determine if students used AI in their papers which dutifully said yes and the students got screwed because the professor (and the overpaid healthcare bureaucrat) is the gatekeeper.

    • Very good point.

      In case anyone is wondering, I believe parent is referring to insurance companies and Medicare where some bureaucrat decides if your doc's recommendation is going to be paid for.

      However, this process is already 99% done by computer where the CPT (procedure) code is matched against the ICD (diagnosis) code. It's just table lookup. It's only in the rare cases when an appeal is made that a human even looks at it. In those cases, they usually just reject it, handling like 200 appeals a day.
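
      A toy Python version of that table lookup; the CPT/ICD pairings below are invented for illustration, not real coverage policy:

      # Approve a claim only if the billed CPT procedure code is listed as
      # payable for the submitted ICD-10 diagnosis code.
      COVERED = {
          "99213": {"E11.9", "I10"},  # office visit: diabetes, hypertension
          "80061": {"E78.5"},         # lipid panel: hyperlipidemia
      }

      def adjudicate(cpt: str, icd: str) -> str:
          if icd in COVERED.get(cpt, set()):
              return "approved"
          return "denied -- appeal goes to a human reviewer"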

  • by byronivs ( 1626319 ) on Thursday May 18, 2023 @10:45AM (#63532347) Journal

    Your Consumer Score is sub-optimal.
    Your Credit Score is sub-optimal.
    Your Social Score is sub-optimal.
    Your Health Score is sub-optimal.

    Probable outcomes using similar scores show that you will provide no further increase in productivity.

    Please report to the nearest self-recycling unit. We appreciate our customers! Thanks for your business!

    • by ffkom ( 3519199 )

      Please report to the nearest self-recycling unit. We appreciate our customers! Thanks for your business!

      And please settle your bill and fill out the customer satisfaction feedback form before beginning your final treatment.

  • >explaining benefits and billing, providing dietary advice and medication reminders, answering pre-op questions, onboarding patients and delivering "negative" test results that indicate nothing's wrong.

    Almost all of these require human-level emotional intelligence, an attribute completely lacking in LLMs. Tell a 60-year-old man why he can't have red wine anymore: easy. Do it in a way that keeps him happy with the system and also stops him from drinking red wine: very difficult. Go through a bullet poin…
    • It's basically an FAQ in dialogue format. If a question has already been asked enough times, it can provide a good answer, just like an FAQ.
      • by cstacy ( 534252 )

        It's basically an FAQ in dialogue format. If a question has already been asked enough times, it can provide a good answer, just like an FAQ.

        It can (and often will) also provide a wrong answer, unlike an FAQ.

  • That's a good one. The AI will have good bedside manner. I can tell whether a doc gives a shit about me from his bedside manner, including whether he is pretending to "care" about me. I already know the AI doesn't give a shit about me, so no matter how "caring" it is, I still know it couldn't give a shit. They intend to replace information with flattery. Maybe that works for most idiots.

  • I for one look forward to receiving healthy enemas from a robotic AI enema nurse. I mean what could possibly go wrong?

