Re:640kb (Score 2, Insightful)
Gotta agree. With respect to medical insurance, Bill Gates is pretty much dead wrong. AI can't make medical claims easy if the people who approve the claims change the rules with no notice. Which they will, because claims management is a constant push and pull over who keeps the premium dollar, not a high school history paper.
I recently heard a VP at Geisinger say that their efforts to use current "AI" in charting had also not yielded meaningful efficiency gains.
That's because the humans in these systems still are, and must be, responsible. If you know the RACI scheme, consider this: Responsible = humans, Accountable = humans, Consulted = LLM + humans, Informed = humans.
The basic problem is the same for doctors as for drivers. The human is accountable, but to understand and take action the human must learn, and however good the LLM is, learning still takes thought, effort, and time. Basically, when a doctor lets "AI" write the clinical note, s/he then has to supervise by reading the note and fixing it. That takes more work than just writing the note.
And unlike LLMs, humans can actually reason and intuit, so when faced with unfamiliar inputs they can in fact exercise judgment and common sense. LLMs cannot. They also cannot revise their prior training in real time as they learn which facts to believe and which not to believe.
And when the shit hits the fan, some human needs to be able to explain, reason, and take responsibility. Doctors cannot do that without charting; drivers can't do it without paying attention.
Yes, LLMs can help gather and synopsize information. They are likely to be used for this, though they would sure be more useful if one could understand and constrain the corpus on which they were trained. E.g. for a doctor, an LLM trained mostly or entirely on medical literature is vastly more useful than one trained on the internets. And even within that set, one would need to weed out publications that proved to be simply wrong or even fabricated (the Lancet article on autism, anyone?).
LLMs could well save us software folks a lot of frustrating debugging time; in fact, I think that's one of the best use cases around for them.
But take over tasks for which humans have responsibility? Not for a while.