Not a very constructive FP with a vacuous Subject, too. Were you just seized by the uncontrollable urge to FP something?
I have three linked takes.
The first take is that diagnosis is quite difficult. I think that's partly a matter of excessive specialization to cope with the overload of medical knowledge, and one of the negative repercussions is that many doctors avoid making diagnoses. It's also related to the flawed economic model: it's relatively safe (and too profitable) to treat the symptoms without worrying too much about diagnosing the cause. Until the cause becomes so serious that there is no difficulty at all in recognizing what is killing the patient.
The second take is that the AIs don't care about making mistakes. They have no human sense of shame or uncertainty, or perhaps even humility, or anything else that might make human doctors hesitant.
The third take is the psychological harm to the doctors. You might think they deserve some comeuppance for their bad attitudes in the past, though I think that's unfair to most doctors. However, I think this is yet another AI thing that makes people feel bad. My new joke involves the need for CMINT for the "applied psychologists" who are installing so much new software in human beings. In this case the software under attack (or being upgraded?) is the patients' trust in their physicians.
Websearched CMINT and see that I need to explain it: it meant "Configuration Management, Integration, and Test" from my ancient days at TI with the last Lisp machine. Big complicated project, but I was hired into the CMINT section that was supposed to help the parts fit together without making things worse... (But that was so long ago that I can only recall a few details about three of the biggest bugs I found back then. At that time a mere half million transistors on a chip was the leading edge...)