Twice, in fact -- once in an academic research lab and once at a company that designed and built medical imaging equipment.
In both cases we worked on image classification using digital image processing and statistical pattern recognition. (In one of the two cases we also used syntactic pattern recognition and machine learning.) It's very, very, very hard to make this accurate enough for clinical use even if you pour effort and time and money into it. There's no way this technology should be deployed without humans backing it up.
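To make that concrete for anyone who hasn't worked in this area, here's a minimal sketch of what "statistical pattern recognition" means in practice: feature vectors feeding a statistical classifier, with per-class error rates being what you actually sweat over. This is Python/scikit-learn on a toy dataset standing in for real scans -- illustrative only, not our actual system:

    # A minimal sketch, NOT our actual system: toy data, illustrative
    # feature/classifier choices.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    # 8x8 grayscale digit images stand in for real medical images here.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    # Classic statistical pattern recognition: normalized feature
    # vectors feeding a statistical classifier (an RBF-kernel SVM).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    clf.fit(X_train, y_train)

    # Aggregate accuracy hides the clinically dangerous cases; look at
    # per-class precision and recall, because false negatives are the killers.
    print(classification_report(y_test, clf.predict(X_test)))

The hard part was never this pipeline. It's driving the error rates low enough, on messy real-world images, that the output is worth a clinician's trust.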
As to the human mistakes: everyone can cite a case where a professional radiologist committed a false-positive or false-negative error. But did you stop to consider why they made that mistake? Were they 13 hours into a 14-hour shift, their third one in a row -- because the hospital CEO felt the money should go into his pocket instead of into hiring another radiologist to share the load? Was it an imaging anomaly (they happen) that was genuinely ambiguous? Was it because the study that was done wasn't the best choice (i.e., the wrong imaging modality or location)? There are all kinds of ways for this to go wrong that will result in blame being assigned to the radiologist, and only some of those assignments are fair.
AI isn't a magic fix for this. And I certainly wouldn't even try to use any of the general-purpose models -- as Zathras would say: "This is wrong tool." If I were to do this again today, I would return to the approach we used before, which had modest success; I'd take advantage of some of the improved algorithms that have come along since; and obviously I'd use bigger/faster hardware, because that opens up approaches that used to be computationally infeasible. But I wouldn't even consider removing the humans: these are, or can be, life-and-death decisions, and a human being needs to make them.