What part of "...you're still entirely missing the point for some weird tangent that has nothing to do with what I was saying" don't you understand? I have not once argued that the statistically better system would typically be the best choice. What I said was that, if that statistically better system still fails in catastrophic ways randomly, even though it is better in the average case, that is something that needs to be addressed. It's not that nuanced or hard to grasp, is it?
Consider a parallel example that doesn't involve AI at all. Let's say you have a tricky surgery coming up. You will die without it. The mean mortality rate for most surgeons on the procedure is 50%. However, your doctor tells you that you are lucky: the surgeon they have lined up for your procedure has done it hundreds of times with a mortality rate of only 5%. That is fantastic, you tell your doctor. They reply that yes, it is fantastic; however, there is one small thing. Every now and then, the surgeon has some sort of psychotic break during surgery and murders the patient. Sometimes they saw off their head, sometimes they sever their carotid or aorta, sometimes they inject them with a fatal dose of opioids, etc. Despite that, their statistical performance is terrific. Now, would you just shrug and say that's fine, or would you wonder if, maybe, just maybe, they should find out what is going on there and stop the surgeon from murdering patients?