That's a great question. Do you think 80% accuracy is good enough for medical use? If you're a doctor facing an unfamiliar situation, and your data says treatment X helped 40% of patients it was tried on, treatment Y helped 35% of them, and all other treatments (Z, W, etc.) helped no more than 30%, but you know the data might only be 80% accurate, what treatment do you choose? Are those ratios even meaningful in the presence of so many errors?
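To put a rough number on that last question, here's a minimal Python sketch. Everything in it is hypothetical: 200 records per treatment, the true success rates set to 40% and 35%, and "80% accurate" modeled as each record's outcome being flipped with 20% probability. It just asks how often the observed data would rank Y at or above X under those assumptions.

```python
import random

def noisy_observed_rate(true_rate, n_records, error_rate):
    """Simulate n_records patient outcomes at the given true success rate,
    then flip each recorded outcome with probability error_rate."""
    successes = 0
    for _ in range(n_records):
        outcome = random.random() < true_rate   # what actually happened
        if random.random() < error_rate:        # the record is wrong this often
            outcome = not outcome
        successes += outcome
    return successes / n_records

def simulate(trials=10_000, n=200, err=0.20):
    # Hypothetical true effectiveness: X = 40%, Y = 35%
    reversals = 0
    for _ in range(trials):
        x = noisy_observed_rate(0.40, n, err)
        y = noisy_observed_rate(0.35, n, err)
        if y >= x:
            reversals += 1
    print(f"With {n} records each and {err:.0%} record error,")
    print(f"the data ranks Y at or above X in {reversals / trials:.1%} of simulated datasets.")

if __name__ == "__main__":
    random.seed(1)
    simulate()
```

Under this particular noise model the observed gap shrinks and the ranking flips in a nontrivial fraction of runs, which is one way to make "are those ratios even meaningful" concrete. A different error model (systematically biased record-keeping rather than random flips) would change the picture entirely.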
Consider the case where the patient's condition is critical, and you don't have time for additional evaluation. Is X always the best choice? What if your specialty makes you better than average at treatment Y? Maybe that 20% inaccuracy works in favor of the doctor who has the right experience.
It could be used for ill, too. What if you know you'll get paid more by the insurance company for all the extra tests required to do treatment Y? You could justify part of your decision based on the uncertainty of the data.
In the end, historical data is just one of many factors that go into each of these decisions. Inaccurate data may lead to suboptimal decisions, so it can't be the only factor.
Great strawman, but your strawman happens to actually be a nuclear-powered, armor-plated tank... with sharks and laser beams!!! Turns out that way back in the '60s, when people started to think about what problems computers could one day solve, they listed many: beat the world champion at chess, drive cars, etc. One of them was medical diagnosis. The ones they have managed to solve took decades longer than expected, with one exception: medical diagnosis. By the early '80s we had "expert systems" that were more accurate than human doctors at medical diagnosis (especially 24 hours into a 36-hour shift). The AMA and insurance companies have basically blocked this tech for decades despite overwhelming evidence that they were killing people by doing so. Today we have slowly started to roll out this type of tech for things like drug interaction checking, but not yet for medical diagnosis. Ironic, huh?