Take an X-ray machine, for example. We know these can kill people (look up the Therac-25). But suppose we write an overall program that calls a supplied program to calculate the treatment duration, and a separate routine that controls the machine and enforces a hard limit on that duration. Then it doesn't matter if the supplied program can, in some circumstances, calculate an excessive duration, because the patient can't receive that dose.
What makes you think the "hard limit" enforcing code can't have a bug in it? I assume your answer is that it's such a small program that you can write a formal proof that it has no bugs. Fine, but what about the OS it runs under? Or the memory controller? Or the memory itself? Any of those can fail in a way that could, in theory, cause a patient to be given an overdose. If we can make a robot that is more accurate than a human (at practically any task), we probably should (ignoring issues of liability and macroeconomics), but we have yet to make *anything* that is 100% reliable, let alone a computer-controlled robot.