By the same logic, computers should not be allowed in any life-critical situation. That includes hospital equipment, airplanes, traffic control, etc. etc.
Fortunately, we don't judge the reliability of computers based on the ability to mathematically prove that nobody has put evil code in on purpose.
In your examples, there are humans in the loop.
In this case, you have a robot trying to autonomously decide "kill" or "don't kill" when it encounters a human.
Hospital equipment - it's generally observed by personnel who, after failures, can decide not to use the equipment further, or to require changes before it is used again (see the Therac-25). The equipment never hooks itself up to a patient and delivers treatment without a human involved. Sure, there are errors that kill people unintentionally, but then there's a human choice to simply take the equipment out of service. E.g., an AED is mostly autonomous, but if a model of AED consistently fails in its diagnosis, humans can easily replace it with a different model. (You can't trust the AED to take itself out of service.)
Airplanes - you still have humans "in the loop", and there have been plenty of times when those humans had to be told that some equipment can't be used the way it was being used. Again, the airplane doesn't take off, fly, and land without human intervention. In bad cases, the FAA can issue a mandatory airworthiness directive saying the plane cannot leave the ground until changes are made, in which case human pilots check for those changes before they decide to fly it. The airplane won't take off on its own.
Traffic control - again, humans in the loop. You'll get accidents and gridlock when lights fail, but the traffic light doesn't force you to hit the gas - you can decide, because of the mess, to simply stay put and not get involved.
Remember, in an autonomous system you need a mechanism to determine whether the system is functioning normally. Of course, that mechanism cannot be part of the autonomous system itself, because anomalous behavior may be missed (it's anomalous, so you can't trust the very system that's supposed to detect it).
In all those cases, the monitoring system is external and can be made to halt an anomalous system - equipment can be set aside and not used, hazardous situations can be avoided by disobeying a failed signal, and so on.
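To make that concrete, here's a minimal sketch in Python of what an external watchdog looks like - purely illustrative, with class names, the heartbeat scheme, and the timeout all invented for the example rather than taken from any real system. The point is just that the decision to halt lives outside the system being judged:

import time

class ExternalWatchdog:
    """Monitors an autonomous system from the *outside*.

    The monitored system must keep sending timestamped "I'm alive and
    behaving" heartbeats; the watchdog never relies on the system to
    judge itself.
    """

    def __init__(self, heartbeat_timeout: float = 2.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = time.monotonic()
        self.halted = False

    def record_heartbeat(self) -> None:
        # Called by whatever channel carries the system's status messages.
        self.last_heartbeat = time.monotonic()

    def check(self) -> None:
        # Runs in a separate process/controller from the autonomous system.
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout:
            self.halt()

    def halt(self) -> None:
        # Stand-in for an independent cutoff: a relay, a breaker, pulling the plug.
        self.halted = True
        print("Anomaly suspected: taking system out of service.")


if __name__ == "__main__":
    dog = ExternalWatchdog(heartbeat_timeout=1.0)
    dog.record_heartbeat()
    time.sleep(1.5)   # simulate the system going silent or misbehaving
    dog.check()       # the external monitor, not the system itself, decides to stop it
    print("halted:", dog.halted)

Notice the watchdog only reads status and flips a switch; it shares no code path with the thing it supervises, which is exactly what a "kill or don't kill" robot can't offer about itself.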
Sure, humans are very prone to failure - that's why we have computers, which are far less prone to failure. But the fact that a computer is far less likely to make an error doesn't mean we have to trust it implicitly just because we're more likely to make a mistake. That's why we don't trust computers to do everything for us - we expect things to work, but when the indications are that they don't, we have measures in place to keep the situation from getting worse.