The real problem is that the actions of people, in some circumstances, are considered beyond good and evil, and all the silly hypothetical situations in the world don't begin to capture this. In the heat of the moment, with only seconds to decide, people can't be relied on to make a choice that conforms to some explicit moral code. That is why, when faced with judging the actions of people in emergency situations, we don't pass judgement; rather, we forgive them.
Robots, however, are programmed, and "split seconds" don't mean the same thing to robots that they do to us. Thus, there is no way around what they're going to do. They will be programmed to do one thing or another, and someone is going to have to make the bad decision—since, in many cases, there are no good decisions to be made. And that poor bastard may have to program the machine anonymously, because what he will get is not forgiveness but, "What were you thinking?!"