While you might like to make the decision yourself, in most cases there simply isn't time: a human has only a fraction of a second available during a crash.
The question of moral decisions with self-driving cars arises because computers are different. First, they have far more situational awareness: they will have been monitoring the situation, and possibly making worst-case projections of a potential accident, for a long time (to a computer, a few seconds is a very long time...).
So even once the probability of a collision reaches 100%, decisions can still be made based on previously collected information and previously projected possibilities, and there may be plenty of time to influence the outcome with differential braking, steering, and so on (roughly the kind of loop sketched below).
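To make that concrete, here is a minimal sketch in Python of what such a projection loop might look like. Everything in it is hypothetical (the `Maneuver` class, the `project_worst_case` callback, the 10 ms tick rate are all invented for illustration, not any real vehicle's API); the point is only that by the time a collision becomes certain, the controller already holds a set of pre-evaluated maneuvers.

```python
import time
from dataclasses import dataclass

@dataclass
class Maneuver:
    """Hypothetical candidate action, with the outcome that was
    projected for it on an earlier tick."""
    name: str
    brake_bias: float      # -1.0 = full braking force on the left
                           #  wheels, +1.0 = on the right
    steering_deg: float
    projected_harm: float  # estimated severity if this is chosen

def control_loop(get_snapshot, project_worst_case, execute):
    """Keep projecting until a collision is certain, then commit to
    the least-harmful pre-evaluated maneuver.

    project_worst_case(snapshot) -> (p_collision, [Maneuver, ...])
    is the expensive part; it runs on every tick, so the candidate
    list is always warm by the moment it is finally needed.
    """
    while True:
        p_collision, candidates = project_worst_case(get_snapshot())
        if p_collision >= 1.0 and candidates:
            # The crash is now unavoidable, but the earlier ticks
            # were not wasted: choose among the outcomes projected
            # before this moment, using differential braking and
            # steering to shape how the collision happens.
            execute(min(candidates, key=lambda m: m.projected_harm))
            return
        time.sleep(0.01)   # ~100 projection cycles per second, so a
                           # few seconds of lead time means hundreds
                           # of chances to plan ahead
```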
So AT THAT POINT there need to be rules that govern how to make those decisions. Humans cannot make them, because in most cases they are not fast enough; computers can and must. Those decisions are predicated on algorithms written by programmers, so there needs to be a basis for making them.
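And that "basis" is exactly where the moral question lives: somewhere in the code, a programmer has to write down how outcomes are ranked. A deliberately oversimplified sketch of what that might look like (the categories and weights here are invented for illustration, not a proposal):

```python
# Hypothetical harm-ranking rule. Whatever numbers appear here are a
# moral policy, chosen by a programmer long before the crash occurs.
HARM_WEIGHTS = {
    "pedestrian_injury": 100.0,
    "occupant_injury":    80.0,
    "property_damage":     1.0,
}

def projected_harm(outcome):
    """Score a projected outcome; lower is better. `outcome` maps
    harm categories to estimated severities."""
    return sum(HARM_WEIGHTS[kind] * amount
               for kind, amount in outcome.items())
```

The control loop sketched above would rank its candidate maneuvers with a function like this one, which is why the rules must exist in advance, and why someone has to decide what they are.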