Is this just a way for a robot-car manufacturer to avoid explicitly assigning weights to various bad outcomes, and thereby possibly avoid lawsuits?
Suppose a crash looks imminent. Whose life is more valuable? Instead of programming for this case explicitly, the manufacturer uses algorithms trained on observations of human drivers. Then, when one person dies instead of another, the manufacturer can argue that it's not to blame.
In any case, this sounds like a great way to teach a computer how to drive badly.
No one is a perfect driver - we don't want to teach the computer our mistakes.
People make correct or safe driving decisions based on inputs that cannot always be measured well - we don't want to teach it incomplete rules.