As far as I can tell, the autonomous algorithms don't work this way and probably never will. That is, they don't calculate potential fatalities for various scenarios and then pick the minimum. The car's response in any particular situation will effectively be a combination of simpler heuristics -- simpler than projecting casualty figures, though still a rather complex set of rules.
Take one of these situations, and let's say the car ended up killing pedestrians and saving the occupants. The after-incident report for an accident like that is not going to read "the algorithm chose to save the occupants instead of the pedestrians". It's not going to read that way simply because that's not how the algorithm makes decisions. Instead the report is going to read something like "the algorithm gives extra weight to keeping the car on the road. In this situation, that resulted in putting the pedestrians in greater danger than the car's occupants. However, we still maintain that, on average, this results in a safer driving algorithm, even if it does not optimize the result of every possible accident."
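To make the distinction concrete, here is a toy sketch (entirely invented -- not any real vehicle's logic) of what "extra weight to keeping the car on the road" might look like: the planner scores a handful of candidate maneuvers with simple weighted heuristics, and no casualty estimate appears anywhere in the computation.

```python
# Hypothetical heuristic planner: scores candidate maneuvers with weighted
# rules rather than predicting fatalities. All maneuver names, features,
# and weights are made up for illustration.

CANDIDATES = [
    # (name, stays_on_road, braking_effort 0..1, clearance to nearest obstacle in meters)
    ("brake_straight", True, 1.0, 2.0),
    ("swerve_onto_shoulder", False, 0.4, 6.0),
    ("swerve_within_lane", True, 0.7, 3.5),
]

WEIGHTS = {"on_road": 5.0, "clearance": 1.0, "braking_penalty": 0.5}

def score(name, stays_on_road, braking, clearance):
    """Higher is better. Note there is no fatality projection here."""
    s = WEIGHTS["clearance"] * clearance
    if stays_on_road:
        s += WEIGHTS["on_road"]  # the "extra weight for keeping the car on the road"
    s -= WEIGHTS["braking_penalty"] * braking
    return s

def choose(candidates):
    # Pick the maneuver with the highest heuristic score.
    return max(candidates, key=lambda c: score(*c))

print(choose(CANDIDATES)[0])  # → swerve_within_lane
```

The point of the sketch is that the choice falls out of the weights, not out of a comparison of predicted deaths -- which is exactly why the after-incident report would be phrased in terms of those weights.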
And regarding the "every possible accident" part of that: it is simply impossible to imagine an algorithm so perfect that, in any situation, it can optimize the outcome against some pre-determined moral standard. So it's not just a matter of "well, let's change how the algorithms work, then." An algorithm that makes driving decisions in every possible weird situation by predicting fatalities, rather than by relying on heuristics (however complex they are), is simply not realistic.