Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, Inc." (AIM, Inc.) hit you by "choice." That "choice" was programmed into the vehicle at the direction of AIM, Inc.'s management.
So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.
What will happen is that the manufacturers will lobby for a statutory "safe harbor." The legislature will make the ethical decisions in advance, or provide a menu of "safe" ethical options, and the manufacturer will be statutorily immune from lawsuits as long as it has followed those safe-harbor guidelines. This is a good thing in theory, since it permits the technology to progress where lawsuits would otherwise smother it. So don't worry about the manufacturers. What you should worry about is those clowns in Washington, D.C.* selling off their "ethics" decisions under the table in exchange for cushy corner-office jobs with AIM, Inc. after they retire from public office.
*Yes, it will inevitably be a federal law, though just as inevitably, California will have some granola-munching variant that requires autonomous cars operating in California to place a super-premium on the lives of endangered salamanders or something.