Actually, this raises a more interesting question (at least to me), one your little thought experiment approaches. What if my autonomous car decides that the action likely to cause the least harm is to kill the driver? For example, what if the car has the opportunity to swerve off the side of a mountain road and drop me 1000 feet onto some rocks to avoid a crash that would have killed far more people than just me? Is my autonomous car required to act in my own best interest, or should it act in the best interests of everyone on the road?
Also, somebody somewhere will use this 'feature' to commit a murder...
Hack the computer, make it think that it is in this situation, and the vehicle will launch itself over a cliff with the occupants inside.
The crash may also erase all evidence of what actually caused it, leaving the authorities with mere speculation.