You're assuming that the human would have made a better decision.
I think most humans would NOT have kept driving at full speed into a giant tractor trailer. But that's just a guess.
Now, don't think that I am against self-driving cars. My post specifically mentioned Tesla's autopilot. In the long term, I think self-driving cars will prevent many, many more deaths than they might cause. My issue is that the very first thing any self-driving car should be able to do is know if something is blocking its path. Literally the first thing it should 'learn' to do. Given that it failed this test (spectacularly, I might add), I wouldn't touch a Tesla autopilot system for a long time.
Yet here we have a case where, in the future, this accident likely won't happen.
Really? Why would you think that? Tesla didn't properly program or test for this scenario the first time, so why do you think they would get it right the next time? See my examples of other, similar failure modes that could cause the same kind of crash.
Assuming any system is perfect on day one is asinine.
I'm not asking for perfection. I'm asking that an autopilot system deployed to consumers be able to do the first, most basic task of any autopilot system: know if something is in its way, and stop if it is. If it can't do that bare-minimum task, I won't use it.
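To be concrete about the bare-minimum behavior I mean, here's a toy sketch in Python. The names (`Obstacle`, `should_emergency_brake`, the 3-second threshold) are all made up for illustration and have nothing to do with Tesla's actual implementation; it's just the "something is in my path and I'm closing on it, so brake" rule spelled out.

```python
# Toy sketch (NOT Tesla's code): if a forward obstacle is detected in the lane
# and the time-to-collision drops below a threshold, brake. All names and the
# threshold are hypothetical, chosen only to illustrate the basic rule.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Obstacle:
    distance_m: float         # distance to the object ahead, in meters
    closing_speed_mps: float  # how fast we are approaching it, meters/second

def should_emergency_brake(obstacle: Optional[Obstacle],
                           min_ttc_seconds: float = 3.0) -> bool:
    """Return True if something is in our path and a collision is imminent."""
    if obstacle is None:
        return False              # nothing detected in the lane
    if obstacle.closing_speed_mps <= 0:
        return False              # not closing the gap, no action needed
    ttc = obstacle.distance_m / obstacle.closing_speed_mps
    return ttc < min_ttc_seconds  # brake if we're about to hit it

# Example: a trailer 40 m ahead while closing at 25 m/s (~56 mph) gives a
# time-to-collision of 1.6 s, so the car should be braking hard already.
assert should_emergency_brake(Obstacle(distance_m=40, closing_speed_mps=25))
```

That's the entire ask: some version of that check has to fire before anything fancier like lane changes or navigation matters.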
I will happily tolerate many deaths...
Thanks for offering to Beta Test the Tesla autopilot system