Comment Re:Morals, ethics, logic, philosophy (Score 1) 255
Self-driving cars don't and won't have morals, ethics, logic, or philosophy. They don't need any of that. They simply have a wide array of input sensors connected to a set of complex algorithms that provide the necessary vehicle inputs to drive from point A to point B while avoiding crashes. The avoidance isn't infallible, of course – if there's no room to stop when an obstacle pops up, there's no room – but it can be better than what human drivers manage. And the truth is that this is a pretty low bar. Regular cars result in about 35,000 crash fatalities a year in the U.S. alone. Self-driving cars just have to do better than that, not achieve absolute perfection all the time.
The question discussed by Patrick Lin and Eric Sofge is how the programmers designing the vehicle algorithms should configure them to behave when a collision is truly unavoidable. Lin and Sofge advocate that the programmers use strict utilitarian reasoning when deciding what to hit. I don't think that is going to fly, from either a legal or a sales perspective. The least damaging choice is simply to try to stop the vehicle, even when there isn't enough room to stop in time, rather than trying to "select" a crash target for the least possible damage.
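The "just brake" policy I'm describing is trivial to state in code, which is part of its appeal. Here's a minimal sketch (the function name and its inputs are my own invention, not anyone's actual control software):

```python
def collision_response(braking_distance_m: float, obstacle_distance_m: float) -> str:
    """Hypothetical policy: when a crash can't be avoided, brake at maximum
    force in the current lane instead of steering toward a 'least-damaging'
    target picked by some utilitarian calculation."""
    if obstacle_distance_m >= braking_distance_m:
        # Enough room: a full stop avoids the collision entirely.
        return "brake_to_stop"
    # Not enough room: still just brake as hard as possible.
    # Kinetic energy scales with v^2, so shedding any speed before
    # impact reduces crash energy -- halving speed quarters it.
    return "brake_max"
```

No moral philosophy required: the same action comes out whether the obstacle is a shopping cart or a pedestrian, which is exactly the legal and sales argument for it.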