An anonymous reader writes: Transportation researcher Noah J. Goodall argues that self-driving car manufacturers and their software developers will have to explain and defend a car's actions in the event of a crash, especially one involving fatalities. Goodall writes in IEEE: "Today no court ever asks why a driver does anything in particular in the critical moments before a crash. The question is moot as to liability—the driver panicked, he wasn't thinking, he acted on instinct. But when robots are doing the driving, Why? becomes a valid question." That's because autonomous vehicles can react with superhuman speed, so people hurt in certain kinds of accidents will want to understand why the vehicle didn't stop or swerve to avoid the crash. In particular, they (and their lawyers) will want to know whether software errors or poor design are to blame.