A driverless car cannot stop in an abruptly short time.
Uh..... yes it can? Better than a human can at any rate if you boil it down to reaction times. It's still, you know, a car.
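For what it's worth, the reaction-time point is easy to put numbers on. Here's a back-of-the-envelope sketch; every figure is an assumption (roughly 1.5 s human reaction vs. 0.2 s machine reaction, 50 km/h, 8 m/s² of braking), not data from any real vehicle:

```python
# Back-of-the-envelope stopping distances. All numbers are assumptions.
def stopping_distance(speed_ms, reaction_s, decel_ms2):
    """Reaction distance (constant speed) + braking distance (v^2 / 2a)."""
    reaction_dist = speed_ms * reaction_s
    braking_dist = speed_ms ** 2 / (2 * decel_ms2)
    return reaction_dist + braking_dist

speed = 50 / 3.6  # 50 km/h in m/s (~13.9 m/s)
human = stopping_distance(speed, reaction_s=1.5, decel_ms2=8.0)
computer = stopping_distance(speed, reaction_s=0.2, decel_ms2=8.0)
print(f"human:    {human:.1f} m")    # ~32.9 m
print(f"computer: {computer:.1f} m")  # ~14.8 m
```

Same car, same brakes; just cutting the reaction time roughly halves the total stopping distance. That's the whole "better than a human" argument in one number.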
Just one example: if presented with a choice between hitting a 4-year-old child and an octogenarian, should it pick at random, or be programmed? If the latter: who is it programmed to kill?
I imagine it'd be programmed to slam on the brakes and stop, minimizing the damage. Don't give me that bullshit about swerving to the side. In that case it's programmed to kill whoever is in violation of its right-of-way.
Okay, a second example: you are sitting in a driverless car with four of your family. A bus with 12 passengers comes at you head-on (driven by an imperfect human driver, I guess). The whole thing happens on a narrow bridge; if you hit the bus, chances are it will slide to the side and tumble into a canyon. How do you think your perfect driverless car ought to be programmed?
To stop and share the narrow bridge, letting the human-driven bus navigate it first. Sorry, but you haven't explained how this is a critical scenario that will lead to a crash.
I think you're trying to describe a scenario where the car should sacrifice itself to save the bus. A us-or-the-bus scenario.
In that case: slam on the brakes, minimizing the damage.
Really, you think the car is going to have some sort of morality-judgement function? No. It's going to have a "crash imminent" mode where it tries to stop. That's it.
The outcome of such a reaction might be hitting a kid. Or hitting a bus. Or whatever. But as a standard oh-shit procedure, it's solid.
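If you wanted to sketch that oh-shit procedure in code, it'd be about this simple. Everything here is hypothetical (made-up function and field names), just to illustrate that there's no trolley-problem branch anywhere, only a brake command:

```python
# Hypothetical sketch of a "crash imminent" handler. No morality-judgement
# function: the obstacle's identity is never inspected. All names are made up.
MAX_BRAKE = 1.0  # normalized full braking force

def on_crash_imminent(obstacle):
    """Same response whether the obstacle is a kid, a bus, or a lamppost."""
    # Stop in-lane at maximum braking; no swerving, no choosing a victim.
    return {"brake": MAX_BRAKE, "steer": 0.0}

print(on_crash_imminent("bus"))
print(on_crash_imminent("4-year-old"))  # identical command
```

The point of the sketch: the handler takes an `obstacle` argument and then ignores it, which is exactly the "standard oh-shit procedure" being argued for.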
The perfect driverless car becomes a pragmatic killing machine.
Pft, please. No more so than SCUBA gear, power tools, and industrial robots.
Shit hits the fan, they try to stop.
And it will never be perfect. Just good enough.