I'm not saying it can't happen; I'm saying that when exercising good judgement, humans drive to allow leeway for exactly such things. If a car weaves or sways, drives close to the lane separator, goes too fast or too slow, etc., you keep enough distance to deal safely with any random behaviour from that car. When you know there's a difficult short on-ramp ahead and the lane beside you is free, you switch lanes to let other drivers merge without any risk of creating a problem situation. Humans do this when we're driving well. An autonomous car could be programmed to always maintain that optimal freedom of action.
"The difficult problem is that sometimes you don't get the luxury of doing that, and unexpected situations can be created faster than you're able to react."
Yes, but the way most traffic situations are designed, our big saver is that an accident usually requires not one but two people making a mistake. I've been saved from the consequences of being an idiot by someone else not being an idiot and planning for me being one. And I've saved many others by noticing that they weren't paying attention and increasing my distance, leaving them room to be stupid. Not to mention the number of lives saved by traffic planners saying 'humans are idiots, let's put a roundabout here'. So, if an autonomous car can be programmed not to do the stupid things we do, what are the situations where the best response isn't simply 'make sure it never gets into such situations'? And could a human deal with them any better?
I mean, some things are impossible to deal with. You're not going to be able to handle a car crashing through a guard rail on an overpass and landing in your lane. Some things are more probable but still very hard to deal with, like someone deliberately arranging a head-on collision on a high-speed country road. There simply are no safe trajectories or distances there, so it can't be planned for. But that's not really a difficult decision either; it's just trying to avoid the collision, and there the autonomous car would probably have an edge in reaction time and more accurate sensor feedback covering every physically viable option.
It's an interesting problem, but it's not easy to come up with situations where our more complex reasoning skills will actually do us any good once the shit actually hits the fan, or whether every situation where such reasoning would help is really the result of failing to use it earlier.