And, I'm sorry, but the driver with his right turn signal on who swoops across two lanes and turns left ... or the ones who think they can use the oncoming lane because there's something in their lane ... or who randomly brake because they can see a cat a half mile away ... or cyclists who do crazy and random shit ... or any number of crazy things you can see on a daily basis ... all of these things will create situations in which the autonomous car utterly fails to do the right thing.
Your pessimistic prediction is based on a misunderstanding of how autonomous cars are programmed.
You're imagining that the autonomous car's programming is based on a legal model -- i.e. that it is assuming that other cars will always follow the law and do the right thing according to the DMV handbook. And you (correctly) infer that a program based on that assumption would often fail in the real world, since other drivers sometimes do crazy or illegal things.
Fortunately, Google's engineers aren't stupid, and they understand that problem too, which is why they don't program their cars to rely on that assumption. Rather, they program their cars to assume only that other cars will obey the laws of physics -- for example, the car can safely assume that another car will not accelerate from 0mph to 150mph in one second, but it cannot safely assume that the other car will stop at a red light.
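To make that concrete, here's a minimal sketch (my own illustration, not Google's actual code) of what "assume only the laws of physics" means in practice: given another car's current speed, you can bound the speeds it could possibly reach a moment later, using nothing but assumed acceleration and braking limits. The specific limits below are illustrative guesses, not real vehicle data.

```python
# Illustrative physical limits -- assumptions for this sketch, not real specs.
MAX_ACCEL = 5.0   # m/s^2, generous upper bound for a fast car
MAX_BRAKE = 10.0  # m/s^2, roughly the limit of tire grip on dry pavement

def reachable_speed_range(speed, dt):
    """Return the (min, max) speed in m/s another car could physically
    reach after dt seconds, regardless of what its driver intends."""
    lo = max(0.0, speed - MAX_BRAKE * dt)  # speed can't drop below a stop
    hi = speed + MAX_ACCEL * dt
    return lo, hi

# A car doing 20 m/s (~45 mph): where could its speed be one second from now?
print(reachable_speed_range(20.0, 1.0))  # (10.0, 25.0)
```

Nothing in that calculation cares whether the driver is signaling, drunk, or ignoring a red light -- which is exactly the point.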
Approaching the problem at that level is not only more reliable (unlike the driving code, the laws of physics cannot be broken) but also easier (the laws of physics are simpler and easier to quantify than the US driving code, and MUCH easier than trying to predict human drivers' psychology). An autonomous car avoiding other cars on a real street isn't much different from a bot avoiding enemy missiles in a video game -- in both cases, the program knows the missiles' current positions, their velocities, and the possible ways in which they could accelerate, decelerate, or change direction if they chose to do so -- and it plans and executes its own actions accordingly. It's not trivial, and it's not guaranteed to avoid every possible accident -- but it's not rocket science, either. Remember that to succeed, the car doesn't have to be God-like in its abilities, only an order of magnitude or so better than the average human driver.
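The "dodge the missiles" planning step can be sketched in one dimension: treat the other car as an interval of positions it could physically occupy at time t -- from braking as hard as possible to flooring it -- and reject any plan that puts you inside that interval. Again, this is my own toy illustration with assumed limits, not anyone's production planner.

```python
# Illustrative physical limits -- assumptions for this sketch.
MAX_ACCEL = 5.0   # m/s^2
MAX_BRAKE = 10.0  # m/s^2

def reachable_positions(pos, speed, t):
    """Interval of positions (meters along the lane) the other car could
    occupy after t seconds, bounded only by physics."""
    # Worst-case near edge: hardest possible braking (speed can't go negative).
    t_stop = min(t, speed / MAX_BRAKE)
    lo = pos + speed * t_stop - 0.5 * MAX_BRAKE * t_stop ** 2
    # Worst-case far edge: flooring the accelerator the whole time.
    hi = pos + speed * t + 0.5 * MAX_ACCEL * t ** 2
    return lo, hi

def plan_is_safe(my_pos_at_t, other_pos, other_speed, t, margin=2.0):
    """True if my planned position at time t stays clear of everywhere
    the other car could possibly be, plus a safety margin."""
    lo, hi = reachable_positions(other_pos, other_speed, t)
    return my_pos_at_t < lo - margin or my_pos_at_t > hi + margin
```

A real planner works in two dimensions over many time steps with uncertainty in its sensor readings, but the core logic is the same: plan against everything the other car *could* do, not what the law says it *should* do.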
None of this is to say that the autonomous car doesn't need to understand the rules of the road -- but its understanding of the law is used to guide its own actions, not to make assumptions about the actions of the other cars.