Because, sorry, but the "AI" is really still just a set of rules. A set of rules that can't account for every situation. Sure, it can drive more carefully than a human driver, but it can also make the same kinds of dumb mistakes a human driver would.
Yes, but at the heart of the algorithm is one big overriding rule: "don't drive off the road or hit anything". That's pretty cut and dried as far as rules go. The car's hardware can literally see in every direction and track everything around it, static or moving. It will react to danger and determine the best course of action before most humans even recognize there's a problem. Unless there's a really serious flaw in the system, that means at worst the car is going to come to a stop or simply avoid all obstacles when it sees anything dangerous or anything it doesn't understand.
Honestly, far more important is this: It won't ever get distracted. It won't drive angry, intoxicated, tired, on medication, or while putting on lipstick or eating a sandwich. It won't freak out if a wasp gets into the car. It won't turn its head and yell at the kids to be quiet and stop bouncing around in the back seat. It won't drive recklessly in an effort to impress its girlfriend.
My prediction: Even the first generation of self-driving cars will be statistically 20 times safer than an average human driver, or better (at least in terms of accident fault), and they will rapidly improve as incidents occur and the black boxes are analyzed to determine how those incidents could have been prevented. Eventually, car-related deaths will be relegated to freak accidents, like when a tree falls on a car or an overpass collapses.