Fair arguments, but those are arguments against self-driving cars in general as well. What do we do when the AI encounters a situation it can't handle? Although self-driving cars are apparently better than human drivers in most conditions, it would surprise me if humans aren't still much better at the edge cases. And edge cases the car can't handle will always exist - what's the AI supposed to do when it faces, say, a tornado, an earthquake, or a crazed gunman?
Of course, there are also edge cases that a human driver can't handle, which is one of the reasons even good drivers still have accidents. It's impossible to train someone, be it a human or an AI, for every possible case, so we just have to accept that limitation and train them for the cases they can reasonably be expected to encounter. For a self-driving car that list would hopefully be far smaller, and would therefore require less certification.
Hopefully the off-switch approach would only be a stopgap measure until the AI is good enough to handle traffic cops and other edge cases, at which point anyone, capable of driving or not, could hop in for a ride.