The problem is that once AI reaches the point where it can participate in its own improvement, that improvement can advance at an exponential rate.
As long as we maintain that AIs work for us as the slaves of mankind, and are basically just tools no matter how smart or advanced they get, then ultimately a human being should be responsible.
Your robot slips up & kills a human being? Then either you or that robot's manufacturer may take the blame - possibly including monetary compensation. Your robot factory goes out of control, and its products go off to build more of themselves and wreak havoc all over the place? Then your company should pay up - and possibly go bankrupt as a result. Of course, powerful people may find ways around this, but hey: same old shit we've seen for ages.
If AI 'beings' ever reach a point where the above stops being true - as in: AI beings allowed to control their own destiny, to 'live their lives' if you will - I suppose they'd be held to the same standards humans are held to. Stick to some basic rules so you get along with the rest of society, or lose some privileges - like the freedom to roam the streets. By force, if necessary. As for:
We may go from "not even remotely close" to "too late to stop it" faster than you realize.
Sorry, but I'm not scared. If it ever gets that far: among other things, war is a creative process, and I'd put my money on the humans. And if we're not creative enough to prevent something we've built ourselves from wiping us out, then maybe we simply deserve such a fate. Or the AIs will keep us around as pets, and we'll live happier that way lol... ;-)