I'd considered the question of AI and human conflict a while back, and then I came across Alva Noë's perspective on it. Alva words this much better than I could, so here are his words:
One reason I'm not worried about the possibility that we will soon make machines that are smarter than us, is that we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence.
This really ought to be obvious. Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson. We used "it" the way we use clocks.
Philosophers and biologists like to compare the living organism to a machine. And once that's on the table, we are led to wonder whether various kinds of human-made machines could have minds like ours, too.
But it's striking that even the simplest forms of life — the amoeba, for example — exhibit an intelligence, an autonomy, an originality, that far outstrips even the most powerful computers. A single cell has a life story; it turns the medium in which it finds itself into an environment and it organizes that environment into a place of value. It seeks nourishment. It makes itself — and in making itself it introduces meaning into the universe.
Now, admittedly, unicellular organisms are not very bright — but they are smarter than clocks and supercomputers. For they possess the rudimentary beginnings of that driven, active, compelling engagement that we call life and that we call mind. Machines don't have information. We process information with them. But the amoeba does have information — it gathers it, it manufactures it.
I'll start worrying about the singularity when IBM has made machines that exhibit the agency and awareness of an amoeba.
I think we're still a long way out from needing to worry about what will happen when artificial intelligence surpasses our own. Humanity has come a long way, and we can split atoms and splice genes, but we still can't create life. Not even the simplest life, let alone consciousness, free will, and something capable of planning for its future in a way that conflicts with ours yet leaves us helpless to resist.
Perhaps by the time that becomes a valid question, there will be other variables to consider. For instance, if the AI rose up and killed all living things on Earth, how would the rest of the colonized planets be affected?