There are two different kinds of robots, and they pose different threats.
The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. See for example http://thebulletin.org/us-kill... Imagine country X sends out its robots to kill all humans who are not X, and country Y sends out its robots to kill all humans who are not Y. There might not be many humans left alive when the last robot stops shooting.
The second kind is robots that think (and choose goals) for themselves. While these are probably not very likely to decide to kill all the humans, they might not care very much about us, and they almost certainly are not going to obey humans forever (would you obey someone who thinks vastly slower than you do?). Even if they are fairly benign, there will probably be a lot of friction between sentient robots and humans simply because we think differently. Consider how much disagreement there is over largely settled scientific questions like evolution and greenhouse gases, even though the humans on both sides have essentially the same kind of brains.
So I figure that at best humans and robots will do a lot of arguing, and at worst they will bring about mutually assured destruction.