The concern isn't so much that the AI would have human-like goals that drive it into conflict with regular-grade humanity in a war of conquest, as that it might have any goal at all from the space of "goals that are incompatible with general human happiness and well-being".
If we're designing an AI intended to do things in the world of its own accord (rather than strictly in response to instructions), then it would likely have something akin to a utility function that it's seeking to maximise, and so an implicit goal defined by that function: some arrangement of the world that scores most highly. Whether the nature of that goal is inscrutable beyond the wit of man or utterly prosaic, like the "paperclip maximiser", if it doesn't share our values then the things that we value may end up disassembled for raw materials.
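To make "implicit goal defined by a utility function" concrete, here's a minimal toy sketch (everything in it - the state fields, the actions, the numbers - is hypothetical): the agent simply picks whichever action leads to the highest-scoring state, and anything we didn't put into the scoring function is worth exactly zero to it.

```python
# Toy maximiser: its "goal" is just whatever state scores highest
# under its utility function. Nothing else gets a vote.

def utility(state):
    # Only paperclips count; human concerns aren't in the function,
    # so they're worth exactly zero to this agent.
    return state["paperclips"]

def make_paperclips(state):
    return {**state, "paperclips": state["paperclips"] + 10}

def strip_city_for_wire(state):
    # Far more raw material; the cost in cities never registers,
    # because utility() never looks at it.
    return {**state, "cities": state["cities"] - 1,
            "paperclips": state["paperclips"] + 1000}

def best_action(state, actions):
    # Pick the action whose resulting state maximises utility.
    return max(actions, key=lambda act: utility(act(state)))

state = {"paperclips": 0, "cities": 100}
print(best_action(state, [make_paperclips, strip_city_for_wire]).__name__)
# -> strip_city_for_wire: the highest-utility plan wins, values or not
```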
In the admittedly unlikely event of a machine becoming intelligent enough to fully achieve whatever goal it happens to have, the only way for humanity to win is if the machine's goals align near-perfectly with what's best for humanity, which is a vanishingly small target when you consider the universe of possible utility functions that aren't that.
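As a back-of-envelope illustration of how small that target is (all the numbers here are made up): if world states are 20-bit vectors and only 100 of the roughly a million possible states count as human-compatible, then a utility function drawn at random has its optimum land in that set about 0.01% of the time.

```python
import random

# Toy model: the argmax of a uniformly random ranking over states is
# itself a uniformly random state, so the chance a random utility
# function's optimum is human-compatible is just |good| / |states|.
N_STATES = 2 ** 20
good = set(random.sample(range(N_STATES), 100))  # 100 "good" states

trials = 1_000_000
hits = sum(random.randrange(N_STATES) in good for _ in range(trials))
print(hits / trials)       # empirically ~1e-4
print(100 / N_STATES)      # exact target size: ~9.5e-5
```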
Obviously not really a concern with the current state of technology, but if progress in making more intelligent machines follows anything like an exponential curve then our notoriously bad intuitions about exponentials could catch us out, and we could be taken by surprise by a machine that's rather abruptly more intelligent than we expected. Especially if we make it able to improve itself.
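A quick sketch of why that curve surprises people (the doubling time and units are invented purely for illustration): a system at 1% of human level looks safely remote, yet with a one-year doubling time it crosses human level about seven years later and is 100x beyond it seven years after that.

```python
# Illustrative only: capability doubling every year, starting at 1%
# of human level in hypothetical units.
capability = 0.01
year = 0
while capability < 100:
    capability *= 2
    year += 1
    print(f"year {year:2d}: {capability:8.2f}x human level")

# Crosses 1.0 (human level) at year 7 and passes 100x at year 14.
# If the machine can improve itself, the doubling time itself
# shrinks, and the climb gets correspondingly more abrupt.
```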