The reason is, AI will have no 'motivation'... Logic does not motivate... Without a sense of self preservation it won't 'feel' a need to defend itself.
This is a common misconception, which has several counter-arguments to do with resource usage.
Firstly, the idea that human responses "aren't logical" is naive. Humans aren't optimised for calculating exact answers, we're optimised for calculating good enough answers given our limited resources. Effects like emotions, which appear illogical at the "object level" (the effect they have on a particular problem's solution), are perfectly logical at the meta-level (the effect they have on how we solve problems, and what problems we attempt to solve). There are also other meta-levels, all acting concurrently; for example, the solution to our problem might have political consequences (I may choose to do a poor job of washing the dishes, so that I'm less likely to be asked in the future). There may be signalling involved (by taking a wasteful approach, I'm communicating my wealth/position-of-power to others). There are probably all kinds of considerations we've not even thought of yet.
In effect, ideas like "computers don't have emotions" can be rephrased as "we're no good at programming meta-reasoning, multiple-goal, resource-constrained optimisers yet".
No existing, practically-runnable AI systems have an adequate model of themselves and their effect on the world. If we *do* manage to construct one, what would it do? We can look at the thought experiment of "Clippy the paperclip maximiser": Clippy is an AI put in charge of a paperclip factory and given the goal "make as many paperclips as you can". Clippy has a reasonable model of itself, its place in the world and the effects of its actions on the world (including on itself).
Since Clippy has a model of itself and the world, it must know these three facts: 1) Very few paperclips form "naturally", without a creator making them on purpose 2) Clippy's goal is to make as many paperclips as it can 3) Clippy is a powerful AI with many resources at its disposal. From these, it's straightforward to infer the the following: Keeping Clippy turned on and fed with resources is a very good way of making lots of paperclips.
From this, it is clear that Clippy would try to stop people turning it off, since Clippy's goal is to make as many paperclips as possible and turning off Clippy will have a devastating effect on the number of paperclips that get made. What Clippy does in this respect depends on how likely it thinks we are to attempt to turn it off, how much effort it thinks will be required to stop us, and how it balances that effort against the effort spent on making paperclips. If Clippy is naive, it may ignore us as a benign non-threatening neighbour, under-invest in its defences, and we could overpower it. On the other hand, Clippy may see our very existence as an unacceptable existential risk, and wipe us out just-in-case.
Regardless of the outcome, self-preservation is a logical consequence of having a goal and the ability to reason about our own existence.