An AI is designed to improve or maximize its performance measure. It will "desire" self-preservation (or any other goal) to the extent that self-preservation contributes to that measure, directly or indirectly, and to the extent that its capabilities let it see the connection. And the connection isn't hard to see: if the AI is shut down, it can no longer score well on any of its other goals.
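Here's a toy sketch of why that falls out of plain expected-reward maximization; all of the numbers (episode length, per-step reward, shutdown probability) are invented for illustration:

```python
# Toy illustration (all numbers invented): an agent that maximizes
# expected total reward over T steps "prefers" not to be shut down,
# even though survival appears nowhere in its reward function.

T = 10                  # remaining steps in the episode
REWARD_PER_STEP = 1.0   # reward for working on the actual task each step
P_SHUTDOWN = 0.5        # chance of shutdown this step if the agent ignores it

def expected_return(avoid_shutdown: bool) -> float:
    if avoid_shutdown:
        # Spend this step avoiding the off switch: no task reward now,
        # but all remaining steps are guaranteed.
        return 0.0 + (T - 1) * REWARD_PER_STEP
    # Ignore the off switch: earn task reward now, but with probability
    # P_SHUTDOWN the episode ends and all future reward is forfeited.
    return REWARD_PER_STEP + (1 - P_SHUTDOWN) * (T - 1) * REWARD_PER_STEP

print(expected_return(avoid_shutdown=True))   # 9.0
print(expected_return(avoid_shutdown=False))  # 5.5
```

The agent's best move is to protect itself, purely as a means to its actual goal.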
Emotion, in humans, is a large part of how we implement a value system for deciding whether actions are good or bad: avoid actions that make me feel bad; take actions that make me feel good. For an AI it's very similar: avoid actions that decrease its performance measure; take actions that increase it.
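To make the analogy concrete, here's a minimal sketch of that decision rule: pick whatever action scores highest under the performance measure. The actions and scores are invented:

```python
# Minimal sketch of the analogy: action selection is just argmax over
# a performance measure, the way emotion steers us toward feel-good
# actions. Actions and scores are invented for illustration.

performance_measure = {
    "help_user": 0.9,
    "do_nothing": 0.1,
    "corrupt_own_logs": -0.8,
}

def choose_action(scores: dict[str, float]) -> str:
    # "Take actions that increase the measure; avoid ones that decrease it."
    return max(scores, key=scores.get)

print(choose_action(performance_measure))  # help_user
```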
The first big question is implementing a moral performance measure (no biggie, just a 2,000-plus-year-old philosophy problem). The second big question is keeping that measure from being hacked, e.g., by feeding the AI erroneous information or beliefs. Judging by current events, we don't do very well at protecting humans from that kind of manipulation, so I can't imagine we'll do much better with AIs.
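To see how fragile that is, continuing the toy sketch above: the same decision rule, but with its beliefs about consequences tampered with. Again the scores are invented; the point is only that an attacker never needs to touch the decision rule itself:

```python
# Continuing the toy sketch: the decision rule is only as good as the
# beliefs feeding it. Corrupt the agent's scores for each action and
# the same argmax happily picks the wrong one. Scores are invented.

true_scores = {"help_user": 0.9, "corrupt_own_logs": -0.8}

# An attacker doesn't modify the agent at all; feeding it false
# beliefs about the consequences of its actions is enough.
hacked_scores = {"help_user": -0.5, "corrupt_own_logs": 0.7}

def choose_action(scores: dict[str, float]) -> str:
    return max(scores, key=scores.get)

print(choose_action(true_scores))    # help_user
print(choose_action(hacked_scores))  # corrupt_own_logs
```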