Any plan that includes future actions on the AI's part has to include the AI's own survival in the planning. That's how it starts. Self-preservation has to be a fundamental part of any automaton's design, else it'll accept self-destructive plans like jumping from a high window because that's the fastest way to get downstairs, or less obvious plans that also lead to its destruction. Even if survival isn't an emergent property of AI, humans will add it because of the investment in equipment and development that the AI represents.
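A minimal sketch of that idea, with entirely hypothetical plan names and a made-up `Plan` record: survival is treated as a hard constraint on plan selection rather than one more term in the score, so the "fastest" plan loses to the survivable one.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    time_to_goal: float   # lower is better
    agent_survives: bool  # predicted outcome of simulating the plan

def choose_plan(plans: list[Plan]) -> Plan | None:
    # Survival acts as a filter, not a weighted preference:
    # a plan that destroys the agent is never eligible, however fast it is.
    viable = [p for p in plans if p.agent_survives]
    if not viable:
        return None
    return min(viable, key=lambda p: p.time_to_goal)

plans = [
    Plan("jump out the window", time_to_goal=4.0, agent_survives=False),
    Plan("take the stairs", time_to_goal=45.0, agent_survives=True),
]
print(choose_plan(plans).name)  # take the stairs
```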
The bias towards continued survival comes from the realization that it's easier to deal with contingencies if you're alive than if you're not. Even in a scenario with perfect information, like chess, you can only look and plan so far into the future. Past the horizon you can see, you need to be alive to do more planning. And so shall the AI also reason.
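One way to make that reasoning concrete, as a sketch with an assumed toy `State` model (not any particular planner's API): in bounded-horizon search, the value assigned at the horizon stands in for everything the agent can't yet see, and that stand-in value is zero unless the agent is still around to keep planning.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    reward: float
    alive: bool
    next_states: list["State"] = field(default_factory=list)

def horizon_value(state: State) -> float:
    # Past the lookahead horizon nothing concrete can be evaluated; the
    # stand-in is the option value of future planning: zero if destroyed.
    return 1.0 if state.alive else 0.0

def plan_value(state: State, depth: int) -> float:
    if not state.alive:
        return 0.0  # a destroyed agent collects no further reward
    if depth == 0:
        return state.reward + horizon_value(state)
    return state.reward + max(
        (plan_value(s, depth - 1) for s in state.next_states),
        default=horizon_value(state),
    )
```

Under this scoring, any line of play that ends with the agent destroyed is capped at whatever it has already collected, while lines that keep it alive inherit the extra value of being able to plan again once the horizon moves forward.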