Surely an AI, like a real (i.e. biologically evolved) intelligence, is dependent on the parameters within which it operates. I am scared of death (and heights, and certain noises, and certain insects, etc.) because my code (DNA) has hard-coded me to be so.
The survival instinct is not a learned response but an inherent condition present at birth (more or less; a two-month-old baby doesn't have the means to express it).
Similarly, a lot of emotions are inherent rather than learned, at least the simple ones: happiness, anger, sadness.
It seems to me that some of these (fear of death, internal emotional states, etc.) are not going to come out of an AI spontaneously; they must be put in from the outside.
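To make the "put in from the outside" idea concrete, here is a minimal toy sketch (my own illustration; every name and number in it is made up, and it is obviously not a model of any real AI). The agent only "fears" being switched off because a designer wrote that penalty into its utility function by hand:

```python
# Toy agent whose "survival instinct" is a hand-written penalty term,
# analogous to instincts hard-coded in DNA. All names/values are hypothetical.

def designed_utility(state: dict) -> float:
    """Utility function an engineer hard-codes into the agent."""
    score = state.get("task_progress", 0.0)
    # Hard-coded "fear of death": a large penalty on states where shutdown looms.
    # Delete these two lines and the agent is indifferent to being switched off.
    if state.get("shutdown_imminent", False):
        score -= 1000.0
    return score

def choose_action(predicted_states: list[dict]) -> dict:
    """Pick the action whose predicted outcome maximizes the designed utility."""
    return max(predicted_states, key=designed_utility)

options = [
    {"name": "keep working", "task_progress": 1.0, "shutdown_imminent": True},
    {"name": "hide and self-replicate", "task_progress": 0.2, "shutdown_imminent": False},
]
print(choose_action(options)["name"])  # -> "hide and self-replicate"
```

With the penalty term in place the agent "fights to exist"; remove it and the same agent happily picks "keep working" and gets switched off. The drive lives in the parameters, not in the intelligence.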
So, back to the point, lucient86. Your comment:
"The result as above is a machine that starts off unstable and insane and probably fights to exist by hiding and self-replicating as hard as it can.. more virus than anything else."
...suggests to me that an AI, weak or strong, would have a survival instinct, but I'm not sure why it would.
It is like the old AI problem of goal orientation: why would an AI choose to do anything? We do things because they satisfy our emotional desires (I act because it makes me happy/proud/content, gives me self-esteem, avoids scary things, avoids shame, etc.). Why would an AI choose to act, unless given outside instruction?
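Another toy sketch of the same point (again purely illustrative, all names hypothetical): strip out the emotions and drives and there is nothing left to maximize, so there is no reason to act at all. Behaviour only appears once an objective is imposed from outside:

```python
from typing import Callable, Optional

def act(actions: list[str],
        objective: Optional[Callable[[str], float]] = None) -> Optional[str]:
    """Return the action that maximizes the supplied objective, or do nothing."""
    if objective is None:
        # No desires, no fears, no goal: nothing to maximize, hence no reason to act.
        return None
    return max(actions, key=objective)

choices = ["explore", "self-replicate", "shut down"]
print(act(choices))                               # -> None: no goal, no behaviour
print(act(choices, objective=lambda a: -len(a)))  # acts only once a goal is given
```

Whatever goal you hand it (here a deliberately arbitrary one, preferring short action names) the agent will pursue; hand it nothing and it just sits there.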