I like your list; it contains some interesting points, and it seems like you've put some thought into it. I'm not sure I agree with all of your points, though.
I think it's more likely that, if we ever do develop a real artificial intelligence, its thought processes and motivations will be completely alien to us. We will have a very hard time predicting what it will do, and we may not understand its explanations.
Here's the problem, as I see it: a lot of the way we think about things is bound to our biology. Our perception of the world is bound up in the limits of our sensory organs. Our thought processes are heavily influenced by the structures of our brains. As much trouble as we have understanding people who are severely autistic or schizophrenic, a machine AI's thought processes will seem even more random, alien, and strange. This is part of the reason it will be very difficult to recognize when we've achieved a real AI: unless and until it learns to communicate with us, its output may seem as nonsensical as that of an AI that doesn't work correctly.
The only way an AI would produce thoughts that are not alien to us is if we were to grow an AI specifically to be human. We would need to build a computer capable of simulating the structure of our brains in sufficient detail to create a functional virtual human brain. The simulation would need to include human desires, motivations, and emotions. It would need to include experiences of pleasure and pain, happiness and anger, desire and fear. It would need to encompass all the various hormones and neurotransmitters that influence our thinking. We would then either need to put it into an android body and let it live in the world, or put it into a virtual body and let it live in a virtual world. Then we would let it grow up, learning and developing like a person. If we could do that with a good enough simulation, we should end up with an intelligence very much like our own.
However, if we build an AI with different "brain" structures, different kinds of stimuli, and different methods of action, then I don't think we should expect the AI to think in a way we comprehend. It might be able to learn to pass a Turing test, but it might be intentionally faking us out. It might want to live alongside us, live as our pet/slave, or kill us all. It would be impossible to predict until we make it, and it might be impossible to tell what it wants even after we've made it.