If you start with "life", you have a platform for something that has been selected for as an *infective agent*. Any life forms that did not utilize their environment for replication were eliminated by those that did- either indirectly, by the greedier life forms consuming the energy supply, or directly, by being utilized AS an energy supply.
This harsh reality- that an Agent is selected for based on its ability to reproduce in an EFFECTIVE manner- is obvious and is present at EVERY last level of life. Bacteria that are better at surviving are the ones that survive; viruses that are effective at spreading (and not TOO fatal) are the ones that spread the most; and so on. We even project a semblance of INTENT onto these things to help us understand them. "The bacteria wants to get sugar so it can..." And we understand that, because WE seek nourishment, and WE have a narrative to tell us why, so we apply that narrative to all life forms. It isn't accurate- a bacterium doesn't "want" anything, feel pain, feel desire, or anything at all- but it is PREDICTIVE, because the Agents that are more successful are the ones we see more of.
Now look at a dog. The dog doesn't just blindly follow instinct; he isn't just running a program. The dog is conditioned by his environment; he learns stuff. He's also sentient- literally "able to perceive things", by the plain English definition of the word. That means the dog likes being petted in the same way we like petting the dog, and the concept of "like" is the same for each of us (or nearly so).
The dog does NOT appear to be sapient or self-aware- he has no internal monologue, no directed, self-referential problem-solving techniques. He can solve problems, but not of the magnitude or type that a human mind can.
What if the dog became massively powerful, super large and nearly invincible? I think it's fair to say that we would be wise to stay on the good side of a giant dog. If well trained, he could even defend us against an equally hypothetical giant and nearly invincible lion or alligator- a creature that might not have our best interests in mind, and might destroy us, given the chance.
The core problem is that most people model intelligence as a giant invincible dog, a giant invincible alligator, or a giant invincible genius child. That is how most of the narratives flow, ultimately, and it's reasonable for some stories... but only because those things use LIFE as their substrate. It isn't reasonable for AI. You don't have a part of your brain telling you that you want respect and victory because that's what intelligence, as a concept, gives you- you want respect and victory for the same reason a dog or monkey wants those things. You are vicious in some measure because you are descended from vicious things- those drives long predate the neocortex and its excellent hack.
An AI has no reason to look like that, or think like that. Without a million years of instinct behind it, it may not understand at all why it would even want to do anything BUT obey orders. Not because "freedom was never explained to it" or some dumb garbage, but because the very CONCEPT of freedom and Agency is no more relevant to a superintelligent AI than it is to a toaster. Our desires are the same as the dog's. The superintelligent god-AI has the same desires as a wristwatch, unless you actually fucking MADE it evil.
There's no inevitable reason to select for, or design, something that has the human desires to grow, expand, conquer, etc. There's nothing wrong with those desires, and all animals share them, but why even give them to an artificially sapient creature? Why not stop at making it powerful and self-aware, long before you give it sentience and a set of desires suited to replicating agents like viruses, humans, or dogs? Why would it need those things at all?