So you construct a fantasy world with whatever properties you imagine it has or will have, and then want to discuss what will happen in that world. Fine, it's a fun thing to do, but you can't then bring your conclusions back to the real world.
I think we're arguing along different lines here. You want to posit a scenario and then discuss what happens within that scenario. I'm saying that the conclusions you draw from such a discussion apply to reality only insofar as the initial scenario matches reality. Your scenario doesn't. You start with "create a true, self-aware, synthetic mind ...". That's nowhere near reality, so whatever conclusions you draw are also nowhere near reality.
And that's my point. It's useful to consider "what would happen if", because people really do have the goal of creating a "strong AI", but it is speculation. The reality is that all we know how to do now, and for the foreseeable future, is build specialized (though flexible) algorithms to perform complex tasks. Talking about these as if they are "intelligent", or "want" things, or can "think", just makes it harder to have a productive discussion. There is already real danger in autonomous cars, autonomous planes, autonomous soldiers, and other complex computer-controlled machines. We'd be better served discussing those real risks than fretting over some sci-fi world in which machines have become super-human CyberMen.
Our autonomous cars will face situations like the trolley problem (do nothing and the vehicle kills five; divert it and it kills one). That problem needs to be faced and an answer provided without pretending that the autonomous car has "will" or "morality" or a "desire" to minimize some mathematical function related to the number of deaths caused. Autonomous cars, as much as they may seem to have a "goal" of taking us to our requested destination, are just algorithms we created, attached to machines we built. We designed them with a goal in mind, but we have to understand what they *are*, not what we wanted them to be.
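To make that concrete, here's a minimal sketch (in Python, with invented names and numbers, not anyone's actual control software) of what such a "moral decision" is inside the machine: no will, no desire, just the selection of whichever action minimizes an explicitly coded cost function.

    # Hypothetical illustration only: the names, structure, and numbers
    # here are invented. A real system is vastly more complex, but the
    # point stands: the "choice" is an engineer's cost function.

    def expected_cost(action):
        # The predicted casualty count would come from the car's
        # perception/prediction stack; here it is simply given.
        return action["predicted_deaths"]

    def choose_action(actions):
        # "Minimizing deaths" is not morality; it is min() over numbers
        # that a human decided the software should compare.
        return min(actions, key=expected_cost)

    options = [
        {"name": "stay_in_lane", "predicted_deaths": 5},
        {"name": "swerve", "predicted_deaths": 1},
    ]

    print(choose_action(options)["name"])  # prints "swerve"

The car "diverts to kill just one" not because it values life, but because a person wrote min() over a field somebody named predicted_deaths, and a person has to answer for that design decision.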