"We (humans) have a thing we like to call consciousness/free will/self determination/etc. I'm not event going to try to define those things in a way that implies whether we really have it or not, or just an illusion of it, etc."
Fair enough.
"I never said we needed to come up with the "answer" a priori. We could simply make a whole bunch of AIs (emergent ones similar to ourselves), and keep the ones that have the properties we want, akin to something like breeding animals. This has been a pretty standard scientific methodology for quite some time. Put a bunch of stuff together, see what happens. Remove something from a system, and see how it breaks. We will be "figuring out" things in this way probably long before we are able to purposefully make anything without trial and error. That doesn't mean we won't be able to do it eventually."
Okay, but now you are back to what I said in the first place. In fairness, you've added an evolutionary algorithm to it (assuming you mean an automated form of "breeding"), but yes, we are back to training, convincing, and controlling the environment. All I was saying is that once we have a true AI this is what we have left, the same as with a human or animal. Without having the answer a priori, we can't dictate its behavior in an absolute manner the way we can with most programs.
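The "breed and select" loop being described is essentially an evolutionary algorithm. As a rough sketch (everything here is a placeholder: `evaluate` stands in for whatever test for "the properties we want" a real system would use, and `mutate` for whatever variation mechanism):

```python
import random

def evaluate(candidate):
    # Placeholder fitness: how close the candidate's parameters sum to a target.
    return -abs(sum(candidate) - 10)

def mutate(candidate):
    # Randomly perturb one parameter to produce a "child".
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.uniform(-1, 1)
    return child

def select_and_breed(population, generations=100, keep=5):
    for _ in range(generations):
        # Keep the candidates with the properties we want...
        population.sort(key=evaluate, reverse=True)
        survivors = population[:keep]
        # ...and breed the next generation from them.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(population) - keep)]
    return max(population, key=evaluate)

random.seed(0)
start = [[random.uniform(0, 5) for _ in range(4)] for _ in range(20)]
best = select_and_breed(start)
print(round(sum(best), 2))  # close to the target of 10
```

Note that nothing in this loop requires understanding *why* the best candidate works, which is exactly the trial-and-error point above: selection replaces a priori knowledge.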
"This is not what I meant. The AI would still be the one solving the problems in whatever clever ways occur to it (and not to us, hence the reason for the AI). I was only talking about inserting the motivation for solving these problems in such a way that the AI thinks it is the one that wants to solve the problems."
I see what you mean. But since we aren't actually writing the instructions per se, we can't necessarily feed the AI motivations directly, any more than we can with a person or animal. But we certainly won't be able to do so any less than with a person or animal, either. In fact, we can do more of it, because we won't have the ethical constraints. For instance, once we've developed this AI we could find what corresponds to a reward signal within it, monitor its memory, and generate a certain tone every time this reward signal is strongly present. Presumably, just like any human or animal brain, it will form a positive association with that tone: it will be so used to reward signals being paired with the tone that it will generate reward signals when it hears the tone. Think clicker training with a dog. We'll also want to build in some constraint the system needs (or wants) and is physically incapable of providing for itself.
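The clicker-training analogy maps onto classical conditioning, which can be sketched with a Rescorla-Wagner-style update. This is a toy model, not a real AI reward system; the learning rate and pairing counts are arbitrary illustrative choices:

```python
# Toy sketch of the conditioning idea: a tone repeatedly paired with a
# reward signal gradually comes to predict (and so evoke) that signal.

def condition(pairings, learning_rate=0.2):
    tone_value = 0.0  # how strongly the tone alone evokes the reward signal
    for reward in pairings:
        # Move the tone's predicted value toward the reward it was paired with.
        tone_value += learning_rate * (reward - tone_value)
    return tone_value

# Pair the tone with a full-strength reward signal 30 times...
after_training = condition([1.0] * 30)
# ...and the tone alone now evokes most of the reward signal.
print(round(after_training, 3))  # → 0.999
```

The same update run on an empty pairing list leaves the tone neutral (`condition([]) == 0.0`), which is the "before training" state.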
We have a distinct advantage with this sort of thing relative to actual living creatures, because we don't need an fMRI or anything of the sort: this brain will live within a computer's memory. We can probe it and modify it at will, and also take snapshots of its state and restore that state at will.
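Because the "brain" is just program state, probing and snapshotting reduce to copying data structures. A minimal sketch, where the dict is a hypothetical stand-in for whatever representation a real system uses:

```python
import copy

# Stand-in "brain" state: ordinary data we can inspect directly.
brain = {"weights": [0.1, 0.5, 0.9], "reward_signal": 0.0}

snapshot = copy.deepcopy(brain)   # take a snapshot of the full state

brain["reward_signal"] = 1.0      # probe: inject a reward signal
brain["weights"][0] += 0.05       # modify: tweak a connection at will

brain = copy.deepcopy(snapshot)   # restore: roll back to the snapshot
print(brain == snapshot)          # → True
```

The `deepcopy` matters: a shallow copy would share the inner `weights` list, so "restoring" would silently keep the modification.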
The biggest roadblock to AI I see in the short term is that a true AI isn't particularly profitable. We can already create systems that perform the same function; we simply have babies. Additionally, when it does become profitable (human workers can be replaced with AI workers), then, the way our economy currently works, that will just create massive unemployment and poverty. At some point we'll have to let go of the expectation that people should need to perform work to gain and utilize wealth.