I think Elon sees something that most of you do not. Artificial intelligence is not like anything else. We know very, very little about the kinds of intelligence that are possible. But if it is possible to build AI that is smarter and more capable than us, then it will by definition be better than us at building the next generation of itself. And at that point, humans are permanently obsolete, because we have no rapid methods for upgrading ourselves. It has nothing to do with who is 'using the AI' or 'who is doing the prescription'. There will be no person and no human moral intuitions in the loop at all. The intelligence that supersedes us will be doing what it wants to do. We'll be like fish debating how to control their bipedal relatives who have decided to start overfishing the oceans. It is simply out of their control. And if that doesn't scare you, then you don't understand.
We don't know whether smarter-than-human artificial intelligence is possible. But it seems like a very reasonable possibility sometime in the next few centuries. And we know so little about intelligence that we have almost no idea whether it will share anything like the moral intuitions that undergird human society. Many of us suspect those intuitions evolved for survival in hunter-gatherer tribes, and that AI will develop a very different set of criteria upon which it makes its choices.