The fundamental idea is right: understanding the human condition will be the biggest growth area in the next few decades. But he is wrong that this is an argument for training more students in the current anthropology or classics curricula. The future belongs to people who can take the serious critical thinking characteristic of math, science, and engineering curricula and apply it in complex situations where technical details and human behavior both matter.
By the way, I see white/lavender and brown. It would be very interesting to know what lighting/image manipulation was done to get those colors out of a dark blue and black dress.
I think Elon sees something that most of you do not. Artificial intelligence is not like anything else. We know very, very little about the kinds of intelligence that are possible. But if it is possible to build AI that is smarter and more capable than us, then it will by definition be better than us at building the next generation of itself. And at that point, humans are permanently obsolete, because we have no rapid methods for upgrading ourselves. It has nothing to do with who is 'using the AI' or 'who is doing the prescribing'. There will be no person, and no human moral intuitions, in the loop at all. The intelligence that supersedes us will be doing what it wants to do. We'll be like fish debating how to control their bipedal relatives who have decided to start overfishing the oceans. It is simply out of their control. And if that doesn't scare you, then you don't understand.
We don't know whether artificial intelligence is possible. But it seems like a very reasonable possibility sometime in the next few centuries. And we know so little about intelligence that we have very little idea whether it will share anything like the moral intuitions that undergird human society. Many of us suspect those intuitions evolved for survival in hunter-gatherer tribes, and that AI will evolve a very different set of criteria upon which it makes its choices.