The fundamental idea is right...that understanding of the human condition will be the biggest growth area in the next few decades. But he is wrong that this is an argument for training more students in the current anthropology or classics curricula. The future belongs to people who can take the serious critical thinking characteristic of math, science, and engineering curricula and apply it in complex situations where technical details and human behavior both matter.
By the way, I see white/lavender and brown. It would be very interesting to know what lighting/image manipulation was done to get those colors out of a dark blue and black dress.
I think Elon sees something that most of you do not. Artificial intelligence is not like anything else. We know very, very little about the kinds of intelligence that are possible. But if it is possible to build an AI that is smarter and more capable than we are, then it will by definition be better than us at building the next generation of itself. At that point, humans are permanently obsolete, because we have no rapid methods for upgrading ourselves. It has nothing to do with who is 'using the AI' or who is 'doing the prescribing'. There will be no person, and no human moral intuitions, in the loop at all. The intelligence that supersedes us will be doing what it wants to do. We'll be like fish debating how to control their bipedal relatives who have decided to start overfishing the oceans. It is simply out of their control. And if that doesn't scare you, then you don't understand.
We don't know whether artificial intelligence is possible. But it seems a very reasonable possibility sometime in the next few centuries. And we know so little about intelligence that we have very little idea whether it will share anything like the moral intuitions that undergird human society. Many of us suspect those intuitions evolved for survival in hunter-gatherer tribes, and that AI will evolve a very different set of criteria on which to base its choices.
That doesn't tell us much about how to engineer processes that obey the known laws of physics. Predicting what humans will be able to do is very, very difficult...and people regularly get it badly wrong in both directions, too optimistic and too pessimistic. In my mind, good hypotheses based on careful consideration of the best evidence are never premature. They just might be wrong.
There is a fantasy that lives on and on that physics is only the search for the fundamental rules of how the universe works. Physics does include the search for the most fundamental theory...things like trying to detect the Higgs boson or understand dark energy. But those two pretty nicely define 'irrelevance' to the everyday lives of humans. If physics were only the search for fundamental rules, then physics would essentially be over as an enterprise with practical relevance. (See http://www.preposterousunivers...) But the overwhelming majority of physicists have long been working on applications of known fundamental physics, discovering new emergent laws and new technological applications. Semiconductor and device physics is one of the great successes of 20th-century physics, and this achievement of fabricating gallium nitride, with its large bandgap, was a major advance: in the fundamental science of crystal growth, in high-frequency electronics, and in the production of blue light. This is exactly the kind of prize that should be given, because we need the next generation of physicists to be finding fundamental problems that have practical relevance rather than spending their talents on interesting but economically useless tasks like string theory. I predict that in the rest of the 21st century, there will be more Nobel prizes in physics given for biological, environmental, and neuroscience applications of physics than for fundamental particle physics. If not, then the Nobel prize will be overshadowed by the Kavli prize or some other prize that recognizes accomplishments with consequences for humans.
This hasn't been my experience. Reviewers and grant officers do want to fund high-risk/high-reward science. But you are competing with others who have already tried a bunch of risky ideas and are proposing only the ones that happened to work. You basically have to make a significant discovery before you can be funded; then you can get funding to bring that idea to full bloom and, hopefully, to pursue a few risky side projects that will serve as the basis of the next grant proposal.
Most new ideas are bad ideas, so funding agencies have to apply a pretty rigorous filter to sort out the promising ones. As a result, it will always be very hard to get funding to explore an idea before there is evidence that it is on the right track.