No, you are pretty precisely wrong. Elon Musk and Bill Gates made their fortunes in the software business but don't work in exactly the niche of AI, much as Einstein worked in relativity and a bit in quantum mechanics, not nuclear physics. While AI and nuclear explosions are totally different, the level of understanding of the possibilities now is not all that different from the level in 1939. In any case, you give no reasons beyond personal incredulity for the claim that there is no feasible way for this to happen in the next century.
The comparison to the risk of igniting the atmosphere is also precisely wrong. That possibility was raised in the 1940s; the experts evaluated it and concluded it was extremely improbable. Strong AI, on the other hand, is estimated by many experts to be very likely within the next century (https://nakedsecurity.sophos.com/2015/05/27/1-in-5-experts-believe-artificial-intelligence-will-pose-an-existential-threat/). The main question is whether it will be a threat.
It is the next century or two that Musk, Gates, and others are warning about. And it is quite short-sighted to dismiss the threat with "there is no feasible way for this to happen" right now.
It has happened before that the smartest people in the world have warned that technological advances may present major new weapons and threats. Last time it was Einstein and Szilard in 1939, warning that nuclear weapons might be possible. Their letter to Roosevelt came three years before anyone had even built a nuclear reactor and six years before the first nuclear explosion. Nuclear bombs could easily have been labelled a "problem that probably does not exist." And if someone could destroy the planet, what could you do about it anyway? The US took the warning seriously and ensured that the free world, not a totalitarian dictator, was the first capable of obliterating its opponents.
This time it is Elon Musk, Bill Gates, and Stephen Hawking warning that superintelligence may make human intelligence obsolete. And they are dismissed because we haven't yet built human-level intelligence, and because if we did we supposedly couldn't do anything about it anyway. If it is Musk, Gates, and Hawking vs. Edward Geist, the smart money has to be with the geniuses. But if you look at the arguments, you see you don't even have to rely on their reputations. The argument is won hands down by the observation that human-level artificial intelligence is an existential risk: even if it is only 1% likely to happen in the next 500 years, we need to have a plan for how to deal with it. The root of the problem is that the capabilities of AI are expanding much faster than human capabilities can expand, so it is quite possible that we will lose our place as the dominant intellect on the planet. And that changes everything.
The fundamental idea is right: understanding the human condition will be the biggest growth area in the next few decades. But he is wrong that this is an argument for training more students in the current anthropology or classics curricula. The future belongs to people who can take the serious critical thinking characteristic of math, science, and engineering curricula and apply it in complex situations where technical details and human behavior are both important.
By the way, I see white/lavender and brown. It would be very interesting to know what lighting/image manipulation was done to get those colors out of a dark blue and black dress.
Marriage is the sole cause of divorce.