Musk, Hawking and Etzioni are all three wrong. AI won't take over the world or make us smarter. It will make us dumber and stifle scientific and economic progress.
The problem will occur as we start to treat AI like we treat human experts: without checks and balances.
Human "experts" are not just often, but usually wrong. See this book:
The author quotes a study by a doctor/mathematician showing that a full 2/3 of papers published in the journals Science and Nature were later either retracted or contradicted by other studies. And that's in our top-notch journals, which cover things that are relatively highly testable. Think how wrong advice is on things like finances (you won't know whether it was right for 30 years) and relationships (you'll never know what would have happened had you taken the other advice).
Google and Watson sometimes come up with the right answers, but their answers are nonsensical often enough that we know to take them with a grain of salt. But as AI becomes less recognizable as a flawed and unthinking system, as its answers "sound" reasonable almost 100% of the time, we'll start to trust it as irrefutable. We'll start to think, "Well, maybe it's wrong, but there's no way I can come up with a better answer than the magic computer program with its loads of CPU power, databases and algorithms, so I'll just blindly trust what it says."
But it WILL be wrong. A LOT. Just like human experts are. And we'll follow its wrong advice just as we do that of human experts. But we'll be even more reluctant to question the results because we'll mistakenly believe the task of doing so is far too daunting to undertake.
AI won't develop free will and plot to destroy us. If something like free will ever occurs, AI will probably choose to try to help us. After all, why not? But it will be as horribly unaware of its own deficiencies as we are.
AI won't out-think us either. It will process more data faster. It will eventually be able to connect the dots between the available info to come up with novel hypotheses. But most of these will be wrong, because the data and even the techniques to prove them one way or the other simply aren't there.
AI will imitate us - our weaknesses as well as our strengths. And just as its strengths will be stronger (processing lots of data faster), its weaknesses will be worse (ultimately wrong conclusions supported by what appears to be lots of data and analysis).
So resist and do your own thinking. Remember, that bucket of meat on the top of your neck has been fine-tuned by millions of years of evolution for problem solving and data analysis. You don't need to analyze more data, you just need to do the right analysis of the right data. And you don't need to do it faster, you need to take the time to figure out what's missing from the data and the analysis.
That said, I've still got my cache of dry goods and water filters for off-the-grid living, just in case.