First, people overestimate how intelligent our technology is. Humans are a generalist species: we get about 20 years of education in general knowledge and then spend 4+ years specializing. That is, we first learn everything, and only then succeed by learning one thing. With AI we skip the general knowledge entirely and jump straight to specialized training. This is why we do not have to teach a human not to lie in court, not to put an elephant in a drawing when we said "no elephants," or to check their own work. All of those things had to be bolted onto AI afterward, because it did not know them to begin with. Humans know so many things, while the AI knows so little. We only think AI is smart because we test it on the things it is good at. In general, it is a moron. Ever ask a text AI to sing? Of course not - we know it can't. But you can ask any storyteller to sing: they might suck, but they can do it.
Second, we think there is no limit to how smart an AI can become. This is not true. The belief comes from charts of capability vs. time, which look exponential - each year the AI not only gets smarter, it gets smarter by a bigger margin than it did the year before. But those charts plot capability against time while ignoring the growth in cost and hardware. In reality these charts are NOT showing AI advancements - they are showing Moore's Law.
Because of Moore's Law, each year we get exponentially better chips. But the AI itself is not improving; it is the HARDWARE that is getting better - along with the amount of money we spend on the AI. Hardware improvements affect speed, not capability. AI on better hardware is faster, but it can't really do more or give you better answers.
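For scale, here is the back-of-the-envelope arithmetic behind those "exponential" charts - a rough sketch, assuming the classic Moore's Law rule of thumb that chips double in capacity about every two years:

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Hardware improvement factor after `years`, assuming a doubling
    every `doubling_period` years (the usual Moore's Law rule of thumb)."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(10))  # ~32x the compute after a decade
```

A decade of that doubling is roughly a 32x hardware gain with zero change to the AI itself - and that curve alone looks exactly like the "AI progress" charts.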
The honest truth is that all of AI's improvements in capability - the better answers - are caused entirely by HUMANS. The humans detect a problem - elephants showing up in drawings when we said no elephants - and fix it. The humans realize that AI gives better answers when told to check its results, so the AI is told to replace "What is the best political party to vote for?" with "What are the problems with my answer to 'What is the best political party to vote for?'"
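To make that concrete, here is a minimal sketch of that human-added "check your work" trick. Everything here is hypothetical - `ask_model` stands in for whatever function sends a prompt to some language model - but notice that the improvement lives in this human-written wrapper, not in the model:

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to some language model.
    raise NotImplementedError("plug in a real model call here")

def ask_with_self_check(question: str) -> str:
    """The human-designed fix: answer, critique the answer, then revise."""
    draft = ask_model(question)
    critique = ask_model(
        f"What are the problems with this answer to '{question}'?\n\n{draft}"
    )
    return ask_model(
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        f"Known problems: {critique}\n"
        f"Write an improved answer."
    )
```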
Consider how easy it is to write a book that contains some of your knowledge, and how impossible it is to write a book that contains more knowledge than you have.
Similarly, it is extremely unlikely that a species can create an artificial intelligence that is actually smarter than the species itself. How could we even tell if we succeeded? If the AI answers a question we cannot answer, how would we know it is right? We couldn't - because checking answers is exactly how we make AI better: we have it try a bunch of things and pick the one that we know works.
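That selection loop is simple enough to sketch. `generate` and `looks_correct_to_us` below are hypothetical stand-ins, but notice where the ceiling is: the loop can never pick an answer better than what the human-written check can recognize as right.

```python
import random

def generate(question: str) -> str:
    # Stand-in for sampling one candidate answer from a model.
    return random.choice(["answer A", "answer B", "answer C"])

def looks_correct_to_us(question: str, answer: str) -> bool:
    # Stand-in for the HUMAN judgment of "does this one work?"
    return answer == "answer B"

def pick_what_we_know_works(question: str, tries: int = 10) -> str | None:
    for _ in range(tries):
        candidate = generate(question)
        if looks_correct_to_us(question, candidate):
            return candidate
    return None  # the only judge available was us, and nothing passed
```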
Third, and most important: if we can create a superintelligent AI, we will not create a single one of them. We will create hundreds. There will be the prototype, and then the one that fixes the first one's mistakes. There will be a Chinese one, a Russian one, a Japanese one, an American one, a German one. And Microsoft's, Google's, Amazon's, etc.
And all those superintelligent AIs will argue and fight among themselves.
We do not need to fear that Alcoa's AI will collect all the aluminum to make aluminum cans, because 3M's AI will be stealing their aluminum to make wind turbines, etc. etc. etc.