The assumption that we humans will be able to develop AI that can then create new and better technology is a logical fallacy.
How is it a logical fallacy? Isn't it just an empirical question as yet unanswered?
For this, the AI would have to become sentient; otherwise it can only optimize existing processes and technologies, never create new ones.
Why? The fact that no one has yet invented a robot-designing robot is no guarantee that no one ever will. I work with neuroscientists who build animats and cultured neural networks that interface with computers. The latter have been shown to learn. Sure, those examples are to robot-designing robots as protozoa are to humans, but that's the point. Protozoa evolved, and here we are. The real logical fallacy is assuming that, because we claim to be sentient now that we have evolved, the eventual robot-designing robot will make the same claim.