I'm not sure that intelligence is the bottleneck of technological (or any other) progress that many people seem to believe it is. I think this is the view of people for whom technology is inscrutable. Most progress is predicated on research, where the biggest bottlenecks are time and the adequate application of resources (and convincing people to give you those resources). It's not clear to me how a "super" intelligent AI would immediately change that, unless perhaps people trusted it implicitly, making it better able to allocate resources than we are at present.
In any case, AI makes mistakes, and there's no reason to believe that mistakes diminish as intelligence increases, so trusting AI as above probably wouldn't be prudent. In other words, reliability/trustworthiness is its own thing, its own obstacle, and only tangentially related to intelligence, if at all. There are highly intelligent liars, for example; and conversely, if you give a principled, intelligent person flawed information, they will naturally arrive at flawed conclusions. The quality/trustworthiness of information is just as important as (if not more important than) the capacity to analyze it intelligently, and the process for establishing the quality of information is research, not "being smarter."
Granted, ML algorithms can potentially expedite analysis, but analysis is still limited by the quality of the data, which is not something I believe intelligence can inherently improve. I am open to that possibility; I just haven't seen anyone explain how it might happen (let alone provide a testable explanation). Most people just wave a magic wand and say smarter = faster.