One other interesting takeaway for me was the range of what it might mean to be a superintelligence. The author being interviewed said there are various dimensions to superintelligence, such as speed of processing, complexity of processing, size of "memory" or available database of information, and concurrency (the ability to process independent events simultaneously).
Not every superintelligence would have all of these dimensions maximized, either, which could be part of the problem of failing to recognize when one has been created: because it doesn't seem omniscient, we may fail to see its potential.
I think it's also interesting how we default to science fiction ideas like Terminator and other "machines run amok" scenarios where the outcome is physical violence against humans.
Some of the outcomes could be more subtle, and some of the biases could be built in by humans rather than being part of some warped machine volition or intuition.
One everyday example might be the advanced software banks use to manage their finances, linking program trading, risk and portfolio analysis, market data, etc. The amount of information big banks have to process daily is massive, and while humans make the important decisions, they rely heavily on machine analytics, suggested actions, and modeled outcomes to make those decisions.
The system may make money, but is it biased only toward firm profit, or could it have other, unintended effects on capital? Each big bank may have its own unique system, but because all of these systems draw on a lot of shared data (prices, market activity, known holdings of others, common risk models, etc.), could they form a feedback loop among themselves that actually drives markets? Could this unintentional "network" of similar systems be something like a superintelligence?
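To make the feedback-loop worry concrete, here is a deliberately crude toy model (all names and numbers are invented for illustration, not drawn from any real trading system): several "systems" all see the same price series and all follow the same momentum rule, and their combined orders feed back into the next price move.

```python
import random

def simulate(n_systems, shared_weight, steps=50, seed=1):
    """Toy price path with n_systems identical momentum traders.

    Each system buys after the price rose and sells after it fell.
    Because they all see the same shared data, they all trade the same
    way, and `shared_weight` scales how much their combined order flow
    pushes the next price. Purely illustrative.
    """
    rng = random.Random(seed)
    prices = [100.0]
    for _ in range(steps):
        momentum = prices[-1] - prices[-2] if len(prices) > 1 else 0.0
        # Identical rule at every firm -> perfectly correlated orders.
        order = n_systems * (1 if momentum > 0 else -1 if momentum < 0 else 0)
        prices.append(prices[-1] + rng.gauss(0, 1) + shared_weight * order)
    return prices

# Same random market shocks in both runs; only the feedback differs.
quiet = simulate(n_systems=5, shared_weight=0.0)   # no feedback
herded = simulate(n_systems=5, shared_weight=0.5)  # shared-data feedback
swing = lambda p: max(p) - min(p)
```

With the feedback turned on, the price range (`swing`) blows up relative to the no-feedback run, even though the underlying random shocks are identical: no single system "drives the market," but the network of similar systems does.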
One question I sometimes ask myself: what if wealth inequality wasn't a conspiracy of some kind (by the rich, by politicians, some combination, etc) but instead something of a "defect" in a higher-order financial-system intelligence? Or maybe not even a defect, but a designed-in bias in the system's base instructions (e.g., "make the bank profitable") that produces financial outcomes which tend to make the rich richer? What if the natural outcome of markets were greater wealth equality, but because markets are heavily influenced by a primitive machine intelligence, we get inequality instead? How could we know this isn't true?
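One way to see how a "make the bank profitable" bias could quietly compound into inequality is a toy allocation rule (again, the rule, the accounts, and the rates are all hypothetical): every account earns a base return, but larger accounts earn a little extra, standing in for fee breaks and better products for big clients.

```python
def gini(wealth):
    """Gini coefficient of a wealth list (0 = perfect equality)."""
    w = sorted(wealth)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n

def grow(wealth, years, base=0.03, tilt=0.04):
    """Toy rule: everyone earns `base`, and each account earns an extra
    return proportional to its share of total wealth -- a stand-in for
    a profit-seeking bias toward big clients. Purely illustrative.
    """
    w = list(wealth)
    for _ in range(years):
        total = sum(w)
        w = [x * (1 + base + tilt * (x / total)) for x in w]
    return w

start = [10, 20, 30, 40, 100]
after = grow(start, years=30)
```

Nothing in the rule says "make the rich richer"; it just rewards size slightly, yet the Gini coefficient rises year after year. That's the shape of the question: a bias this small would be hard to spot from inside the system.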
I think these are the more interesting challenges of machine superintelligence, because they grow out of the things we already rely on current (and limited) machine intelligence to do for us. Will we even recognize when these systems get it wrong, and how will we know?