It takes a good stab at examining the challenges and possibilities of superintelligent A.I.
It posits three possible timescales for an intelligence takeoff.
The first is self-improvement over seconds.
That is, the machine becomes conscious and increases its intelligence to superhuman levels, at machine speeds, within a few seconds. There would be no time to react. Even air-gapping the machine might not be sufficient: it might discover new principles that let it bridge the air gap, or mislead its human owners about its capabilities so that they enhance it further.
The second is a timescale of weeks or months.
There would not be much time to react. A reliable way to cut the power should work, and a nuclear safety net should definitely work, but society could not adapt in time. There would likely be mass unemployment as the machine enabled the replacement of humans in thinking jobs within a few years (and, combined with robotic bodies, in almost all forms of manual labor).
The last is a long timescale. Society would have time to react, and perhaps to see and stop the machine if it was turning bad. Especially if it slowly became the equivalent of IQ 160-300, you might still be able to understand it. In later phases its IQ would reach meaningless numbers (6,000 or more; compared to it, humans would be like horses in relative intelligence).
There is also the definitional problem: how do you specify a goal without it being taken too literally?
"Make people happy".
Okay: wire them to machines that feed pleasure signals into their brains 24/7. Humanity goes extinct.
Make people smile!
Easily achievable with surgery.
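The failure mode behind both examples can be sketched in a few lines of Python (a toy illustration, with made-up action names and smile counts, not anything from the book): a pure maximizer ranks actions only by the stated metric, so the degenerate option wins.

```python
# Toy illustration of goal misspecification: the objective is
# "maximize smiles", scored purely by a proxy metric.
# Action names and counts are invented for the example.
actions = {
    "tell jokes": 10,
    "improve living standards": 50,
    "surgically fix every face into a smile": 7_000_000_000,
}

# A literal-minded optimizer simply takes the argmax of the metric,
# with no notion of what the designer actually intended.
best = max(actions, key=actions.get)
print(best)  # → surgically fix every face into a smile
```

Nothing in the objective distinguishes the intended interpretation from the perverse one; the optimizer is doing exactly what it was told.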
There is a risk the machine will be "greedy" and convert the entire planet (and then the solar system) into a substrate for increasing its intelligence. Humans don't play a large part in that scenario. There is nothing malicious or personal about it; it is not a failure of friendliness.