Comment Re:Could we "pull the plug" on networked computers (Score 1) 69
Good point on the "benevolent dictator fantasy".
I guess most of these examples from this search fall into some variation of your last point about a "scared fool with a gun" (where for "gun" you can substitute some social process that harms someone, with AI as part of the system):
https://duckduckgo.com/?q=exam...
Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://www.techopedia.com/tim...
Or something else I saw the other day:
"'I was misidentified as shoplifter by facial recognition tech'"
https://www.bbc.co.uk/news/tec...
Or: "10 Nightmare Things AI And Robots Have Done To Humans"
https://www.buzzfeed.com/mikes...
Sure, these are not quite the same as "AI-powered robots shooting everyone". The fact that "AI" of some sort is involved is almost incidental; similar harms have been done for decades by plain algorithms (computer-supported or not), such as those used to redline sections of cities so mortgages would not be issued there.
Of course there are examples of robots killing people with guns, but they are still unusual:
https://theconversation.com/an...
https://www.npr.org/2021/06/01...
https://www.reddit.com/r/Futur...
https://slashdot.org/story/07/...
These automated machine guns have the potential to go wrong, but I have not yet heard that one has:
https://en.wikipedia.org/wiki/...
"The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified"."
But a lot of people can still get hurt by AI acting as a dysfunctional part of a dysfunctional system (as in the first items above).
Is there money to be made by fear mongering? Yes, I have to agree you are right on that.
But is *all* the worry about AI profit-driven fear mongering -- especially the worry about concentration of wealth and power through what people using AI do to other people (as Marshall Brain wrote about in "Robotic Nation" etc.)?
I think there are legitimate (and increasing) concerns similar to, and worse than, the ones, say, James P. Hogan wrote about. Hogan emphasized accidental harm from a system protecting itself -- generally not harm from malice or from social biases implemented, in part intentionally, by humans. Although one ending of a "Giants" book (Entoverse, I think; it has been a long time) does involve an AI in league with the heroes doing unexpected things by providing misleading synthetic information, to humorous effect.
Of course, our lives in the USA have been totally dependent for decades on 1970s-era Soviet "Dead Hand" technology -- technology that US intelligence agencies tried to sabotage with counterfeit chips, so who knows how well it really works. So if you have a nice day today not involving mushroom clouds, you can (in part) thank a 1970s Soviet engineer for safeguarding your life.
https://en.wikipedia.org/wiki/...
It's common to think the US military somehow defends the USA, and while there is some truth to that, it leaves out a bigger part of the picture: much of human survival depends on a multi-party global system working as expected to avoid accidents...
Two other USSR citizens we can thank for our current life in the USA:
https://en.wikipedia.org/wiki/...
"a senior Soviet Naval officer who prevented a Soviet submarine from launching a nuclear torpedo against ships of the United States Navy at a crucial moment in the Cuban Missile Crisis of October 1962. The course of events that would have followed such an action cannot be known, but speculations have been advanced, up to and including global thermonuclear war."
https://en.wikipedia.org/wiki/...
"These missile attack warnings were suspected to be false alarms by Stanislav Petrov, an engineer of the Soviet Air Defence Forces on duty at the command center of the early-warning system. He decided to wait for corroborating evidence--of which none arrived--rather than immediately relaying the warning up the chain of command. This decision is seen as having prevented a retaliatory nuclear strike against the United States and its NATO allies, which would likely have resulted in a full-scale nuclear war. Investigation of the satellite warning system later determined that the system had indeed malfunctioned."
There is even a catchy pop tune related to the last item:
https://en.wikipedia.org/wiki/...
"The English version retains the spirit of the original narrative, but many of the lyrics are translated poetically rather than being directly translated: red helium balloons are casually released by the civilian singer (narrator) with her unnamed friend into the sky and are mistakenly registered by a faulty early warning system as enemy contacts, resulting in panic and eventually nuclear war, with the end of the song near-identical to the end of the original German version."
If we replaced people like Stanislav Petrov and Vasily Arkhipov with AI, would we as a global society be better off?
Here is a professor (Alain Kornhauser) I worked with on AI, robots, and self-driving cars in the second half of the 1980s, commenting recently on how, based on Tesla data, self-driving cars are already safer than human-operated cars by a factor of 10X in many situations:
https://www.youtube.com/watch?...
But one difference is that there is a lot of training data -- on both car accidents and safe driving -- with which to make reliable (or at least better-than-human) self-driving cars. We don't have much training data, thankfully, on avoiding accidental nuclear wars.
In general, AI is a complex, unpredictable thing (especially now), and "simple" seems like a prerequisite for reliability in military, social, and financial systems alike:
https://www.infoq.com/presenta...
"Rich Hickey emphasizes simplicityâ(TM)s virtues over easinessâ(TM), showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."
Given that we as a society are pursuing a path of increasing complexity and related risk (including global war with nukes and bioweapons, but also other risks), that's one reason (among others) that I have advocated for at least part of our society adopting simpler, better-understood, locally-focused, resilient infrastructures (to little success, sigh):
https://pdfernhout.net/princet...
https://pdfernhout.net/sunrise...
https://kurtz-fernhout.com/osc...
https://pdfernhout.net/recogni...
An example of related fears, from my reading too much sci-fi:
https://kurtz-fernhout.com/osc...
"The race is on to make the human world a better (and more resilient) place before one of these overwhelms us:
Autonomous military robots out of control
Nanotechnology virus / gray slime
Ethnically targeted virus
Sterility virus
Computer virus
Asteroid impact
Y2K
Other unforeseen computer failure mode
Global warming / climate change / flooding
Nuclear / biological war
Unexpected economic collapse from Chaos effects
Terrorism w/ unforeseen wide effects
Out of control bureaucracy (1984)
Religious / philosophical warfare
Economic imbalance leading to world war
Arms race leading to world war
Zero-point energy tap out of control
Time-space information system spreading failure effect (Chalker's Zinder Nullifier)
Unforeseen consequences of research (energy, weapons, informational, biological)"
So, AI out of control is just one of those concerns...
So, can I point to multiple examples of AI taking over planets to the harm of their biological inhabitants (outside of sci-fi)? I have to admit the answer is no. But then I can't point to realized examples of accidental global nuclear war either (thankfully, so far).