Ask Slashdot: Could Asimov's Three Laws Of Robotics Ensure Safe AI? (wikipedia.org)
OpenSourceAllTheWay writes: There is much screaming lately about the possible dangers to humanity posed by Artificial Intelligence that keeps getting smarter and more capable and might, at some point, even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before 8-bit home computers entered our lives, and Isaac Asimov's Robot stories in particular often revolved around Laws of Robotics that robots were supposed to follow so as not to harm humans. The famous Three Laws of Robotics, as given on Wikipedia:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
So here's the question — if science-fiction has already explored the issue of humans and intelligent robots or AI co-existing in various ways, isn't there a lot to be learned from these literary works? If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions "So 20th Century" that AI builders won't even consider learning from their work?
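To make the question concrete, here is a minimal, purely hypothetical sketch (every class, field, and function name below is invented for illustration, not taken from the submission) of what "programming an AI not to be able to break the Laws" might look like as a literal hard filter on proposed actions. Note that it simply assumes the hard part, namely knowing whether a given action harms a human, as a ready-made boolean input.

```python
# Toy sketch only: all names here are hypothetical. The Laws are encoded
# as a permission check over a proposed action whose consequences are
# already known. In reality, predicting those consequences is the problem.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    harms_human: bool        # First Law: would this action injure a human?
    ordered_by_human: bool   # Second Law: did a human order it?
    endangers_robot: bool    # Third Law: does it risk the robot's existence?

def permitted(action: ProposedAction) -> bool:
    """Allow an action only if it is consistent with the Three Laws."""
    # First Law dominates everything: no action that harms a human.
    if action.harms_human:
        return False
    # Second Law: harmless actions ordered by a human are allowed.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise the robot should not needlessly endanger itself.
    return not action.endangers_robot

if __name__ == "__main__":
    print(permitted(ProposedAction("fetch coffee", False, True, False)))    # True
    print(permitted(ProposedAction("shove bystander", True, True, False)))  # False
```

Whether such a filter gives "reasonable confidence" depends entirely on how those booleans get decided for real-world actions, which is exactly where the submission's question (and Asimov's plots) gets interesting.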
Not so obvious answer (Score:2)
--As someone who is an Asimov fan and used to think this way, I eventually came across an article with the critical observation that the "Three Laws" were devices Asimov used to drive plot points, not something to be seriously considered as a basis for robot behavior.
--Additionally, Giskard comes up with an additional law on his own (the "Zeroth Law," which places humanity as a whole above individual humans) and, as he is dying, passes it on to R. Daneel Olivaw.
--Positronic brains (in the stories) are extremely complex and it seems that no two are completely alike. We may eventually end up with
Rampant ignorance (Score:2)
This also highlights the Real