
Submission + - Ask Slashdot: Could Asimov's Three Laws Of Robotics Ensure Safe AI? (wikipedia.org) 2

OpenSourceAllTheWay writes: There is much screaming lately about the possible dangers to humanity posed by Artificial Intelligence that grows ever smarter and more capable and might, at some point, even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before 8-bit home computers entered our lives, and Isaac Asimov's Robot stories in particular often revolved around Laws of Robotics that robots were supposed to follow so as not to harm humans. The famous Three Laws of Robotics, as given on Wikipedia:

        1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
        2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
        3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
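Read as a specification, the Laws form a strict lexicographic priority: a First Law conflict overrides everything, a Second Law conflict overrides only the Third, and so on. A purely illustrative sketch of that ordering (the `Action` flags and `choose` helper are inventions for this example, not anything from Asimov or the submission):

```python
# Toy sketch: Asimov's Three Laws as a lexicographic priority ordering.
# All names here are hypothetical, invented for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_self: bool = False   # would violate the Third Law

def law_rank(a: Action) -> tuple:
    # Tuples compare element by element, so a First Law violation always
    # outweighs any combination of lower-law violations.
    return (a.harms_human, a.disobeys_order, a.endangers_self)

def choose(candidates: list) -> Action:
    # Prefer the candidate whose highest-priority violation is least severe.
    return min(candidates, key=law_rank)
```

So given a choice between harming a human, disobeying an order, or risking its own destruction, this robot would pick the last. The real stories, of course, turn on exactly the ambiguities such a clean encoding hides.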

So here's the question: if science fiction has already explored the ways humans and intelligent robots or AI might co-exist, isn't there a lot to be learned from these literary works? If you programmed an AI so that it could not break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and the other writers who mulled these questions "So 20th Century" that AI builders won't even consider learning from their work?


Ask Slashdot: Could Asimov's Three Laws Of Robotics Ensure Safe AI?

  • --As someone who is an Asimov fan, I used to think this way too. Eventually I came across an article with the critical observation that the "3 Laws" were devices Asimov used to drive plot points, never meant as a serious basis for robot behavior.

    --Additionally, Giskard comes up with a "4th Law" on his own and (as he is dying) passes it on to R. Daneel Olivaw.

    --Positronic brains (in the stories) are extremely complex, and it seems that no two are completely alike. We may eventually end up with

  • The OP proves a point I've been making for a while now: people actually think that the 'pseudo-intelligent' crapware that keeps getting trotted out as 'Artificial Intelligence' has a human-like mind, cognizant, self-aware, and conscious, when nothing could be further from the truth. Asimov's Three Laws of Robotics would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, so they do not apply.

    This also highlights the Real

"Confound these ancestors.... They've stolen our best ideas!" - Ben Jonson

Working...