Submission: Ethical trap: robot paralysed by choice of who to save (newscientist.com)

wabrandsma writes: From New Scientist:

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is

In an experiment, Alan Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov's fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole.

Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, "my answer is: I have no idea".

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.



Comments:
  • by Falos ( 2905315 )
    > robot makes choices for itself
    The article clearly goes over how uniform and reliable code execution is, as it has been since the first logic gate. Robots and cars don't fret over jack shit; they do exactly what they're told, and that can include weighting the cost of evaluation time.

    It's the writers that are going to struggle with drawing boundaries and survival priority.
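The commenter's point can be illustrated with a minimal sketch (hypothetical code, not Winfield's actual controller; all names are invented). A fixed rule such as "rescue the proxy with the least time before it reaches the hole, break ties by id" makes the same choice every time it sees the same inputs; there is no dithering, only whatever the programmed tie-break says:

```python
# Hypothetical decision rule: pick the most urgent proxy, deterministically.
# proxies: list of (id, position, speed_toward_hole); hole_pos: hole location.
def pick_rescue_target(proxies, hole_pos):
    def time_to_hole(p):
        _id, pos, speed = p
        # a stationary proxy never falls in, so it is never urgent
        return abs(pos - hole_pos) / speed if speed > 0 else float("inf")
    # deterministic: sort key is (urgency, id) -- same inputs, same choice
    return min(proxies, key=lambda p: (time_to_hole(p), p[0]))

# two equally urgent proxies: the tie-break always picks the lower id
proxies = [(1, -4.0, 1.0), (2, 4.0, 1.0)]
print(pick_rescue_target(proxies, 0.0))
```

Whether that tie-break (and the time the controller spends evaluating it) is the *right* rule is exactly the boundary-drawing problem the commenter says falls to the programmers.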

