Comment: Re:Killing by proxy, "collateral damage" (Score 1) 373

by svkal (#18736535) Attached to: New Laws of Robotics Proposed for US Kill-Bots

Here's a bigger, related question: a robot is (a) not a person and (b) maybe more durable. A human soldier is allowed to fire in defense. Picture a homeowner in wartime, guarding his house. A robot trundles by, x-rays the house, sees the weapon, charges in. He sees it heading for him, freaks out, fires at it. How can the robot possibly be justified in killing him? Even if he represents a threat, he's only threatening a machine!

That's a very interesting point. Even against opponents that are openly and completely hostile to it, the robot has no obvious moral right to self-defense, because it's just a thing.

Nonlethal weapons, obvious a solution as they may seem, are probably not one, morally speaking: most methods of subduing someone, when applied without human awareness and judgment, carry some chance of accidentally maiming or killing the target under the wrong circumstances.

Mr. Canning's laws could actually be interpreted as a solution, provided there's also a strict prohibition on all "collateral" damage to living beings. Applied to the situation above: the robot may want to destroy an AK-47, but because body-like heat is emanating from somewhere close to it and the robot has no reliable way of disabling the weapon without hurting the human, it is unable to act. Of course, this makes such robots pretty useless in war unless the other side has already acquired them, and for that reason I fear that less ethical robots will be built instead.

These theoretical discussions do, however, present a very idealized picture of war. Even when only human soldiers are involved, people, armed and unarmed, are killed whom everyone could agree in retrospect should not have been. Sadly, adding robotic soldiers into the mix is very unlikely to make war significantly more humane.
