
Comment Re:Been there, done that (Score 1) 317

One of the primary arguments is that machines may be better suited for making the nebulous civilian/non-civilian distinction.

How can they if you cannot specify how to make that decision in the rigorous way a machine requires?

Assuming that you are not talking about the low-level machine vision requirements...

How about one of the simplest possible rules: Only engage targets that are actively and visibly firing weapons at me.

Assuming you can recognize whether a combatant is actually firing a weapon at you, this is a pretty direct way of inferring hostile intent. It is not the safest route for a human to follow, but it is one that would probably reduce non-combatant casualties in many situations.
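To make the shape of such a rule concrete, here is a minimal sketch in Python. The Track fields (is_firing, weapon_visible, aimed_at_self) are hypothetical stand-ins for the outputs of a perception system, not any real robot's API; the point is only that the rule itself is trivially simple once those perceptual predicates exist.

from dataclasses import dataclass

@dataclass
class Track:
    is_firing: bool       # hypothetical: muzzle flash / acoustic signature detected
    weapon_visible: bool  # hypothetical: weapon confirmed in the sensor image
    aimed_at_self: bool   # hypothetical: fire is directed at this platform

def may_engage(t: Track) -> bool:
    # Engage only when every condition holds; any uncertainty means "no".
    return t.is_firing and t.weapon_visible and t.aimed_at_self

Note that all of the difficulty is pushed into computing the three booleans, which is exactly the perceptual problem discussed next.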

Concerning the perceptual requirements: while machine perception is not there yet, advances are being made that could provide the perception necessary to implement relatively simple rules such as this (e.g., a variety of DARPA programs for locating snipers, such as the one described here: http://www.sciencedaily.com/releases/2009/03/090324141049.htm)
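As a rough illustration of the kind of signal processing behind acoustic gunshot locators, here is a toy time-difference-of-arrival (TDOA) solver: given arrival times of a muzzle blast at several microphones, it estimates the shooter's position by least squares. The sensor layout and timings are invented for the example; real systems like the DARPA work linked above are far more sophisticated.

import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s

# Four microphones at the corners of a 10 m square (positions invented).
mics = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

# Simulate noiseless arrival times for a shot fired from (25, 40).
true_src = np.array([25.0, 40.0])
t_arrive = np.linalg.norm(mics - true_src, axis=1) / C

def residuals(p):
    # Predicted minus observed arrival-time differences relative to mic 0.
    d = np.linalg.norm(mics - p, axis=1) / C
    return (d - d[0]) - (t_arrive - t_arrive[0])

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print(est)  # should come out close to (25, 40)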

In the situations of interest, it is rarely the case that only the soldier's life is on the line.

I am not certain a situation like that is of primary interest. I would think that battlefield robots would be more suitable for regular combat operations than for asymmetric warfare or peacekeeping operations, a point that is also made in the article. Further, I don't think anyone is claiming that once robots become commonplace in warfare, civilian deaths will be a thing of the past. Varying amounts of civilian casualties are considered acceptable, and in fact deemed ethical, under modern thinking (google 'proportionality' and 'military necessity'). Instead, what is likely is that, for an additional cost, robotic combatants may be used to reduce civilian casualties, much as smart munitions may be used in place of traditional munitions.
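The proportionality idea can be caricatured as a simple inequality: an engagement is permissible only if expected civilian harm is not excessive relative to the anticipated military advantage. The sketch below makes that explicit; the numbers and the excess_ratio threshold are pure inventions, since the real legal test is a judgment call, not arithmetic.

def proportionate(expected_civilian_harm: float,
                  military_advantage: float,
                  excess_ratio: float = 1.0) -> bool:
    # Invented numeric stand-in for a non-numeric legal judgment.
    return expected_civilian_harm <= excess_ratio * military_advantage

# A more precise weapon lowers expected civilian harm for the same
# military advantage, so more engagements pass the same test:
print(proportionate(expected_civilian_harm=0.8, military_advantage=1.0))  # True
print(proportionate(expected_civilian_harm=3.0, military_advantage=1.0))  # False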

For the peacekeeping operations you describe, however, I agree there are significant difficulties, ranging from low-level perception to questions concerning the utility of engaging a particular target (e.g., which is worse: killing a non-combatant by mistake, or allowing several non-combatants to be killed through inaction? Humans hold counterintuitive beliefs about such situations).
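To see why the act/omit dilemma is genuinely hard, it helps to write the expected-cost comparison out, since this is precisely where human intuition diverges from the arithmetic. All probabilities and casualty counts below are invented for illustration.

def expected_deaths(outcomes):
    # outcomes: list of (probability, deaths) pairs
    return sum(p * d for p, d in outcomes)

engage   = expected_deaths([(0.1, 1), (0.9, 0)])  # 10% chance we kill one bystander
hold_off = expected_deaths([(0.3, 3), (0.7, 0)])  # 30% chance attacker kills three

print(engage, hold_off)  # 0.1 vs 0.9: engaging minimizes expected deaths,
                         # yet many people judge the mistaken killing as worse.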

It is certainly arguable, however, that these sorts of problems may not be addressable without the realization of general intelligence.

Comment Re:Been there, done that (Score 1) 317

One of the primary arguments is that machines may be better suited for making the nebulous civilian/non-civilian distinction. In the heat of battle, it may be argued, a human soldier is going to be jumpy and potentially trigger-happy, and rightly so: misclassifying a combatant as a non-combatant may cost the soldier their life. A robot is under no such pressure. It can err on the side of caution and classify all but the most obvious aggressors as non-combatants. The cost of misclassification in that case is merely some hardware, not a human life.
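This asymmetry has a clean decision-theoretic reading: engage only when the expected cost of engaging (possibly killing a civilian) is lower than the expected cost of holding fire (possibly losing the soldier or platform). A short sketch, with invented cost units, shows how removing the soldier's life from the downside of holding fire pushes the engagement threshold toward near-certainty.

def engage_threshold(cost_false_positive: float,
                     cost_false_negative: float) -> float:
    # Minimum P(target is a combatant) at which engaging has lower
    # expected cost: engage iff (1 - p) * C_fp < p * C_fn,
    # i.e. p > C_fp / (C_fp + C_fn).
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Human soldier: holding fire against a real combatant may cost their
# life, so both error costs are comparable (invented units):
print(engage_threshold(cost_false_positive=100, cost_false_negative=100))  # 0.5
# Robot: holding fire risks only hardware, so the bar for engaging rises:
print(engage_threshold(cost_false_positive=100, cost_false_negative=1))    # ~0.99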
