However, as the summary points out, there's also no known algorithm that can do the same for humans. Humans usually behave less consistently than software, so chances are testing humans to an acceptable degree of certainty will be harder than testing robots to the same degree.
This right here is why I would never have funded this investigation. It's idiotic by design: there is no good criterion for deciding who needs to live and die, which makes it a highly subjective question. We argue this in court, and we frequently don't agree on the verdict.
Killer robots are going to happen, and we're going to trust them every bit as much as any given human being. And they will have advantages:
- Scope can be limited: a robot may be able to kill, but it may not be able to move. We'll set it up and assume it will kill anything in a given area, and maybe program it to try not to kill things that meet certain criteria. But we won't rely on that
- Killer robots can be turned off. Killer humans cannot, and some have a hard time going from killer mode back to civilian mode
- Killer robots will be far less likely to cause collateral damage. They won't forget that bullets go through walls; humans raised on movies somehow think a bit of drywall or plaster is impervious.
There are numerous advantages. What we probably will never build, on purpose, is Terminator bots that roam the streets looking for "bad guys".