Comment Re:By the same logic (Score 4, Insightful) 335
Agreed. The authors set up an almost impossibly complex ethical dilemma that would freeze even a human brain into probable inaction, let alone a computer one, and then claim, "See? Because a computer can't guarantee the correct outcome, we can therefore never let a computer make that decision." It seems to be almost the very definition of a straw man to me.
The entire exercise seems to be a deliberate attempt to reach this predetermined conclusion, which they helpfully spell out in case anyone missed the not-so-subtle lead: "Robots should not be designed solely or primarily to kill or harm humans."
I'm in no hurry to turn loose an army of armed robots either, but "proving" that an algorithm can't make a fuzzy decision correctly 100% of the time? Well, yeah, no shit. A human sure as hell can't either. But what if the computer can do it far more accurately in 99% of cases, because it doesn't have all those pesky "I'm fearing for my life and hopped up on adrenaline, so I'm going to shoot now and think later" reflexes of a human?