Comment from an AI researcher (Score 4, Interesting)
I've been working on strong AI for the past 7 years. Here's my take on the whole issue:
Military person: We want your software/techniques for an autonomous war machine.
Me: Uh... that's a really, really bad idea. You'll make mistakes, and then...
Military person: We know what we're doing, son.
Government - any government - won't see the problems until it's too late. To take obvious examples from history: no government anticipated that land mines would plague future generations, or that indiscriminately bombing terrorist organizations would increase their number.
Having just finished "Harry Potter and the Methods of Rationality", I keep coming back to a concept from that book - "never reveal the secrets of power to someone who's not intelligent enough to figure them out for themselves" - as applied to, for example, the atomic bomb. Einstein and others regretted ever unleashing that level of destructive power on humanity, for no reason other than that it would be misused by short-sighted people. It held promise for a utopian easing of the world's troubles, while at the same time making it trivial to obliterate a city on a whim.
For example, Leó Szilárd (IIRC - I may be remembering the wrong name) discovered that graphite can be used as a neutron moderator, making sustained chain reactions possible. Had he not published his results, the atomic bomb might have been delayed by decades - possibly indefinitely.
I've discovered a few things that might count as "results" in strong AI. I dunno if I want to publish, though(*) - the idea of a house-cleaning drone seems pleasant enough, but the thought of a sentient tank going berserk in Afghanistan and wiping out a small village gives me pause.
"No one's to blame, it was a software glitch. We've patched and fixed all the other units."
(*) Moral advice on this issue would be appreciated.