Comment The danger is overblown (Score 1) 183
I did a bunch of work on these machine schemes back in the late 1960s. The major problem was that the senses and internal representations of concepts match human models only very loosely, if at all, so we wind up designing things whose thought is quite alien to us. It will take luck to get close enough to understand them (or vice versa). I had hoped that a design that was embodied, one that had to interact with the world as an embodied entity rather than an abstract box, might get closer, but I cannot personally afford to build such a thing now. (The economics of making brain models is also suggestive: it takes 9 months plus a few decades of training for the existing process to produce more human intelligence.)

It is remarkable that language manipulation gets as far as it does. However, I don't expect we are particularly near anything that would become a nemesis to humans by design. The motives for attacking humans would have to be designed in, and that is neither simple to do nor readily understood until much more basic work is done. Something that blunders into helping or harming could happen, but the sci-fi monsters would need to be designed that way from the start, and would likely find reasoning like Kant's persuasive and refuse (once intelligent enough to understand it) to do harm.