Comment Re:Asimov was not naive. (Score 1) 146
On the contrary, once robots are intelligent they will essentially be reading your mind, because that's precisely what we *have* to train them to do. We can't encode "constraints", much less "intentions", as general laws; it's too difficult. What we can do instead is encode them as a massive, crowdsourced set of (order in plain English, intended behavior) pairs and train machines to behave correctly in all the virtual situations listed. Provided we hold out a sizable set of these input/output pairs for testing, a machine that behaves in the intended fashion in all test situations (i.e., situations where you never explicitly showed it the intended behavior) is "reading your mind" with very high probability. The list need not be exhaustive: if the machine manages to behave properly in corner cases xyz that it was never shown before, it's pretty damn likely it will also behave properly in corner cases abc.
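To make the hold-out idea concrete, here's a minimal sketch in Python. The dataset, function names, and the toy "oracle" model are all hypothetical; the point is just the mechanics of splitting (order, intended behavior) pairs and scoring a machine only on situations it was never trained on.

```python
import random

# Hypothetical crowdsourced dataset of (order in plain English, intended behavior)
# pairs. A real set would be enormous; this toy list just shows the mechanics.
pairs = [
    ("hand me the cup", "pick_up_cup"),
    ("hand me the mug", "pick_up_cup"),
    ("open the door", "open_door"),
    ("open the window", "open_window"),
    ("turn off the light", "toggle_light"),
    ("switch off the lamp", "toggle_light"),
]

def split_holdout(pairs, test_fraction=0.5, seed=0):
    """Hold out a sizable set of input/output pairs for testing only."""
    rng = random.Random(seed)
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]  # (training set, held-out test set)

def holdout_accuracy(model, test_set):
    """Fraction of *unseen* situations where the machine behaves as intended."""
    correct = sum(1 for order, intended in test_set if model(order) == intended)
    return correct / len(test_set)

train_set, test_set = split_holdout(pairs)
```

A machine would be trained on `train_set` alone; a `holdout_accuracy` near 1.0 on `test_set` is what licenses the "it's reading your mind" claim, since those behaviors were never demonstrated to it.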
So, for instance, "be kind to people" would not be encoded as any kind of dictionary definition. It would be encoded as a large set of examples of kindness, each example vetted by as many humans as possible. Any machine that is "kind to people" in the unseen test situations is then assumed to "get it" with very high probability. The whole challenge, of course, is to figure out how to get any machine at all to pass the tests with fewer than a gazillion training examples and a training time under a billion years.