Right answer, wrong reasons. (Score 1)
We are all artificial intelligences. What we produce is based on our experiences. There are those who argue that AI programs have no soul or divine spark, but in all probability they are not that different from us. The difference probably lies in how our training data was curated. We have had lifetimes of slowly learning what counts as 'moral behaviour' from those around us. The AI lawyer that makes up references is not 'lying' as such; it just produces the answers it thinks you want to see.
Some Pentagon people would love to use an AI program. It looks smart. It will tell you to attack if that's what you want to hear. It can be blamed if that advice turns out to be wrong. The solution is to rule that, in law, an AI program is not treated as an intelligence. Those who ask it questions and act on its output should be held responsible for the consequences. This seems to be the direction we are heading.
Blaming the user does not exonerate the AI system. There is probably some duty on the developers to prevent the system from causing harm, but that is harder to codify.
One day we will have to deal with the attitude that AI is not 'like real people' and 'should have no rights'. That has an unpleasant but familiar feel to it.