That, of course, is a real problem. Currently, AI only knows what it is told. This is a systemic weakness that can't be solved with more words; it requires "direct experience". Robots will eventually have that; ChatBots probably won't. ChatBots seem mired in a nest of hallucinations. (That is, when people write, they aren't conveying their experiences directly, only an abstraction of them. I don't think there's any way around that.)
The problem is, AIs don't have the same motives that people do, and they don't really have access to those motives. All they have is words, which bear a relationship to those motives, but it's often a pretty abstract relationship. That's why protein folding is easier than personal advice: folding is fully specified by the data, while good advice depends on motives the AI can only infer, abstractly, from words.