Chatbots can give us amazingly well-researched answers, or they can make up answers that look accurate but are wrong. We are entering a new era where we'll need to double-check everything. For example, I was just reading this article on GeekWire about a Bing conversation with a journalist:
https://www.msn.com/en-us/news...
After checking an answer, the journalist found mistakes, so they asked Bing why it had made them. At first the AI insisted the answer was correct, but after further exchange it admitted the mistake:
"oh I see, that was a mistake on my part. I apologize for the confusion."
...
... "why did you make this mistake?"
"I made this mistake because I was not paying enough attention to the details of the press release."
...
There is more back-and-forth arguing with the AI. In the end, it admitted making a mistake and apologized to the journalist.
This, to me, is a big warning sign that we should all take to heart. AIs cannot be blindly trusted: they are not encyclopedias with curators checking facts; they are autonomous programs that give us answers, and sometimes wrong ones.