Comment Re:Tried it a couple of times, not impressed at al (Score 1) 275
Instead of admitting it doesn't know something, it'll spout a likely-sounding word salad. When you call it out on the results, it'll admit it was wrong. Well, if it knows it was wrong afterwards, why didn't it know it was wrong beforehand?
It doesn't know it's wrong afterwards, just as it didn't know it was wrong before. It's just that the model's training suggests to the generative process that apologising is the most likely thing to say after being called out. The architecture of large language models isn't really built for trustworthiness, just for plausible-sounding responses.
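To put the same point another way, here's a minimal toy sketch (Python, with made-up contexts and probabilities, nothing like a real transformer): the "model" only ranks continuations by likelihood, so an apology after a callout is just the statistically probable reply, not a sign it knows anything was wrong.

```python
import random

# A toy, hand-written "language model": for a given context it only knows
# which continuations are statistically likely, not which are true.
# (All contexts, replies, and probabilities here are invented for illustration.)
NEXT_REPLY_PROBS = {
    "What year did X happen?": {
        "It happened in 1987.": 0.6,   # confident-sounding, possibly wrong
        "It happened in 1992.": 0.3,
        "I don't know.": 0.1,          # rarely the most likely continuation
    },
    "That's wrong.": {
        "You're right, I apologise for the error.": 0.8,  # apology is just the likely reply
        "No, I'm certain.": 0.2,
    },
}

def generate(context: str) -> str:
    """Sample a continuation weighted only by likelihood; truth never enters into it."""
    replies = NEXT_REPLY_PROBS[context]
    choices, weights = zip(*replies.items())
    return random.choices(choices, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(generate("What year did X happen?"))  # a plausible-sounding answer, right or wrong
    print(generate("That's wrong."))            # usually an apology, because that's what tends to follow
```

Nothing in that sketch has a concept of "was my previous answer correct"; it just emits whatever tends to come next, which is the whole trustworthiness problem in miniature.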