ChatGPT has quite a bit of intelligence. It can interpret and use idioms. It can map new words to existing concepts. It can cite the correct mathematical formula for most things. It was trained on a huge corpus. The problem here is that it made up results and asserted them confidently without double-checking them. It's been trained to BS. It hasn't been trained not to BS.
If it had been trained not to BS, it would have a valid chain of reasoning for anything it said before it said it, and if pressed it would supply that chain. It appeared to do that, but the facts it cited were made up, not drawn from a database of known facts. Internally, it would have been easy to check that the answer was BS, but it presented the answer confidently anyway. I don't know whether it internally verified that it was BS and confidently lied, or didn't bother to verify at all.
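To make the idea concrete, here's a minimal sketch of that "verify before asserting" loop. Everything in it is hypothetical: the fact store, the claim extractor, and the checking logic are stand-ins for illustration, not anything ChatGPT actually does internally.

```python
# Hypothetical sketch: refuse to assert a draft answer unless every claim
# in it can be matched against a store of known facts. The fact store and
# the claim extractor are toy stand-ins, not real model internals.

KNOWN_FACTS = {
    "water boils at 100 c at sea level",
    "2 + 2 = 4",
}

def extract_claims(answer: str) -> list[str]:
    """Toy claim extractor: treat each sentence as one claim."""
    return [s.strip().lower() for s in answer.split(".") if s.strip()]

def verified_answer(draft: str) -> str:
    """Assert the draft only if every claim checks out; otherwise hedge."""
    unsupported = [c for c in extract_claims(draft) if c not in KNOWN_FACTS]
    if unsupported:
        return f"I can't verify: {unsupported}. I won't assert this."
    return draft

print(verified_answer("Water boils at 100 C at sea level."))
print(verified_answer("The Riemann hypothesis was proved in 1998."))
```

The point of the sketch is the control flow, not the lookup: the second call returns a hedge instead of a confident fabrication, which is exactly the behavior the model didn't exhibit.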