People are often wrong too...
The problem is that we are used to machines doing tasks that machines are good at - e.g. for a predefined math calculation, a computer is expected to get the correct answer reliably and quickly, every time.
The problems being targeted by LLMs are not so well defined, so errors can be made whether the work is done by a human or an LLM. But people are used to the traditional problems solved by computers and expect the same reliability.
Instead of assuming an LLM is a reliable machine that follows a rigid process and produces correct output every time, treat it like a human employee and subject its results to the same processes - i.e. review, quality control, etc. Of course, then you won't get the massive cost savings you imagined from replacing employees with machines.
Good use of LLMs will typically augment existing skilled employees, not replace them.