If you read the actual paper, it says that skilled users can tell when the LLM is hallucinating and benefit when it isn't, while less skilled users waste time verifying false positives. What a surprise! Who would have thought!
I don't see any controversy here. An LLM writes code or prose in exactly the same way: it's a tool. You can use it, it may help, it may not. Like any other tool.