It seems that most human "reasoning" is rationalization after the fact. Humans are not very good at formal reasoning. Yes, we can be trained to do it, but most never receive that training, and even those who do make most of their everyday decisions in a "quick and dirty" way. The quick and dirty way is "subconscious" (though I think that term is going out of favor). Try 'Thinking, Fast and Slow' by Kahneman. It is a "popular science" book, but the author is an authority. In any case, we don't actually have conscious awareness of most of the processes that make up our reasoning. Most of what we pass off as reasoning is making up a plausible story for why the decision we made is correct. I see no reason why an LLM can't do that.
On the other hand, it turns out computers are very good at formal logic and have been for a long time. It is where early waves of AI enthusiasm focused. Look up "theorem prover". What they have not been good at is the fuzzy heuristics of human intelligence.
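To get a taste of how mechanical this kind of reasoning is today, here is a minimal sketch using Z3, an SMT solver in the same family as classic theorem provers (assuming the z3-solver Python package is installed):

    from z3 import Ints, Implies, And, prove

    # Ask the solver to prove a small arithmetic theorem:
    # if x > 0 and y > 0, then x + y > 0.
    x, y = Ints('x y')
    prove(Implies(And(x > 0, y > 0), x + y > 0))  # prints "proved"

The solver establishes the claim by showing its negation is unsatisfiable; no fuzzy heuristics involved.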
An interesting question is whether you can gain the useful characteristics of human intelligence and retain the correctness of formal logic. I don't know, but it seems that the limitations of the formal logic approach are rooted in the practical impossibility of formally describing the world (ontology). See for example Cyc https://en.wikipedia.org/wiki/.... Classically, anything follows from a contradiction, so if there is a contradiction anywhere in your ontology, then your formal reasoning is at best unreliable (see the sketch below). Perhaps you can partition your system, so your neural network can talk to various tools: a calculator, a theorem prover, etc., much as a human can use these tools.
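On the "anything follows from a contradiction" point, a tiny illustration (again just a sketch assuming z3-solver): once the premises contain both p and not-p, they entail an arbitrary, unrelated claim q.

    from z3 import Bools, Solver, Not, unsat

    p, q = Bools('p q')

    s = Solver()
    s.add(p, Not(p))   # a contradiction buried somewhere in the "ontology"
    s.add(Not(q))      # try to refute an unrelated claim q
    # unsat means the contradictory premises entail q -- anything follows.
    print(s.check() == unsat)  # True

A single bad axiom anywhere quietly licenses every conclusion, which is why a huge hand-built ontology is so fragile.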