There are interesting counterpoints to functionalism that aren't just "idiocy". Searle's Chinese Room is the example I'm most familiar with offhand. I believe it was proposed in the context of "consciousness" rather than "reasoning", but I'd say it's still relevant here.
To use your bridge analogy: if I ask my "AI" how to build a bridge that spans X, can carry load Y, and can handle wind shear Z (and so on, for all relevant parameters), and it provides an answer from a (mind-bogglingly large) lookup table containing instructions for building a bridge under every conceivable combination of parameters, then yes, I would say that it did "simulate" the math. The math/reasoning was done by whoever/whatever created the lookup table that this "AI" relies on - not the "AI" itself - even if its output is functionally indistinguishable from what you'd get from a reasoning agent.
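For concreteness, here's a minimal sketch of the distinction I'm drawing (in Python, with made-up parameter names, numbers, and a toy formula that has nothing to do with real bridge engineering): the lookup-table "AI" can only retrieve answers that someone else already worked out, while even a crude formula generalizes to parameters nobody tabulated in advance.

    # Toy illustration only; every number and formula here is invented.
    # Someone (not the "AI") did the math ahead of time and stored the results.
    PRECOMPUTED_DESIGNS = {
        # (span_m, load_t, wind_shear): recommended girder depth in metres
        (100, 50, 1.2): 4.1,
        (200, 80, 1.5): 7.9,
    }

    def lookup_ai(span_m, load_t, wind_shear):
        """Answers purely by retrieval; the reasoning happened when the table was built."""
        return PRECOMPUTED_DESIGNS[(span_m, load_t, wind_shear)]

    def reasoning_agent(span_m, load_t, wind_shear):
        """Derives an answer from the parameters (formula invented for the example)."""
        return 0.03 * span_m + 0.01 * load_t + 0.5 * wind_shear

    # Identical-looking output on inputs the table happens to cover...
    print(lookup_ai(100, 50, 1.2))        # 4.1
    print(reasoning_agent(100, 50, 1.2))  # ~4.1, same answer

    # ...but only one of them can handle a bridge nobody tabulated.
    print(reasoning_agent(137, 62, 1.3))  # works, ~5.38
    # lookup_ai(137, 62, 1.3) would raise KeyError

The KeyError on the untabulated bridge is the whole point: the table-backed "AI" never did any math, it just fetched someone else's.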
I agree with your central point: LLMs can reason. But in my opinion, that conclusion rests on more than the fact that their output looks like reasoning. It comes from, for example, the fact that you could conceivably train an LLM on a bridge-building textbook, ask it how to build a bridge that wasn't explicitly described in its training set, and get a correct answer.
This ties into a joke I heard in university:
A professor assigns a problem set including the question: "What's the area of a circle with a radius of 2 cm?".
The next day, a math major in the course comes to lecture looking exhausted, complaining: "That was terrible! I was up half the night! I had to re-derive half of calculus, work out that the area of a circle is pi*r^2, and finally show that the circle has an area of 4*pi cm^2."
The engineering student sitting next to him says: "Really? It took me 2 minutes. I just went to my 'properties of a circle with a radius of 2 cm' table and it was right there..."