Does it matter whether it actually has an internal strategic model, if output produced by statistical word prediction leads to what a human would characterize as strategy? Just because it works in a fundamentally different way from a human, having been trained on gobs of data about how humans think, including human strategy, doesn't mean you can't characterize the result as strategy or intelligence, no matter what mechanism produced it.
The Turing test is interesting here in its own right: the idea that, in the end, if it fools a human, it can be considered to think. The key insight is that this holds regardless of the internal process, be it binary, statistical, quantum, or otherwise.