The point is: using "just" trivializes the effort. It makes the work sound like one small remaining step, some superfluous gesture, and the problem would be fixed.
Analogy: if I buy a car horn and a windshield wiper, it makes no sense to say "what remains to meet the definition of owning a car is just buying the rest of the car". While technically correct, that phrasing makes it sound like I'm at 95% of owning a car, when I'm actually at 5%.
Case in point: "widening the scope" is not a matter of adjusting a couple of parameters or coding for a few days. "Widening the scope" is definitely not "what remains"; it is actually most of the effort. The devil is in the details. It's the Pareto principle all over again, applied here as the roughly 20% of missing features that account for 80% of the effect (or lack thereof).
The very foundation of LLMs is ultimately the feet of clay on which the whole thing stands. It started from the wrong principles, and it keeps being built on, adjusted, and steered in the wrong direction. The overhyped AI is a glorified trained monkey, or parrot, or what-have-you, able to eloquently utter words (be they in writing, sound, or video) that it does not understand. The only difference from the above-mentioned beasts, in this regard, is that an LLM can access a wealth of data and mix those words in countless ways; but it remains incapable of understanding the information it handles. It's a giant idiot, mutilated in a thousand places; and some people expect it to "just widen its scope". Yeah, good luck with that.