Understanding AI's limits
LLM-based AI can do some pretty impressive things. It *seems* to answer questions with remarkable accuracy, and it instantly produces code in response to even ridiculously vague prompts:
"Write me an app to track ant farms in Vietnam"
And what do you know? You get something that seems surprisingly useful!
Except that it's all an illusion.
I'm an experienced software developer (25 years now), and I focus on information lifecycle apps for workgroups and enterprises - organizations of 50+ people. As I write this, about 20,000 people are concurrently using an app I created.
Over the past year or so, I've been trying to deeply integrate AI into my workflow. It's there when I write code in VSCode, it's there when I write sysadmin/shell code, and it's there when I'm refactoring.
The more I use it, and the "better" it gets, the more frustrating I find it. It's only somewhat useful in the one area where most coding projects actually fail: debugging.
No matter how it seems, LLM-based AI doesn't *understand* anything. It's just ever-more-clever trickery built on word prediction. As such, it serves only as another abstraction that must still be understood and reviewed by a real person with actual understanding; otherwise the result is untrustworthy, unstable, and insecure "vibe code" that is largely worthless outside of securing VC funding. And that is perhaps the thing AI does best: help unprepared people get VC funding.
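To make "word prediction" concrete, here's a toy sketch of the core loop: generation is just repeatedly sampling a likely next token given what came before. The vocabulary and probabilities below are invented for illustration; a real LLM does the same thing over tens of thousands of tokens with billions of learned weights, but the principle is the same - at no point is there a model of ants, farms, or Vietnam, only statistics about which token tends to follow which.

import random

# Toy next-token table: maps the last word to candidate
# continuations with made-up probabilities (illustrative only).
NEXT_TOKEN = {
    "write": [("me", 0.6), ("an", 0.4)],
    "me":    [("an", 0.9), ("the", 0.1)],
    "an":    [("app", 0.7), ("API", 0.3)],
    "app":   [("to", 0.8), (".", 0.2)],
    "to":    [("track", 1.0)],
    "track": [("ants", 0.5), ("farms", 0.5)],
}

def generate(prompt: str, max_tokens: int = 6) -> str:
    tokens = prompt.lower().split()
    for _ in range(max_tokens):
        candidates = NEXT_TOKEN.get(tokens[-1])
        if not candidates:
            break  # no learned continuation; stop generating
        words, probs = zip(*candidates)
        # Sample the next token by probability. No understanding
        # of the subject matter is consulted anywhere in this loop.
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("write me an app"))

Everything convincing about the output falls out of the statistics, not out of comprehension - which is exactly why it has to be reviewed by someone who actually understands the problem.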
You still need real people to get code you can live with, depend on, and grow with.