I don't know how the next Python is going to get any traction, if table stakes for adoption is "language is understood by LLMs".
That’s a real constraint, but it’s not a new one, and it’s not uniquely LLM-shaped. Every language that ever got traction had to clear table stakes that weren’t technical purity: documentation quality, tutorials, books, community examples, tooling, a package ecosystem, and the ability for a new user to get from zero to “it runs” without burning a week. LLMs just become another on-ramp, not the whole highway. The paper actually argues that vibe coding isn't a "lagging indicator"; it’s a high-velocity shock. The authors show that adoption accelerates sharply once a usability threshold is crossed, which means the "next Python" won't struggle merely for lack of LLM training data; it will struggle because the paper's software-begets-software feedback loop is hitting a friction point. We’ve always relied on a healthy ecosystem of existing code to lower the cost of building the next thing. If vibe coding starves the maintainers of that ecosystem, the foundation for the "next Python" evaporates before the first LLM ever sees it.
The current generation of coders won't use it if their LLM of choice doesn't understand it.
Some won’t, sure. Some also won’t use a language without a formatter, a linter, a debugger, a good standard library, or a GitHub footprint with 10k stars and a massive tree of downstream dependencies. But “won’t” is not a law of nature; it’s a product of incentives and education. The way you get adoption is the way we always got adoption: make the first hour pleasant, make the first week productive, and make the first month feel like momentum. An LLM can help with that, but so can excellent docs and tools. Excellent docs and a senior coder with an open-door policy still beat a hallucinating chatbot.
LLMs won't understand it if there's no training data, which comes from users.
This is the part I agree with, with one missing piece: humans are in the exact same boat.
Programming languages have always had a training-data problem, because developers are not born with K&R hardwired into their cerebrum. They learn from corpora: textbooks, classes, tutorials, examples, codebases, mentors, review comments, and the slow accumulation of “how we do it here.” That’s literally why public schools exist, why CS departments exist, why apprenticeships and code review exist. Human understanding, like an LLM’s, depends on a huge, chaotic pile of prior artifacts that somebody took the time to order and categorize. So yes: LLMs that support developers need training data. This isn't a special indictment of LLMs; it’s the basic economics of learning for everyone, carbon-based or silicon-based.
I've always told people that coding would be automated last, if ever.
Reasonable prediction, because “coding” used to mean more than typing: it meant translating vague intent into precise behavior, testing assumptions, handling edge cases, debugging, maintaining, and integrating. What’s changed is that the typing and scaffolding got cheaper, and the translation layer got surprisingly good. The hard parts didn’t disappear; the boundary just moved. What the paper is really warning us about is a tragedy of the commons. When we use AI to graze on open source software without leaving the footprints (bug reports, documentation visits) that maintainers monetize, we strip-mine the soil. The result isn't just automation; it's a flood of low-quality code that serves a creator's immediate needs but consumes volunteer time to review and fix. NB: low-quality doesn't mean broken in the sense of a compile error. It means code that solves one user's immediate problem but is economically irrelevant to everyone else. LLMs are essentially weaponizing "if it ain't broke, don't fix it" to produce mountains of code that works just well enough to avoid human review, but not well enough to sustain the ecosystem. We’re at risk of trading a robust public pasture for a billion private, brittle window-boxes.
Apparently I was wrong, and will have to settle for being part of the last generation of coders that can actually read and understand code without LLM support.
This is where I push back hard, because it smuggles in a defeatist conclusion that doesn’t follow from the premises.
Humans still have to read and understand code. If anything, that ability becomes more important as code gets cheaper to produce. When output floods the zone, literacy matters more, not less. The paper points out that at our current baseline, users already receive roughly 1000 times more value than developers spend creating it. The big lever shifting the Nash equilibrium isn't that scale, but the demand-diversion channel: vibe coding lets users keep capturing that 1000x value while simultaneously diverting the engagement-based returns maintainers need to survive. That divergence is exactly what tips a virtuous loop into a vicious spiral.
And “without LLM support” is a choice, not a destiny. We can teach reading code the way we always taught it: deliberate practice, small programs, tracing, debugging, review, and the exercise most of my professors used when I was an undergrad: “tell me what this does in plain English.” That's the one that reveals whether the mental model is correct, whether syntax has turned into meaning. LLMs can be excellent tutors, but they can’t be a substitute for comprehension, for the same reason a calculator is no substitute for the intuition behind the differential equation you just derived to model the physical system you’re about to simulate. They are a tool that scaffolds comprehension, not one that replaces it.
The real risk isn’t that a generation can’t read code. The risk is that we stop expecting them to. If we treat LLMs as training wheels instead of prosthetic eyesight, we get a generation that ships faster and understands deeper. If we treat them as a replacement for learning, we get brittle systems and brittle people. That’s not a technology outcome. That’s a cultural and educational choice.