The stories come from prior stories, with new prompts that essentially re-order the words. This is enshittification. It will grow until LLMs can coin new terms, build analogies, research the principals of a story, and even call people close to the story for their opinions and summarize them. Then LLMs will have to associate good journalism practices with prompt guidelines given by trainer models.
Those missing parts are ultimately solvable by even more LLM APIs and trickery, but it's still not intelligent. In fact, the guardrails of most public LLMs are so narrow on divisive issues that most newsworthy stories would come out as dry-as-a-bone recaps. When the arc of time turns a previously non-controversial phrase into a dogwhistle for a social agenda, an LLM would just agree with the accusation and move on. They have no agenda, not even one to dodge embarrassment.
LLMs that could write in the acerbic, critical mode of the great social commentators (Twain, Vonnegut, Hitchens) are a long way off. Those would have to build a cohesive worldview from a mostly-sensible value system. As it stands, Transformers don't really have a way of teasing out a contextually-generic moral system, because there isn't one. So we're creating the best savant possible in the field of reading everything and summarizing what it's read. That covers a lot of daily human thought, but it cannot cross over into feeling something, and it seems absurd when a machine tries to fake it.
He'll mix things up a bit, raise the power of the Executive to unprecedented levels, and then fade out in a blur of elderly non sequiturs. Pennies won't disappear, Greenland won't be a state, and the privatized portions of the federal government will flame out in corruption, prejudice, or bankruptcy. As it has been, so it will be again.
And your comment will look as ridiculous as any past administration's gloating.
So, in the end, even if a cloud service is somewhat more expensive, the corporate comparison is to bodies-at-keyboards and herding the coders to Fix What We Want And Don't Touch Anything Else.
Corollary: This effect of boiling an employee's role, or even portions of it, down to "write a check to a service company" is the entire story of the Digital Revolution since the 1960s. LLMs are just another small (and flawed) step in attempting to get the ad-hoc requests automated. "Please compile the diverse numbers into a filtered spreadsheet and present us a meaningful graph tomorrow" is, I'd guess, something like 10-25% of what humans at business computers are doing nearly constantly. This should be (one) Workplace Turing Test.
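
For concreteness, a minimal sketch of that kind of ad-hoc request in Python with pandas/matplotlib. Every specific here (the sales.csv file, the column names, the $1,000 cutoff) is a hypothetical stand-in, not anything from a real system:

    # A sketch of "compile the numbers -> filtered spreadsheet -> graph".
    # All specifics (sales.csv, column names, the cutoff) are invented.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("sales.csv")                 # the diverse numbers
    filtered = df[df["amount"] > 1000]            # the ad-hoc filter
    filtered.to_csv("filtered.csv", index=False)  # the spreadsheet

    # The "meaningful graph": totals by region, as a bar chart.
    totals = filtered.groupby("region")["amount"].sum()
    totals.plot(kind="bar", title="Sales over $1,000 by region")
    plt.tight_layout()
    plt.savefig("report.png")

Ten-odd lines, and it's exactly the sort of request that currently eats a chunk of someone's afternoon.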
Compare this to a use-the-design, not-the-code shop: they may build a relatively larger share of their components under the "library" philosophy, still managing the portfolio, but hopefully reacting to change at a more targeted level. For teams that want to resist imposed change and minimize the influence of a framework's arc of features, popularity, or even ownership, using a library can be a step.
The result is a different beast: consider the heavy reliance on institutional knowledge in a custom 1M+ LOC legacy finance, aerospace, or government system. It's 20+ years old, has a specialized team that built it from the ground up, and is almost totally impervious to software-market influence. It's also only flexible at the points someone thought about one or two decades ago, and it probably has shell systems built around it. Now compare it to a more modern startup that built on a framework, did the care and feeding, outgrew it and wholesale-migrated 1-3 times, and settled somewhere between 3 and 5 "years old" in component versions. Which is the better software/team management strategy? That's debatable.
If you suspect a man, don't employ him.