We've been in the cloud era for 15 years now. Docker hosts, Kubernetes pods, Lambdas, even old-fashioned cPanel hosts. All of these are at risk, even if their users are otherwise doing everything right.
The people who thought it a good idea to isolate code you cannot trust, from people you cannot trust, from other sensitive data by nothing but the thin layer of container features, while still sharing the same huge kernel interface, I/O paths, page cache and so on, are now learning the hard way that they turned what used to be a rather moderate risk, "local privilege escalation", into a much worse full breach. Likewise the irresponsible people who introduced omnipresent download-then-execute of code from random places on the Internet into almost everybody's life, aka "browsers just downloading and executing JavaScript".
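To see how thin that layer is: a container does not get its own kernel at all. A minimal sketch of checking this, assuming Docker on a Linux host (the alpine image is just an example):

    # A container shares the host's kernel: "uname -r" inside a fresh
    # container reports the exact same kernel release as the host.
    # Any kernel bug is therefore reachable from inside the container.
    import platform
    import subprocess

    host_kernel = platform.release()  # kernel release as seen by the host
    container_kernel = subprocess.run(
        ["docker", "run", "--rm", "alpine", "uname", "-r"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()  # kernel release as seen inside the container

    print("host:     ", host_kernel)
    print("container:", container_kernel)
    assert host_kernel == container_kernel  # same kernel, same attack surface

So one privilege escalation against that single shared kernel escalates out of every container on the machine at once.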
A few miles away "Nearly 20 people in their 30s stared at their cellphones for a few minutes. Then they set them down and looked at their bared palms for a while. Then those of their neighbors."
Given the legal hysteria that now surrounds every attempt at social contact, it is only a matter of time until those "looking at their neighbors' palms" are prosecuted as creepy stalkers.
Why would an LLM need a language that's not machine language?
To avoid expensive re-training of models just because yet another slightly different CPU or micro-controller has become a relevant target. A portable high-level language lets existing compilers do the ISA-specific lowering, so the model's output stays the same.
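Concretely, the model would emit one portable source file and leave the target details to existing toolchains. A rough sketch; the file name, target list, and cross-compiler names are assumptions (Debian-style toolchain packages, which would need to be installed):

    # One high-level source, many ISAs: the compiler, not the model,
    # knows the target details, so a new CPU needs no re-training.
    import subprocess

    SOURCE = "payload.c"  # hypothetical: the model writes this once
    CROSS_COMPILERS = {   # hypothetical target list, Debian-style names
        "x86_64":  "x86_64-linux-gnu-gcc",
        "aarch64": "aarch64-linux-gnu-gcc",
        "riscv64": "riscv64-linux-gnu-gcc",
    }

    for arch, cc in CROSS_COMPILERS.items():
        subprocess.run([cc, SOURCE, "-o", f"payload.{arch}"], check=True)
        print(f"built payload.{arch}")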
AI can write code, but it's not clear that it will ever solve the problem of verifying that the code it wrote actually does what people want it to do, in all cases. For important tasks, who is going to want to trust a codebase that is difficult or impossible for a human to review?
I currently work at a "Fortune 500" company, and people left and right are trusting LLM bots both with writing code and with reviewing the resulting PRs, for things one could consider "important tasks". And yes, this is utterly irresponsible, but it is cheap and convenient, so it gets done. People who don't want to participate in the AI-slop production will leave the company. I know I will.
Then take a look at how many people privately trust "AI agents" with everything from their emails to their banking. Madness, yes, but there is no denying that this is happening.
Will people just take the AIs' word for it that their air-traffic-control system software is correct and reliable?
I happen to know someone who works on air-traffic-control IT systems. His boss has already found a loophole to make the AI slop happen there, too: just more outsourcing to external companies, after which some internal employee, who of course has no realistic chance of properly reviewing the externally supplied code, is asked to "sign off" on the contributions.
Some people only open up to tell you that they're closed.