Most crashes come from a "scenario that prevents a jet from staying airborne."
LLM crawlers are understandable these days, but who on earth is actively trying to take the FSF down?
A bunch of heathen VIM users trying to stop people from accessing EMACS? What the heck?
Let's say you actually managed to take down the FSF website. Who would even notice or care? How would that help your hacker rep in any way? You'd be a laughingstock for making the attempt.
This is likely an LLM scraper designed specifically to target MediaWiki sites with a botnet.
What does that even mean?
LLMs doing crawling? That might be ill-behaved, but not an "attack".
Or does it mean LLM-written attack software?
Or "attack software that somehow utilizes some LLM"?
It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users.
What a relief. Drugs with well-defined edges are much safer!
In fact, LLMs already show signs of ageing, because updating training data gets trickier and trickier thanks to all the AI slop out there and model collapse.
So which is it? Were LLMs useless all along (like, ahem, you've been saying all along), or are they just now "ageing" and getting worse?
In 10 years' time, you'll deny you ever believed such a ridiculous thing.
Nope. It's a tool. Used properly, it's helpful. Used improperly, it isn't.
Neither a panacea nor "useless". Just a tool.
It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot.
No, but unless the API is super opaque, it should make it simpler to implement your own back end, should you be so inclined.
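To make that concrete: a hypothetical sketch of such a back end, using only Python's standard library. This assumes an OpenAI-style JSON completions request/response shape (the field names, the handler class, and the `serve` helper are all my inventions, not Copilot's actual API); a real back end would call a model instead of echoing the prompt.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class CompletionHandler(BaseHTTPRequestHandler):
    """Stub completion endpoint mimicking the rough shape of a
    chat/completions-style API. Field names are assumptions."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # A real back end would run inference here; we just echo the prompt.
        reply = {"choices": [
            {"text": "stub completion for: " + request.get("prompt", "")}
        ]}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the per-request logging quiet

def serve(port: int = 8080) -> HTTPServer:
    """Start the stub back end on a background thread; returns the
    server object so callers can shut it down."""
    server = HTTPServer(("127.0.0.1", port), CompletionHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Point your editor plugin at it and you have a (useless but conformant-looking) back end; swapping the echo line for a call into a locally hosted model is the actual work.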
In each of these cases, you do have to know what you're doing, and you have to be able to tell when it's right. But the things it does do save me time and research effort.
Likewise.
They are just tools. Each user has to learn how to use them for their own use cases. They are neither panaceas nor useless.
In short, "AI coding" is not as mystical as it seems. It does little that prior sets of tools were not already doing; it's just more convenient, perhaps automating the use of numerous such existing tools. It still requires a skeptical review of the code and likely the addition of defensive code.
Right. It's a tool.
It's not the singularity, and it's also not "useless" like some energetic posters here want it to be. LLMs are just tools, which devs need to figure out how best to use for their use cases.
If you write "good enough" using agents, don't expect me to fix it for you when it goes horribly wrong.
The literal meaning of "good enough" would be "doesn't need any more fixing than your manual code does".
But I'm sure yours is all perfect.