How about an off switch? (Score 4, Informative)
A "Disable all AI crap and stop pushing this shit already" switch would be more desirable.
A "Disable all AI crap and stop pushing this shit already" switch would be more desirable.
Higher quality slop and approaching pink slip.
The last great recession was due to precisely this sort of spending pattern plus a collapse in repayment. Banks may be healthy for now, but they can't keep lending forever with no recovery. This is not a good sign.
Not to mention Amiga had visualizers before people even knew what Windows was.
In fact, music visualizers existed even before people knew what windows were!
Been a while since I've flown a budget airline. On the normal flights I've taken, there are always a few people (usually older ones) with paper boarding passes.
"Secret trick destroys AI" is bullshit. What is not bullshit is that for less common tokens, the conditional distributions of their occurrence in language depend on a relatively small number of examples. This is not an LLM property, it's a property of the language data itself. Also known as the hapax problem. Any language generator, including LLMs, is constrained by this fact. It has nothing to do with the architecture.
In practical terms, this means that if you have a learning machine that tries to predict a less common token from some context (either directly, like an LLM, or as an explicit intermediate step), then its output for that token will be strongly affected by a single new context in the training data, such as when someone poisons a topic.
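To make that concrete, here is a minimal sketch with a toy count-based bigram model (not an LLM; the corpus and the rare token "hapax" are made up for illustration). A single poisoned example shifts the conditional distribution for the rare token by 50 points, while a common token barely moves:

    from collections import Counter, defaultdict

    def train(corpus):
        # Count next-token frequencies for each context token.
        counts = defaultdict(Counter)
        for ctx, nxt in zip(corpus, corpus[1:]):
            counts[ctx][nxt] += 1
        return counts

    def p_next(counts, ctx, token):
        total = sum(counts[ctx].values())
        return counts[ctx][token] / total if total else 0.0

    # "hapax" occurs once in training; its distribution rests on one example.
    corpus = "the cat sat on the mat".split() * 100 + ["hapax", "truth"]
    model = train(corpus)
    print(p_next(model, "hapax", "truth"))   # 1.0

    # One poisoned pair flips it to a coin toss; "the" barely moves.
    model2 = train(corpus + ["hapax", "lie"])
    print(p_next(model2, "hapax", "truth"))  # 0.5
    print(p_next(model2, "hapax", "lie"))    # 0.5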
There are no solutions for this in the current ML paradigm. System designers can make the model less sensitive to individual tokens in the training data, but this comes at the price of being less relevant, because newly encountered facts are deliberately discounted against an implicit or explicit prior. Your example falls in this category.
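For instance (a toy sketch, with made-up counts, an assumed vocabulary size, and a hypothetical smoothing knob alpha): additive Laplace smoothing is exactly such a prior. It damps the influence of a single poisoned example, but it damps a single genuine new fact by the same amount:

    from collections import Counter

    def p_smoothed(counts, token, vocab_size, alpha):
        # Additive (Laplace) smoothing: shrink counts toward a uniform prior.
        total = sum(counts.values())
        return (counts[token] + alpha) / (total + alpha * vocab_size)

    genuine  = Counter({"truth": 1})             # one legitimate observation
    poisoned = Counter({"truth": 1, "lie": 1})   # plus one poisoned example
    V = 1000                                     # assumed vocabulary size

    for alpha in (0.0, 0.1, 1.0):
        print(alpha,
              round(p_smoothed(genuine, "truth", V, alpha), 4),
              round(p_smoothed(poisoned, "lie", V, alpha), 4))
    # As alpha grows, the poison and the legitimate new fact both collapse
    # toward the uniform prior 1/V: the model cannot tell them apart.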
It is fundamentally impossible to recognise the truth and value of a newly encountered datum without using a semantic model of reality. Statistical language models do not do this.
That was probably the right thing to do. It's called differential diagnosis, and it doesn't mean that the doctors didn't suspect Lupus from the start. They were being careful to rule out alternatives in order of priority.
If the Word document you're writing has a grammatical error, it could be caused by many things, from a typo to bad autocompletion to hackers messing with your desktop to a bad choice of language dictionary packs. You don't start treatment by rebooting the computer and replacing the whole MS Office suite; you first try a bunch of less invasive things.
"It is hard to overstate the debt that we owe to men and women of genius." -- Robert G. Ingersoll