'Hallucinate' Chosen As Cambridge Dictionary's Word of the Year (theguardian.com)
Cambridge Dictionary's word of the year for 2023 is "hallucinate," a verb that took on a new meaning with the rise in popularity of artificial intelligence chatbots. The Guardian reports: The original definition of the chosen word is to "seem to see, hear, feel, or smell" something that does not exist, usually because of "a health condition or because you have taken a drug." It now has an additional meaning, relating to when an artificial intelligence system such as ChatGPT, which generates text that mimics human writing, "hallucinates" and produces false information. The word was chosen because the new meaning "gets to the heart of why people are talking about AI," according to a post on the dictionary site.
Generative AI is a "powerful" but "far from perfect" tool, "one we're all still learning how to interact with safely and effectively -- this means being aware of both its potential strengths and its current weaknesses." The dictionary added a number of AI-related entries this year, including large language model (or LLM), generative AI (or GenAI), and GPT (an abbreviation of Generative Pre-trained Transformer). "AI hallucinations remind us that humans still need to bring their critical thinking skills to the use of these tools," continued the post. "Large language models are only as reliable as the information their algorithms learn from. Human expertise is arguably more important than ever, to create the authoritative and up-to-date information that LLMs can be trained on."
Clickbait. (Score:2)
With a word like that, at least we have confirmation as to how clickbait became the source of 'news' revenue.
Shocker. /s
Re: (Score:3)
Furthermore, kids nowadays!
The word "hallucinate" already became the word of the year several decades ago when LSD began to rise in popularity.
It's like they know... (Score:4, Interesting)
LLMs are neat, but they're probably not going to transform the way we live and work. As the Cambridge dictionary word of the year reminds us, they're far too unreliable to be more than a novelty for most applications.
Re: (Score:2)
It seems to be a problem of incompleteness. The LLM is an interesting pile of math that can produce output in response to prompts. But even though the human brain does something similar, there are far more elements to human cognitive processing than are at work in LLMs. At least in the current generation of them.
Maybe someday we will be able to introduce a hallucination-prevention mechanism, though I suspect that simply building bigger LLMs is not going to be the way that problem is solved.
This
Re: (Score:2)
even though the human brain does something similar
It is extremely unlikely that there are structures in the brain that are similar to transformer networks. You can say that NNs in general are 'inspired' by brains, but there is no reason to believe they share anything other than the most trivial similarities. In fact, we can even abandon the analogy completely if we want, and often do for performance reasons.
Maybe someday we will be able to introduce a hallucination-prevention mechanism
The problem is that so-called 'hallucinations' are exactly the kind of behavior we should expect, given how models of this kind function internally.
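To make that concrete, here is a minimal Python sketch (toy vocabulary and made-up probabilities, not a real model): a language model is, at bottom, a next-token sampler, and it emits whatever continuation is statistically plausible whether or not it is true.

import random

# Toy illustration, not a real LLM: the model scores candidate next
# tokens by plausibility learned from training data. "True" and
# "false" are not categories it operates on.
next_token_probs = {      # hypothetical learned probabilities
    "Paris": 0.7,         # common, correct continuation
    "Lyon": 0.2,          # plausible-sounding but wrong
    "Atlantis": 0.1,      # fluent nonsense -- a "hallucination"
}

def sample_next(probs):
    """Pick a token in proportion to its modeled probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of France is", sample_next(next_token_probs))
# Roughly 3 runs in 10 confidently assert something false, because
# plausibility, not factuality, is what the sampler optimizes.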
Re: (Score:2, Informative)
If you intend to convince Greed that good-enough machines operating 24/7 aren't worth the replacement investment for those good-enough meatsacks always bitching about more time off, more money, and more benefits... being arrogant enough to demand sleep every 18 hours or so... you're gonna have to speak a lot LOUDER than that.
I'd suggest you speak in money, with a metric fuckton of brogue. It's the only recognized language and dialect.
Re: (Score:2)
Looking at the source website, they seem to be quite worried that people will just ask AI to define words and give example sentences in future, taking business away from dictionaries.
https://dictionary.cambridge.o... [cambridge.org]
I suspect though that much of that business has already gone away because you can just google a word to get a definition. The only dictionary I ever use now is a Japanese to English one, all the data for which is free (I pay for the app because it's good).
Re: (Score:2)
Cambridge being worried about business revenue from dictionary lookups? Ranks right up there with Harvard poor-mouthing.
There's an entire university wrapped around that dictionary, with 500M+ in cash on hand and a few billion in assets.
Re: (Score:2)
You forgot to say that LLMs are nevertheless "powerful."
Hallucinated ChatGPT output (Score:2)
From this February:
"In the United States, there is the state of New Guinea. This state is located in the southeastern corner of the country and is bordered by Georgia, South Carolina, and North Carolina. New Guinea is known for its beautiful beaches, mountains, and forests, and is home to the Appalachian Trail."
Re: (Score:2)
But nobody is advertising 7th graders as having the answers to all your questions.
Re: (Score:2)
But nobody is advertising 7th graders as having the answers to all your questions.
Ironically enough, humans won't have the answer when the machine eventually does have all the answers.
The correct answers.
Machines get access to learning. Children get access to indoctrination.
The good ol' wheel (Score:1)
Technically, it should be âoeconfabulation (Score:2)
Hallucination refers to a sensory effect. The psychological term that matches this well-known phenomenon best is actually âoeconfabulationâ which refers to making up stuff while believing it.
âoeHallucinationâ is the word that caught, though.
Re:Technically, it should be âoeconfabulation (Score:1)
Hallucination refers to a sensory effect. The psychological term that matches this well-known phenomenon best is actually âoeconfabulationâ which refers to making up stuff while believing it.
âoeHallucinationâ is the word that caught, though.
LOL - Am I hallucinating, or did slashdot mangle your word?
Re: (Score:2)
No, it mangled his quote marks. Welcome to Slashdot and its complete lack of support for Unicode. Whenever somebody types in text that includes "smart quotes" (as some apps helpfully do automatically), we get this.
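For the curious, here is a minimal sketch of how that kind of mangling happens (the final filtering step is an assumption about Slashdot's pipeline, not something I can verify): UTF-8 encodes each curly quote as three bytes, and software that decodes those bytes one character per byte prints debris instead.

# What the poster's app produced: real curly quotes.
s = "\u201cconfabulation\u201d"

# UTF-8 represents each quote as three bytes: e2 80 9c / e2 80 9d.
raw = s.encode("utf-8")

# A pipeline that decodes those bytes as Latin-1 sees three separate
# characters per quote instead of one multi-byte character:
print(repr(raw.decode("latin-1")))  # 'â\x80\x9cconfabulationâ\x80\x9d'

# Assumed last step: strip the invisible control bytes (0x80, 0x9d)
# and fold 0x9c (the cp1252 oe-ligature) to ASCII "oe", and you get
# exactly the "âoeconfabulationâ" seen above.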
Re:Technically, it should be âoeconfabulation (Score:2)
Or delusion. (Though, AFAIK, it doesn't have a verb form.)
Re: (Score:2)
"Or delusion. (Though, AFAIK, it doesn't have a verb form.)"
To delude. Although if you want to indicate someone having the delusion, you want the passive voice: to be deluded.
Re: (Score:2)
You can't say "Joe is deluding" and have it be in the same sense.
Re: (Score:2)
"you can't say "Joe is deluding" and have it be in the same sense."
As I said, you have to use the passive voice: "Joe is deluded." The verb form of hallucination, hallucinate, means "having a hallucination" while the verb form of delusion, delude, means "*inflicting* a delusion." They both have verb forms, the verb forms simply have different uses.
Word of Year? (Score:2)
Word of the Decade, and not just for AI.