Chatbots: Still Dumb After All These Years (mindmatters.ai) 79
Gary Smith: In 1970, Marvin Minsky, recipient of the Turing Award ("the Nobel Prize of Computing"), predicted that within "three to eight years we will have a machine with the general intelligence of an average human being." Fifty-two years later, we're still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.
Blaise Aguera y Arcas, the head of Google's AI group in Seattle, recently argued that although large language models (LLMs) may be driven by statistics, "statistics do amount to understanding." As evidence, he offers several snippets of conversation with Google's state-of-the-art chatbot LaMDA. The conversations are impressively human-like, but they are nothing more than examples of what Gary Marcus and Ernest Davis have called an LLM's ability to be "a fluent spouter of bullshit" and what Timnit Gebru and three co-authors called "stochastic parrots."
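For readers who want the "stochastic parrot" point made concrete, here is a minimal sketch (not LaMDA's actual mechanism; corpus and names are made up for illustration): a bigram model that emits fluent-looking word sequences purely from co-occurrence counts, with no notion of what any word means.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def parrot(model, start, length=8, seed=0):
    """Emit words by sampling the statistics alone; no meaning involved."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the log"
model = build_bigram_model(corpus)
print(parrot(model, "the"))  # grammatical-ish output, zero understanding
```

The output often parses as English because the statistics mirror English word order, which is exactly the illusion the article is describing.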
...and never experience the real world. (Score:3)
Didn't MS try that out with their chatbot, with rather bad results?
Re: (Score:2)
Using social media as training data is bound to do that. Both with bots and people.
Minsky was right! Only in reverse (Score:3, Funny)
The average human intelligence is approaching that of an AI. Just look at Trump and Lauren Boebert.
Re: (Score:2)
It worked fine as long as you were writing a letter.
Re: (Score:1)
Clippy Sr: "I see you're typing an archaic form of communication that is no longer in use in the Real World. Would you like me to format it in Comic Sans 48 point blinking greyscale for you?"
User: "No."
Clippy Sr: "I have sent the email to the entire department and cc'd your ex-gf!"
Re: (Score:2)
After the Tay's Tweets experiment, I'm not afraid of Skynet... I'm afraid Skynet would read 4Chan first.
Re: (Score:2)
Unfortunately the normally lax moderation of 4chan is more like the unfiltered real world than what we see in many other places.
The trick is that at least some humans have the ability to distinguish between trolls and reality, but an AI has a hard time doing that. And any AI that isn't covering a wide area of the human spectrum will be dumb. A human operating at the level many AIs have would collect a whole list of letter diagnoses, probably including high-functioning autism.
Re: (Score:1)
The trick is that at least some humans have the ability to distinguish between trolls and reality, but an AI has a hard time doing that
The problem is more basic than that: "Experience with having conversations" is different from "experience with the real world". Perhaps AI's should start off with the sort of experiences infants have: they discover that they have toes, they discover that they can move and feel their toes, they discover that certain things hurt, they learn about gravity, they learn about hunger and thirst, etc.
After that, they should acquire a fund of basic knowledge about the way the world works-- something comparable at
Re: (Score:2)
Unfortunately the normally lax moderation of 4chan is more like the unfiltered real world than what we see in many other places.
Actually the real world (you know that thing not connected to a computer) is highly filtered. Very few people in the real world say whatever is on their mind without worry of the consequences. Online (including but not limited to 4chan) it is much more likely that someone will say some outlandish thing that they would never say in the real world when talking face-to-face with someone else.
Re: (Score:1)
AI shouldn't feel bad (Score:5, Funny)
This site is ostensibly run by actual humans and they have trouble recognizing duplicate patterns, too.
Re: (Score:1)
Yeah, either [humans still duping stories & bits of summary] OR [editors replaced long ago by random summary-generator bots]
Which you think more likely? Nice subject for a /. poll. :-)
Re:AI shouldn't feel bad (Score:5, Funny)
This site is ostensibly run by actual humans and they have trouble recognizing duplicate patterns, too.
They'll get it right the next time they post the article.
Re: (Score:2)
They felt the need to explain more than once, just like training an AI chatbot.
Re: (Score:2)
Funny you mention that because this site is ostensibly run by actual humans and they have trouble recognizing duplicate patterns, too. ;)
Re: (Score:2)
Really? The only chat-related joke so far?
Obligatory http://www.bash.org/?835030 [bash.org] link.
Real world? (Score:2)
> [bots suck because they] never experience the real world.
They tried sending them out to get experience, but it didn't go so well. [independent.co.uk]
Re: (Score:2)
Was certain that was going to be a link to this article: https://nymag.com/intelligence... [nymag.com]
Re: (Score:2)
Johnny 5.
The fundamental roadblock is that, although comput (Score:2)
The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.
Re: (Score:2)
Re: (Score:2)
Damn straight!
I mean, damn straight!
Re: (Score:2)
"base level": did you mean "basement level"?
Re: The fundamental roadblock is that, although co (Score:2)
The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.
Re: (Score:2)
That's just half of it. The other half is, even if they could experience the real world, they have no way of incorporating this information in a meaningful way into their programming.
The fundamental roadblock... (Score:1, Troll)
The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.
Based
A Chatbot Wrote This Summary? (Score:2)
Re: (Score:2)
From what I've heard - the fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.
Not to mention that the fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld
Re: (Score:2)
Re: (Score:2)
Look for the good stuff. (Score:5, Interesting)
Re: (Score:2)
It's also the name of my online Jimmy Buffett cover band. ;)
Re: (Score:2)
I see how you can be confused by this statement.
Chatbots do indeed seem to get smarter and smarter. But in fact it's an illusion created by chatrooms getting dumber and dumber.
What if chatbots were super smart? (Score:2, Funny)
There have been advancements (Score:3)
The conversations are impressively human-like, but they are nothing more than examples of what Gary Marcus and Ernest Davis have called an LLM's ability to be "a fluent spouter of bullshit"...
So, good enough to be elected president of the United States then. That's progress!
... and what Timnit Gebru and three co-authors called "stochastic parrots."
So... most of TikTok? More progress!
When you get right down to it, a whole lot of human behavior is sub-sapient. Bots that smart are definitely possible, and soon.
Re: (Score:2)
So, good enough to be elected president of the United States then.
The sad part is, I'm not sure which president you're talking about.
Re: (Score:1)
It's the Gaffe Era
Re: (Score:2)
The sad part is, I'm not sure which president you're talking about.
I noticed that myself when I wrote it, and decided it was appropriate to keep it just the way it is.
Chatbots are only useful for people who don't know (Score:5, Insightful)
I think chatbots are only useful for people who don't know how to use a website. Basically, chatbots are a slightly more intelligent version of a site map. They can guide a person to the page they are looking for. Which makes them completely useless to me. I already know how to use a website and how to use a site map. If I still need help, then the functionality doesn't exist on the website, and a chatbot is useless in that scenario.
Re:Chatbots are only useful for people who don't k (Score:5, Insightful)
If you need a chatbot, maybe you have a bad web site design that prevents people from finding relevant information while you flood them with buzzwords and useless pictures.
Re: (Score:2)
FTFY
Re: (Score:1)
It represents a double-failure: the website itself is a mess, and your text/key-word search is no good.
Re: (Score:2)
Things like Intercom drive me insane. Just the icon in the bottom corner is distracting and annoying but I really can't stand when they automatically open, make an irritating notification sound and say "Hi can I help you?"
Ad blockers should start treating them as ads IMO, or have an option to block them at least. I've been meaning to put some effort into solving this problem. They're as useful as pop-up ads were.
Really just speedbumps to discourage customers (Score:4, Insightful)
I think chatbots are mostly used as one more speedbump in getting to an actual human for customer service.
They are the equivalent of "music on hold" when you're dealing with a website. The chatbot asks a lot of questions to slow you down and discourage all but the most persistent customers.
On most sites the chatbot requires you to enter your customer information -- stuff like name, customer id, order number, etc. -- and then, just like on a phone call, when you do get connected to an agent, none of that information pops up on screen, so they have to ask you for the same info again.
And like the real world, your chances of getting disconnected while waiting for an agent are really high.
The physical equivalent would be a really long queue where you have to fill out many forms and randomly someone comes and kicks you out of line. If you ever do get to the front, the agent throws your forms in the trash.
There is no effort put into good chat bots... they are just designed to stall you: "because of unusually high demand your wait time may be... forever".
Re:Really just speedbumps to discourage customers (Score:4, Interesting)
Right, but this is really because their code is broken; not that the concept is useless.
I did chat support for a 10-month stint, and the system was supposed to copy/paste the chatbot interaction with the user to us upon connecting us to them.
What I saw is that over half the time, it failed to do that properly. I'd get only a portion of the text, with things cut off mid-sentence, or sometimes just an error message that it failed to retrieve it. (That, of course, led to the frustration when you had to ask for the same info they already gave the chatbot.)
I also commonly saw where the chatbot provided too much generic information. (If we had a known outage or issue happening to enough people, they'd add a paragraph all about it that the chatbot would spit back to everyone.) Most people just don't like to read very much, and I found 90% would stop reading as soon as they saw all of that info scroll up their screen. That led to demands to talk to a live representative, etc. Often, the resolution we gave them to their problem was exactly what was in that paragraph they glossed over.
Interacting with the chatbot, otherwise, to answer questions and get suggestions on fixes from it? Now THAT was categorically awful. I found very few times where it looked like it gave a solution that would have solved the problem. And even if it did? It's VERY common that someone chatting in for support has more than one question, or wants more detail surrounding the problem and fix. A chatbot might tell them steps to fix a problem, but it can't have the follow-up conversation intelligently, to explain why that "only happened on computer A, but computer B has never done it", etc. etc.
Re: (Score:2)
It's cheaper to (not) hire more support reps than to create a really smart support rep. If you make support painful enough, you can discourage all but the most persistent customers.
Building a smart chatbot is a very different problem than managing a call center/support center. Mostly the point of support is to do it just as well (or poorly) as the competitors. Few companies compete on "best support" as by the time you need support they've already sold you the product and you are just costing them money.
Re: (Score:2)
I thought most chatbots were trying to serve as more of a screening tool in tech support situations?
EG. User starts a chat session to get help with an issue. Chatbot answers first to collect some basic information and then suggests possible resolutions based on keywords the user might have entered when asked to "describe your issue". If that actually answers their question, great. A live "Tier 1" support person was spared the time conversing with them. Otherwise, it passes on the initial info collected so
Re: (Score:2)
Actually, chatbots are 150% useless. All they actually do is provide a tree of choices, no different from a drop-down cascading menu. But they take a lot more time to deal with than a drop-down menu. They appeal to executives because they make them think they are hip and using AI.
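The "tree of choices" claim can be made concrete with a toy sketch (hypothetical menu topics and help URLs, not any real product): the entire "bot" is a dictionary walk, functionally identical to clicking through a cascading menu.

```python
# A "chatbot" that is literally a tree of canned choices.
# All topics and URLs below are made up for illustration.
MENU = {
    "billing": {
        "refund": "See /help/refunds",
        "invoice": "See /help/invoices",
    },
    "shipping": {
        "late": "See /help/delays",
        "lost": "See /help/lost-packages",
    },
}

def chatbot(path):
    """Walk the choice tree; each user 'message' is just a menu click."""
    node = MENU
    for choice in path:
        if not isinstance(node, dict) or choice not in node:
            return "Sorry, I didn't understand. Let me connect you to an agent."
        node = node[choice]
    # Leaf: a canned answer. Interior node: present the next level of choices.
    return node if isinstance(node, str) else "Please choose: " + ", ".join(node)

print(chatbot(["billing", "refund"]))  # same result as two menu clicks
```

Everything the bot can say is enumerated up front, which is why it takes longer than a menu while answering exactly the same questions.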
Re: (Score:2)
If there is any site map, and if the site map makes any sense.
"general intelligence of an average human being" (Score:1)
Re: (Score:2)
Let me guess: that post was generated by one of these chatbots? Because it certainly sounds like the "stochastic parrots" and "fluent spouters of bullshit" that are mentioned in the article.
Re: (Score:3)
Actually, the problem is not IQ. The problem is what people do or not do with the Intelligence they have available to them. (Personally, I like to call that "wisdom" and it is not an accident that most RPG scores separate the two.) Apparently, only around 20% in any given population are open to rational argument, with 10...15% "independent thinkers" among them that can actually generate rational argument by themselves. 65% are basically just herd animals that do whatever the people around them do and 15% ar
Re: (Score:2)
Re: (Score:2)
Very interesting. And on reflection, I am inclined to agree with you. Have the numbers you quoted been published somewhere, perhaps as the result of a formal study? I would like to read more about it.
The 10-15% independent thinkers is an estimate from academic teaching that a friend and I arrived at independently. The 20%/65%/15% is from a recent interview in DER SPIEGEL, in German: https://www.spiegel.de/panoram... [spiegel.de]
I do not know how well that article does in an online translator.
This seems to be the publication page of the person interviewed: https://www.researchgate.net/s... [researchgate.net]
At least a part is in English and available for download.
My impression is that these numbers are not really controversial in the
Re: (Score:2)
Re: (Score:2)
You are welcome.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Limitations of the IQ scale become apparent here: the fact that an AI can solve simple problems fast doesn't imply anything about its ability to solve complex problems at all. Similarly, a caveman can have a sky-high IQ but completely fail at even simple arithmetic due to l
bring back tay (Score:1)
I miss Tay and all the entertainment it provided. I never thought a bot could out-shitpost humans to such an extent.
Ah Marvin "the idiot" Minsky... (Score:2)
No understanding, big mouth, crappy predictions.
Re: (Score:2)
No understanding, big mouth, crappy predictions.
Clever as he was, the truth is that he is indeed more likely to be remembered for his recklessness and his big mouth than for his achievements.
Re: (Score:2)
He did a lot of good work for 20 years, then didn't do a lot of good work for 20 years.
He is more likely to be remembered for his good work, because people who don't do good work are plentiful and not worth remembering.
This is insulting to parrots (Score:5, Insightful)
Parrots have intelligence and understanding of the real world. Some are actually pretty intelligent. (No, Neuro-"scientists" have no clue how they can do it. Their "science" says parrots should be as dumb as bread.) Chatbots have none of that. "stochastic parrot" is giving them way too much credit.
Exchange "meaning" with "value" (Score:2)
It is critically important to understand that what we refer to as "meaning" derives from the actual and perceived value of things to the one doing the perceiving. Can a machine with no agency, little or no senses, no self or sense of self be expected to experience our world and express human values as we do? Simply not possible without agency (the ability to act), self (a body of some sort), and sense of self (conscious sense of one's self as individually extant within a context).
Simply, human higher-level c
Re: (Score:2)
If you tell a machine that the year is 1980, it should be able to remember that.
The kinds of philosophical musings that you are referring to are far beyond the capability of today's chatbots.
We need to find out how we learn (Score:1)
It doesn't matter what size the domain is - we need to work out how our brains learn stuff from the data they get.
Until the neuro "scientists" and psychologist animal-torturers get down to those basics (or, more likely, real scientists work it out from first principles), we will not have AI.
And no, using stats to pick answers IS NOT LEARNING.
Statistical patterns are just that. (Score:3)
Living things learn by stimulus and reaction, by cause and effect. Computers don't. Toddlers learn the shape of things by moving their eyes and heads and seeing things covering each other and re-appearing after being covered. Computers get fed static pictures without movement.
We train AI on data the AI has no influence on. So the AI never experiences data; it just gets fed with it.
Re: (Score:1)
Toddlers learn the shape of things by moving their eyes and heads and seeing things covering each other and re-appearing after being covered. Computers get fed static pictures without movement.
Good news everyone! Tesla's driving bot is now being fed video, and has had a toddler's memory incorporated into it. It is now considerably better at identifying vehicles stopped across from it at an intersection after vehicles crossed between.
The memory feature really is extremely important for anything we recognize as intelligence. Plain neural nets that are just a frozen collection of recognition points are considerably inferior to a neural net with memory tacked on. The neural net purists in the AI
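The parent's point about memory can be sketched with a toy example (made-up confidence scores and decay rate, not Tesla's actual system): a frame-by-frame detector "loses" an object while it is briefly occluded, whereas the same detector with a decaying memory of past confidence bridges the gap.

```python
def frozen(detections):
    """Per-frame verdicts only: the object 'disappears' during occlusion."""
    return [d > 0.5 for d in detections]

def with_memory(detections, decay=0.8):
    """Blend each frame with a decaying memory of past confidence."""
    state, out = 0.0, []
    for d in detections:
        state = max(d, state * decay)  # remember the object for a few frames
        out.append(state > 0.5)
    return out

# Confidence drops while a crossing vehicle occludes the stopped car.
frames = [0.9, 0.9, 0.1, 0.1, 0.9]
print(frozen(frames))       # flickers off during the occluded frames
print(with_memory(frames))  # the decaying memory bridges the gap
```

The decay rate here is arbitrary; the point is only that even the crudest state carried between frames beats a frozen, stateless recognizer on occlusion.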
Re: (Score:2)
By paying attention to real-world drivers, in partnership with the Seattle Police Department (SPD), the new car's AI speeds up when it sees a yellow light, floors it when there is a pedestrian or cyclist ahead of it, and hits the brakes when it detects an imminent collision with a CEO.
Sadly, the engineers had never seen a CEO, so they used a generic Old White Man Wearing A Suit Badly instead.
Re: (Score:2)
We do have chat bots - just the kind that makes se (Score:2)
Re: We do have chat bots - just the kind that make (Score:2)
I misread the blurb (Score:3)
I should explain that I knew Minsky from 1979-81, so perhaps I can be excused for misreading "Marvin Minsky, recipient of the Turing Award in 1970" as "Marvin Minsky, who passed the Turing Test in 1970". I thought "hmmm, I never would have guessed".
I further misread it as "in three to eight years, he will have the general intelligence of an average human being." Time reduces us all, I'm afraid.
Not to worry (Score:2)
Not to worry, folks, the fine Google engineers are partnering with American Automobiles to make an AI that drives cars, and only hits those who aren't White.
But it says "Sorry!" when it runs over them, so it's all good!
Computer does what you tell it to do (Score:2)
They just (Score:2)
The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.
It's like they just cut and paste parts of the world with no real knowledge of it.