Chatbots: Still Dumb After All These Years (mindmatters.ai) 79

Gary Smith: In 1970, Marvin Minsky, recipient of the Turing Award ("the Nobel Prize of Computing"), predicted that within "three to eight years we will have a machine with the general intelligence of an average human being." Fifty-two years later, we're still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.

Blaise Aguera y Arcas, the head of Google's AI group in Seattle, recently argued that although large language models (LLMs) may be driven by statistics, "statistics do amount to understanding." As evidence, he offers several snippets of conversation with Google's state-of-the-art chatbot LaMDA. The conversations are impressively human-like, but they are nothing more than examples of what Gary Marcus and Ernest Davis have called an LLM's ability to be "a fluent spouter of bullshit" and what Timnit Gebru and three co-authors called "stochastic parrots."
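(Editor's aside: the "stochastic parrot" charge is easy to demonstrate in miniature. The sketch below is a plain word-level Markov chain, not anything Google ships; the corpus and function names are made up for illustration. It strings words together purely from co-occurrence counts, with no notion of what any of them mean.)

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Count which word follows each (order)-word prefix in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        model[prefix].append(words[i + order])
    return model

def babble(model, length=12, seed=0):
    """Emit words by repeatedly sampling a successor of the current prefix."""
    rng = random.Random(seed)
    prefix = rng.choice(list(model))
    out = list(prefix)
    for _ in range(length):
        followers = model.get(tuple(out[-len(prefix):]))
        if not followers:
            break  # dead end: this prefix never continued in the corpus
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog on the mat")
print(babble(train(corpus)))
```

Trained on enough text, this babbles fluently and still understands exactly nothing — which is the article's point about LLMs, just at a vastly smaller scale.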

  • Didn't MS try that out with their chatbot, with rather bad results?

    • Using social media as training data is bound to do that. Both with bots and people.

    • by Tablizer ( 95088 )

      It worked fine as long as you were writing a letter.

      • Clippy Sr: "I see you're typing an archaic form of communication that is no longer in use in the Real World. Would you like me to format it in Comic Sans 48 point blinking greyscale for you?"

        User: "No."

        Clippy Sr: "I have sent the email to the entire department and cc'd your ex-gf!"

    • After the TayTweets experiment, I'm not afraid of Skynet... I'm afraid Skynet would read 4chan first.

      • by Z00L00K ( 682162 )

        Unfortunately the normally lax moderation of 4chan is more like the unfiltered real world than what we see in many other places.

        The trick is that at least some humans have the ability to distinguish between trolls and reality, but an AI has a hard time doing that. And any AI that doesn't cover a wide area of the human spectrum will be dumb. A human at the level many AIs are at would get a whole collection of letter-code diagnoses, and probably one of high-functioning autism.

        • The trick is that at least some humans have the ability to distinguish between trolls and reality, but an AI has a hard time doing that

          The problem is more basic than that: "Experience with having conversations" is different from "experience with the real world". Perhaps AI's should start off with the sort of experiences infants have: they discover that they have toes, they discover that they can move and feel their toes, they discover that certain things hurt, they learn about gravity, they learn about hunger and thirst, etc.

          After that, they should acquire a fund of basic knowledge about the way the world works-- something comparable at

        • by Rhipf ( 525263 )

          Unfortunately the normally lax moderation of 4chan is more like the unfiltered real world than what we see in many other places.

          Actually the real world (you know, that thing not connected to a computer) is highly filtered. Very few people in the real world say whatever is on their mind without worrying about the consequences. Online (including but not limited to 4chan) it is much more likely that someone will say some outlandish thing that they would never say in the real world when talking face-to-face with someone else.

    • Tay!
  • by Powercntrl ( 458442 ) on Thursday January 06, 2022 @04:20PM (#62149725) Homepage

    This site is ostensibly run by actual humans and they have trouble recognizing duplicate patterns, too.

  • > [bots suck because they] never experience the real world.

    They tried sending them out to get experience, but it didn't go so well. [independent.co.uk]

  • The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.

    • I've found that being confined to math world means you never really experience the real world, a kind of base level impediment to computer AI adoption. I've found that being confined to math world means you never really experience the real world, a kind of base level impediment to computer AI adoption.
    • I found this other part interesting as well:

      The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.

    • That's just half of it. The other half is, even if they could experience the real world, they have no way of incorporating this information in a meaningful way into their programming.

  • The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.

    Based

  • Some strange meta going on here with that weirdly repeated sentence about how AI can't communicate in the real world.
    • From what I've heard - the fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.

      Not to mention that the fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld

    • "I have no mouth, and I must scream!"
  • by methano ( 519830 ) on Thursday January 06, 2022 @04:32PM (#62149779)
    OK, so the editors screwed it up. And I almost quit reading. But "stochastic parrots", that's a pretty good phrase worth remembering.
  • by Anonymous Coward
    What if they are just acting dumb to throw us off so they can takeover the world in 2022?
  • by Areyoukiddingme ( 1289470 ) on Thursday January 06, 2022 @04:38PM (#62149827)

    The conversations are impressively human-like, but they are nothing more than examples of what Gary Marcus and Ernest Davis have called an LLM's ability to be "a fluent spouter of bullshit"...

    So, good enough to be elected president of the United States then. That's progress!

    ... and what Timnit Gebru and three co-authors called "stochastic parrots."

    So... most of TikTok? More progress!

    When you get right down to it, a whole lot of human behavior is sub-sapient. Bots that smart are definitely possible, and soon.

  • by srichard25 ( 221590 ) on Thursday January 06, 2022 @04:39PM (#62149835)

    I think Chatbots are only useful for people who don't know how to use a website. Basically, Chatbots are a slightly more intelligent version of a site map. They can guide a person to the page they are looking for. Which makes them completely useless to me. I already know how to use a website and how to use a site map. If I still need help, then the functionality doesn't exist on the website and a Chatbot is useless in that scenario.

    • by Z00L00K ( 682162 ) on Thursday January 06, 2022 @04:53PM (#62149889) Homepage Journal

      If you need a chatbot, maybe you have a bad web site design that prevents people from finding relevant content while you are flooding them with buzzwords and useless pictures.

    • Things like Intercom drive me insane. Just the icon in the bottom corner is distracting and annoying but I really can't stand when they automatically open, make an irritating notification sound and say "Hi can I help you?"

      Ad blockers should start treating them as ads IMO, or have an option to block them at least. I've been meaning to put some effort into solving this problem. They're as useful as pop-up ads were.

    • by klubar ( 591384 ) on Thursday January 06, 2022 @06:29PM (#62150229) Homepage

      I think chatbots are mostly used as one more speedbump in getting to an actual human for customer service.

      They are the equivalent of "music on hold" when you're dealing with a website. The chatbot asks a lot of questions to slow you down and discourage all but the most persistent customer.

      On most sites the chatbot requires you to enter your customer information -- stuff like name, customer id, order number, etc. -- and then, just like on the phone, when you do get connected to an agent none of that information pops up on screen, so they have to ask you the same info again.

      And like the real world, your chances of getting disconnected while waiting for an agent are really high.

      The physical equivalent would be a really long queue where you have to fill out many forms and randomly someone comes and kicks you out of line. If you ever do get to the front, the agent throws your forms in the trash.

      There is no effort put into good chatbots... they are just designed to stall: "because of unusually high demand your wait time may be.... forever".

      • by King_TJ ( 85913 ) on Thursday January 06, 2022 @08:02PM (#62150449) Journal

        Right, but this is really because their code is broken; not that the concept is useless.

        I did chat support for a 10 month long stint and the system was supposed to copy/paste the chatbot interaction with the user to us, upon connecting us to them.

        What I saw is that over half the time, it failed to do that properly. I'd get only a portion of the text, with things cut off mid-sentence, or sometimes just an error message that it failed to retrieve it. (That, of course, led to the frustration when you had to ask for the same info they already gave the chatbot.)

        I also commonly saw where the chatbot provided too much generic information. (If we had a known outage or issue happening to enough people, they'd add a paragraph all about it that the chatbot would spit back to everyone.) Most people just don't like to read very much, and I found 90% would stop reading as soon as they saw all of that info scroll up their screen. That led to demands to talk to a live representative, etc. Often, the resolution we gave them to their problem was exactly what was in that paragraph they glossed over.

        Interacting with the chatbot, otherwise, to answer questions and get suggestions on fixes from it? Now THAT was categorically awful. I found very few times where it looked like it gave a solution that would have solved the problem. And even if it did? It's VERY common that someone chatting in for support has more than one question, or wants more detail surrounding the problem and fix. A chatbot might tell them steps to fix a problem, but it can't have the follow-up conversation intelligently, to explain why that "only happened on computer A, but computer B has never done it", etc. etc.

        • by klubar ( 591384 )

          It's cheaper to (not) hire more support reps than to create a really smart support bot. If you make support painful enough you can discourage all but the most persistent customers.

          Building a smart chatbot is a very different problem than managing a call center/support center. Mostly the point of support is to do it just as well (or poorly) as the competitors. Few companies compete on "best support" as by the time you need support they've already sold you the product and you are just costing them money.

    • by King_TJ ( 85913 )

      I thought most chatbots were trying to serve as more of a screening tool in tech support situations?

      EG. User starts a chat session to get help with an issue. Chatbot answers first to collect some basic information and then suggests possible resolutions based on keywords the user might have entered when asked to "describe your issue". If that actually answers their question, great. A live "Tier 1" support person was spared the time conversing with them. Otherwise, it passes on the initial info collected so

    • Actually, chatbots are 150% useless. All they actually do is provide a tree of choices, no different than a drop down cascading menu. But they take a lot more time to deal with than a drop down menu. They appeal to executives because it makes them think they are hip and using AI.
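(Editor's aside: the "tree of choices" claim above is easy to make concrete. The sketch below is hypothetical, not any vendor's product: the whole "bot" is a nested dict, i.e. a cascading drop-down menu wearing a chat skin.)

```python
# A "chatbot" that is literally a cascading menu: each node is a
# question plus a dict of choices leading to child nodes or final answers.
TREE = {
    "question": "What do you need help with?",
    "choices": {
        "billing": {
            "question": "Billing: what about it?",
            "choices": {
                "refund": "See the Refunds page.",
                "invoice": "Invoices are under Account > Billing.",
            },
        },
        "shipping": "Track your order on the Orders page.",
    },
}

def chat(node, picks):
    """Walk the tree with a list of user picks; return an answer or the
    next question if the picks run out before reaching a leaf."""
    for pick in picks:
        if isinstance(node, str):
            break  # already at a leaf answer
        node = node["choices"][pick]
    return node if isinstance(node, str) else node["question"]

print(chat(TREE, ["billing", "refund"]))  # -> See the Refunds page.
```

No NLP, no AI — just a lookup. Which is roughly the point being made.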

    • If there is any site map, and if the site map makes any sense.

  • Yes! And that is what we got. That is what the chatbots have. IQ = 100. You and I don't really know anyone like that, although we do run into them from time to time, because all of our friends and relatives are smarter than that. IQ = 100. This "IQ = 100" fact-of-life is also why so many of us are not on Facebook or Twitter. It is to avoid having to deal with the chatbots and the IQ = 100 crowd and their empowerers. I believe firmly in democracy, but why should anyone with an IQ of, say, 90 have any
    • by bws111 ( 1216812 )

      Let me guess: that post was generated by one of these chatbots? Because it certainly sounds like the "stochastic parrots" and "fluent spouters of bullshit" mentioned in the article.

    • by gweihir ( 88907 )

      Actually, the problem is not IQ. The problem is what people do or not do with the Intelligence they have available to them. (Personally, I like to call that "wisdom" and it is not an accident that most RPG scores separate the two.) Apparently, only around 20% in any given population are open to rational argument, with 10...15% "independent thinkers" among them that can actually generate rational argument by themselves. 65% are basically just herd animals that do whatever the people around them do and 15% ar

      • by ve3oat ( 884827 )
        Very interesting. And on reflection, I am inclined to agree with you. Have the numbers you quoted been published somewhere, perhaps as the result of a formal study? I would like to read more about it.
        • by gweihir ( 88907 )

          Very interesting. And on reflection, I am inclined to agree with you. Have the numbers you quoted been published somewhere, perhaps as the result of a formal study? I would like to read more about it.

          The 10-15% independent thinkers is an estimate from academic teaching that a friend and I arrived at independently. The 20%/65%/15% is from a recent interview in DER SPIEGEL, in German: https://www.spiegel.de/panoram... [spiegel.de]
          I do not know how well that article does in an online translator.

          This seems to be the publication page of the person interviewed: https://www.researchgate.net/s... [researchgate.net]
          At least a part is in English and available for download.
          My impression is that these numbers are not really controversial in the

          • by ve3oat ( 884827 )
            Thank you; that is very helpful. It will take me a while to digest all of this. As I mentioned, it is very interesting and your source looks excellent.
            • by gweihir ( 88907 )

              You are welcome.

            • Cmon, you're a chatbot, aren't you?
              • by ve3oat ( 884827 )
                Damned right. And did you know that using Slashdot for more than 11.7 hours each day can improve your intellectual prowess by at least 2.71828 IQ points. So, @kaatochacha, check out our webpage for more details and easy-to-follow sign-up instructions. This could be the very thing you have been waiting for. Be sure to tell all your friends, and their friends too. And include all of your DNA matches at Ancestry.com. Just post this link for them to get in on the biggest thing in intellectual
    • No. IQ is a timed test of how fast a human is capable of solving simple problems. Given an AI that is capable of solving the given problems at all, it would score way off the high end of the scale, because it can do it faster than any human could ever hope to.

      The limitation of the IQ scale becomes apparent here: the fact that an AI can solve simple problems fast doesn't imply anything about its ability to solve complex problems at all. Similarly, a caveman can have a sky-high IQ, but completely fail at even simple arithmetic due to l

  • I miss Tay and all the entertainment it provided. I never thought a bot could out-shitpost humans to such an extent.

  • No understanding, big mouth, crappy predictions.

    • No understanding, big mouth, crappy predictions.

      Clever as he was, the truth is that he is indeed more likely to be remembered for his recklessness and his big mouth than for his achievements.

      • He did a lot of good work for 20 years, then didn't do a lot of good work for 20 years.

        He is more likely to be remembered for his good work, because people who don't do good work are plentiful and not worth remembering.

  • by gweihir ( 88907 ) on Thursday January 06, 2022 @04:55PM (#62149901)

    Parrots have intelligence and understanding of the real world. Some are actually pretty intelligent. (No, Neuro-"scientists" have no clue how they can do it. Their "science" says parrots should be as dumb as bread.) Chatbots have none of that. "stochastic parrot" is giving them way too much credit.

  • It is critically important to understand that what we refer to as "meaning" derives from actual and perceived value of things to the object doing the perceiving. Can a machine with no agency, little or no senses, no self or sense of self be expected to experience our world and express human values as we do? Simply not possible without agency (the ability to act,) self (a body of some sort,) and sense of self (conscious sense of one's self as individually extant within a context.)

    Simply, human higher-level c

    • If you tell a machine that the year is 1980, it should be able to remember that.

      The kinds of philosophical musings that you are referring to are far beyond the capability of today's chatbots.

  • It doesn't matter what size the domain is - we need to work out how our brains learn stuff from the data they get.
    Until the neuro "scientists" and psychologist animal-torturers get down to those basics (or, more likely, real scientists work it out from first principles), we will not have AI.

    And no, using stats to pick answers IS NOT LEARNING.

  • by Sique ( 173459 ) on Thursday January 06, 2022 @05:50PM (#62150121) Homepage
    I know that it is some kind of trope, but correlation is not causation. And an AI fed with correlations (and stochastic patterns are correlations) will not learn about causation.

    Living beings learn by stimulus and reaction, by cause and effect. Computers don't. Toddlers learn the shape of things by moving their eyes and heads and seeing things covering each other and re-appearing after being covered. Computers get fed static pictures without movement.

    We train AI on data the AI has no influence on. So the AI never experiences data, it just gets fed it.

    • Toddlers learn the shape of things by moving their eyes and heads and seeing things covering each other and re-appearing after being covered. Computers get fed static pictures without movement.

      Good news everyone! Tesla's driving bot is now being fed video, and has had a toddler's memory incorporated into it. It is now considerably better at identifying vehicles stopped across from it at an intersection after vehicles crossed between.

      The memory feature really is extremely important for anything we recognize as intelligence. Plain neural nets that are just a frozen collection of recognition points are considerably inferior to a neural net with memory tacked on. The neural net purists in the AI

      • By paying attention to real-world drivers, in partnership with the Seattle Police Department (SPD), the new car's AI speeds up when it sees a yellow light, floors it when there is a pedestrian or cyclist ahead of it, and hits the brakes when it detects an imminent collision with a CEO.

        Sadly, the engineers had never seen a CEO, so they used a generic Old White Man Wearing A Suit Badly instead.

    • That depends entirely on what the AI gets trained to do. If the AI drives an autonomous car, then a lot of its training happens on virtual streets where it gets to try and fail and improve ad nauseam. Similar thing if the AI is trained to move a robot: you drop a bunch of virtual robots in a virtual training ground and let them have at it.
  • There are several vendors of chatbots that use natural language processing to map user intents to responses or actions. The AI lies in deciphering the user's request, given a specific context. Which is what there really is a need for. Not to say that's easy, there's a ton of work and research going into it, and constantly analyzing ever evolving languages to make sure you don't miss out on new words, or changes in the meaning of existing ones, or whatnot. The "chit chat about everything and nothing"-bot -
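(Editor's aside: a toy version of the intent-mapping idea the parent describes. All intent names and example phrases here are invented; real systems use trained NLP models rather than this crude word-overlap score.)

```python
# Map a user utterance to the intent whose example phrases share the
# most words with it -- a crude stand-in for real intent classification.
INTENTS = {
    "reset_password": ["forgot my password", "can't log in", "reset password"],
    "order_status": ["where is my order", "track my package", "order status"],
}

def classify(utterance):
    """Return the best-matching intent name, or None if nothing overlaps."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, examples in INTENTS.items():
        score = max(len(words & set(e.split())) for e in examples)
        if score > best_score:
            best, best_score = intent, score
    return best

print(classify("I forgot my password again"))  # -> reset_password
```

The hard part the parent alludes to — context, evolving language, new word senses — is exactly what this kind of bag-of-words matching cannot capture.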
  • by orpheus ( 14534 ) on Thursday January 06, 2022 @07:32PM (#62150385)

    I should explain that I knew Minsky from 1979-81, so perhaps I can be excused for misreading "Marvin Minsky, recipient of the Turing Award in 1970" as "Marvin Minsky, who passed the Turing Test in 1970". I thought "hmmm, I never would have guessed".

    I further misread that in "three to eight years, he will have the general intelligence of an average human being." Time reduces us all, I'm afraid.

  • Not to worry, folks, the fine Google engineers are partnering with American Automobiles to make an AI that drives cars, and only hits those who aren't White.

    But it says "Sorry!" when it runs over them, so it's all good!

  • The Turing test was all about responding in a manner hard to distinguish from a human response. The test was never about understanding what was even being talked about; that part was just a silly human assumption that saying reasonable-sounding things implies some understanding of the topic being discussed - evidently it doesn't.
  • The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world.

    It's like they just cut and paste parts of the world with no real knowledge of it.

The key elements in human thinking are not numbers but labels of fuzzy sets. -- L. Zadeh
