AI

Why Are So Many AI Chatbots 'Dumb as Rocks'? (msn.com) 73

Amazon announced a new AI-powered chatbot last month — still under development — "to help you figure out what to buy," writes the Washington Post. Their conclusion? "[T]he chatbot wasn't a disaster. But I also found it mostly useless..."

"The experience encapsulated my exasperation with new types of AI sprouting in seemingly every technology you use. If these chatbots are supposed to be magical, why are so many of them dumb as rocks?" I thought the shopping bot was at best a slight upgrade on searching Amazon, Google or news articles for product recommendations... Amazon's chatbot doesn't deliver on the promise of finding the best product for your needs or getting you started on a new hobby.

In one of my tests, I asked what I needed to start composting at home. Depending on how I phrased the question, the Amazon bot several times offered basic suggestions that I could find in a how-to article and didn't recommend specific products... When I clicked the suggestions the bot offered for a kitchen compost bin, I was dumped into a zillion options for countertop compost products. Not helpful... Still, when the Amazon bot responded to my questions, I usually couldn't tell why the suggested products were considered the right ones for me. Or, I didn't feel I could trust the chatbot's recommendations.

I asked a few similar questions about the best cycling gloves to keep my hands warm in winter. In one search, a pair that the bot recommended were short-fingered cycling gloves intended for warm weather. In another search, the bot recommended a pair that the manufacturer indicated was for cool temperatures, not frigid winter, or to wear as a layer under warmer gloves... I did find the Amazon chatbot helpful for specific questions about a product, such as whether a particular watch was waterproof or the battery life of a wireless keyboard.

But there's a larger question about whether technology can truly handle this human-interfacing task. "I have also found that other AI chatbots, including those from ChatGPT, Microsoft and Google, are at best hit-or-miss with shopping-related questions..." These AI technologies have potentially profound applications and are rapidly improving. Some people are making productive use of AI chatbots today. (I mostly found helpful Amazon's relatively new AI-generated summaries of customer product reviews.)

But many of these chatbots require you to know exactly how to speak to them, are useless for factual information, constantly make up stuff and in many cases aren't much of an improvement on existing technologies like an app, news articles, Google or Wikipedia. How many times do you need to scream at a wrong math answer from a chatbot, botch your taxes with a TurboTax AI, feel disappointed at a ChatGPT answer or grow bored with a pointless Tom Brady chatbot before we say: What is all this AI junk for...?

"When so many AI chatbots overpromise and underdeliver, it's a tax on your time, your attention and potentially your money," the article concludes.

"I just can't with all these AI junk bots that demand a lot of us and give so little in return."
This discussion has been archived. No new comments can be posted.

Why Are So Many AI Chatbots 'Dumb as Rocks'?

Comments Filter:
  • by Big Hairy Gorilla ( 9839972 ) on Sunday March 17, 2024 @03:59PM (#64322731)
    "develoipment"
    also, count the fingers
  • nothing else to add

  • by bill_mcgonigle ( 4333 ) * on Sunday March 17, 2024 @04:09PM (#64322749) Homepage Journal

    We're cursed to know what LLMs are and watch people write stupid articles like this.

    FWIW Ray says they're currently putting a lot of effort into teaching LLMs how to say "I don't know".

    But midwits want a Voice of Authority (in the Jaynesian sense) and don't care what's real.

    • Re: Cassandra Redux (Score:4, Informative)

      by LindleyF ( 9395567 ) on Sunday March 17, 2024 @11:03PM (#64323473)
      LLMs are LANGUAGE models. It's right there in the name. If you need a product description summarized or queried in natural language, an LLM is your tool. So long as it's working with a block of text external to the model, it's probably fine. But the second you expect those billions of parameters to contain actual semantic information, instead of just knowing how to generate language related to a query, you're probably doing it wrong. That's not what it's for. So it can turn your query into a search, run that search, and summarize the results. But don't expect it to do anything the underlying search couldn't do, except on a very limited basis.
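      As a rough sketch of that pattern (plain Python; the llm and search_fn callables, and the "title"/"description" fields, are stand-ins for whatever model client and catalog search you actually have, not real APIs):

        def answer_shopping_question(query, llm, search_fn):
            # 1. Language task: have the model rewrite the question as search keywords.
            keywords = llm("Rewrite this as product-search keywords: " + query)
            # 2. Retrieval happens outside the model, against real catalog data.
            hits = search_fn(keywords)[:5]
            # 3. Language task again: summarize only the retrieved listings, nothing more.
            context = "\n".join(h["title"] + " - " + h["description"] for h in hits)
            return llm("Using only the listings below, answer: " + query + "\n" + context)

      The model only reshapes language on the way in and on the way out; everything factual has to come from the search results themselves.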
    • >> a lot of effort into teaching LLMs how to say "I don't know".
      Naah, this one is as easy as 2 lines of code:
      if "?" in user_input:
          reply = "I don't know."

  • by Retired Chemist ( 5039029 ) on Sunday March 17, 2024 @04:10PM (#64322751)
    Chatbots reflect the information they are trained on. The internet is full of garbage, so I assume their training sets are too.
    • by VeryFluffyBunny ( 5037285 ) on Sunday March 17, 2024 @06:56PM (#64323119)
      LLMs aren't trained on information, they're trained on language. They're entirely structural, meaning that the morphemes they string together in the most probable ways have absolutely no meaning whatsoever, i.e. they're surface features & patterns that the models tend to follow rather than a conscious mind expressing itself. Just read J. L. Austin's Speech Act Theory (1975) for a little elaboration on what it takes to make meaning, & then it's pretty clear that all we're getting out of LLMs is convincing-sounding bullshit.

      Maybe the question should be, "Why are so many journalists as dumb as rocks?"

      Depressingly, the answer may be because they're paid to be. I'm sure there's money to be made from publishing such articles.

      Ref: Austin, J. L. (1975). How to Do Things with Words (2nd ed.; J. O. Urmson & M. Sbisà, Eds.). Harvard University Press.
      • by amplex ( 1649505 )
        It seems like every other technical question posed to an LLM returns some hallucinated (made-up) command or property or cmdlet, etc., rendering it useless much of the time.
  • These chatbots aren't stupid. They're selling the products that Amazon et al want to sell you. You were offered short-fingered cycling gloves because the manufacturer paid Amazon to sell them to you.

    • When I was a kid, we still had shoe salesmen. Someone to help you measure your foot and fetch a box from the shelves for you.

      I have no idea why they bothered measuring, because they'd invariably come back with something that wasn't the right size but had a higher price. Because commission was their motivation, not putting you in the best pair of shoes available for your budget.

      If a chatbot is linked in any way to sales, it will not be there to sell you what you need, it'll be there to sell you what it can

      • Our shoe sales guy, who owned the store and who had a vested interest in profit, made sure we had good shoes and affordable shoes and would instantly take the box back if they didn't fit just right.

        • You see... that's LONG term thinking. That was a great sales person. Unfortunately, most sales people are only able to think about the next commission or bonus, which is why they're often on a monthly cycle.

          When I was a kid I worked in a warehouse, and the sales guys would hustle their asses off for the last week of the month when the totals for the month were under projections. Presumably they were cutting deals to get customers to buy when they didn't even really need the stock. The warehouse guys wou

        • Our shoe sales guy, who owned the store

          Yeah, that isn't happening anymore. That shoe shop is now owned by some equity group.

      • And now you have to guess if the shoe will fit, wait until it arrives, see if it looks like the picture, and hope you don't have to ship it back.

        Much progress. So wow.

        • "have to" is doing a lot of work in your comment.

            • If you're ordering online compared to going to a store, yes. Just because you wear a size 9 shoe doesn't mean that size 9 shoe online will fit you. There are differences among manufacturers. If it's something you've bought before then it's a reasonable assumption it will fit you. But if it's something new, it's a crap shoot.

            And yes, you do have to wait until it arrives when ordering online. There is no store to go to and find out right away. Ordering anything online is always a wait. There is no simply

            • That's why I "go to the store" to buy my shoes or just reorder the exact same pair of shoes (which is easy for me as I wear work boots). For a new style, yeah, just go to the store. Shoe stores still exist as does Walmart, Target, etc.

            • by henni16 ( 586412 )

              Different point of view, though maybe an edge case:
              As someone with what would be size 15 shoes in the US or larger since my teens, I was used to going through 5 shoe stores to find someone who even had a single pair that was roughly my size.
              Forget about models, comfort, colors, prices or any other choice. It was get that single pair of possibly basketball shoes that were closest to my size at whatever prices they were asking.

              So I prefer ordering shoes online, because I'm more likely to find shoes in my si

      • by necro81 ( 917438 )

        Someone to help you measure your foot and fetch a box from the shelves for you.

        Don't forget "someone to operate the x-ray machine [wikipedia.org]"!

        I kid, I kid. In fact, I had an excellent experience recently with a saleswoman at a running store - selecting a replacement for my go-to running shoes, because the company had inexplicably changed the design.

  • by gweihir ( 88907 ) on Sunday March 17, 2024 @04:12PM (#64322759)

    They have access to a phenomenal amount of information, but as soon as they are asked to do more than simple pattern search (simplified) on that, they fail completely and come up with the most deranged crap. To people that have essentially no effective intelligence themselves and/or refuse to learn the very basics of some things, LLMs may be useful. To anybody else? Waste of time.

    • by laddiebuck ( 868690 ) on Sunday March 17, 2024 @11:56PM (#64323579)

      Depends on what your job is. If your job is exclusively the production of highly crafted artefacts with no repetitive, tedious, or rote elements whatsoever, then sure.

      But in the real world, there are very, very few people like that. Those who do tend to have executive assistants already.

      In practice, most knowledge jobs benefit from software that encapsulates, well, knowledge.

      As a software engineer, LLMs help me produce boilerplate code, hook up interfaces I don't need to bother to learn, do first-pass coding that I would otherwise have a junior engineer do, write documentation, and turn outlines into full-fledged design docs. As a biomedical researcher, they help me polish up my writing, write tedious administrative responses, and act as documentation for PowerPoint and Excel. As a medical student, they help me do first-pass research on new topics, as well as quickly answer things I would need to look up in long-winded reference databases. As a private individual, they help me draft boilerplate emails, recommendation letters, and in general turn outlines into text.

      If LLMs aren't making you more productive, then you have a job in which every second of your day requires your fully engaged intelligence, with no repetitive or boring responsibilities. If so, good for you! For the rest of us, LLMs are a godsend.

      I would put it to you that you could also benefit from LLMs, if only to draft tedious emails, documentation, or design docs. We tend to have a pervasive attitude on Slashdot that LLMs are too stupid to help us. I think we are missing out on many benefits because we're too proud to find how they can fit into our workflows. They are not AGIs, but that doesn't mean they aren't helpful.

      • by gweihir ( 88907 )

        I think you are kidding yourself while the quality of what you produce drops. LLMs cannot even do simple things right consistently.

        As to handing off the simple things, here is a quote for you:

                    "Someone who considers himself too important for small jobs is often too small for important jobs" -- Jaques Tati

        That applies to engineering just as much as it does to politics.

        • I agree with your premise completely! No job is beneath us. That is why I consider myself very fortunate, having had to learn the principles of my disciplines from the ground up. I studied everything up from pure mathematics through boolean logic, algorithms and data structures, operating systems, and programming languages from assembler up. Out in the real world, I similarly worked my way through every task a novice, then a junior developer, and finally a senior developer has to deal with, from tiny bugfix

  • If a tool that's not actually intelligent is 'dumb as rocks' I think the big problem is the user.
  • Counterpoint (Score:4, Interesting)

    by fahrbot-bot ( 874524 ) on Sunday March 17, 2024 @04:14PM (#64322775)

    Why Are So Many AI Chatbots 'Dumb as Rocks'?

    This kinda implies that rocks are as smart as AI chatbots.
    If so, the rocks are smart enough to keep that from us. Think about *that* ...

  • by xack ( 5304745 ) on Sunday March 17, 2024 @04:14PM (#64322777)
    AI is only being embraced by companies because they don't want to pay the going rate for human labor. If companies couldn't use AI, they would use slaves instead.
  • by SuperKendall ( 25149 ) on Sunday March 17, 2024 @04:20PM (#64322793)

    What I've found with off-and-on use of AI is that it really seems like a super thin layer doing the job of coalescing search results I would have found naturally into a single answer.

    However, an issue I've run into in trying to use AI for coding is that the hallucination factor means the code it suggests often will not work, especially when it suggests bringing in a whole framework as the bulk of the solution - when that framework does not even exist!

    For shopping, I would wonder if it actually made up products, or the existence of products even on specific websites. I think often a simple glance at the first page of a Google or Bing search might yield results just as quickly and more accurately...

    On a side note, I mention Bing because in the past month I have had Bing actually return a more obscure search result that was useful to me, one that I could not find via Google! An interesting sign that maybe Google search results are decaying in their traditionally strongest arena, coding research.

    • by gweihir ( 88907 )

      The non-existent frameworks gave me a good laugh! I picture some novice coder who desperately searches for that framework and then concludes it is being kept secret from him by a big conspiracy!

      I think calling it a "super thin layer" is pretty accurate.

    • I think this is it right here. The strength of the chatbot lies in its language processing.

      However, the actions/results the chatbot actually takes are generally going to be limited by existing systems. If all the chatbot is doing is parsing your query and then doing a 'google' or 'bing' or 'site search', then you might as well just do the search on your own. The technical competence to 'search' is also so ingrained in the culture that it is no big deal.

      Similarly if the chatbot is accessing some

    • by xanthos ( 73578 )
      As someone explained in an earlier response, the chatbot form of AI using an LLM is just parsing your question and submitting a search. Personally, I mainly ignore them when they pop up to offer help. On the rare times I try to engage with them it seems they just end up pointing me to the help section. Not seeing any real value there. Feels like the previous rush to add voice assistants / voice control to a lot of things. Alexa is pretty good at telling you the time or what the weather is, but that is
  • by penguinoid ( 724646 ) on Sunday March 17, 2024 @04:30PM (#64322819) Homepage Journal

    People just have weird expectations that a language model should for some reason be an AGI.

    • The dumb-as-rocks part is calling anything AI when it is not. Yes, we are getting into semantics, but for most people Artificial Intelligence means something intelligent like a human, with similar results, not some pattern-matching algorithm that regurgitates matching data.

      Intelligent vs. Mimic, BIG difference. At least some people are now using "language model" to describe a product instead of rubber-stamping AI on everything, which is a start.

      Marketing and lies run amok, and people are just noticing...
    • Re: (Score:2, Interesting)

      by sarren1901 ( 5415506 )

      People are dumb and believe marketing. AI is being marketed as the next coming, despite the fact that "AI" is just semantics for an algorithm that can help automate a task. There's no thinking going on here, but "artificial intelligence" implies there is.

    • It could be the constant over-naming of it as AI.
      In a world where the average person is borderline scientifically illiterate, the convention of calling these systems "artificial intelligence" when they absolutely aren't is bound to have an effect on common expectations.

  • Real insightful (Score:4, Insightful)

    by TwistedGreen ( 80055 ) on Sunday March 17, 2024 @04:30PM (#64322821)

    The real purpose of a chatbot is to reduce load on your support agents. If even 10% of people can get a useful answer from a chatbot without bothering support, that would be a huge win.

    Replacing--er, I mean, "augmenting" sales is probably the next target, but that's going to be a much tougher nut to crack. Mostly because the LLM now has to learn how to lie like a salesman. I'm not sure if we're there yet.

  • AI is all Artificial; they left out the Intelligence part. What CEOs and the media are calling AI today is just fancy automation.
  • Susan Calvin should start getting famous any day now.

  • ...but the hypemongers and pundits write articles that are somewhere between optimistic and fantasy, and gullible, non-technical readers believe them

  • To me the hype around AI is rather amusing. Never mind, it *is not* amusing, considering how much harm AI already is doing (try credit or job applications), and will do in the future. It overpromises and underdelivers, yet people trust it. But then again, I live in a country where lie detectors are used to fast track people to security clearances. Yeah, "science" baby!

    Basically, I would think there are many worthwhile, sane, useful, and non-harmful applications for AI in general and even smart (!) chat bot

  • ...NOT dumb as rocks?

  • The problem with AIs (Score:4, Informative)

    by Harvey Manfrenjenson ( 1610637 ) on Sunday March 17, 2024 @07:17PM (#64323165)

    I wonder if the next step in developing "true" AI, or general AI, will require giving the programs bodies.

    One of the striking things about current AIs is that although they have access to enormous amounts of information, they often seem to lack knowledge that is commonly shared by all reasonably-sane human beings (what psychologists call a "general fund of knowledge"). That's why they will say things that appear absurd. They also seem to lack reality testing-- they will express statements that a reasonably-sane person knows are not possible.

    Most of us acquire a substantial "general fund of knowledge" as small children, simply through daily experience. We learn that gravity is a thing, that water flows downhill, that some processes are reversible (dropping a hat) and other processes aren't (dropping a bowl of oatmeal), that it's reasonable to spend ten minutes preparing a meal but not reasonable to spend 300 years preparing a meal, that people eat chickens but chickens don't eat people, and so on and so forth-- it's a vast body of implicit and explicit knowledge that we don't ever have to think about. But if we didn't have that body of knowledge, we couldn't function, and we would appear insane.

    • I expect you're right, that true AGI will require some way to interact with the environment. We learn by exploring, after all. This could be a virtual body in a virtual environment. It just has to have consistent interactions.

      But that has nothing to do with the current crop of poseur AIs. LLMs "know" one thing: The statistical likelihood of one word following another. That's it. It turns out that when you have a huge body of data to train on you can make a model that is amazingly realistic. You can even

  • Their first reaction to any new technology is We're All Gonna Die. As soon as the apocalypse fails to materialize, they blame the capitalist system for "overhyping" the idea. Pot, kettle, black.

  • Just like "hacker" forever became a negative, now we're using "AI" for something that is most definitely not, in any way, actually AI.

    This automatically means the average person thinks that AI is going to be "smart" in some way when it's just a very energy intensive, resource hungry, bubble-inducing pattern matcher.

    Is the level of its pattern matching impressive? Yes, it is.
    Is it impressive for the amount of power it costs to actually train it? Not sure about that one...

  • by Walt Dismal ( 534799 ) on Sunday March 17, 2024 @07:44PM (#64323215)

    The AI chatbot technology is kind of broken; in one of my AI textbooks on AGI engineering (in progress) I discuss why the architectures used have deep flaws. I am not going to say why here and give Google and OpenAI clues, but the basic problem is that the methods used give surface results and do not understand deep meaning properly. The transformer architecture tries to predict likely next statements based on what it observed when it was trained. That approach is flashy and fast but simply wrong, because it is surface-level and not deep enough. And the famous 'Attention Is All You Need' paper contained some unpalatable hype and flash.

    It generally does not perform proper reasoning logic as humans do, but instead is all about computing, linguistically and probabilistically, 'if I see A, then my likely answer should be B', instead of true logic-chain-based thought, and this is done without constructing a permanent, enduring model like humans do. So it may back up and apologize in the current session but not really memorize things and put them into a corrected model. Thus if you repeat, it still won't learn the true premises and logic to use. Some of that may have been behind the Google racial bias problem (though some of that was due to imposed rules, not training).

    Right now the tech herd is adopting the technology and the hype, but both Geoff Hinton and I recognized before the current wave that things were going in a casually productive but ultimately flawed direction. The chip guys like Nvidia and the cloud crowd are making tons of money presently from a flawed paradigm, but it will taper off a bit after people wake up to what's wrong and to how much less effective it can be than expected. We need to, and we will, move to a new paradigm and better rigor, and less dishonesty in the AI industry. And SA, my eye is on you, bud.
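    To make the 'surface prediction' point concrete, here is a deliberately tiny caricature in Python: count which word tends to follow which in some training text, then always emit the statistically most likely continuation. There is no logic chain and no enduring world model anywhere in it. (To be clear, this is nothing like an actual transformer implementation; it only illustrates the 'if I see A, my likely answer is B' objection.)

      from collections import Counter, defaultdict

      # Toy "training data": a bigram table of which word follows which.
      corpus = "the cat sat on the mat . the dog sat on the rug .".split()
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def continue_text(word, steps=5):
          out = [word]
          for _ in range(steps):
              if word not in follows:
                  break
              # Always pick the most probable next word seen in training.
              word = follows[word].most_common(1)[0][0]
              out.append(word)
          return " ".join(out)

      print(continue_text("the"))  # fluent-looking output with no reasoning behind it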

  • by xlsior ( 524145 )
    Garbage in, garbage out.
  • by istartedi ( 132515 ) on Sunday March 17, 2024 @08:46PM (#64323315) Journal

    I asked Amazon what I need to start composting at home, and it referred me to a suicide hotline.

  • Let's see. Because there is no such thing as AI? That it's a marketing term used by businesses to pump their stocks? Let me know in a few centuries when we actually have AI.

  • None of these things is actual Artificial Intelligence; we've no real understanding of precisely what REAL intelligence is and how IT actually works, so we're not in any position to make the artificial version. What people are getting rich selling the promise of under the "AI" moniker is SIMULATED INTELLIGENCE, which attempts to make a result look like it was from an intelligence but actually by entirely different means. These so-called AI ChatBots are just glorified front ends for some future possible arti

  • In my experience, chatbots are designed to tarpit, stall for time, or frustrate the user into not being able to do anything on a site. Help? Hah. Never had one be able to give a usable answer.

    Sales is bad as well... a chatbot prompt taking up half a phone screen doesn't help things, and when the first thing a chatbot demands is user info, I just go elsewhere.

  • AI? (Score:1, Informative)

    by togabbai ( 9090063 )
    Information is not knowledge or intelligence.
  • What the hell did he expect?
  • This:

    I thought the shopping bot was at best a slight upgrade on searching Amazon

    Talk about setting a low bar... Amazon's search is one of the worst, quite possibly the worst, searches out there. Not only does it fail to find what you fucking clearly asked to find and lack even the most basic search amenities such as wildcarding and quoted exact phrasing, it spams the search output with complete product irrelevancies, artificially up-floated overpriced results, and actual advertising for... well, whatever,

  • They are not meant to be helpful. They're meant to sell you things.
  • My prediction... AI will never work for shopping.

    I reason this because no store will ever make an AI that users will actually want to use. Stores will always try to push what works for them - not what works for the customer. Taking a simple example, let's say I need to buy some AA batteries. The vendor will always try to push me to whatever makes them the most money or empties their warehouse the quickest, so they'll sell me a big box of batteries, or they'll sell me premium brands, or they'll sell their own br

  • I've been developing a chatbot, for my company, for the approximately 300 commonly asked questions and tasks that users have when they go to our website or call our customer service number. Here's a very brief overview and what I think is crucial in building a chatbot (from scratch):

    I learned how to build a chatbot by reading peer-reviewed papers and following online guides on how to do so. I'm an applied mathematician, so I didn't have to learn any new math. I did, however, have to review some mathema

  • Language interpreting AI is garbage because it lacks imagination.

    "Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand." -Albert Einstein

  • I admit that Artificial Idiocy does give more interesting (if ludicrous) responses than Eliza did. But they're NOT "Intelligence". They're typeahead writ large.

  • They've grown beyond the need for customers, so why spend money on customer service? All they want is for the right numbers to go up, so if you get rid of employees and embrace an AI strategy, well, the right numbers go up.

  • Searching Amazon has been made a failure intentionally. Consider doing a search. Now ask for a sort by price. Why are there way fewer items? And the one with a decent price that you saw before isn't even found? Amazon purposely crippled their search for their own purposes.
