AI

Many AI Products Still Rely on Humans To Fill the Performance Gaps (bloomberg.com) 51

An anonymous reader shares a report: Recent headlines have made one thing clear: If AI is doing an impressively good job at a human task, there's a good chance that the task is actually being done by a human. When George Carlin's estate sued the creators of a podcast who said they used AI to create a standup routine in the late comedian's style, the podcasters claimed that the script had actually been generated by a human named Chad. (The two sides recently settled the suit.) A company making AI-powered voice interfaces for fast-food drive-thrus can complete only 30% of jobs without the help of a human reviewing its work. Amazon is dropping its automated "Just Walk Out" checkout systems from new stores -- systems that relied on far more human verification than the company had hoped for.

We've seen this before -- though it may already be lost to Silicon Valley's pathologically short memory. Back in 2015, AI chatbots were the hot thing. Tech giants and startups alike pitched them as always-available, always-chipper, always-reliable assistants. One startup, x.ai, advertised an AI assistant who could read your emails and schedule your meetings. Another, GoButler, offered to book your flights or order your fries through a delivery app. Facebook also tested a do-anything concierge service called M, which could answer seemingly any question, do almost any task, and draw you pictures on demand. But for all of those services, the "AI assistant" was often just a person. Back in 2016, I wrote a story about this and interviewed workers whose job it was to be the human hiding behind the bot, making sure the bot never made a mistake or spoke nonsense.

Comments Filter:
  • ...or at least until the investors' checks clear the bank.

  • by EightBells ( 715154 ) on Friday April 12, 2024 @03:54PM (#64390136)
    Pay no attention to the man behind the curtain.
  • by RobinH ( 124750 ) on Friday April 12, 2024 @03:56PM (#64390142) Homepage
    First there were driverless cars, which are the kind of technology that's easy to get to 90%, damn hard to get to 95%, and impossible to get to 100%, but we all saw the predictions that driverless cars were only 5 years away... what... 10 years ago? Then there's bitcoin, which has literally zero intrinsic value and depends entirely on convincing someone more gullible than yourself to buy your bitcoin at a higher price than you paid; as long as the price kept going up, people like SBF found it easy to milk investors. Now there's AI, which as far as I can tell can only write high school essays and a few incorrect pages of code, and certainly doesn't do any kind of "thinking" because it's really just a word predictor. But it's all the buzz, so all the companies want to jump on the bandwagon and get investor money because they're "doing AI". These scams come and go all the time. Don't buy into the hype.
    • by gweihir ( 88907 )

      Fun fact: No actual scientists made these predictions. Their predictions were a lot more like 20-50 years for Level 5 self-driving 10 years ago. And, to be fair, Level 4 exists for some environments today and Mercedes has Level 5 for limited circumstances now (nobody else has level 5). For practical fusion, predictions are 50-100 years. Note that this is not "in wide use"; that requires another 50-100 years. And here is one more (by me, in a field where I am an expert): Secure and reliable computers and

      • I never trust self-proclaimed industry "standards". They are just a bunch of lies designed to increase the company share price. I for one will wait until NIST creates a real set of comprehensive standards, backed by this new thing I heard about called "Science".
        • by gweihir ( 88907 )

          Indeed. Caveat emptor has been a real thing since the invention of money and probably before.

      • by nasch ( 598556 )

        Level 5 means no human intervention ever, no matter what. A Level 5 car need not even be equipped with human-accessible controls. "Level 5 for limited circumstances" sounds like another name for Level 4.

    • This is all true, and the hype train drives me nuts. However, there is also some marketable value that is important not to ignore.

      Driverless cars: lane keep assist & adaptive cruise control
      Bitcoin et al: financial transactions that would otherwise be impossible
      AI: the 80% of the job it does on some coding tasks IS helpful when poking around in the dark
    • by Cyberax ( 705495 )

      First there were driverless cars, which are the kind of technology that's easy to get to 90%, damn hard to get to 95%, and impossible to get to 100%, but we all saw the predictions that driverless cars were only 5 years away... what... 10 years ago?

      They actually were pretty on-point, with the first driverless taxis being launched in Arizona around 2019. And now you can get fully driverless Waymo taxis in SF.

    • by nasch ( 598556 )

      impossible to get to 100%

      Fortunately this doesn't really matter because all technologies are impossible to get to 100%. Everything fails sometimes; nothing is perfect. All we need is better than a human driver.

      Then there's bitcoin, which has literally zero intrinsic value

      So just like fiat currency! Although with fiat currency at least there's a government that will accept it.

      Now there's AI, which as far as I can tell can only write high school essays and a few incorrect pages of code

      I've found it useful for programming in areas I'm less familiar with. It has a long way to go before it will be a society changing technology, but these sorts of things can sometimes go a long way in a short time.

  • by quonset ( 4839537 ) on Friday April 12, 2024 @03:57PM (#64390146)

    Human: what is my purpose in life?

    Rick: to be a backup for AI.

    • by ebunga ( 95613 )

      Except in this case, it's really more like, "just pretend the human is AI until we figure it out or the money runs out"

  • This is little different from how, on assembly lines, something occasionally goes wrong and the machines need human babysitters to make sure the product or machine doesn't get mangled or jammed. In this version the product is custom pictures/text, and the babysitters need to prevent defects such as hallucinations or offensive or irrelevant content. As with assembly lines, there's potential for ever less human supervision, but even with a century of refinement we still need plenty of humans.

    • Sure, and over time there is gradual improvement.

      Yet the press is happy to endlessly churn out the two stories:

      "AI is being given too much power! There are no humans overseeing things!"

      and:

      "Well, actually, AI is overseeing things! It's not quite like the dystopian AI sci-fi we've been peddling in our previous stories based on our lack of understanding!"

      • by gweihir ( 88907 )

        The press found out a few decades ago that emotion sells, while facts and rational analysis do not. Hence the press is mostly useless these days for getting an actual understanding of things.

    • by gweihir ( 88907 )

      And that is exactly what we are seeing here. AI is _dumb_ and that cannot be fixed. It can do some things that require no insight well, but at least current approaches cannot manage insight, ever. They are just not suitable for it. As we do not have even early lab demos of systems that could generate insight (we do have automated deduction, but that gets bogged down in state-space explosion so early that it is mostly useless and cannot scale at all), there is no reason to expect any actual intelligence in mac

      • The internet does not make people stupid. It just makes the stupid ones more obvious.

        And you are a prime example!

  • by jpatters ( 883 ) on Friday April 12, 2024 @04:37PM (#64390224)

    ChatGPT is wayyy closer to Eliza than it is to a hypothetical AGI.

    • by gweihir ( 88907 )

      Indeed. Essentially Eliza with a massive, massive database, but no insight whatsoever.

      • Indeed. Essentially Eliza with a massive, massive database, but no insight whatsoever.

        Which of your inane posts has ever provided any insight?

        • by gweihir ( 88907 )

          Starts with the one you are responding to. My recommendation would be to not disgrace yourself quite this publicly. Sure, everybody already knows you are an idiot, but why give people more proof?

          • As I said, you have nothing to contribute to this forum except insults.

            • by gweihir ( 88907 )

              You are confused. In your case, "idiot" is merely descriptive and a valid diagnosis supported by the observable facts.

              • I agree. "Idiot" perfectly describes you and is validly observable by your long history of inane comments.

                Internet just made you very obvious.

                • by gweihir ( 88907 )

                  And now you have regressed to the name-calling a small child does. No surprise really.

                  • Go find another post on AI and unload your usual load of crap on it!

                    You've been totally exposed on Slashdot. Get a new ID!

    • While this is true, ChatGPT is way more useful than Eliza was.

  • There is really no sane reason to assume AI will get significantly more capable. These systems are not early experiments. They are end-products. All improvements will be gradual and slow now, and many issues, for example hallucination, will not get fixed at all because nobody knows how to do it.

    • There is really no sane reason to assume AI will get significantly more capable

      Well, other than that all technology of all kinds has been getting significantly more capable since the industrial revolution began.

      For example, language translation apps were terrible, just 10 years ago, but today are pretty good. The apps are good enough now that you can actually communicate with a stranger who doesn't know your language, and you don't even have to figure out what language they speak.

      Developer IDEs have been improving their intellisense and refactoring capabilities for decades now.

      Why wo

      • Your comment is wasted on idiots like "gweihir". His sole purpose in life is to crap on any post related to AI. He has no logical arguments and will resort to insults after a couple of posts.

      • by gweihir ( 88907 )

        They had that path. It started about 30 years ago and essentially ended some years ago.

        • I assume your comment is about IDEs. I can't speak about all IDEs, but I do know first-hand that Visual Studio, and Visual Studio Code, are constantly improving, doing work for you that you used to have to do yourself. You go ahead and keep using Vim or Emacs or whatever antiquated editor you like. But don't assume that you know that modern IDEs aren't improving, because they certainly are. And GitHub Copilot takes it up a major leap forward. Now, instead of just helping auto-complete variables and function

      • For example, language translation apps were terrible, just 10 years ago, but today are pretty good. The apps are good enough now that you can actually communicate with a stranger who doesn't know your language

        I've been doing that for 15 years.

  • Basically all companies selling "AI" services today are not yet profitable. Of course, at some point they will raise their prices to become profitable -- or go out of business if it turns out that the off-shore gig-worker crowd is cheaper to fuel with junk food than it is to fuel expensive silicon chips with expensive electricity. I mean, there is a reason why most species did not evolve large brains... those are energy hungry... even in the long-optimized version built into puny humans.
  • Back in 2015, AI chatbots were the hot thing.

    Didn't one of them go full-on Nazi in like a day?

  • According to the Wall Street Journal, Theranos claimed that its Edison machine could perform various blood tests on a few drops of blood, but it had actually been using commercially available machines manufactured by other companies for most of the tests.

  • Of course humans still have to fill the gap now; it's exactly the same as how humans learn, helped by a teacher. Even though AI (theory) has been around for decades, only in the last couple of years has AI become real for practical use, and now it's speeding along faster and faster, exponentially. And AI is still only in its infancy and yet already capable of a lot.
