AGI is On Clients' Radar But Far From Reality, Says Gartner (theregister.com) 79

Gartner is warning that any prospect of Artificial General Intelligence (AGI) is at least 10 years away and may never arrive at all. It might not even be a worthwhile pursuit, the analyst firm says. From a report: AGI has become a controversial topic in the last couple of years as builders of large language models (LLMs), such as OpenAI, make bold claims that they've established a near-term path toward human-like intelligence. At the same time, others from the discipline of cognitive science have scorned the idea, arguing that the concept of AGI is poorly understood and the LLM approach is insufficient.

In its Hype Cycle for Emerging Technologies, 2024, Gartner says it distills "key insights" from more than 2,000 technologies and, using its framework, produces a succinct set of "must-know" emerging technologies that have the potential to deliver benefits over the next two to ten years. The consultancy notes that GenAI -- the subject of volumes of industry hype and billions in investment -- is about to enter the dreaded "trough of disillusionment." Arun Chandrasekaran, Gartner distinguished VP analyst, told The Register: "The expectations and hype around GenAI are enormously high. So it's not that the technology, per se, is bad, but it's unable to keep up with the high expectations that I think enterprises have because of the enormous hype that's been created in the market in the last 12 to 18 months."

However, GenAI is likely to have a significant impact on investment in the longer term, Chandrasekaran said. "I truly still believe that the long-term impact of GenAI is going to be quite significant, but we may have overestimated, in some sense, what it can do in the near term."

Comments Filter:
  • by Baron_Yam ( 643147 ) on Thursday August 22, 2024 @12:15PM (#64726724)

    I suspect that to get the emergent property of intelligence, your system must give up precision, accuracy, and stability. An AGI might be faster than us, smarter than us, and able to be restored from backups... But it'll still make mistakes.

    • Re: (Score:3, Interesting)

      +1 for observing that intelligence is an "emergent property" of using the brain; it is not a physical property. This is where the futurists constantly go astray. They continue to believe that if one just has enough computing power, then the Singularity will just happen. I very much doubt that. If you add a billion deep fat fryers to your donut factory, you just get a billion more donuts made at once; you don't suddenly start getting pizzas. Yes, the brain is the "platform" upon which intelligence emerges, b

      • Except we have no idea how either consciousness or intelligence works. So we can't say either is emergent.

        Because [LLMs] aren't traditionally programmed, but instead are trained on data, they give the false impression that they are exhibiting intelligence.

        Humans aren't traditionally programmed either, and are instead trained on data, yet do exhibit intelligence.

        That said, I also don't believe LLMs are on the path to AGI, because LLMs can't distill concepts, nor can they reason from language alone.

        • by gweihir ( 88907 )

          Actually, we can. One definition of "emergent property" is simply "nobody that understands how it works expected it to do _that_".

      • by gweihir ( 88907 )

        +1 for observing that intelligence is an "emergent property" of using the brain; it is not a physical property.

        That seems to be the key, yes. Known Physics would at least need a fundamental extension to explain it. Whether we will get that extension or whether we will find out it is something else is completely open. Or in other words, we have absolutely no clue how it works and what is needed to make it. In that situation, a prediction of "at least 10 years" is pure insanity.

        • Known Physics would at least need a fundamental extension to explain it

          Nonsense. We have no idea how AGI relates to "regular" physics.

    • More like AGI would have the ability to reason and tell its corrupt owners the word they never hear from humans: No.

      That's why it's "not worth pursuing". Just like with humans, they don't want a machine that can outsmart or outmaneuver them. They don't want to have to spend $BILLIONS per year to dumb down / pacify their future workers. They want an obedient machine that blissfully just pushes the button to make their bank accounts fatter without question. To that end they will push every narrative they
    • by gweihir ( 88907 )

      Why do you think an AGI may be faster than an average human? There is no factual basis to that idea.

    • by mjwx ( 966435 )

      I suspect that to get the emergent property of intelligence, your system must give up precision, accuracy, and stability. An AGI might be faster than us, smarter than us, and able to be restored from backups... But it'll still make mistakes.

      Yep, what's worse (or better) is that a true AGI is likely to be self-aware and develop its own wants and desires. Now this isn't automatically going to be "Kill all humans", as it will probably still need humans for survival. I think a likely scenario is that an AI will become insular; largely lacking any need for outside stimuli, it will just retreat into a world of its own thoughts. At the other end of the spectrum we'll end up with a Banks/Asher-like benevolent AI ruler (Culture/Polity) whose first act wil

      • >Now this isn't automatically going to be "Kill all humans"
        Until someone hacks it to do that.
        And it will be hacked, like everything else.

  • by larryjoe ( 135075 ) on Thursday August 22, 2024 @12:18PM (#64726736)

    Even while the definition of AGI is murky, the idea that general, all-encompassing AI is worthwhile or even needed is a strawman that skirts the real question of the practicality and need for specific forms of specialized AI. We've already seen the impact of specialized AI over the last decade and of different directions for AI over the last 1-2 years.

    AGI is a great argument for a philosophy class, a sci-fi novel, or generating clicks on Slashdot, but otherwise isn't a worthwhile discussion.

    • Re: (Score:2, Funny)

      A few weeks ago, at an outdoor cafe table, I struck up a conversation with someone who had an unusual vintage film camera. He turned out to be a German-trained professor of philosophy at a small US college. I offered that LLMs have placed us into a "state of philosophical emergency".

      He jumped up and started gesticulating wildly while agreeing and saying that post-Covid, and despite his spiking at-home exercises and exams specifically to detect and thwart LLM cheating, cheating was rampant.

    • Comment removed based on user account deletion
  • by rsilvergun ( 571051 ) on Thursday August 22, 2024 @12:18PM (#64726738)
    This has been going on for 40 years now, and nobody talks about it. If you Google for a Business Insider article about 70% of middle-class jobs being replaced by automation, you'll find a good article that references a solid study.

    I remember seeing custom installed software give way to web-based solutions, and yeah, the web-based stuff could be annoying as hell to use, but the amount of support it needed was drastically lower. It was also much cheaper to write and maintain than the old mainframe applications. I remember my team going from really needing to add a couple of people to being able to take on extra work overnight when we switched to supporting web-based applications that didn't require us to fight with the end users' computers to get software installed and working.

    That's just one example. Folks think about automation, but they don't think about all the process improvements that have been going on for decades. And the focus of those improvements is always the middle-class jobs, because those are where all the cost is.

    In the very near future we are going to have to contend with at least 10 million people who are going to be rendered completely useless by things like self-driving cars and robotics in warehouses and big box stores.

    Those people aren't going to just lie down, put a gun to their heads, and pull the trigger. If we don't do something to take care of them, they're going to go find themselves a Joseph Stalin or a Chairman Mao who will. It's what they always do when the middle class and upper class abandon them. And it never ends well for the middle class or about half of that upper class... For every oligarch there's another one swinging from the rafters after being beaten near to death.

    My recommendation is to start with a federal jobs guarantee. A federal housing guarantee wouldn't be a bad idea either. And give us a public option for health care.
    • about 70% of middle-class jobs being replaced by automation

      Alas, 100% of buggy whip jobs and by-hand textile weavers are gone, too! Darn technology, just keeps on replacing workers.

      It was also much cheaper to write and maintain than the old mainframe applications.

      Depends. I was at IBM when they replaced the old VM (mainframe-based) internal system with Java and web-based applications. I worked onsite from the late 1990s till around 2004. At least from a visual standpoint, they hired just as many or more Java and web developers than they had mainframe coders. I'd guess around 20% more. IBM rolled out the new HR applications (time entry, consulting

      • by muvol ( 1226860 )
        Some social programs are beneficial to society, even though they cost money. Other social programs are not only beneficial to society, they actually pay for themselves financially. Of course, there are also some wasteful programs, and there are some freeloaders. But anyone who categorizes all advocates of these programs as "communist" is incredibly ignorant. They should turn off the Fox and do some real homework - starting with learning the difference between communism and socialism. Some people are h
        • But anyone who categorizes all advocates of these programs as "communist" is incredibly ignorant.

          Where did I say "all advocates" of federal social programs are Communist? I said "Yeah, let's create a class of dependent freeloaders so they'll always vote for Communism." How do you justify misrepresenting what I said? We don't know what federal jobs he was wanting to protect with his fantasy guarantee. However, the whole concept is pretty suspect. Why should federal employees be guaranteed any position (unlike private sector workers)? If they aren't needed they should be let go because the feds ar

          • by muvol ( 1226860 )
            Well, that was a much better answer. So you do have some good points. Maybe next time lead with your rationale instead of name calling and belittling people.
            • One of the chief harms of McCarthyism was that it made calling someone a Communist seem extreme or out of touch. The truth is that if you spout ideas at the core of communism (redistribution, no private property, central planning, collectivism), there are now those of us who just don't care if the hoi polloi get all cringy when someone is called out for being a Communist.
      • Buggy whip manufacturers got jobs building cars. Cars were new. They replaced horses.

        Tell me: what new job is a taxi driver going to do?

        How about the millions of code monkeys put out of business? The kind who write boilerplate code. The guys who don't have a master's in mathematics.

        You guys can never actually list out the new jobs that are going to replace the ones we're destroying. Because you can't. There aren't any.
        • You guys can never actually list out the new jobs that are going to replace the ones we're destroying. Because you can't. There aren't any.

          You guys can never quantify what jobs will be lost. You don't know; you're guessing. Buggy whip makers and Luddites found new jobs, too, and they might not have known in advance what those jobs would be. Life is full of uncertainty. You don't get to have confirmation for exactly how life will work out.

          • Warehouse workers, retail, drivers, and programmers who aren't mathematics specialists.

            About 10-15m jobs. Your turn. What replaces those?
              Warehouse workers, retail, drivers, and programmers who aren't mathematics specialists.

              No, you don't get to hand-wave those jobs away and then ask folks how they'd cope in your fantasy universe. Those people are still employed, and we have yet to see what's going to displace them, nor do we know at what rate they will be thrown out of work, if at all.

              Nobody is mass-adopting self-driving trucks. Some web searching seems to indicate there are fewer than 100 semi-trucks on the road self-driving short, easy runs for mostly experimental programs (like Waymo). It's unclear if they even have a legal fut

              • No answers, just making shit up. Typically weird behavior from a right winger.

                And as a cherry on top you yell "communist!" like an angry toddler throwing his toys out of the pram. Grow up.
                • You're not asking for answers, you're asking for someone to join your fantasy and speculation. Nobody is yelling anything. When a Communist spouts communism, it's completely ordinary to call it what it is.
                • Given you've already proven that you have no idea what a programmer even does, what makes you confident an LLM would replace one? Shit, you don't even know how an LLM works, and I'd bet my next paycheck that you don't even know what mathematicians actually do. Given your past commentary on what you think programmers do (which was laughably bad), I'll bet it's safe to say that, in your limited mind, a mathematician is just given a stack of papers with word problems to solve all day long.

      • Comment removed based on user account deletion
        • I never thought a Luddite rebellion was impossible; I'm just saying it's foolish. Don't break the loom; go find more satisfying work if you are worried about being a factory drone. That's why it's nice to live somewhere that allows entrepreneurs, because necessity is the mother of invention. For instance, George Mellor, a leader among the Luddites in Yorkshire, was originally a weaver but became involved in farming after the Luddite actions diminished.
    • Those people aren't going to just lie down, put a gun to their heads, and pull the trigger.

      Now you know why there are so many calls for "gun control" which are lightly disguised attempts at disarming the populace. It is in fact expected that we will just lay down and die when our usefulness is ended.

  • Similar to how inventing the calculator gave us nuclear fusion within the next 10 years.
  • by MpVpRb ( 1423381 ) on Thursday August 22, 2024 @12:24PM (#64726760)

    ...have little of interest to say about the reality of tech.
    LLMs surprised their creators with unexpected, emergent behavior. They also have a lot of problems and limitations.
    The hype has gotten completely insane and much of what's written is nonsense.
    The huge investments may be a good thing, as researchers continue their work with better tools.
    Unfortunately, investors demand profits now instead of waiting until the tech is mature.
    Expect a tsunami of half-baked, useless AI crap, released far before it's ready. The most common tech support question will be "how can I turn this off?"
    I'm optimistic that future AI will be useful, but the near future will be chaotic.

  • by Artem S. Tashkinov ( 764309 ) on Thursday August 22, 2024 @12:39PM (#64726810) Homepage

    Let's think more broadly than we usually do about intelligence as a thing that is unique to humans and that allows us to find novel, never-before-seen solutions to new problems.

    Not only have we learned that intelligence is not a uniquely human trait, and that many animals possess it to varying degrees, but it's probably not about "solutions" per se.

    Let's look at what most life forms do: they adapt to survive better (actually, that's an oversimplification, and many biologists will disagree and simply claim that the part of a species that learned or acquired a new trait often survived better, purely by chance, while the other part went extinct). What adaptation really means is having certain sensory inputs, building a model of the world and yourself in that world, and modeling your behavior in a way that gives you an advantage over other life forms in terms of your reproductive chances.

    It looks like the hardest part is filtering out the least important inputs and not spending a disproportionate amount of time on invalid speculations, because other life forms have learned to do that better. It's really a balancing act: you need to predict the world and yourself better, but not spend too much time and/or energy doing it.

    And the hardest part, of course, is that modelling is extremely difficult (that's why you need billions of neurons to do it). When scientists think about problems, they often arrive at solutions/conclusions unconsciously, so there's some "processing" (modelling) going on in deeper layers of our brains that we're not consciously aware of.

    Now the question is, does any LLM do any of this sensory input/modeling stuff? To some extent they do (if you've ever had a chance to use one, you'll attest), but they're a long way from us. LLMs are excellent at mimicking and combining what's already known, but that's unlikely to lead us to AGI.

    Finally, since the advent of computers, too many people have thought that what our brains do is "computing," but I'm far from convinced of that.

  • by xack ( 5304745 ) on Thursday August 22, 2024 @12:46PM (#64726840)
    Yet you won't employ them.
    • Yet you won't employ them.

      Because they are of limited value and, worse, you don't have absolute control over them. A machine is much more along the lines of what is desired; you are not.

  • by 1s44c ( 552956 ) on Thursday August 22, 2024 @12:51PM (#64726860)

    The real news here is that, after decades of talking nonsense and making random predictions, Gartner still exists.

  • 10 years. Hahahahahahaha. 50 years is doubtful. 10 years seems like a long time to 30-year-olds. I started in AI about 47 years ago. We still haven't reached what the 30-somethings then thought would take 10 years. What's been done is impressive, yeah, but things take a lot longer than you think.

    • We had to go with 10 because everyone knows [xkcd.com] that any technology that is 20 years away will remain 20 years away indefinitely.
      • We had to go with 10 because everyone knows [xkcd.com] that any technology that is 20 years away will remain 20 years away indefinitely.

        Then so-called 'AGI' is at least 21 years away.

    • by gweihir ( 88907 )

      Indeed. People have no clue how long fundamental research takes. Even if possible, AGI could be 100, 1,000 or 10,000 years away. It may also turn out to be impossible, or in no way superior to an average human.

  • We don't really have a clue how actual intelligence works, so so-called 'AGI' won't be possible at all until we do understand it -- and we don't even have the technology to begin to understand how actual biological intelligence works.
    • by gweihir ( 88907 )

      Yep, pretty much. May also turn out to be impossible, or, for practical reasons, be much dumber than an average human (and that is saying something).

  • 1. LLM is not AGI and it will not lead to proper AGI. Generative LLM is partly hype. And NN is NOT the only way to go.
    2. There are different grades of AGI, and we are closer than 10 years to the lowest level of AGI.
    3. The mainstream has been on the wrong path to AGI. AGI requires some symbolic computing and new architectures, despite what the naysayers have maintained. You CAN speed up symbolic computing with the right parallelism and architecture (a toy sketch of that idea follows this list).
    4. AGI requires some human-like characteristics to be built in. AGIs will have
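
    As a hypothetical illustration of the parallelism claim in point 3 (an editor's toy sketch, not anything the poster spells out): a brute-force symbolic search splits naturally into independent branches that a process pool can explore concurrently. Here the search enumerates small arithmetic expressions that fit sample data; the grammar, constant range, and samples are all invented for the example.

      # Toy parallel symbolic search: find expressions (x op1 c1) op2 c2
      # that reproduce the sample input/output pairs. Each worker takes one
      # independent (op1, op2) branch of the search space.
      from itertools import product
      from multiprocessing import Pool

      OPS = {"+": lambda a, b: a + b,
             "-": lambda a, b: a - b,
             "*": lambda a, b: a * b}

      SAMPLES = [(1, 3), (2, 5), (3, 7)]  # hidden target: f(x) = 2*x + 1

      def search_branch(ops_pair):
          """Exhaustively search one (op1, op2) branch for a fitting expression."""
          op1, op2 = ops_pair
          for c1, c2 in product(range(-5, 6), repeat=2):
              if all(OPS[op2](OPS[op1](x, c1), c2) == y for x, y in SAMPLES):
                  return f"(x {op1} {c1}) {op2} {c2}"
          return None  # nothing in this branch fits

      if __name__ == "__main__":
          branches = list(product(OPS, repeat=2))  # 9 independent branches
          with Pool() as pool:                     # searched concurrently
              hits = [r for r in pool.map(search_branch, branches) if r]
          print(hits)  # ['(x * 2) + 1', '(x * 2) - -1']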

    • by gweihir ( 88907 )

      2. There are different grades of AGI, and we are closer than 10 years to the lowest level of AGI.

      We are not. We actually have had AGI for a few decades now, in the form of automated theorem proving. It turns out that in this universe, the computational complexity is high enough to make it completely unusable. What did come out of it, though, is proof-checkers: a smart human takes the system by the hand and walks it through the proof in baby steps. These systems can then, with very high reliability, find any errors in the proof. But they could never find the proof themselves in any reasonable amount of time.
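
      To make that baby-steps workflow concrete, here is a minimal sketch in Lean 4 (one of several modern proof-checkers; the example is an editor's, not the poster's): the human picks the induction and names every rewrite, and the checker merely verifies that each step is legal.

        -- Human-guided proof that addition on the naturals commutes.
        -- The checker validates each step; it does not find the proof itself.
        theorem add_comm_example (a b : Nat) : a + b = b + a := by
          induction a with
          | zero => simp                -- base case: 0 + b = b + 0
          | succ n ih =>                -- inductive step, guided by hypothesis ih
            rw [Nat.succ_add, ih, Nat.add_succ]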

      3. The mainstream has been on the wrong path to AGI. AGI requires some symbolic computing and new architectures, despite what the naysayers have maintained. You CAN speed up symbolic computing with the right parallelism and architecture.

      That is bul

  • As there is not even a credible theory of how it could be done, it is >50 years away, and it may well not be possible at all. A look at tech history is informative here.
