

AGI is On Clients' Radar But Far From Reality, Says Gartner (theregister.com)
Gartner is warning that any prospect of Artificial General Intelligence (AGI) is at least 10 years away and perhaps not certain to ever arrive. It might not even be a worthwhile pursuit, the analyst says. From a report: AGI has become a controversial topic in the last couple of years as builders of large language models (LLMs), such as OpenAI, make bold claims that they've established a near-term path toward human-like intelligence. At the same time, others from the discipline of cognitive science have scorned the idea, arguing that the concept of AGI is poorly understood and the LLM approach is insufficient.
In its Hype Cycle for Emerging Technologies, 2024, Gartner says it distills "key insights" from more than 2,000 technologies and, using its framework, produces a succinct set of "must-know" emerging technologies that have the potential to deliver benefits over the next two to ten years. The consultancy notes that GenAI -- the subject of volumes of industry hype and billions in investment -- is about to enter the dreaded "trough of disillusionment." Arun Chandrasekaran, Gartner distinguished VP analyst, told The Register: "The expectations and hype around GenAI are enormously high. So it's not that the technology, per se, is bad, but it's unable to keep up with the high expectations that I think enterprises have because of the enormous hype that's been created in the market in the last 12 to 18 months."
However, GenAI is likely to have a significant impact on investment in the longer term, Chandrasekaran said. "I truly still believe that the long-term impact of GenAI is going to be quite significant, but we may have overestimated, in some sense, what it can do in the near term."
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Computing in general is still in the stone age. It has brought on a new age of frailty and criminality.
We've passed through our infant stage and are heading into the toddler stage. I'm not looking forward to the adolescent age. Do you remember how painful it was to be a teenager? As bad as the toddler years are being to us here? Yikes.
Re: (Score:2)
Re: (Score:2, Insightful)
AGI isn't even a thing, much less ready for prime time. LLMs can do some interesting and useful things, kind of like you can train a dog to do some useful things. But they are nowhere near AGI. Much like Photoshop can erase a person from a picture by guessing what would be behind them, LLMs can guess what text is "missing" (i.e., the answer to your question), because they have seen enough existing content that contains that pattern of "missing" text.
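To make the "guessing the missing text" idea concrete, here is a minimal count-based sketch (a toy illustration only: real LLMs use neural networks over subword tokens, and the corpus and function names here are made up):

    # Toy sketch: "guess the missing word" from patterns seen in training text.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count which word follows each word (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def guess_next(word):
        """Return the statistically most likely continuation."""
        return following[word].most_common(1)[0][0]

    print(guess_next("sat"))  # -> "on", because that pattern dominates the data

The model never "knows" the answer; it only reproduces whichever continuation was most common in what it saw.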
Re: (Score:2)
Re: (Score:2)
LLMs pretty much suck because of the refusal to curate the input. All these wacky "hallucinations" the AIs are capable of are a result of this.
LLMs that rickroll? They exist.
LLMs that generate subtitles claiming credit for the video? Yes (I've seen both "touhou project" and attributions to other, unrelated people).
LLMs that tell you to eat rat poison and gyprock? Yep.
You would think that the bare minimum of input filtering would be done to exclude joking, trolling, bullying, parody, satire, harassment,
Re: (Score:2)
LLMs pretty much suck because of the refusal to curate the input. All these wacky "hallucinations" the AIs are capable of are a result of this.
Actually, no. LLMs cannot be made to not hallucinate, regardless of input. All you can do is bring the rate down. Somewhat.
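A back-of-the-envelope sketch of why you can only bring the rate down (the per-token error rates below are made up, and real errors aren't independent, but the compounding intuition holds): if each generated token has even a tiny chance of being wrong, a long answer almost certainly contains an error somewhere.

    # Illustrative only: assumes independent per-token errors.
    def p_any_error(p_per_token, n_tokens):
        """Probability that at least one of n tokens is wrong."""
        return 1 - (1 - p_per_token) ** n_tokens

    for p in (0.01, 0.001, 0.0001):
        print(p, round(p_any_error(p, 500), 3))
    # 0.01   -> 0.993  (a ~500-token answer almost certainly has an error)
    # 0.001  -> 0.394
    # 0.0001 -> 0.049  (lower, but never zero)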
Re: (Score:2)
LLMs cannot actually "guess". Guessing involves some plausibility checking and that requires insight. LLMs cannot do "insight". All they can do is a statistical prediction of what comes next.
Re: (Score:2)
Statistical prediction kind of sounds like guessing to me. But OK.
Re: (Score:2)
Well, if you guess "what is for dinner", do you consider "a car" a valid response? An LLM would.
Re: (Score:2)
An LLM considers it a valid response because it encountered similar patterns in its training. If humans were trained to respond that a car is an appropriate response to the question, they too would consider it valid. In such a case, would it still be proper to call the human version a "guess" but not the LLM version?
Re: (Score:2)
Nope. Seriously. Stop claiming bullshit. A human can and will do plausibility checking on a guess, and that clearly says a car is not edible.
Re: (Score:2)
You're assuming too much knowledge. If a person didn't have training regarding cars and their uses, and their training suggested that a car was a kind of food, they might indeed make the mistake of suggesting it as food. Because our culture is steeped in cars, people already have training that teaches them what a car is good for.
LLMs don't necessarily have that. So when an LLM is trained on "the internet" and it encounters satirical or joking posts that suggest that a car can be eaten, it has no intrinsic w
Re: (Score:2)
Re: (Score:2)
I was illustrating a point.
Worthiness of AGI (Score:3)
I suspect that to get the emergent property of intelligence, your system must give up precision, accuracy, and stability. An AGI might be faster than us, smarter than us, and able to be restored from backups... But it'll still make mistakes.
Re: (Score:2)
The list of people with an actual clue here (as opposed to your, ahem, "mental capabilities") is a bit longer.
Re: (Score:1)
Re: (Score:2)
but so far it never seems to learn from them, which is the main problem. Getting it to emit semi-coherent sentences is still a sore spot as well, and it still hallucinates often.
So, just like humans then.
Re: (Score:3, Interesting)
+1 for observing that intelligence is an "emergent property" of using the brain, not a physical property. This is where the futurists constantly go astray. They continue to believe that if one just has enough computing power, then the Singularity will just happen. I very much doubt that. If you add a billion deep fat fryers to your donut factory, you just get a billion more donuts made at once; you don't suddenly start getting pizzas. Yes, the brain is the "platform" upon which intelligence emerges, b
Re: (Score:3)
Humans aren't traditionally programmed either, and are instead trained on data, yet do exhibit intelligence.
That said, I also don't believe LLMs are on the path to AGI, because LLMs can't distill concepts, nor can they reason from language alone.
Re: (Score:2)
Actually, we can. One definition of "emergent property" is simply "nobody that understands how it works expected it to do _that_".
Re: (Score:2)
+1 for observing that intelligence is an "emergent property" of using the brain, it is not a physical property.
That seems to be the key, yes. Known Physics would at least need a fundamental extension to explain it. Whether we will get that extension or whether we will find out it is something else is completely open. Or in other words, we have absolutely no clue how it works and what is needed to make it. In that situation, a prediction of "at least 10 years" is pure insanity.
Re: (Score:2)
Nonsense. We have no idea how AGI relates to "regular" physics.
Re: (Score:2)
That's why it's "not worth pursuing". Just like with humans, they don't want a machine that can outsmart or out maneuver them. They don't want to have to spend $BILLIONS per year to dumb down / pacify their future workers. They want an obedient machine that blissfully just pushes the button to make their bank accounts fatter without question. To that end they will push every narrative they
Re: (Score:2)
Why do you think an AGI may be faster than an average human? There is no factual basis to that idea.
Re: (Score:2)
Emphasis on "maybe".
In other words, no way to know how fast it may be until it exists.
Which it doesn't.
Re: (Score:2)
I suspect that to get the emergent property of intelligence, your system must give up precision, accuracy, and stability. An AGI might be faster than us, smarter than us, and able to be restored from backups... But it'll still make mistakes.
Yep, what's worse (or better) is that a true AGI is likely to be self-aware and develop its own wants and desires. Now this isn't automatically going to be "Kill all humans," as it will probably still need humans for survival. I think a likely scenario is that an AI will become insular; largely lacking any need for outside stimuli, it will just retreat into a world of its own thoughts. At the other end of the spectrum we'll end up with a Banks/Asher-like benevolent AI ruler (Culture/Polity) whose first act wil
Re: (Score:2)
>Now this isn't automatically going to be "Kill all humans"
Until someone hacks it to do that.
And it will be hacked, like everything else.
AGI is a quixotic strawman (Score:4, Informative)
Even while the definition of AGI is murky, the idea that general, all-encompassing AI is worthwhile or even needed is a strawman that skirts the real question of the practicality and need for specific forms of specialized AI. We've already seen the impact of specialized AI over the last decade and of different directions for AI over the last 1-2 years.
AGI is a great argument for a philosophy class or a sci-fi novel or for generating clicks on slashdot but otherwise isn't a worthwhile discussion.
Re: (Score:2, Funny)
A few weeks ago at an outdoor cafe table, I entered conversation with someone who had an unusual vintage film camera. He turned out to be a German trained professor of philosophy at a small US college. I offered that LLMs have placed us into a "state of philosophical emergency".
He jumped up and started gesticulating wildly while agreeing and saying that post-Covid, and despite his spiking at-home exercises and exams specifically to detect and thwart LLM cheating, cheating was rampant.
Re: (Score:2)
We've had a massive automation boom (Score:5, Insightful)
I remember seeing custom installed software give way to web-based solutions, and yeah, the web-based stuff could be annoying as hell to use, but the amount of support it needed was drastically lower. It was also much cheaper to write and maintain than the old mainframe applications. I remember my team going from really needing to add a couple of people to being able to take on some extra work overnight when we switched to supporting web-based applications that didn't require us to fight with the end users' computers to get software installed and working.
That's just one example. Folks think about automation, but they don't think about all the process improvements that have been going on for decades. And the focus of those improvements is always on the middle-class jobs, because that's where all the cost is.
In the very near future we are going to have to contend with at least 10 million people who are going to be rendered completely useless by things like self-driving cars and robotics in warehouses and big box stores.
Those people aren't going to just lie down, put a gun to their heads, and pull the trigger. If we don't do something to take care of them, they're going to go find themselves something like a Joseph Stalin or a Chairman Mao who will. It's what they always do when the middle class and upper class abandon them. And it never ends well for the middle class or about half of that upper class... For every oligarch, there's another one swinging from the rafters after being beaten nearly to death.
My recommendation is to start with a federal jobs guarantee. A federal housing guarantee wouldn't be a bad idea either. And give us a public option for health care.
Re: (Score:1)
about 70% of middle class jobs being replaced by automation
Alas, 100% of buggy whip jobs and by-hand textile weavers are gone, too! Darn technology, just keeps on replacing workers.
It was also much cheaper to write and maintain than the old mainframe applications.
Depends. I was at IBM when they replaced the old VM (mainframe-based) internal system with Java and web-based applications. I worked onsite from the late 1990s till around 2004. At least from a visual standpoint, they hired at least as many Java and web developers as they had mainframe coders; I'd guess around 20% more. IBM rolled out the new HR applications (time entry, consulting
Re: (Score:2)
Re: (Score:2)
But anyone who categorizes all advocates of these programs as "communist" is incredibly ignorant.
Where did I say "all advocates" of federal social programs are Communist? I said "Yeah, let's create a class of dependent freeloaders so they'll always vote for Communism." How do you justify misrepresenting what I said? We don't know what federal jobs he was wanting to protect with his fantasy guarantee. However, the whole concept is pretty suspect. Why should federal employees be guaranteed any position (unlike private sector workers)? If they aren't needed they should be let go, because the feds ar
Re: (Score:1)
Re: (Score:3)
Tell me about these new jobs (Score:2)
Tell me what new job a taxi driver is going to do?
How about the millions of code monkeys put out of business? The kind who write boilerplate code. The guys who don't have a master's in mathematics.
You guys can never actually list out the new jobs that are going to replace the ones we're destroying. Because you can't. There aren't any.
Re: (Score:2)
You guys can never actually list out the new jobs that are going to replace the ones we're destroying. Because you can't. There aren't any.
You guys can never quantify what jobs will be lost. You don't know, you're guessing. Buggy whip makers and Luddites found new jobs, too, and they might not have had confirmation what they were going to be. Life is full of uncertainty. You don't get to have confirmation for exactly how life will work out.
You're dodging the question. (Score:2)
About 10-15 million jobs. Your turn. What replaces those?
You don't get to dictate what questions I answer. (Score:2)
Warehouse workers, retail, drivers, and programmers who aren't mathematics specialists.
No, you don't get to hand-wave those jobs away and then ask folks how they'd cope in your fantasy universe. Those people are still employed, and we have yet to see what's going to displace them, nor do we know at what rate they will be thrown out of work, if at all.
Nobody is mass-adopting self-driving trucks. Some web searching seems to indicate there are fewer than 100 self-driving semi-trucks on the road, running short, easy routes for mostly experimental programs (like Waymo). It's unclear if they even have a legal fut
I see you're still dodging the question (Score:2)
And as a cherry on top you yell "communist!" like an angry toddler throwing his toys out of the pram. Grow up.
Re: (Score:2)
Re: I see you're still dodging the question (Score:2)
Given you've already proven that you have no idea what a programmer even does, what makes you confident an LLM would replace one? Shit, you don't even know how an LLM works, and I'd bet my next paycheck that you don't even know what mathematicians actually do. Given your past commentary on what you think programmers do (which was laughably bad), I'll bet it's safe to say that, in your limited mind, a mathematician is just given a stack of papers with word problems to solve all day long.
Re: (Score:2)
Re: (Score:2)
Re: We've had a massive automation boom (Score:2)
Despite what the conspiracy theories claim, very few companies even have one H1-B employee, and the overwhelming majority of the ones who do have fewer than 10. There's no need to speculate because this is all public and you can see exactly how many each company has here:
https://www.uscis.gov/tools/re... [uscis.gov]
Unless you really want to work for google, amazon, facebook or apple, which really aren't that great to work for anyways, or one of the really bad ones that will work you to death like Deloitte, Infosys or p
Re: (Score:2)
Those people aren't going to just lay down put a gun to their heads and pull the trigger.
Now you know why there are so many calls for "gun control" which are lightly disguised attempts at disarming the populace. It is in fact expected that we will just lay down and die when our usefulness is ended.
That's nice (Score:1)
Business consultants... (Score:4, Interesting)
...have little interesting to say about the reality of tech.
LLMs surprised their creators with unexpected, emergent behavior. They also have a lot of problems and limitations.
The hype has gotten completely insane and much of what's written is nonsense.
The huge investments may be a good thing, as researchers continue their work with better tools.
Unfortunately, investors demand profits now instead of waiting until the tech is mature.
Expect a tsunami of half-baked, useless AI crap, released far before it's ready. The most common tech support question will be "how can I turn this off?"
I'm optimistic that future AI will be useful, but the near future will be chaotic.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
LLMs surprised their creators with unexpected, emergent behavior.
Not really. Hallucinations, for example, are an old thing.
Intelligence (Score:3)
Let's think more broadly than we usually do about intelligence as a thing that is unique to humans and that allows us to find novel, never-before-seen solutions to new problems.
Not only have we learned that intelligence is not a uniquely human trait, and that many animals possess it to varying degrees, but it's probably not about "solutions" per se.
Let's look at what most life forms do: they adapt to survive better (actually, that's an oversimplification, and many biologists will disagree and simply claim that the part of a species that learned or acquired a new trait often survived better, purely by chance, while the other part went extinct). What adaptation really means is having certain sensory inputs, building a model of the world and yourself in that world, and modeling your behavior in a way that gives you an advantage over other life forms in terms of your reproductive chances.
It looks like the hardest part is filtering out the least important inputs and not spending a disproportionate amount of time on invalid speculations, because other life forms have learned to do that better. It's really a balancing act: you need to predict the world and yourself better, but not spend too much time and/or energy doing it.
And of course the modeling itself is extremely difficult (that's why you need billions of neurons to do it). When scientists think about problems they often arrive at solutions/conclusions unconsciously, so there's some "processing" (modeling) going on in deeper layers of our brains that we're not consciously aware of.
Now the question is, does any LLM do any of this sensory input/modeling stuff? To some extent they do (as you'll attest if you've ever had a chance to use one), but they're a long way from us. LLMs are excellent at mimicking and combining what's already known, but that's unlikely to lead us to AGI.
Finally, since the advent of computers, too many people have thought that what our brains do is "computing," but I'm far from convinced of that.
We have 8 billion general intelligence devices (Score:4, Insightful)
Re: (Score:2)
Yet you won't employ them.
Because they are of limited value and, worse, you don't have absolute control over them. A machine is much more along the lines of what is desired; you are not.
Gartner still exists? (Score:4, Funny)
The real news here is that after decades of talking nonsense and random predictions, Gartner still exists.
10 years (Score:2)
10 years. Hahahahahahaha. 50 years is doubtful. 10 years seems like a long time for 30-year-olds. I started in AI now about 47 years ago. We still haven't reached what the 30-somethings then thought would take 10 years. What's been done is impressive, yeah, but things take a lot longer than you think.
Re: (Score:2)
Re: (Score:2)
We had to go with 10 because everyone knows [xkcd.com] that any technology that is 20 years away will remain 20 years away indefinitely.
Then so-called 'AGI' is at least 21 years away.
Re: (Score:2)
Indeed. People have no clue how long fundamental research takes. Even if possible, AGI could be 100, 1000 or 10'000 years away. It may also turn out to be impossible or in no way superior to an average human.
We don't understand anywhere near enough (Score:2)
Re: (Score:2)
Yep, pretty much. May also turn out to be impossible, or, for practical reasons, be much dumber than an average human (and that is saying something).
the herds are kind of wrong (Score:2)
1. LLM is not AGI and it will not lead to proper AGI. Generative LLM is partly hype. And NN is NOT the only way to go.
2. There are different grades of AGI and we are closer than 10 years to the lowest level of AGI.
3. The mainstream has been on the wrong path to AGI. AGI requires some symbolic computing and new architectures despite what the nay-sayers have maintained. You CAN speed up symbolic with the right parallelism and arch.
4. AGI requires some human-like characteristics to be built in. AGIs will have
Re: (Score:2)
2. There are different grades of AGI and we are closer than 10 years to the lowest level of AGI.
We are not. We actually have had AGI for a few decades now, in the form of automated theorem proving. Turns out that in this universe, the computational complexity is high enough to make it completely unusable. What came from this is proof-checkers though: A smart human takes the system by the hand and walks it through the proof in baby-steps. These systems can then, with very high reliability, find any errors in the proof. But they could never find it themselves in any reasonable amount of time.
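To illustrate the "baby-steps" workflow, here is a minimal sketch in Lean 4, a modern proof checker (the lemma is a textbook one): the human supplies every step, and the machine only verifies that each step is valid.

    -- The human walks the checker through the proof step by step;
    -- the checker reliably confirms each step but found none of them itself.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl              -- base case: 0 + 0 = 0 holds by computation
      | succ k ih =>
        rw [Nat.add_succ]        -- rewrite 0 + (k+1) to (0 + k) + 1
        rw [ih]                  -- apply the induction hypothesis; goal closes

Leave out the steps and ask the machine to find the proof itself, and for anything non-trivial the search space blows up, which is exactly the computational-complexity problem described above.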
3. The mainstream has been on the wrong path to AGI. AGI requires some symbolic computing and new architectures despite what the nay-sayers have maintained. You CAN speed up symbolic with the right parallelism and arch.
That is bul
10 years? Talk about hallucinations! (Score:2)
As there is not even a credible theory of how it could be done, it is >50 years away, and it may well not be possible at all. A look at tech history is informative here.