
Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End (futurism.com) 120
Founded in 1979, the Association for the Advancement of AI is an international scientific society. Recently 25 of its AI researchers surveyed 475 respondents in the AAAI community about "the trajectory of AI research" — and their results were surprising.
Futurism calls the results "a resounding rebuff to the tech industry's long-preferred method of achieving AI gains" — namely, adding more hardware: You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...
"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russel, a computer scientist at UC Berkeley who helped organize the report, told New Scientist. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued...." In November last year, reports indicated that OpenAI researchers discovered that the upcoming version of its GPT large language model displayed significantly less improvement, and in some cases, no improvements at all than previous versions did over their predecessors. In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up."
Cheaper, more efficient approaches are being explored. OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, researchers claimed. But this approach is "unlikely to be a silver bullet," Arvind Narayanan, a computer scientist at Princeton University, told New Scientist.
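For readers who want to see what "test-time compute" means mechanically, here is a minimal Python sketch of one common form of it, best-of-N sampling with a scoring step. The sample_answer and score_answer functions are hypothetical placeholders standing in for a real model and a real verifier, not any vendor's actual API.

    import random

    def sample_answer(prompt, temperature=0.8):
        # Placeholder for one stochastic model call; a real system would
        # query an LLM here. We fake it with canned candidate answers.
        return random.choice(["answer A", "answer B", "answer C"])

    def score_answer(prompt, answer):
        # Placeholder verifier / reward model. A real one might check the
        # reasoning steps or run tests; here it is just a random score.
        return random.random()

    def best_of_n(prompt, n=16):
        # Spend extra compute at inference time: draw n candidates,
        # score each, and keep the most promising one.
        candidates = [sample_answer(prompt) for _ in range(n)]
        return max(candidates, key=lambda a: score_answer(prompt, a))

    print(best_of_n("What is 17 * 24?"))

The point of the sketch is only that extra search at inference time can substitute for some amount of extra training-time scale, which is the trade-off described above.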
At the stage of Alchemy (Score:3, Insightful)
But it still ain't like the science of Chemistry yet.
Read any good books lately? (Score:4, Insightful)
On this topic I think A Thousand Brains by Jeff Hawkins was the best I've seen so far. There are some books about training and the toolkits, but such books are always a few versions behind, whereas Hawkins is talking about what's wrong with the approach relative to how the human brain actually works. (That was the good stuff in the first part. The later parts were provocative but not so "useful".) My conclusion is that the generative AIs are more like the "language machine" Chomsky was describing decades ago, but the actual functioning of human intelligence is more like a turf war... The cortical columns don't care what kind of processing they are doing and perhaps consciousness is something that emerges in the higher-level control channels?
Academic research pie (Score:2)
Conjecture - Mid and late career academics who've researched areas besides the current machine learning bubble are not getting funding for grants.
Re: (Score:2)
My hypothesis is that many single celled creatures can think too - they need to, to build shells, evade danger, hunt for food etc. So the problem of thinking was already partly solved.
Is that thinking, or just instinctive reflex? I'd say "thinking" is a higher level function, less like, "I'm cold; let's seek shelter," and more along the lines of, "How do I avoid this situation in the future so I don't have to seek shelter?" Which requires observation and analysis of a situation, planning, resource acquisition and deployment, etc.
Deepseek (Score:4, Insightful)
I think that Deepseek showed that the current AI approach of just throwing more hardware at AI is a dead end.
Unfortunately, the wizards at the tech monopolist companies haven't understood the message.
Re:Deepseek (Score:4, Interesting)
You are now talking about OpenAI. Google DeepMind invented the technology ChatGPT is based on, but they abandoned it because they concluded years ago that it is a dead end. They are constantly trying to make an AI that doesn't need to read the whole internet in order to learn.
Re: (Score:2)
It's hard to know what the closed models are doing. Open models getting stuck in dense or very unambitious levels of dynamic sparsity might not be representative of the closed models. For all we know most of the major closed models are just as cheap to train and run.
Re: (Score:2)
I think that Deepseek showed that the current AI approach of just throwing more hardware at AI is a dead end.
Actually, the DeepSeek example in the article demonstrates precisely the opposite. DeepSeek's innovation, called the "mixture of experts," involves multiple specialized neural networks efficiently coordinated to outperform single massive models. Rather than throwing more hardware at the problem, DeepSeek uses existing hardware more efficiently (a toy sketch of the routing idea follows below this comment).
Unfortunately, the wizards at the tech monopolist companies haven't understood the message.
While it's true that major tech companies continue massive investments in scaling infrastructure, this does not reflect DeepSeek’s approach. DeepSeek explicitly
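To make the "multiple specialized networks coordinated by a router" idea concrete, here is a minimal numpy sketch of top-1 mixture-of-experts routing. It is a toy illustration of the general MoE pattern, not DeepSeek's actual architecture; the sizes and random weights are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_hidden, n_experts = 8, 16, 4

    # Each "expert" is a small two-layer MLP with its own weights.
    experts = [(rng.normal(size=(d_model, d_hidden)),
                rng.normal(size=(d_hidden, d_model))) for _ in range(n_experts)]
    # The gating network scores how relevant each expert is for a token.
    gate_w = rng.normal(size=(d_model, n_experts))

    def moe_forward(x):
        # x: one token embedding of shape (d_model,)
        logits = x @ gate_w
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        k = int(np.argmax(probs))          # top-1 routing: pick one expert
        w1, w2 = experts[k]
        hidden = np.maximum(x @ w1, 0.0)   # ReLU
        return probs[k] * (hidden @ w2)    # only one expert's weights are used

    token = rng.normal(size=d_model)
    print(moe_forward(token).shape)        # (8,)

Only the selected expert's parameters are used for each token, which is how an MoE model can have a very large total parameter count while spending far less compute per token than a dense model of the same size.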
Re: (Score:2)
Indeed. It was pretty clear before that this approach will not deliver much, though. But there are always the blue-eyed hopefuls that have a tentative connection to reality at best.
the longer it takes for bubble to pop (Score:3, Insightful)
...the bigger the pop. Form a rainy-day fund.
Re: the longer it takes for bubble to pop (Score:1)
Not really. The pops leave a lot of leave-behinds that form the foundation of subsequent innovation and growth.
The broadband internet of the late 90s and early 2000s was motivated by telecoms cashing in on the first dotcom bubble, for example.
Re: (Score:2)
So short it. You've clearly got the brains.
I'm Not Surprised (Score:5, Insightful)
Re: I'm Not Surprised (Score:2)
I can't speak to bio applications, but I've gotten a solid exposure to AI in weather.
It is not "better" than the traditional models, in fact the AI models fall apart on a fairly short timeline. However they are useful for very short term forecasts done with less computation. You want to offer specific, precise forecasts over the next 8 hours, AI is going to be a solid strategy. So "it's going to rain on your neighborhood in 17 minutes for 39 minutes" is a tempting one for AI. What's the weather going to b
Re: (Score:3)
Just like a human can view a series of radar pictures of rain clouds and immediately compute a likely coverage map in their mind for the next hour or two, a human engineer or weather scientist can also write down some basic dynamical equations and produce a likely path for the rain clouds over the next few hours. And *that* model will be fast and efficient to implement and evaluate, unlike the A
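As a toy illustration of that "write down the dynamics" point, the sketch below estimates a constant drift between two consecutive radar-like frames and extrapolates forward by simple advection. It is a deliberately crude, hypothetical stand-in for what real nowcasting systems do; the arrays are synthetic.

    import numpy as np

    def estimate_shift(prev, curr, max_shift=5):
        # Brute-force search for the integer (dy, dx) shift that best
        # aligns the previous frame with the current one.
        best, best_err = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                err = np.mean((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

    def extrapolate(curr, shift, steps=1):
        # Advect the current field, assuming the clouds keep drifting
        # at the same speed and direction.
        dy, dx = shift
        return np.roll(curr, (dy * steps, dx * steps), axis=(0, 1))

    # Synthetic "rain blob" drifting 2 cells to the right per frame.
    frame0 = np.zeros((32, 32)); frame0[10:14, 5:9] = 1.0
    frame1 = np.roll(frame0, (0, 2), axis=(0, 1))
    shift = estimate_shift(frame0, frame1)
    print(shift, extrapolate(frame1, shift, steps=3).sum())   # (0, 2) 16.0

This kind of hand-written extrapolation is cheap and transparent, which is the contrast being drawn with the heavier learned models.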
Re: (Score:2)
Indeed. But its main problem is that it is not a cheap trick. It is a trick, but a rather expensive one. And that is one reason why its limited usefulness (basically only "better search") may not be enough to even make it economically viable and it may go away again.
Re: (Score:2)
Yes it is a story that many with the least bit of knowledge have latched on to. Doesn't mean it's correct.
As for the academics and smaller industry players, I'm sure they are not happy about being locked out of the "real" research. It takes millions of dollars in electricity to do full training experiments which is only feasible for a small number of companies. What's left is mostly playing games with prompt
Re: (Score:2)
Why hasn't this been modded into the ground as "flamebait" already?
Don't let them live rent free in your head. Touch grass.
Not surprising at all (Score:2)
Rarely does just throwing money at a technology problem ever bear fruit. You still need to manage goals, expectations and provide solutions to problems - which many of these AI companies are lacking at the moment.
Re: (Score:2)
Really? The Moon landing and the human genome project were expensive, and they solved the problem. And even if a problem has not been fully solved with money, research has given more information about the problem and partial solutions, cancer research being an example. Military projects could perhaps be an exception, where money is often just wasted when a $1000 drone destroys your $1B high-tech machine.
Re: Not surprising at all (Score:3)
> The Moon landing and the human genome project were expensive, and they solved the problem. And even if a problem has not been fully solved with money, research has given more information about the problem and partial solutions, cancer research being an example. Military projects could perhaps be an exception, where money is often just wasted when a $1000 drone destroys your $1B high-tech machine.
Those projects both had clear-cut goals, and dedicated support from all stakeholders involved. I completely agree military i
Re: (Score:2)
At its most basic level, it was a way to obtain space capability by defining a clear, attainable and inspiring goal. Reaching for the moon.
As a result, it extended the limits of human exploration beyond our own planet. It showed that human space exploration is possible, and also showed its constraints.
Most of all, and possibly the most important of all, it provided hope. In a difficult time, it showed that when goals align and everyone pulls in the same direction, the sky is literally no longer the limit.
Re: (Score:2)
He said clear cut goals. Nothing about "solving problems". Landing on the Moon was the goal, it was clear cut, and a remarkable job of planning and executing led to success.
Re: (Score:2)
"Expensive" and "throwing money at the problem" are not the same thing.
The human genome project is a particularly good example. Technology improved so much during the project that it finished under budget, and ended up competing with, and nearly beaten by, a private project with 1/10th the budget. If all the genomics funding had been poured into scaling up 1990s sequencing and mapping we'd still be waiting. Instead, we can now do whole genome sequencing for under $1000 (and still falling fast) instead of $3
Re: (Score:2)
This sort of thing is usually even more common in computer science, and even more common than that in AI.
There wasn't really a specific reason to think that gene sequencing technology would reduce the price from $3 billion per genome to a few million, and even less to think that it would come down to the $100 bucks it is today, or the $1 people are talking about in the near future. But in CS we often expect, unless there are good mathematical reasons not to, that better algorithms that are orders faster wil
I've been saying this since the bubble started (Score:5, Insightful)
We're in for a giant market crash for everything ML. It's just a matter of time for investors to start panicking.
The ROI on AI doesn't seem to be here to the tune of hundreds of billions of dollars of yearly revenue like is being spent on hardware. There's a market, sure, but it's not that big.
Re:I've been saying this since the bubble started (Score:5, Insightful)
Yes, but no.
I think that the crash as you describe will happen, but then it will grow again and be bigger than before the crash but with new use cases and so on.
In that way similar to the dot com bust in 2000
Re: I've been saying this since the bubble started (Score:2)
I recall when cannabis was legalized in Canada, I was sorely tempted to plow into the stock
Re: (Score:2)
What's worse than that is I think the bubble started with "the cloud." They built out all those data centers and then larger orgs discovered, "No. It's better to have it on your own machines." Smaller orgs didn't buy the cost/benefit analysis either. "The Cloud" turned out to be more niche than transformative.
So they had all these data centers, they fit them with AI chips, Nvidia stock climbs. Then hope for the best, including cheap electricity.
I see this:
1. Sell everyone on the idea that their local comput
Re: (Score:2)
Indeed. LLMs have some limited use, basically "better search" and writing of not very impressive text. The cost to train, run and maintain them is nowhere near what it would need to be to justify those uses. And that is the problem: Nobody investing into LLM tech will even begin to recover those investments and nobody will have a profit on top in the foreseeable future. And that is what eventually will shut all these efforts down.
Dead end yes and no (Score:1, Flamebait)
I think Deepmind is the only company that can actually make real progress in AI research, if our end goal is AGI. In this sense, everyone else is just wasting money.
But other companies are making some interesting implementations based on existing research. For example, an application that can identify birds based on what they sound like. It doesn't help AI research in any way, but that doesn't mean it doesn't benefit the field of biology. Even things like optimizing 3D graphics with the help of AI are
Re: (Score:2)
You're right of course. I think you're not going far enough. We forget how this is still all very new in societal terms, and we have no idea how a new generation of young people will adapt to these things.
If we assume that trends in AI continue as they are, getting better and better at imitating human output (at least as presented through a computer interface) without ever crossing the "consciousness" line, how long will it take for the next generation of 10-year-olds to start considering them friends? Given th
Programmed (Score:5, Insightful)
Re: (Score:2)
You could make an AI that follows some simple rules and creates something bigger and more complex based on those rules. Ants are a good example. They follow really simple rules, like "follow the path that has strongest smell" and "if you see x ants coming to the nest within t seconds, go out". Just with these simple rules and a few random numbers, they can locate food sources in a maze, gather all the troops there and bring food back home using the best path. Despite the fact that they don't have any progra
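A tiny simulation in that spirit: in the toy Python below, ants choose between a short and a long route with probability proportional to pheromone, deposit more pheromone per trip on the shorter route (because round trips are faster), and pheromone evaporates. No individual rule mentions "prefer the short path," yet that preference emerges. This is a bare-bones illustration of the idea, not a model of real ants.

    import random

    lengths = {"short": 5, "long": 12}
    pheromone = {"short": 1.0, "long": 1.0}
    evaporation = 0.02

    def choose_route():
        # Pick a route with probability proportional to its pheromone level.
        total = sum(pheromone.values())
        return "short" if random.uniform(0, total) < pheromone["short"] else "long"

    random.seed(1)
    for _ in range(2000):
        route = choose_route()
        pheromone[route] += 1.0 / lengths[route]   # shorter trips reinforce faster
        for k in pheromone:                        # trails slowly evaporate
            pheromone[k] *= 1.0 - evaporation

    print(pheromone)   # the short route ends up holding almost all the pheromone

The collective preference for the better path comes entirely from the deposit-and-evaporate feedback loop, which is the kind of emergent behavior the parent comment describes.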
Re: (Score:2)
But the question is how much more useful will it get than it is today. Besides, they have to have a right to process all that information. So far they have broken the law to get this far.
Why would you say that an artificial brain (say, a neural network) would need a special right to process readily available information, but a real brain would not? Your thinking ability is partially based on the books that you read as a child and in school. Why wouldn't an artificial brain be allowed to be fed information in a similar way?
Re: (Score:2)
1) Real brains also need special rights to process information. That has been a feature of human society for thousands of years. You're currently growing up in a world where the people before you have fought to open up information so that you, and everyone your age, can take advantage of it. It looks to you like this is the normal state of the world. It is not.
2) Because you are so young, you may not know that "readily available" information as you p
Re: (Score:2)
I think you are too young to know how the world works.
Because you are so young, you may not know that
An interesting Argumentum ad hominem. If anything, it highlighted that you are not very comfortable with your own argumentation. Besides, I'm probably way older than you...
"readily available" information as you put it is mostly unauthorized.
I have thousands of books on my bookshelves, bought used and new. There are no licenses restricting how I use them. I can lend them to anyone, I can sell them. But if I train an LLM for my own use (local network only), I wouldn't be allowed to feed the LLM those books?
Re: Programmed (Score:2)
The models are biased by their creators on purpose
Re: (Score:2)
We a
Re: (Score:2)
Yup. My dad used to tell me that about coding when I was debugging.
"The worst thing about computers is they do exactly what you tell them to do. It's all you, son."
Re: (Score:2)
The point of generative models is that they can create things on their own. And every trained model, as opposed to programmed, figures out how to organize information its own unique way.
Re: (Score:2)
The idea of "emergent behavior" is nothing new and AI really isn't programmed in the traditional sense. It is more like teaching a child.
I've been doing software development for decades and the impact it is having on my work is eye opening. Yann LeCun is pretty sure we won't get to AGI with just the existing LLMs and I'm sure many other scientists agree with this. However, that is not to say the existing LLMs are turning out to be a failure. It is more that we have to advance in other vectors as we have pretty
Emergent, not programmed (was Re:Programmed) (Score:2)
The claim that AI “can never escape what it was programmed to do” does not hold up against what we are already seeing in practice. The tools are escaping their blueprints. That is the very definition of emergence.
If there is no provision in the AI to think independently or create on its own and everything is just a calculation on something that someone already did, then obviously you will always be bound to that no matter how much money you spend.
That’s a tidy conclusion—if the premise holds. But it does not. The idea that AI is "just a calculation on something someone already did" overlooks how modern AI systems exhibit emergent behavior: novel capabilities arising not from explicit programming, but from complex int
Re: (Score:2)
Indeed. In actual fact, an LLM can only ever be significantly _less_ than its training data. It cannot create anything and it cannot ever leave the boundaries of its training data. What people fail to understand is that LLM training data (gathered by a massive Internet piracy campaign) is pretty amazing. But it is limited and those limits have already been reached.
Re: (Score:2)
Aren't we all really stuck with the same limitations?
Re: (Score:2)
You may be, but not everybody is.
False headline (Score:5, Informative)
Not a dead end (Score:2)
Yes, but.... (Score:4, Insightful)
..it's complicated
The early success of LLMs surprised its developers and excited investors
While it's true that simply throwing more data and compute power at LLMs is at the point of diminishing returns, other techniques are being developed
LLMs are now being used as the text module with other strategies being developed on top of them, like reasoning and deep research
These approaches are yielding useful results. I use perplexity and more often than not, find it useful, if imperfect
It appears that many researchers are looking at a wide variety of alternate approaches, they know that there is a limit to scaling LLMs
Short term investors who gambled on LLMs being the key to riches will lose
Long term investors who understand how research works will be fine
Re: (Score:2)
I've long noticed that for /. and Ars Technica it's either full-blown AGI/ASI or bust (current LLMs).
They barely care that "stupid" text generators are smarter than 99% of people on Earth including themselves: "They cannot discover new physics thus they are nothing but next word predictors". Never mind that as of now on the front page we have a news piece that shows that LLMs have already been disruptive when it comes to programming.
Re: (Score:2)
I think we need to be careful not to accept a sales pitch without analysing it. Disruption is not necessarily a good thing. The printing press was disruptive, and changed everything after that. But a bomb is disruptive as well. We should make sure it's the former and not the latter - the ones selling it certainly don't care either way as long as they get paid.
Re: (Score:2)
The early success of LLMs surprised its developers and excited investors
Hmmm, haven't we seen this before?
The early success of computer translations and perceptrons surprised its developers and excited investors in the 1950s and 1960s.
Then we got the first AI winter [wikipedia.org] of the 1970s.
The early success of expert systems surprised its developers and excited investors in the 1980s.
Then we got the second AI winter of 1990-2010.
Neural networks anybody? (Score:3, Interesting)
Meanwhile, you know the quantum computer guys are scared they will never make it; they had to talk Nvidia into having a pep rally for them lol.
Re: (Score:2)
Neural networks anybody? What happened to them? How did LLMs win out? Did imitation beat out rational design?
Neural networks didn’t actually disappear—in fact, LLMs are neural networks, just significantly scaled-up and specialized for language tasks. The idea that “imitation” (i.e., statistical training on massive datasets) overtook “rational design” (rule-based or explicitly engineered AI methods) isn’t quite right either. Rather, it’s that massive neural networks trained on enormous data turned out to be astonishingly good at generalizing, outperforming earlier han
Where's the report? (Score:2)
Interesting but where's the report? At least a summary would be nice.
It seems to me that scale isn't going to help LLMs much more, but discoveries of how to improve results and make them more efficient are still being made, so I wouldn't say we're done with them yet. Whether that jibes with the investments is another matter.
On the other hand LLMs are just one use of transformer models and I think machine learning in general is still in its infancy so it's possible those investments will pay off in ways we can
Re: (Score:2)
I missed that, thank you.
It's about replacing white collar workers (Score:2, Insightful)
You can't win AC (Score:2)
more lies from the usual suspects (Score:3)
"In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up.""
No reason it couldn't, it just won't help. Great insight from the ultra-rich CEO of one of the world's most powerful companies.
"OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution."
More utter bullshit from OpenAI, every computer works this way, it's not a "method". What this is is yet another smuggling in of a lie into the way people talk about AI, that it "thinks" and that if we give it more time to "think" it will get better answers. Meanwhile, AI is deterministic computer software that runs to completion and that takes how long it takes.
The world really needs to rid itself of these liars.
Re: (Score:3)
A standard feedforward neural network, which is what GPT models up to o1 are, takes input data, applies a fixed series of transforms to that data, and outputs the result. The computation time is essentially fixed because the computation is fixed.
"Test-time compute" is kind of a silly term that just refers to doing more than that. In the case of the o1 models and most of what followed them, that is some form of non-fixed, multi-ste
Re: (Score:2)
Hmmm. You seem frustrated primarily by three things: the anthropomorphic language used to describe AI (like calling inference-time methods "thinking"), corporate hyperbole around incremental advances, and a general distrust of AI companies you perceive as deliberately deceptive. Not sure why you chose this thread to air your personal grievances, but okay...
No reason it couldn't, it just won't help. Great insight from the ultra-rich CEO of one of the world's most powerful companies.
You're conflating legitimate criticism of corporate PR with personal resentment. Sure, Pichai’s comments might reflect optimism that's more market
Phew, that was close (Score:2)
LLMs are useful for a narrow range ... (Score:2)
... of tasks, like digesting vast bodies of literature and giving good summaries.
The next frontier is adding vision, haptics, smell and interaction with the physical world. There are already interesting first signs of that emerging in Vision-Language-Action Models (VLAMs). Think of self-driving cars which can explain their decisions in natural language [youtube.com]. More hardware won't hurt for this.
If the goal is AGI then yes (Score:2)
AGI is a made-up concept that is little more than marketing.
But there are lots of other goals for AI tools that have nothing to do with AGI. The tech industry is finding many other goals that are already making them money, and will likely continue to find more. To say that because AGI isn't realistic, that AI is a dead end, is taking too narrow of a view.
Re: (Score:2)
I disagree with the claim of AGI being just a marketing term. I agree that it is that for most companies, but I like the way DeepMind defines it. It is a table, which can be found here:
https://aibusiness.com/ml/what... [aibusiness.com]
It splits the definition into weak and strong AI, and splits those again into different levels and gives measurable values for testing them. In short, I think they mean that the AI must be able to do an equal or better job at random tasks than a group of humans. The more humans it can beat,
Re: (Score:2)
Sure, you can create or find a definition for which AI will one day satisfy the requirements to be called "AGI". But your definition still leaves a lot of room for playing with definitions.
What is a "task"? Is there a discrete number of tasks which can be quantified?
If you identify a set of tasks to test the AI, can you be sure that the set of tasks is random, with respect to the universe of possible tasks?
What does it mean to perform the task "better" than a human? The definition of "better" depends on the
It's never been about profits, just money (Score:2)
The goal of most startups today isn't profit. It's about extracting maximum investor money for the founders. People are going crazy for the latest fads and lots of companies are basically cashing in.
Tech is pouring billions into AI because people are pouring money into AI. And people are simply trying to extract as much money as they can before the music stops.
Remember when the markets went crazy over blockchain, then NFTs, and now AI? Tons of stupid money out there ready for extraction. Then the bubble will p
When scaling hits the wall (Score:3)
I think it's increasingly clear from surveys like this that scaling alone won't get us to AGI. While brute-force scaling made sense commercially—delivering marketable products rapidly—researchers rightly highlight diminishing returns and call for new approaches.
Interestingly, some of the most promising alternatives are moving beyond simple scaling. DeepSeek's mixture-of-experts (MoE) architecture gets an impressive efficiency win by coordinating specialized neural networks. OpenAI has demonstrated that significant performance gains can be made through test-time compute, which gives models more inference time to refine responses without scaling up the hardware. These approaches are already out there, and look pretty good, so far.
And there are even more intriguing hybrid neuro-symbolic approaches, like Logic Tensor Networks, Neural Theorem Provers, and MRKL architectures. There is also Scallop, which directly integrates symbolic reasoning with neural learning and interfaces seamlessly with Python and PyTorch (a toy sketch of the general idea follows below this comment). These neuro-symbolic approaches increasingly align AI architectures with existing human cognitive models. I think that cognitive neuroscience and AI research are converging, and these hybrid models, IMHO, are a sophisticated and potentially fruitful path toward AGI. Granted, human cognition as a yardstick for AGI is probably not the best metric, but you have to use what you know, right? :)
Balancing commercial success and genuine R&D is tough, yeah, but it is essential. I think the next significant steps toward AGI will come from approaches that creatively fuse cognitive neuroscience and AI. Approaches informed—but not constrained—by human cognitive architecture.
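As a toy illustration of the neuro-symbolic flavor mentioned above (and only that; this is not Scallop's or any other framework's actual API), the sketch below combines "neural" soft outputs with a hard symbolic constraint: two fake digit classifiers each produce a probability distribution, and the symbolic layer keeps only digit pairs whose sum matches a known total.

    import numpy as np

    rng = np.random.default_rng(0)

    def fake_classifier_output():
        # Stand-in for a neural net's softmax over the digits 0-9.
        p = rng.random(10)
        return p / p.sum()

    def most_likely_pair(p_a, p_b, known_sum):
        # Symbolic constraint: only consider (a, b) with a + b == known_sum,
        # then pick the pair the "neural" outputs jointly score highest.
        best, best_score = None, -1.0
        for a in range(10):
            b = known_sum - a
            if 0 <= b <= 9:
                score = p_a[a] * p_b[b]
                if score > best_score:
                    best, best_score = (a, b), score
        return best, best_score

    p_a, p_b = fake_classifier_output(), fake_classifier_output()
    print(most_likely_pair(p_a, p_b, known_sum=9))

The neural part supplies graded beliefs, the symbolic part supplies hard structure, and the answer has to satisfy both, which is the basic bet behind the hybrid approaches listed above.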
What did the experts actually say? (Score:2)
You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...
There is not a single credible AI expert who thinks that scaling by itself leads to AGI or even further advances in AI. This is a strawman. Are there really 24 percent who think that scaling by itself leads to AGI? Wow, that's really hard to believe.
Yes, scaling helps, but scaling by itself is not what all the scramble in AI research is doing. It's sort of obvious. If scaling by itself were enough, there would be no need for AI model research. That is clearly not the case today. Every single big ad
Re: (Score:2)
There is not a single credible AI expert who thinks that scaling by itself leads to AGI or even further advances in AI. This is a strawman. Are there really 24 percent who think that scaling by itself leads to AGI? Wow, that's really hard to believe.
From this article [nytimes.com], there seem to be many who indeed believe that:
“Over the past year or two, what used to be called ‘short timelines’ (thinking that A.G.I. would probably be built this decade) has become a near-consensus,” Miles Brundage, an independent A.I. policy researcher who left OpenAI last year, told me recently.
Do people even want AI or the Metaverse ? (Score:2)
Drug problems? (Score:2)
Would 100% agree if their funding didn't depend on it?
Are many interviewed morons compared to the others?
Were the questions worded entirely wrong?
Most importantly, does anyone reading this have the slightest clue about economics? The billions spent on GPUs creates buzz and predictable share volatility for investors to see rapid short term ROI on trading. Everyone involved gets rich.
Meanwhile (Score:2)
Companies like OpenAI will do everything they can to make people believe that progress is still being made quickly. It doesn't really affect me outside of making the job hunt extra tedious this time around, but it is annoying how much people have been buying into their nonsense out of ignorance.