AI

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End (futurism.com)

Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) is an international scientific society. Recently, 25 of its AI researchers surveyed 475 respondents in the AAAI community about "the trajectory of AI research" — and their results were surprising.

Futurism calls the results "a resounding rebuff to the tech industry's long-preferred method of achieving AI gains" — namely, adding more hardware: You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...

"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russel, a computer scientist at UC Berkeley who helped organize the report, told New Scientist. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued...." In November last year, reports indicated that OpenAI researchers discovered that the upcoming version of its GPT large language model displayed significantly less improvement, and in some cases, no improvements at all than previous versions did over their predecessors. In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up."

Cheaper, more efficient approaches are being explored. OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time "thinking" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, researchers claimed. But this approach is "unlikely to be a silver bullet," Arvind Narayanan, a computer scientist at Princeton University, told New Scientist.
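
The test-time compute idea can be sketched in a few lines. Below is a minimal, hypothetical best-of-n sampler in Python; generate() and score() are stand-ins for a model and a verifier (not any real OpenAI API), showing how spending extra inference-time work, rather than training bigger models, can buy accuracy.

    import random

    # Minimal sketch of one flavor of test-time compute ("best of n"):
    # sample several candidate answers, score them, keep the best.
    # generate() and score() are stand-ins, not a real model or API.

    def generate(prompt: str, seed: int) -> str:
        """Stand-in for one sampled model completion."""
        rng = random.Random(hash((prompt, seed)))
        return f"candidate answer #{rng.randint(0, 999)}"

    def score(prompt: str, answer: str) -> float:
        """Stand-in for a verifier/reward model rating an answer."""
        return random.Random(hash((prompt, answer))).random()

    def best_of_n(prompt: str, n: int = 8) -> str:
        # More samples means more inference-time compute and better odds
        # of a good answer, without touching the trained weights at all.
        candidates = [generate(prompt, seed) for seed in range(n)]
        return max(candidates, key=lambda a: score(prompt, a))

    print(best_of_n("What is 17 * 24?"))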


Comments Filter:
  • by Anonymous Coward on Saturday March 22, 2025 @11:37AM (#65252097)
    Know enough to do some useful stuff and get some things done.

    But it still ain't like the science of Chemistry yet.
    • by shanen ( 462549 ) on Saturday March 22, 2025 @01:22PM (#65252309) Homepage Journal

      On this topic I think A Thousand Brains by Jeff Hawkins was the best I've seen so far. There are some books about training and the toolkits, but such books are always a few versions behind, whereas Hawkins is talking about what's wrong with the approach relative to how the human brain actually works. (That was the good stuff in the first part. The later parts were provocative but not so "useful".) My conclusion is that the generative AIs are more like the "language machine" Chomsky was describing decades ago, but the actual functioning of human intelligence is more like a turf war... The cortical columns don't care what kind of processing they are doing and perhaps consciousness is something that emerges in the higher-level control channels?

      • Conjecture: mid- and late-career academics who've researched areas besides the current machine-learning bubble are not getting grant funding.

  • Deepseek (Score:4, Insightful)

    by mspohr ( 589790 ) on Saturday March 22, 2025 @11:39AM (#65252099)

    I think that DeepSeek showed that the current AI approach of just throwing more hardware at AI is a dead end.
    Unfortunately, the wizards at the tech monopolist companies haven't understood the message.

    • Re:Deepseek (Score:4, Interesting)

      by dvice ( 6309704 ) on Saturday March 22, 2025 @11:51AM (#65252127)

      You are now talking about OpenAI. Google invented the technology ChatGPT is based on, but they abandoned it because they already knew, years ago, that it was a dead end. They are constantly trying to make an AI that doesn't need to read the whole internet in order to learn.

      • If I ever build one, my model for learning without reading the whole internet - at least not in a purpose-driven manner - would be to mimic a child. "Why?" Just about every two year old goes through a phase where everything is "Why?" Once you get a few fundamentals and axioms down, then you can teach it how to look up information on the internet, and then, oh god... We're doomed.
    • It's hard to know what the closed models are doing. Open models getting stuck in dense or very unambitious levels of dynamic sparsity might not be representative of the closed models. For all we know most of the major closed models are just as cheap to train and run.

    • I think that DeepSeek showed that the current AI approach of just throwing more hardware at AI is a dead end.

      Actually, the DeepSeek example in the article demonstrates precisely the opposite. DeepSeek's innovation, called the "mixture of experts," involves multiple specialized neural networks efficiently coordinated to outperform single massive models. Rather than throwing more hardware at the problem, DeepSeek uses existing hardware more efficiently.

      Unfortunately, the wizards at the tech monopolist companies haven't understood the message.

      While it's true that major tech companies continue massive investments in scaling infrastructure, this does not reflect DeepSeek’s approach. DeepSeek explicitly
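
      For the curious, here is a toy sketch of the general mixture-of-experts routing idea (not DeepSeek's actual architecture; the sizes and gating below are made up for illustration): a small gating network picks the top-k experts per input, so only a fraction of the total parameters does any work on a given token.

        import numpy as np

        # Toy mixture-of-experts forward pass: route each input to the top-k
        # of n experts, so only k expert matmuls run instead of n.
        rng = np.random.default_rng(0)
        d, n_experts, k = 16, 8, 2
        gate_w = rng.normal(size=(d, n_experts))                 # gating network
        experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

        def moe_forward(x):
            logits = x @ gate_w
            top = np.argsort(logits)[-k:]                        # top-k experts
            w = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax weights
            # Only the selected experts compute; the rest stay idle.
            return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

        print(moe_forward(rng.normal(size=d)).shape)             # (16,)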

    • by gweihir ( 88907 )

      Indeed. It was pretty clear before that this approach would not deliver much, though. But there are always the blue-eyed hopefuls who have, at best, a tenuous connection to reality.

  • by Tablizer ( 95088 ) on Saturday March 22, 2025 @11:42AM (#65252103) Journal

    ...the bigger the pop. Form a rainy-day fund.

    • Not really. The pops leave behind a lot that forms the foundation of subsequent innovation and growth.

      The broadband internet of the late 90s and early 2000s was motivated by telecoms cashing in on the first dotcom bubble, for example.

    • by Torodung ( 31985 )

      So short it. You've clearly got the brains.

      • If only it were that easy. Knowing a line chart is going to go up or down is the first half. The other is knowing when it is going to do so, or options decay will eat your shorts (pun intended).
  • I'm Not Surprised (Score:5, Insightful)

    by crunchy_one ( 1047426 ) on Saturday March 22, 2025 @11:42AM (#65252105)
    This is exactly what anyone the least bit conversant in machine learning could have told you. Many already have. "AI" in its present incarnation is a scam, a cheap party trick at best. Execs pouring billions into it are fools chasing an illusion out of fear that some other exec might get "there" first. They won't.
    • Aside from solving the entire proteome and predicting weather better than our wildest dreams, yeah, total scam.
      • I can't speak to bio applications, but I've gotten a solid exposure to AI in weather.

        It is not "better" than the traditional models, in fact the AI models fall apart on a fairly short timeline. However they are useful for very short term forecasts done with less computation. You want to offer specific, precise forecasts over the next 8 hours, AI is going to be a solid strategy. So "it's going to rain on your neighborhood in 17 minutes for 39 minutes" is a tempting one for AI. What's the weather going to b

        • Excellent points. I'd add that the short-term AI weather models are likely overparametrized and inefficient.

          Just like a human can view a series of radar pictures of rain clouds and immediately compute a likely coverage map in their mind for the next hour or two, a human engineer or weather scientist can also write down some basic dynamical equations and produce a likely path for the rain clouds over the next few hours. And *that* model will be fast and efficient to implement and evaluate, unlike the A

    • by gweihir ( 88907 )

      Indeed. But its main problem is that it is not a cheap trick. It is a trick, but a rather expensive one. And that is one reason why its limited usefulness (basically only "better search") may not be enough to even make it economically viable and it may go away again.

    • This is exactly what anyone the least bit conversant in machine learning could have told you.

      Yes, it is a story that many with the least bit of knowledge have latched on to. That doesn't mean it's correct.

      As for the academics and smaller industry players, I'm sure they are not happy about being locked out of the "real" research. It takes millions of dollars in electricity to do full training experiments which is only feasible for a small number of companies. What's left is mostly playing games with prompt

  • Rarely does just throwing money at a technology problem bear fruit. You still need to manage goals and expectations and provide solutions to problems - which many of these AI companies are lacking at the moment.

    • by dvice ( 6309704 )

      Really? The moon landing and the human genome project were expensive, and they solved the problem. And even when a problem has not been fully solved with money, the research has yielded more information about the problem and partial solutions; cancer research is an example. Military projects could perhaps be an exception, where money is often just wasted when a $1,000 drone destroys your $1B high-tech machine.

      • > The moon landing and the human genome project were expensive, and they solved the problem. And even when a problem has not been fully solved with money, the research has yielded more information about the problem and partial solutions; cancer research is an example. Military projects could perhaps be an exception, where money is often just wasted when a $1,000 drone destroys your $1B high-tech machine.

        Those projects both had clear-cut goals, and dedicated support from all stakeholders involved. I completely agree military i

      • What problem was landing on the moon solving?
        • At its most basic level, it was a way to obtain space capability by defining a clear, attainable, and inspiring goal: reaching for the moon.

          As a result, it extended the limits of human exploration beyond our own planet. It showed that human space exploration is possible, and also showed its constraints.

          Most of all, and possibly most important of all, it provided hope. In a difficult time, it showed that when goals align and everyone pulls in the same direction, the sky is literally no longer the limit.

        • He said clear cut goals. Nothing about "solving problems". Landing on the Moon was the goal, it was clear cut, and a remarkable job of planning and executing led to success.

      • by ceoyoyo ( 59147 )

        "Expensive" and "throwing money at the problem" are not the same thing.

        The human genome project is a particularly good example. Technology improved so much during the project that it finished under budget, and ended up competing with, and nearly beaten by, a private project with 1/10th the budget. If all the genomics funding had been poured into scaling up 1990s sequencing and mapping we'd still be waiting. Instead, we can now do whole genome sequencing for under $1000 (and still falling fast) instead of $3

        • I think this is going to happen with LLMs. Like the poster above said... DeepSeek showed comparable results for less cost. Like wireless infrastructure leapfrogging copper in the ground.
          • by ceoyoyo ( 59147 )

            This sort of thing is usually even more common in computer science, and even more common than that in AI.

            There wasn't really a specific reason to think that gene sequencing technology would reduce the price from $3 billion per genome to a few million, and even less to think that it would come down to the $100 it is today, or the $1 people are talking about in the near future. But in CS we often expect, unless there are good mathematical reasons not to, that better algorithms that are orders of magnitude faster wil

  • by Jeslijar ( 1412729 ) on Saturday March 22, 2025 @11:47AM (#65252115) Homepage

    We're in for a giant market crash for everything ML. It's just a matter of time before investors start panicking.

    The ROI on AI doesn't seem to be there: nothing like the hundreds of billions of dollars being spent yearly on hardware is coming back as revenue. There's a market, sure, but it's not that big.

    • by luvirini ( 753157 ) on Saturday March 22, 2025 @12:13PM (#65252165)

      Yes, but no.

      I think that the crash as you describe will happen, but then it will grow again and be bigger than before the crash but with new use cases and so on.

      In that way it is similar to the dot-com bust in 2000.

      • A ton of the current investment will evaporate for sure, but the technology and power-generation infrastructure will remain to build on. It's usually a bad idea to get in on the ground floor.

        I recall when cannabis was legalized in Canada, I was sorely tempted to plow into the stock ... seemed like a no brainer... the widest appeal for the product... but market conditions and bad management killed that opportunity pretty quickly. Dodged that bullet.
      • AI has gone through many crashes and boom-and-bust cycles before; I would say about 80 years of it already. What's special in this one is that the markets are completely tied up in the valuations of a few tech firms selling snake oil. That means this crash will have a much larger impact than previous ones, especially when coupled with financial deregulation.
    • by Torodung ( 31985 )

      What's worse than that is I think the bubble started with "the cloud." They built out all those data centers and then larger orgs discovered, "No. It's better to have it on your own machines." Smaller orgs didn't buy the cost/benefit analysis either. "The Cloud" turned out to be more niche than transformative.

      So they had all these data centers; they fitted them with AI chips, and Nvidia stock climbed. Then hope for the best, including cheap electricity.

      I see this:

      1. Sell everyone on the idea that their local comput

    • Stewart Cheifet covered this on The Computer Chronicles way back in 1986 : https://youtu.be/VsE0BwQ3l8U?s... [youtu.be]
    • by gweihir ( 88907 )

      Indeed. LLMs have some limited use, basically "better search" and writing of not very impressive text. The cost to train, run and maintain them is nowhere near what it would need to be to justify those uses. And that is the problem: Nobody investing into LLM tech will even begin to recover those investments and nobody will have a profit on top in the foreseeable future. And that is what eventually will shut all these efforts down.

  • by dvice ( 6309704 )

    I think Deepmind is the only company that can actually make real progress in AI research, if our end goal is AGI. In this sense, everyone else is just wasting money.

    But other companies are making some interesting implementations that are based on existing research. For example, an application that can identify birds based on what they sound like. It doesn't help AI research in any way, but that doesn't mean it doesn't benefit the field of biology. Even things like optimizing 3D graphics with the help of AI are

  • Programmed (Score:5, Insightful)

    by fluffernutter ( 1411889 ) on Saturday March 22, 2025 @11:50AM (#65252125)
    If there is no provision in the AI to think independently or create on its own and everything is just a calculation on something that someone already did, then obviously you will always be bound to that no matter how much money you spend. It's funny how some people think it will become something else at some point. It can never escape what it was programmed to do.
    • by dvice ( 6309704 )

      You could make an AI that follows some simple rules and creates something bigger and more complex based on those rules. Ants are a good example. They follow really simple rules, like "follow the path that has the strongest smell" and "if you see x ants coming to the nest within t seconds, go out". Just with these simple rules and a few random numbers, they can locate food sources in a maze, gather all the troops there, and bring food back home using the best path. Despite the fact that they don't have any progra
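
      As a toy illustration of that point (made-up numbers; just the "follow the stronger smell" rule plus evaporation), a few lines of Python are enough to make a colony of rule-followers converge on the shortest of several paths:

        import random

        # Each ant picks a path with probability proportional to its pheromone;
        # shorter paths get reinforced more per trip, and pheromone evaporates.
        random.seed(1)
        lengths = [9, 5, 12, 7]              # path lengths to food; index 1 is best
        pheromone = [1.0] * len(lengths)

        for _ in range(200):
            r, path = random.uniform(0, sum(pheromone)), 0
            while path < len(pheromone) - 1 and r > pheromone[path]:
                r -= pheromone[path]
                path += 1
            pheromone = [0.95 * p for p in pheromone]    # evaporation
            pheromone[path] += 10.0 / lengths[path]      # shorter => more deposit

        # No ant knows the map, yet the colony usually settles on path 1 (length 5).
        print(max(range(len(lengths)), key=lambda i: pheromone[i]))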

    • I rarely defend LLMs, but an LLM may contain the experience of many individuals... so it could offer advice you never thought of... which looks superficially like creativity. It's not, but it could be useful.
      • But the question is how much more useful will it get than it is today. Besides, they have to have a right to process all that information. So far they have broken the law to get this far.
        • by Bumbul ( 7920730 )

          But the question is how much more useful will it get than it is today. Besides, they have to have a right to process all that information. So far they have broken the law to get this far.

          Why would you say that an artificial brain (say, a neural network) would need a special right to process readily available information, but a real brain would not? Your thinking ability is partially based on the books that you read as a child and in school. Why wouldn't an artificial brain be allowed to be fed information in a similar way?

          • People paid for those books through taxes. It's called an education system. Same with public libraries. No one gives schools or libraries books for free.
          • I think you are too young to know how the world works.

            1) Real brains also need special rights to process information. That has been a feature of human society for thousands of years. You're currently growing up in a world where the people before you have fought to open up information so that you, and everyone your age, can take advantage of it. It looks to you like this is the normal state of the world. It is not.

            2) Because you are so young, you may not know that "readily available" information as you p

            • by Bumbul ( 7920730 )

              I think you are too young to know how the world works.

              Because you are so young, you may not know that

              An interesting Argumentum ad hominem. If anything, it highlighted that you are not very comfortable with your own argumentation. Besides, I'm probably way older than you...

              "readily available" information as you put it is mostly unauthorized.

              I have thousands of books on my bookshelves, bought used and new. There are no licenses restricting how I use them. I can lend them to anyone; I can sell them. But if I train an LLM for my own use (local network only), I wouldn't be allowed to feed the LLM those books?

        • If you can believe it, Siri can't tell you the day of the week today without additional context, but it used to be able to last year; it appears whatever is going on on the server is qualitatively getting worse. I could never understand how Sam's AI company could see all data as equal, all data is good. Ingest everything. I see the LLMs as children. You have to teach them. You have to train them. You imprint your own knowledge (and values) on them. The AI companies, if I understand correctly, ARE doin
          • They are algorithms not children.
            • But if you don't parent them, once they are on the internet, they will be browsing porn and reading that Jews should be killed, within a short time.

              The models are biased by their creators on purpose ... like raising a child, no?
              • No it's more like raising a calculator.
                  • a calculator that acts exactly like a preadolescent boy?
                  • No, a calculator that is so complex and has so many inputs it seems like it is acting like an adolescent boy but it is really just a complex calculator. Without the five senses it can never be like a boy.
                    • I don't disagree on your meaning there. It can't BE a boy, but it either passes the Turing test with flying colors or we (people) are extremely inclined to project boyhood, personhood, on that calculator. It functions as a horny boy who will giggle if you say "titties". Functionally, it acts identically to a 10-year-old boy. I think my argument still stands: if you don't bias your child or fancy calculator with your values, morals, and a sense of right vs. wrong... you are a bad parent, and you get a Nazi.

                      We a
    • by Torodung ( 31985 )

      Yup. My dad used to tell me that about coding when I was debugging.

      "The worst thing about computers is they do exactly what you tell them to do. It's all you, son."

    • by ceoyoyo ( 59147 )

      The point of generative models is that they can create things on their own. And every trained model, as opposed to a programmed one, figures out how to organize information in its own unique way.

    • by burhop ( 2883223 )

      The idea of "emergent behavior" is nothing new and AI really isn't programmed in the traditional sense. It is more like teaching a child.

      I've been doing software development for decades and the impact it is having on my work is eye-opening. Yann LeCun is pretty sure we won't get to AGI with just the existing LLMs, and I'm sure many other scientists agree with this. However, that is not to say the existing LLMs are turning out to be a failure. It is more that we have to advance in other vectors, as we have pretty

      • Meh, I use it and it's good for examples but it is rare that I can include all my requirements in one query. So it's good for examples rather than direct cut and paste, though I have even had examples for more obscure things that I could just not get working because it fills in gaps in its knowledge by making things up rather than saying it doesn't know. For every time I have been able to cut and paste there has been a time I queried for an hour and not got any version to work. The whole point of this co
    • The claim that AI “can never escape what it was programmed to do” does not hold up against what we are already seeing in practice. The tools are escaping their blueprints. That is the very definition of emergence.

      If there is no provision in the AI to think independently or create on its own and everything is just a calculation on something that someone already did, then obviously you will always be bound to that no matter how much money you spend.

      That’s a tidy conclusion—if the premise holds. But it does not. The idea that AI is "just a calculation on something someone already did" overlooks how modern AI systems exhibit emergent behavior: novel capabilities arising not from explicit programming, but from complex int

      • by Rujiel ( 1632063 )
        "No one explicitly programmed GPT models to write poetry" What? The quatrains rhyming feature was released in 2022, it did not miraculously arrive fully-formed and rhyming unbeknownst to its creators. The first implementation wasn't perfect, didn't always rhyme, and obviously was trained up on untold heaps of human-generated content.
      • What you call "emergent behavior" is just a different combination of things already known. I can say this with absolute certainty because there is no other input into it.
    • by gweihir ( 88907 )

      Indeed. In actual fact, an LLM can only ever be significantly _less_ than its training data. It cannot create anything and it cannot ever leave the boundaries of its training data. What people fail to understand is that LLM training data (gathered by a massive Internet piracy campaign) is pretty amazing. But it is limited and those limits have already been reached.

  • False headline (Score:5, Informative)

    by DogFoodBuss ( 9006703 ) on Saturday March 22, 2025 @11:53AM (#65252131)
    The headline is not representative of what the survey says AT ALL, which is that current AI cannot be scaled up to AGI. Anybody with half a brain could have told you that more research is still needed. Even Sam Altman has said as much. Saying that current AI is a "dead end" is completely ignoring all of the demonstrably useful stuff it can do now.
  • It's not a dead end considering how useful it's already become but it could be a dead end in terms of achieving AGI/ASI.
  • Yes, but.... (Score:4, Insightful)

    by MpVpRb ( 1423381 ) on Saturday March 22, 2025 @12:23PM (#65252183)

    ...it's complicated.
    The early success of LLMs surprised their developers and excited investors.
    While it's true that simply throwing more data and compute power at LLMs is at the point of diminishing returns, other techniques are being developed.
    LLMs are now being used as the text module, with other strategies being developed on top of them, like reasoning and deep research.
    These approaches are yielding useful results. I use Perplexity and, more often than not, find it useful, if imperfect.
    It appears that many researchers are looking at a wide variety of alternate approaches; they know that there is a limit to scaling LLMs.
    Short-term investors who gambled on LLMs being the key to riches will lose.
    Long-term investors who understand how research works will be fine.

    • I've long noticed that for /. and Ars Technica it's either full-blown AGI/ASI or bust (current LLMs).

      They barely care that "stupid" text generators are smarter than 99% of people on Earth including themselves: "They cannot discover new physics thus they are nothing but next word predictors". Never mind that as of now on the front page we have a news piece that shows that LLMs have already been disruptive when it comes to programming.

      • I think we need to be careful not to accept a sales pitch without analysing it. Disruption is not necessarily a good thing. The printing press was disruptive, and changed everything after that. But a bomb is disruptive as well. We should make sure it's the former and not the latter - the ones selling it certainly don't care either way as long as they get paid.

    • The early success of LLMs surprised its developers and excited investors

      Hmmm, haven't we seen this before?

      The early success of computer translations and perceptrons surprised their developers and excited investors in the 1950s and 1960s.
      Then we got the first AI winter [wikipedia.org] of the 1970s.

      The early success of expert systems surprised their developers and excited investors in the 1980s.
      Then we got the second AI winter of 1990-2010.

  • by kencurry ( 471519 ) on Saturday March 22, 2025 @12:27PM (#65252189)
    Neural networks anybody? What happened to them? How did LLMs win out? Did imitation beat out rational design?

    Meanwhile, you know the quantum computer guys are scared they will never make it; they had to talk Nvidia into having a pep rally for them lol.
    • Language models ARE neural networks. What would make you doubt that?
    • Neural networks anybody? What happened to them? How did LLMs win out? Did imitation beat out rational design?

      Neural networks didn’t actually disappear—in fact, LLMs are neural networks, just significantly scaled-up and specialized for language tasks. The idea that “imitation” (i.e., statistical training on massive datasets) overtook “rational design” (rule-based or explicitly engineered AI methods) isn’t quite right either. Rather, it’s that massive neural networks trained on enormous data turned out to be astonishingly good at generalizing, outperforming earlier han

  • Interesting but where's the report? At least a summary would be nice.

    It seems to me that scale isn't going to help LLMs much more, but discoveries of how to improve results and make them more efficient are still being made, so I wouldn't say we're done with them yet. Whether that jibes with the investments is another matter.

    On the other hand LLMs are just one use of transformer models and I think machine learning in general is still in its infancy so it's possible those investments will pay off in ways we can

  • They spent the last 45 years replacing blue collar workers with robots. The ones they couldn't replace, they use slave labor in third-world countries for. If you have a fancy satellite dish, there's probably somebody in a literal mud hut in India who made it using primitive tools in incredibly dangerous conditions. I mentioned that one because it's so bizarre to see something so high tech made with such low tech manufacturing techniques but when you're literally paying for it with just enough food and shelte
  • by dfghjk ( 711126 ) on Saturday March 22, 2025 @12:58PM (#65252247)

    "In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up.""

    No reason it couldn't, it just won't help. Great insight from the ultra-rich CEO of one of the world's most powerful companies.

    "OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution."

    More utter bullshit from OpenAI, every computer works this way, it's not a "method". This is yet another lie smuggled into the way people talk about AI: that it "thinks", and that if we give it more time to "think" it will get better answers. Meanwhile, AI is deterministic computer software that runs to completion and takes how long it takes.

    The world really needs to rid itself of these liars.

    • by ceoyoyo ( 59147 )

      More utter bullshit from OpenAI, every computer works this way, it's not a "method".

      A standard feedforward neural network, which is what GPT models up to o1 are, inputs data, applies a fixed series of transforms to that data, and outputs the result. The computation time is essentially fixed because the computation is fixed.

      "Test-time compute" is kind of a silly term that just refers to doing more than that. In the case of the o1 models and most of what followed them, that is some form of non-fixed, multi-ste

    • Hmmm. You seem frustrated primarily by three things: the anthropomorphic language used to describe AI (like calling inference-time methods "thinking"), corporate hyperbole around incremental advances, and a general distrust of AI companies you perceive as deliberately deceptive. Not sure why you chose this thread to air your personal grievances, but okay...

      No reason it couldn't, it just won't help. Great insight from the ultra-rich CEO of one of the world's most powerful companies.

      You're conflating legitimate criticism of corporate PR with personal resentment. Sure, Pichai’s comments might reflect optimism that's more market

  • Luckily I never risked my inheritance on this AI foolishness and have it all safely tucked away in gilt-edge NFTs, Lockheed shares and Trumpcoins.
  • ... of tasks, like digesting vast bodies of literature and giving good summaries.

    The next frontier is adding vision, haptics, smell and interaction with the physical world. There are already interesting first signs of that emerging in Vision-Language-Action Models (VLAMs). Think of self-driving cars which can explain their decisions in natural language [youtube.com]. More hardware won't hurt for this.

  • AGI is a made-up concept that is little more than marketing.

    But there are lots of other goals for AI tools that have nothing to do with AGI. The tech industry is finding many other goals that are already making them money, and will likely continue to find more. To say that because AGI isn't realistic, that AI is a dead end, is taking too narrow of a view.

    • by dvice ( 6309704 )

      I disagree with the claim of AGI being just a marketing term. I agree that it is that for most companies, but I like the way DeepMind defines it. It is a table, which can be found here:
      https://aibusiness.com/ml/what... [aibusiness.com]

      It splits the definition into weak and strong AI, splits those again into different levels, and gives measurable values for testing them. In short, I think they mean that the AI must be able to do an equal or better job at random tasks than a group of humans. The more humans it can beat,

      • Sure, you can create or find a definition for which AI will one day satisfy the requirements to be called "AGI". But your definition still leaves a lot of room for playing with definitions.

        What is a "task"? Is there a discrete number of tasks which can be quantified?
        If you identify a set of tasks to test the AI, can you be sure that the set of tasks is random, with respect to the universe of possible tasks?
        What does it mean to perform the task "better" than a human? The definition of "better" depends on the

  • The goal of most startups today isn't profit. It's about extracting maximum investor money to the founders. People are going crazy for the latest fads and lots of companies are basically cashing in.

    Tech is pouring billions in to AI because people are pouring money into AI. And people are simply trying to extract as much money they can before the music stops.

    Remember when the markets went crazy over blockchain, then NFTs, and now AI? Tons of stupid money out there ready for extraction. Then the bubble will p

    • by Rujiel ( 1632063 )
      "Tech is pouring billions in to AI because people are pouring money into AI." I agree with the first paragraph but I don't think that part is true. The rate at which institutions have piled money into this thing is not comparable to public investment, especially when the major providers have not turned a profit. Tech giants are pouring billions into replacing humans and tranaforming themselves into energy brokers
  • I think it's increasingly clear from surveys like this that scaling alone won't get us to AGI. While brute-force scaling made sense commercially—delivering marketable products rapidly—researchers rightly highlight diminishing returns and call for new approaches.

    Interestingly, some of the most promising alternatives are moving beyond simple scaling. DeepSeek's mixture-of-experts (MoE) architecture gets an impressive efficiency win by coordinating specialized neural networks. OpenAI has demonstrated that significant performance gains can be made through test-time compute, which gives models more inference time to refine responses without scaling up the hardware. These approaches are already out there, and look pretty good so far.

    And there are even more intriguing hybrid neuro-symbolic approaches, like Logic Tensor Networks, Neural Theorem Provers, and MRKL architectures. There is also Scallop, which directly integrates symbolic reasoning with neural learning and interfaces seamlessly with Python and PyTorch. These neuro-symbolic approaches increasingly align AI architectures with existing human cognitive models. I think that cognitive neuroscience and AI research are converging, and these hybrid models, IMHO, are a sophisticated and potentially fruitful path toward AGI. Granted, human cognition as a yardstick for AGI is probably not the best metric, but you have to use what you know, right? :)

    Balancing commercial success and genuine R&D is tough, yeah, but it is essential. I think the next significant steps toward AGI will come from approaches that creatively fuse cognitive neuroscience and AI. Approaches informed—but not constrained—by human cognitive architecture.
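
    To make the neuro-symbolic pattern above concrete, here is a deliberately tiny, generic Python sketch (not Scallop's or LTN's actual interface): a stand-in "neural" proposer emits scored candidates, and a hard symbolic rule filters them, so learning and logic each do what they are good at.

      import random

      def neural_propose(query, n=5):
          """Stand-in for a learned model emitting (answer, confidence) pairs."""
          rng = random.Random(query)
          return [(rng.randint(1, 20), rng.random()) for _ in range(n)]

      def symbolic_check(answer):
          """Hard rule the learned part cannot override: answer must be even."""
          return answer % 2 == 0

      def answer(query):
          # Keep only candidates consistent with the logic layer, then take the
          # highest-confidence survivor (None if logic rules everything out).
          valid = [(a, c) for a, c in neural_propose(query) if symbolic_check(a)]
          return max(valid, key=lambda ac: ac[1])[0] if valid else None

      print(answer("pick an even number"))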

  • You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...

    There is not a single credible AI expert who thinks that scaling by itself leads to AGI or even further advances in AI. This is a strawman. Are there really 24 percent who think that scaling by itself leads to AGI? Wow, that's really hard to believe.

    Yes, scaling helps, but scaling by itself is not what all the scramble in AI research is doing. It's sort of obvious. If scaling by itself were enough, there would be no need for AI model research. That is clearly not the case today. Every single big ad

    • There is not a single credible AI expert who thinks that scaling by itself leads to AGI or even further advances in AI. This is a strawman. Are there really 24 percent who think that scaling by itself leads to AGI? Wow, that's really hard to believe.

      From this article [nytimes.com], there seem to be many who indeed believe that:

      “Over the past year or two, what used to be called ‘short timelines’ (thinking that A.G.I. would probably be built this decade) has become a near-consensus,” Miles Brundage, an independent A.I. policy researcher who left OpenAI last year, told me recently.

  • Most people don't care. What problem(s) is AI supposed to be fixing / helping? Metaverse is like other forms of social media that I will never use.
  • 76% said it was unlikely or worse. 24% said it was likely.

    Would 100% agree if their funding didn't depend on it?

    Are many interviewed morons compared to the others?

    Were the questions worded entirely wrong?

    Most importantly, does anyone reading this have the slightest clue about economics? The billions spent on GPUs creates buzz and predictable share volatility for investors to see rapid short term ROI on trading. Everyone involved gets rich.
  • Companies like OpenAI will do everything they can to make people believe that progress is still being made quickly. It doesn't really affect me outside of making the job hunt extra tedious this time around, but it is annoying how much people have been buying into their nonsense out of ignorance.
