AI Isn't Creating New Knowledge, Hugging Face Co-Founder Says (x.com) 39

An anonymous reader shares a report: AI excels at following instructions -- but it's not pushing the boundaries of knowledge, says Thomas Wolf. The chief science officer and cofounder of Hugging Face, an open-source AI company backed by Amazon and Nvidia, analyzed the limits of large language models. He wrote that the field produces "overly compliant helpers" rather than revolutionaries. Right now, AI isn't creating new knowledge, Wolf wrote. Instead, it's just filling in the blanks between existing facts -- what he called "manifold filling."

Wolf argues that for AI to drive real scientific breakthroughs, it needs to do more than retrieve and synthesize information. AI should question its own training data, take counterintuitive approaches, generate new ideas from minimal input, and ask unexpected questions that open new research paths. Wolf also weighed in on the idea of a "compressed 21st century" -- a concept from an October essay by Anthropic's CEO Dario Amodei, "Machines of Loving Grace." Amodei wrote that AI could accelerate scientific progress so much that discoveries expected over the next 100 years could happen in just five to 10.

"I read this essay twice. The first time I was totally amazed: AI will change everything in science in five years, I thought!" Wolf wrote on X. "Re-reading it, I realized that much of it seemed like wishful thinking at best." Unless AI research shifts gears, Wolf warned, we won't get a new Albert Einstein in a data center -- just a future filled with "yes-men on servers."

Comments Filter:
  • Surprise! (Score:5, Insightful)

    by Errol backfiring ( 1280012 ) on Monday March 10, 2025 @10:10AM (#65223037) Journal
Most human inventions also combine existing techniques in a clever way to solve existing or emerging problems.
    • Re:Surprise! (Score:4, Interesting)

      by Ol Olsoc ( 1175323 ) on Monday March 10, 2025 @11:03AM (#65223141)

      Most human inventions also combine existing techniques in a clever way to solve existing or emerging problems.

      This is very true. I was gobsmacked long ago when I referenced what I believed was cutting-edge technology I was working on, and found it had originally been proposed some 50 years ago. It just had to wait until materials, and the techniques for working those materials, caught up and enabled me to continue to a final technological implementation. Happens all the time.

      Now, all that said, I do have concerns that AI will end up self-referencing to the exclusion of human research. Governments or agenda-driven groups could manipulate it, and we could end up with "proof" of things that don't exist, rewritten history, or many other things to suit various objectives.

    • Re: (Score:2, Informative)

      Also, AI isn't doing anything with regard to finding clever, i.e. novel or unexpected, combinations.

    • Ya, who would've thunk it?

  • LLM!=AI (Score:3, Insightful)

    by Iamthecheese ( 1264298 ) on Monday March 10, 2025 @10:10AM (#65223041)
    AI absolutely is creating new knowledge. Even generative language models helped with that protein folding thing. The headline takes one narrow aspect of AI that doesn't create knowledge and extends that into the claim that AI can't create knowledge.
    • Re:LLM!=AI (Score:5, Informative)

      by gweihir ( 88907 ) on Monday March 10, 2025 @10:42AM (#65223105)

      No. In the "protein folding thing", AI is just used as a filter, it creates nothing.

      • by gtall ( 79522 )

        I am not entirely sure that is the case. Researchers have been using AI's tendency to hallucinate to their advantage. It can produce novel protein folds. The kicker is that some are impossible, some are useless, some are marginal, and some precious few might be usable....you just have to sort through the crap to get the golden droppings. My guess is that there are too many possibilities to simply enumerate them except in theory and even if you did, it is not clear which are useless, which are marginal, and

        • by gweihir ( 88907 )

          Well, randomization can occasionally (very rarely) create new things, but only in low-complexity scenarios. You could also just have done that randomization on the input.

      • by dvice ( 6309704 )

        So, before AI we did not know how hundreds of millions of proteins are folded.
        After AI filtered out the incorrect ways to fold proteins, we now know how hundreds of millions of proteins are folded.

        But even so, AI created new knowledge, because we didn't know as much about those proteins as we do now. And we would probably never know without AI.

      • The OP is absolutely correct: AI, or rather machine learning, absolutely is being used to create knowledge. We use various machine learning techniques in particle physics to find and reconstruct events in detectors, astrophysicists use it extensively to process images etc.

        In my own research group my former PhD student used Graph Neural Networks to find a subset of rare events and the paper from that is currently being written. Go look in any current particle physics journal and most of the experimental p
        • by gweihir ( 88907 )

          Not really. Filtering stuff out is not "creating new knowledge". It is merely "better search" and that is basically (besides "better crap") the only actual LLM application at this time.

          • Not really. Filtering stuff out is not "creating new knowledge".

            It is when the filter reveals new types of things that have never been seen before. The way filtering can create knowledge is by removing backgrounds that obscure a new signal, revealing something that is unknown to science and thus creating new knowledge. The Higgs boson was found this way, using filters and reconstructions that in some places used machine learning. So unless you would like to argue that finding the Higgs boson was not new knowledge, machine learning can and has found new knowledge.

            Inde

    • That Nobel Prize-winning paper does not really agree with "AI is creating new knowledge". The method described in their paper uses a neural net to perform "unconstrained protein hallucination" to create lots of data to test against a rule. That's way oversimplifying it. It's not "infinite monkey theorem" territory. I'm not a computational biologist, but just scraping the surface of that paper tells me it's not "AI creating new knowledge".

      • It is literally doing tests we have not done before, leading to knowledge we did not have before; there is absolutely zero logic to claiming that it is not creating new knowledge.

        It's knowledge we knew how to develop before AI but didn't have the time, how is that not transformative?

  • by bradley13 ( 1118935 ) on Monday March 10, 2025 @10:21AM (#65223061) Homepage

    Did anyone think otherwise? The thing that current AIs do well is bring together knowledge from disparate sources. Where you might enter some search terms and look at 20+ sites to find information on some obscure topic, the AI has already consolidated that information.

    AIs that can extend and extrapolate, or even come up with completely new concepts? That's going to take some serious breakthroughs.

    • That has been my experience: mixed results, sometimes good results that inspire, other times a messy word salad that did not.
      • by gweihir ( 88907 )

        Indeed. Somewhat better search and aggregation of existing knowledge if you are lucky. "Better crap" if not.

        Fixing that would require AGI. We are not getting AGI anytime soon, despite what some liars like to claim.

      Did anyone think otherwise? The thing that current AIs do well is bring together knowledge from disparate sources.

      I've used it to aggregate knowledge, but at times the results are better for a laugh than anything else.

    • by dvice ( 6309704 )

      Well, we have AlphaFold, which alone is enough to prove that the knowledge question can only apply to chatbots, not AI in general. But that is obviously not the only AI: https://deepmind.google/discov... [deepmind.google]

      We also have interesting cases where a chatbot is used to create functions that can call other functions generated by AI, and construct larger functions from smaller ones. The end result of these functions is an AI that can solve Minecraft faster than old systems. The AI itself didn't exactly create anythi

    • by evanh ( 627108 )

      The vast majority of laymen, and plenty of folks here on Slashdot too, expect AI to solve everything at the snap of a finger. That's what we're being sold, after all.

  • by gweihir ( 88907 )

    At least for LLMs. Next blatantly obvious statement?

  • by Z80a ( 971949 ) on Monday March 10, 2025 @10:44AM (#65223111)

    For example, you can come up with a novel gameplay concept never done before and ask Grok or ChatGPT etc. to describe parts of levels that use the concept, and they will "use" your concept and create pretty good ideas based on it.
    That's arguably "new knowledge".
    Just because most use cases for it don't create new knowledge doesn't mean you absolutely can't create new knowledge with them.

  • by Somervillain ( 4719341 ) on Monday March 10, 2025 @10:47AM (#65223121)
    Generative AI is extremely sophisticated autocomplete with an astronomical cost and carbon footprint. It's not useless, but it's overhyped and far from intelligent. As a tool, it's neutral. However, they call it AI instead of LLM, and everyone thinks of the Terminator, Jarvis, HAL, or I, Robot instead of realizing they have a fancy, expensive autocomplete tool...which is an economic disaster. CEOs throughout the industry are slashing staff due to interest rate hikes, failed forecasts, and lowered growth projections...routine stuff that's viewed as bad news by most investors.

    However, they're lying to investors and saying they're laying off these people due to AI productivity gains, not due to market saturation and reduced enthusiasm for their new offerings. Thus for those who don't work with LLMs, you've spent your lifetime thinking AI is what you saw in the movies, the biggest tech companies say they have AIs and thus you think they've invented Jarvis Junior...because they overtly lied and said they did.

    So instead of lowering expectations for Google, X, Salesforce, Oracle, etc., we should admit the huge party we've had for the last 25 years is winding down: we've run out of compelling, world-changing things to do with technology until someone comes up with a good, world-changing new idea. Put simply, until someone invents the next equivalent of the iPhone, something that creates a whole new market, the tech market is maturing, which means drastically reduced growth and people being happy with what they have and having less reason to upgrade...so tech ends up looking like the auto sector: some innovation, but a lot less growth and excitement.

    You're being falsely misled into believing they have a bright future in AI. Really, misled is too nice a term. You've been overtly lied to about the capabilities of these AIs they're developing.

    More people should be outraged. As this founder correctly stated, we have no technology that can create new knowledge...just pattern-match existing knowledge. That's very useful in many areas...but we're far, far, far away from the singularity...which is not the technology's fault, but I do fault the leaders who are lying about what it can do.
    • Re: (Score:2, Insightful)

      by Ol Olsoc ( 1175323 )

      However, they're lying to investors and saying they're laying off these people due to AI productivity gains, not due to market saturation and reduced enthusiasm for their new offerings. Thus for those who don't work with LLMs, you've spent your lifetime thinking AI is what you saw in the movies, the biggest tech companies say they have AIs and thus you think they've invented Jarvis Junior...because they overtly lied and said they did.

      AI is not much more than a bubble; like other bubbles, it will burst at some point, and billions of dollars will evaporate overnight. I wonder how far along the renovation to restart the shuttered Three Mile Island reactors will be when that happens. If we need our very own nuclear power station to generate high-school-level term papers, it might be time for introspection, not driving over the cliff to be the first to do so.

  • Splitting the difference usually doesn't work very well, but for this question I think it gives us a pretty plausible picture of what's going to happen -- indeed what *is* happening.

    AI isn't going to usher in a utopia in which machines provide better answers than human beings can for every problem we apply our brains to. Not yet, anyway, and probably not ever. But it's not going to accomplish *nothing*. It is certainly going to change things, although not consistently for the better.

    Both humans and AIs

    • by gweihir ( 88907 )

      AI will be used *by a small number of the most capable humans* to create new knowledge, but those accomplishments will be against a backdrop of a rising tide of computationally supercharged mediocrity.

      Probably. And whether the first part of that will happen remains to be seen. The second part is a pretty solid prediction though.

  • What would equivalent investment in human intelligence produce? What are the tradeoffs?
    • by dvice ( 6309704 )

      Let's see: DeepMind's revenue is about 1.5 billion a year, and it was founded 14 years ago. This is a rough upper limit, since it is just multiplication: about 21 billion.
      DeepMind created many things, but one of them is AlphaFold. AlphaFold produced protein folds for 200 million proteins.
      The cost to fold a single protein experimentally is about 120,000 dollars. So in total the work is worth about 24,000 billion.

      So you want to compare the 21 billion of work put into AI with the 24,000 billion it would have cost to do without the AI. Humans can no
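
      The back-of-the-envelope arithmetic in this comment can be sketched out directly; the figures (revenue per year, years since founding, protein count, per-protein cost) are the commenter's rough estimates, not audited numbers:

```python
# Rough comparison of AI investment vs. the equivalent human cost,
# using the figures quoted in the comment above (estimates, not audited data).
revenue_per_year = 1.5e9       # DeepMind revenue estimate, USD/year
years = 14                     # years since DeepMind was founded
ai_cost_upper_bound = revenue_per_year * years  # crude upper bound on AI spend

proteins_folded = 200e6        # structures predicted by AlphaFold
cost_per_protein = 120_000     # estimated cost of one experimental fold, USD
human_equivalent = proteins_folded * cost_per_protein

print(f"AI upper-bound cost:   ${ai_cost_upper_bound / 1e9:,.0f} billion")
print(f"Human-equivalent cost: ${human_equivalent / 1e9:,.0f} billion")
# prints: AI upper-bound cost:   $21 billion
# prints: Human-equivalent cost: $24,000 billion
```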

      • That is a very specific (cherry-picked) use case. I meant what we call "AI" broadly, where investments are in the hundreds of billions and returns seem quite elusive. I also don't equate revenue with value. What is the actual value of even this folding for humanity and the ecosphere in total? What is the opportunity cost?
  • I see AI as a supremely useful tool (eventually) for managing the information glut we live in. I look forward to that, i.e. a backward-looking tool that can (for example) pore through molecular models faster than any human to propose credibly possible new synthetic materials.

    However, right now it seems more like people are trying to crowbar it into the 'solves all problems we can't figure out' category, where it's prone to (in my experience) creating misleading (but credible-appearing) cul de sacs of inform

  • I've had an ongoing discussion with friends about what is a more significant waste to humanity. It's two parts.

    1) Crypto Mining or AI LLMs
    2) Electrical Usage by each

    Both consume terawatt-hours of energy annually.

    If LLMs are simply repackaging old information, are they more or less beneficial than current search engines? Or
    than crypto mining, with all the energy it consumes, just paying enough to cover the costs of generating the currency?

  • My understanding is that LLMs as currently designed and built are for quickly gathering and distilling current knowledge.

    Most of the use cases I've seen are around augmenting human effort, not necessarily replacing it.

    I'm sure others may see it differently. I also anticipate that as the technology grows and matures the use cases will likewise change.

  • by classiclantern ( 2737961 ) on Monday March 10, 2025 @11:54AM (#65223295)
    Our dreams are stored as random data in our brains at night, ready to be processed as new ideas during the day. That is my suggestion for creating Skynet.
  • Current AIs could easily create a Hallmark Christmas movie, but they could not create the Barbie movie. There are lots of examples of Hallmark movies, but the Barbie movie was unique.
  • Perfect employees! No wonder executives are rushing to replace all of their employees with AIs. This is what they've always wanted.

  • Because I thought it was very insightful when I read it on the train two hours ago. It seems Melonia has removed it now or something...
