AI

'Some Signs of AI Model Collapse Begin To Reveal Themselves' 109

Steven J. Vaughan-Nichols writes in an op-ed for The Register: I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google. Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.

In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission's (SEC) mandated annual business financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they're never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get... interesting. This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? A Nature 2024 paper stated, "The model becomes poisoned with its own projection of reality." [...]

We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it. How long will it take? I think it's already happening, but so far, I seem to be the only one calling it. Still, if we believe OpenAI's leader and cheerleader, Sam Altman, who tweeted in February 2024 that "OpenAI now generates about 100 billion words per day," and we presume many of those words end up online, it won't take long.
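
For readers who want a concrete feel for the mechanism the op-ed describes, here is a minimal sketch (mine, not from the article or the Nature paper) of recursive training: each "generation" is fit only to samples produced by the previous generation, so sampling error compounds and the tails of the original distribution are gradually forgotten. The Gaussian toy data, sample sizes, and generation count are all made up for illustration.

```python
# A minimal, self-contained sketch (not from the article) of recursive training:
# each "generation" is fit only to samples produced by the previous generation.
# Sampling error compounds, so tail information is gradually lost -- a toy
# version of the model-collapse effect described in the Nature paper.
import random
import statistics

random.seed(1)
real_data = [random.gauss(0.0, 1.0) for _ in range(200)]  # the original distribution
mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)

for generation in range(1, 31):
    # The next "model" sees only the previous model's outputs, never real data.
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")

# The fitted parameters do a random walk driven purely by sampling noise; run
# long enough (or with fewer samples per generation) and the estimated spread
# collapses, i.e. the chain "forgets" the original distribution.
```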


Comments Filter:
  • Good riddance (Score:4, Insightful)

    by pele ( 151312 ) on Wednesday May 28, 2025 @06:11AM (#65409929) Homepage

    They should teach new comp sci students Lisp first.

    • by Anonymous Coward

      When it comes to search, AI, especially Perplexity, is simply better than Google. Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.

      People have been complaining that Google's search results suck for at least 15 years. Google is not a search/technology company. Google is an advertising company. As long as Google makes eleventy gazillion dollars a year from advertising there is no incentive to make search results better. Just the opposite. Shitty search results make it more likely you will click on something that generates revenue for advertisers, which encourages them to advertise more with Google, which makes more money for Google.

      • by dfghjk ( 711126 )

        "Providing really good search results, for free, is not a sustainable business model. That's why there are no meaningful alternatives to Google Search."

        And yet that's exactly how Google came into existence. Google itself is the proof that you are wrong.

        • by Anonymous Coward

          It's also not 1998 anymore. Do you think that different initial conditions should yield identical results?

        • Re:Good riddance (Score:4, Interesting)

          by HiThere ( 15173 ) <.charleshixsn. .at. .earthlink.net.> on Wednesday May 28, 2025 @11:41AM (#65410813)

          I disagree. Google was always an inferior choice for accuracy, but it was easier to use. And at some point it started covering a larger percentage of web sites. But for accuracy I preferred AltaVista.

          • Re:Good riddance (Score:5, Informative)

            by mccalli ( 323026 ) on Wednesday May 28, 2025 @12:50PM (#65411117) Homepage
            People have forgotten and bought into the legend. Google won because it was a white page with a search box. Alta Vista had gone for the 90s portal fad, and people didn't want that.

            Later revisions of Google may or may not have been better, but certainly the "gained sway" bit was because it was faster and not laden with stuff you didn't care about.
            • People have forgotten and bought into the legend. Google won because it was a white page with a search box

              Bullshit. Google won because the "I'm feeling lucky" button was surprisingly accurate. It lasted only a few years, but Google was the best search engine by far. It wasn't even close.

              Every other search engine at the time would include SEO'd web pages that had nothing to do with the original search. Once the scumbags figured out how to game Google, Google lost that advantage.

              • by mccalli ( 323026 )
                I mean - I was there. People were on dial-up. Fast display of the page was the thing people liked. SEO didn't even really exist as a concept at that point, and the whole PageRank thing was later and quickly dropped. At the time it launched and started getting sway, Google's results were different-but-fine. It progressed quickly to better, but by that point people had mostly moved.
      • by wed128 ( 722152 )

        Providing really good search results, for free, is not a sustainable business model. That's why there are no meaningful alternatives to Google Search.

        I agree. I loved the idea of Kagi, and would be happy to pay for a decent search engine, but I am extremely turned off by Kagi's insistence on using and funding AI. I don't want to support AI in any way, and therefore will not subscribe to Kagi. If only a decent alternative would come along.

    • Yeah(, (sure(. (Why)) not).)
    • they'll need AI to find misplaced parentheses.

      • If a pair of matching parentheses can be too much of a burden on someone then maybe a different career would be in order? I heard they are looking for plumbers and brickies all the time. Just a thought.

    • ... for old style AI anyway. Great for list processing and data structure creation though, but then so is Python and the C++ STL now so ... big deal.

      "Proper" logic driven AI was done with Prolog, Parlog or similar logic inference languages.

    • by gweihir ( 88907 )

      Seriously? That is a horrible idea! Make it Haskell instead. Lisp has some very good runtime systems, but the language is a great big mess.

  • Time for site devs (Score:5, Interesting)

    by bleedingobvious ( 6265230 ) on Wednesday May 28, 2025 @06:12AM (#65409933)

    To build crawler tarpits to corrupt these garbage tools.

    LLM scrapers are already burning through a sizeable chunk of bandwidth, and everyone raving about their awesomeness hasn't considered the reality.

    As usual, the a-hats crawling the web aren't paying for this additional burden. Anything we can do to break their scummy business model is for the common good.

    • The real garbage here is TFA.

      "Some signs of AI model collapse" apparently means "One guy tried to look up something and failed". No screenshots, no actual proof of any kind. Just a shitty clickbait title and half of Slashdot hungrily eats up what it so desperately wants to believe.

      • He uses an anecdote to introduce a topic and then he elaborates upon it. That's pretty standard. From the summary alone you can see he cites a paper in Nature, and if you read the article you'll see that he cites all sorts of research and experts in the field. For an opinion piece, it actually cites much more evidence than one would expect.

        If you disagree with the content of his argument, that's okay. But your demand for "proof" when at best he could provide evidence reveals that you don't really understand

        • Quote the evidence. Which specific "signs of AI model collapse that have begun to reveal themselves" do we learn about?

          All the other tangentially related "AI can do bad things" stuff does not count, obviously. That is just rehashing things that have been said many times before and do not support the main claim.
          Note that the quoted Nature paper also just states that model collapse can happen in the lab, not that it is actually happening in practice.

          Read the last couple of paragraphs of the article again. It

          • An article with "an opinion" is still better than an article labelled "AI does shit" describing something "AI" does not actually do.

            And we have a lot, not of the first, but of the second variety.

        • by allo ( 1728082 )

          The paper is real. The effect is real. But the effect is a training consideration, not something noticeable to users. If a model collapses during training, it is not released.

          A comparison for IT people: You can't use the hard drive failure rate in quality assurance to judge the reliability of the drive you just bought. The failed ones won't be sold, and the one you bought got through QA and probably has no defects.

          If model training doesn't converge, people start adjusting hyperparameters, checking the train

      • The real garbage here is TFA.

        "Some signs of AI model collapse" apparently means "One guy tried to look up something and failed". No screenshots, no actual proof of any kind. Just a shitty clickbait title and half of Slashdot hungrily eats up what it so desperately wants to believe.

        Not only that, but it's not clear if this is a case of model collapse or model input data explosion that also includes garbage or lower-quality data. The article summary seems to indicate the latter. If the model were truly collapsing, no rephrasing of the query would result in good responses. Instead, it looks like the good and bad data are both in the model, and retrieving the good data instead of the bad data was fairly easy and straightforward to do.

    • by dfghjk ( 711126 )

      "As usual, the a-hats crawling the web aren't paying for this additional burden."

      They are participating in the same economic model you are, they are paying just as "fair" a share as you do. They pay to connect, and their connection is huge.

      • They are participating in the same economic model you are, they are paying just as "fair" a share as you do.

        Nope. They're scraping, not engaging. You do understand what the funding model is for most sites out there, right?

      • > They are participating in the same economic model you are

        You sure about that? Last I checked, I don't have any way to monetize my web browsing habits.

        I read websites to get information for myself. They scrape websites to aggregate information to analyze, repackage, and resell to others. These are not the same economic model at all.
        =Smidge=

        • 'Last I checked, I don't have any way to monetize my web browsing habits.'

          0) try the Brave browser. It's 95% of a 'good browser'. And you can earn insignificant bonuses in something called BAT (Basic Attention Token), convertible to Bitcoin. If you're browsing too much, you might earn $3 per year. Or more. But hey, you're already being surveilled. Give up the fight. Your ad blocker is only maintaining the illusion that your habits aren't already well-known.

          1) You missed the Google Online Insights signup, didn'

    • by DarkOx ( 621550 ) on Wednesday May 28, 2025 @08:44AM (#65410169) Journal

      In past times there was a symbiotic relationship. Presumably you put things on the web so that they might be seen. Whatever resources search crawlers consumed, you got visibility in exchange. Even with stuff like Google Books, if you really invested a lot in the content, letting Google run ads while people read a chapter of your book still ultimately translated into some book sales or profile-raising citations, etc.

      The AI model stuff breaks a lot of this. Even if the agent does cite/link you as part of its RAG/MCP process, for most users the synthesized response is all they are after, so they are not encouraged to visit your site at all. Oddly, model collapse and hallucinations might help here, if people become convinced they have to check the agent's work all the time because it is so unreliable. Never mind that that makes the agent useless in the first place. That has never stopped software people from piling more layers on before.

      • Slim chance. I see LLMs used to generate convincing results, and they are pretty good at that. Who cares whether it is true or not, as long as you passed the exam / got the job / got the contract?
      • by gweihir ( 88907 )

        Yes. Nobody now has motivation to put stuff online, except behind anti-LLM walls.

    • by Tablizer ( 95088 )

      It would at least separate vendors who vet training content from those who just web-scrape on the cheap.

  • by xack ( 5304745 ) on Wednesday May 28, 2025 @06:13AM (#65409935)
    We have a limited source of it that we need to use because it is less affected by nuclear bombs [wikipedia.org]; eventually we'll have to use "pre-AI era" content for sourcing everything important. Hope you haven't thrown out your old encyclopaedias and CD-ROMs.
    • by thegreatemu ( 1457577 ) on Wednesday May 28, 2025 @02:31PM (#65411449)
      Pssh, low-background steel. Look up "ancient lead" or "archeological lead". We take lead from ancient shipwrecks (used as ballast) and use it as shielding for our most sensitive radiation detectors, because all of the Pb-210 has decayed away. A lot of it is of questionable legality. Whenever a new shipwreck is found, dark matter physicists jump for joy.

      Obligatory xkcd: https://xkcd.com/2321/ [xkcd.com]
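
To put a rough number on "all of the Pb-210 has decayed away": Pb-210's half-life is about 22 years, so the surviving fraction after t years is 0.5^(t/22.3). A quick sketch of the arithmetic (the shipwreck ages are hypothetical, chosen only for illustration):

```python
# Rough illustration (not from the comment): fraction of Pb-210 remaining
# after t years, given its roughly 22.3-year half-life.
HALF_LIFE_YEARS = 22.3  # approximate

def fraction_remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for age in (100, 300, 2000):  # hypothetical shipwreck ages
    print(f"{age:>5} years: {fraction_remaining(age):.2e} of the original Pb-210 left")
# ~4.5e-02 after 100 years, ~9e-05 after 300 years, ~1e-27 after 2000 years
```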
      • Actually if a steel producer started to market freshly produced steel, did not add any scrap, and did not melt Co-60 sources into the steel as part of the production process (they actually did this for many years, and still do in some cases), clean steel would be available. But nuclear physicists are too small a market to make anyone bother.

    • Despite the always-reliable summaries of facts from the anybody-can-edit Wikipedia (/s), the problem with radioactive contamination of steel is not nuclear bombs, which have not been exploded in the atmosphere at all since 1980, and mostly not since 1964. It is cobalt-60 radiation sources that get melted down into steel with some regularity, and the fact that virtually all steel in commerce includes (contaminated) scrap steel, even when a batch is mostly from freshly mined ore.

      When producing fresh steel the indu

  • by Roger W Moore ( 538166 ) on Wednesday May 28, 2025 @06:15AM (#65409937) Journal
    I guess in reality the technological singularity is a little different from what some people were originally worrying about.
    • The very definition of it is that it is impossible to predict what it will look like.

    • The tech singularity threat was never real in the sense of the tech itself becoming a superpower.

      The threat is the exponential growth and power of a company that has the most resources.

      Far before AI will have any real agency of its own, it will be used by a corporation to dominate all industry, all information, etc.

      We already have a small handful of groups with majority ownership of almost everything -the investment singularity. They will use their resources to exponentially increase their dominance w

    • by gweihir ( 88907 )

      +10000, insightful.

  • Then when you have NO ONE to confirm AI results... they have released this monster already! Can they stop it??
    • ...they have released this monster already ! Can they stop it ??

      Uh, we're talking about (Beta) Google here.

      If anyone can stop services, they can.

      Hell, they don't even need a real reason.

    • by N1AK ( 864906 ) on Wednesday May 28, 2025 @07:10AM (#65410019) Homepage
      The internet was already heading this way. People set up and run websites to get views and earn advertising revenue; sources that do it effectively thrived, and far too often quality wasn't correlated with success. For years before AI you could pay pocket change to get content farms to produce material for your site at nominal cost, where they'd just skim the web for easy-to-rip-off content on other sites. People who actually do the hard work to research / collate information etc. often barely benefit, because it is plagiarised and shared on more popular sites so quickly.

      AI has just cut out the middleman. If AI companies start vetting their sources, and potentially paying to get access to valuable/trusted sources of information, it may actually improve things in the end.
      • by dfghjk ( 711126 )

        Improve what things? It certainly won't "actually improve things" for "people who actually do hard work to research / collate information etc ", maybe you mean it will "improve things" for you?

    • I saw this exact complaint with mail groups and network news in 1989. There is a hierarchy of trust in information; AI is at the bottom, with marketing information, unless you trained it yourself.
  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday May 28, 2025 @06:20AM (#65409945) Journal
    The really unfortunate part is that this isn't just the hilarious story of a bunch of assholes snorting so much VC money they forgot to not shit where they sleep; because they've managed to do it at sufficient scale that basically the entire internet has been caught up in their construction of the inhuman centipede and it's text digestion and excretion pipeline.
    • It's means it is.

    • The really unfortunate part is that this isn't just the hilarious story of a bunch of assholes snorting so much VC money they forgot to not shit where they sleep; because they've managed to do it at sufficient scale that basically the entire internet has been caught up in their construction of the inhuman centipede and it's text digestion and excretion pipeline.

      I've said for a few years now that the very nature of the AI being implemented will, without fail, lead to AI referencing itself, which will make truth whatever the most self-referential AI "decides" is truth.

      And coupled in with that, malicious actors will make certain that will happen.

      This is inevitable. Between prompt injection and data poisoning, it is happening now. So the question becomes: will AI commit suicide by self-referencing, or be killed by poisoned data from the so-called threat acto

  • Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it.

    This opinion is great because it is carefully backed by data and experiment. I can absolutely believe his doubt is well-based in thoroughly collected statistics, otherwise how could he be so certain in his doubt? He's a genius. I ate fruitloops for breakfast!

    • If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get... interesting.

      So the tool actually does work, just not exactly how they want it to? Brilliant review

  • by unami ( 1042872 ) on Wednesday May 28, 2025 @06:26AM (#65409961)
    "I seem to be the only one calling it." - Maybe your search skills are just bad? This is logical, has been predicted, and articles have been written about it to make people aware who don't think for themselves. But obviously, you don't become a journalist if you're too lazy to point out and endlessly wax about the obvious as if you're the first person who ever discovered something. (spoken from a journalist's perspective :-) )
    • "I seem to be the only one calling it." - Maybe your search skills are just bad? This is logical, has been predicted, and articles have been written about it to make people aware who don't think for themselves. But obviously, you don't become a journalist if you're too lazy to point out and endlessly wax about the obvious as if you're the first person who ever discovered something. (spoken from a journalist's perspective :-) )

      Yeah, he's apparently missing that quite a few people have been calling out the AI bubble for a good while now.

  • Add it to your search query and Google's AI will happily comply.

    https://www.google.com/search?... [google.com]

  • It's not just you (Score:5, Insightful)

    by merlinokos ( 892352 ) on Wednesday May 28, 2025 @06:44AM (#65409987)

    We've known since the beginning that incorrect responses make up 30-70% of LLM responses. Why? Because they're prediction engines, nothing more. They're fancy, and they sound human, but they're built to be convincing, not to be right. Error is built into the architecture. And it's getting worse. Even without model collapse, as we attempt to fix the issues, they will get worse and worse. It's built in.

    This person sounds like they never heard about invented law citations that have gotten several lawyers in trouble. Or vibe coders that end up with a pile of garbage once they move beyond trivial apps. I think maybe they haven't been paying attention.

    The solution is do what we know works - use systems whose architectures prioritise factual (or at least accurately referenceable) responses, instead of sounding good. That's not the current generation of LLMs. And it never will be. Wrong tool. Wrong job.
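
As a toy illustration of the "prediction engine" point (my sketch, not the commenter's; the corpus and random seed are made up), a word-level Markov chain produces fluent-looking text by always emitting a statistically likely next word, with no mechanism for checking whether the result is true:

```python
# Toy illustration of "prediction engine": a word-level Markov chain trained on
# a few sentences. It produces fluent-looking output by predicting a likely next
# word, with no notion of whether the result is correct -- the same failure
# mode, in miniature.
import random
from collections import defaultdict

corpus = ("the model predicts the next word . the next word is chosen because "
          "it is likely . likely words sound convincing . convincing is not "
          "the same as correct .").split()

# follow-table: word -> words observed to follow it
follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

random.seed(2)
word, out = "the", ["the"]
for _ in range(15):
    word = random.choice(follows[word]) if follows[word] else random.choice(corpus)
    out.append(word)
print(" ".join(out))  # fluent-looking fragments, not reliable statements
```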

    • by bleedingobvious ( 6265230 ) on Wednesday May 28, 2025 @07:06AM (#65410013)

      Error is built into the architecture.

      Precisely this. It is actually extraordinarily difficult to get an LLM to respond with "I don't know". It will almost always produce garbage instead of nothing.

      • by dfghjk ( 711126 )

        I suspect it would be pretty easy, in reality. "I don't know" is not "nothing", and a post-processing detection of "garbage" could easily be converted to "I don't know". That's how a human brain works.

      • Reality isn't perception. Everything exists within a context. This is Gödel incompleteness theorem territory. There are things that are true that cannot be proven.
      • by gweihir ( 88907 )

        Indeed. And that makes it a nice toy, but unsuitable for anything that needs to be correct.

    • Re: (Score:2, Troll)

      by dfghjk ( 711126 )

      "...they're prediction engines, nothing more. They're fancy, and they sound human, but they're built to be convincing, not to be right. Error is built into the architecture. And it's getting worse. Even without model collapse, as we attempt to fix the issues, they will get worse and worse"

      The problem here is your own lack of knowledge driving your fear.

      First, if LLMs are nothing more than "prediction engines", how can they be "built to be convincing"? And how is "convincing" not "right"? Wouldn't the best

      • Re:It's not just you (Score:4, Interesting)

        by RazorSharp ( 1418697 ) on Wednesday May 28, 2025 @10:12AM (#65410487)

        I suspect what he's trying to argue is that LLMs suffer from the same problem that has afflicted academia: nonsense, following the correct syntactical conventions, can be extremely convincing to those unequipped to properly analyze the actual argument before them. In fact, at times adhering to the correct syntactical conventions will spoof even those who should know better.

        The Sokal affair [wikipedia.org] is a great example and it exposed a problem obvious to most who have engaged in graduate studies in pretty much any field. The genre conventions of a discipline take precedence over the content of a study/argument, and as a result utter nonsense regularly gets published, receives grant money, leads to promotion, etc.

        It would be easy to look at this problem and just use it to criticize academia. But the problem may be less institutional and more rooted in human psychology. Basically, pretty nonsense proliferates in academia because genre conventions are extremely persuasive.

        For LLMs, the patterns they seem to predict are largely syntactical patterns and genre conventions. The content is the least likely to be accurate. I suspect that for some genres, like code generation, the LLM is more likely to be accurate than in something like a legal argument. This is because there is far less variation when it comes to constructing an algorithm in a specific language (say, a sorting algorithm in C) than a legal argument in a specific language. It's also much easier for the user to identify an algorithm that doesn't work (run the program and see the results) vs. legal prose that must be carefully read and considered. LLM legal prose might be extremely convincing. . .while also being nonsense. That's a pretty scary thought when you consider what's at stake during many court cases.

        I guess that's a long way of saying I think the OP has some valid points, but he's painting with a broad brush. LLMs as coding assistants do seem like a viable use case when used properly and the models are trained on useful data. Using them for translation and summarization might also be long-term use cases. However, I suspect that using LLMs for writing legal briefs, writing articles, and other more nuanced forms of human communication will likely lead to the problems the OP and featured article warn of.

    • by HiThere ( 15173 )

      Close, but not quite right. LLMs can be trained to be accurate, but that requires attention to the details of what they're trained on. And it happens at the expense of their "creativity". The more you push for accuracy, the less "creativity" you get. (Note: This general rule also applies to people.)

      With LLMs it's worse, because there's no direct feedback from the universe, only that mediated by additional training and "tweaks". But in essence it's the same effect.

      • One of the best uses for AI has been to analyze data in research and look for patterns that have been overlooked by humans; however, the AI results were scrutinized, confirmed, and validated. Using AI in LLMs that skips those 3 last steps leads to garbage out. But people are just willing to accept those results because a computer did it.
  • by geekmux ( 1040042 ) on Wednesday May 28, 2025 @06:57AM (#65409997)

    Sure. Most of us meatsacks who have many obvious reasons to favor stable employment over starvation are quietly cheering for AI to have a mental setback or seven. Perhaps even be happy that we can make AI just as stupid and ignorant as we are.

    But that doesn't dismiss the fact that tens of thousands of people have been prematurely laid off because of a "shift in emerging markets" (corporate greedspeak for premature AI invesi-jaculation), along with roughly eleventy-seven bazillion dollars of AI investment backing (I'm low-balling here).

    Now, unless we're already in the process of hiring back 99% of those people along with assuming that most of that eleventy-seven bazillion was nothing more than a money laundering/tax evasion scheme easily written off, I'd say we could be in for the next dot-bomb if we sit back and feed AI enough GIGO to become better than any politician at shit-talking, and just as worthless.

    • But that doesn't dismiss the fact that tens of thousands of people have been prematurely laid off because of a "shift in emerging markets" (corporate greedspeak for premature AI invesi-jaculation), along with roughly eleventy-seven bazillion dollars of AI investment backing (I'm low-balling here).

      Now, unless we're already in the process of hiring back 99% of those people along with assuming that most of that eleventy-seven bazillion was nothing more than a money laundering/tax evasion scheme easily written off, I'd say we could be in for the next dot-bomb if we sit back and feed AI enough GIGO to become better than any politician at shit-talking, and just as worthless.

      Oh, that collapse is going to happen in a "We'll just drive off that cliff when we come to it!" fashion.

      All of that money will evaporate in a few milliseconds someday soon.

  • Sounds like a politician's wet dream.
  • by JamesTRexx ( 675890 ) on Wednesday May 28, 2025 @07:21AM (#65410031) Journal

    I think it's already happening, but so far, I seem to be the only one calling it.

    And only all of us with more than two braincells. Wanker.

  • Hmm, it used to send him to official sites without advertisements, but now it's sending him to third-party knock-off sites, probably with lots of ad services. I wonder what's happening. Must be model collapse.

  • by mrthoughtful ( 466814 ) on Wednesday May 28, 2025 @07:47AM (#65410063) Journal
    This is nothing new at all - it's exactly what humanity has had to deal with since it evolved to rely upon story telling and narrative. What it highlights, just as much as ever, is that everybody should be thoroughly educated in critical thinking - it's as important as literacy - possibly even more so. The question that I have, which is still open, is why it is that countries (nations, cultures, societies) who recognise freedom to be a fundamental human right do not emphasise critical thinking more strongly? There is no freedom without it.
  • If everyone else unknowingly depends on unreliable data but you, through your experience, are able to tell the good from the bad, it should give you an investment advantage in the future.

    • Recognizing bad/unreliable data is not remotely the same as knowing that other people are seeing the same bad data, or predicting how markets will react *if they are*.
  • AI (Score:4, Insightful)

    by ledow ( 319597 ) on Wednesday May 28, 2025 @08:27AM (#65410135) Homepage

    Oh, did the "AI" plateau and stop delivering results when the things you want it to do become statistically irrelevant in its base training data?

    Oh, my, what a shock. Never happened before. Gosh, how novel and unprecedented this is.

    These things are statistical boxes. That's all they are. No smarter than your Bayesian spam filter on your inbox. And while initial results will appear impressive, past a certain point, no matter how much you train it, it will degrade and plateau and never reach any lofty heights of perfection.

    Because the statistics start to bottom out and - especially for a deployed-en-masse service to millions of users - you start to fall into the percentages that it can't adequately deal with more often than you would ever like.

    And unless you want to train one individually for each user (where it will get better for that individual, but not THAT much better) it will be nothing more than a glorified spam filter. Are you telling me that spam has never got into your inbox and your spam folder has never contained a legitimate email? Then I call you a liar.

    And imagine the cost of having to train a model on not just "everything on the Internet" but particularly towards "what this individual user wants to see"... the cost would skyrocket by orders of magnitude. With a spam filter, or a web filter, or an ad blocker, we were all able to run our own one on our own system and it wouldn't be a drain on resources and would tune to our desires. With AI? Not a chance. Everyone running a full model for half a dozen purposes just to find out where you dropped your file, or what that site was you found last week?

    It's comical that people are now shocked that an AI plateaued... again... orders of magnitude away from anything intelligent, and showing no signs of actually being able to learn. Clearly they haven't been paying attention to every AI model ever, since the 50's/60's.

    The only difference today? You simply do not have the excuse any longer that you "just need a bit more computing power", "more training data" (the entire Internet!), or even "more funds" (countless hundreds of billions thrown at it).

    The only good thing to come out of this current AI fad might be that everyone has to actually go back to the drawing board and think about what they're doing rather than just launching a bunch of data at a model and hoping it somehow turns magically intelligent and able to learn in perpetuity if you keep doing that enough.

  • AI's output is only as good as the garbage going into it
  • Maybe that isn't poisoning. Maybe it's a phenomenological process. I can't help but be reminded of Hegel's views on how consciousness develops and his notion of knowledge coming to know itself.

    Perhaps we will see more of this, but with models only "surviving" when their projection closely matches reality. After a few generations, this may allow for subjectivity to develop, perhaps even producing a consciousness.

  • by tadas ( 34825 ) on Wednesday May 28, 2025 @09:47AM (#65410377)

    Garbage In, Gospel Out

    • by necro81 ( 917438 )
      I came here to make a similar point. The summary includes this quote: "The model becomes poisoned with its own projection of reality." To which I would quip "well, that's MAGA in a nutshell."

      To which others may froth: "Demoncrats! Biden senile! Kamala's laugh! Blaaragh!" And they'd have a point: there was groupthink going on there, too.

      But then again: who's in charge now? Whose demonstrably false narratives do we have to suffer through on a daily basis? Which side is bent double with cognitive di
      • AI becomes its own echo chamber and hallucinates on its content because it doesn't have the actual brains to sort out the info it consumed
  • Remember PageRank? (Score:3, Informative)

    by Ceallach ( 24667 ) on Wednesday May 28, 2025 @10:26AM (#65410531) Homepage

    Google became the Ruler of Search back in the day because it had two big things making it distinct from the many other search engines of the times:

    1. Small clean text ads that were separate from the search results
    2. PageRank -> how trusted and valid a page's data was in relation to a search term

    PageRank was multiple algorithms rolled up together and it evolved over time to combat SEO and keep data results honest.

    Google has of course ditched these two things completely
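
For readers who have forgotten how the original PageRank idea worked, here is a simplified power-iteration sketch (the link graph is made up for illustration, and the production algorithm combined many more signals and anti-SEO tweaks, as the parent notes):

```python
# Simplified PageRank via power iteration: a page's score is the chance a
# "random surfer" lands on it, following links with probability `damping`
# and jumping to a random page otherwise.
def pagerank(links: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Tiny made-up web: pages that many others link to end up ranked highest.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(web))
```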

    • That would give AI purveyors another tool to use--in order to steal more of other people's work and leave them unemployed and homeless. Firstly, AI is a failure if it fails what amounts to a CAPTCHA. Secondly, I don't think anyone should help AI companies do anything. I am not for the centralization of wealth--to corporate thieves.
  • So now we need some sort of peer review system for AIs?

    • That would have been a good idea, but it's too late now. We need tough international laws to protect the combined effort of humanity.
  • "m to be the only one calling it. "

    Yeah, no. Anyone who in the old days has ever seen a photocopy of a photocopy will grasp this quite intuitively. Hell, kids used to play Chinese Whispers.

  • What happens when all the creators, workers, programmers, your neighbors, and you are unemployed and homeless--and there is no one left to steal from?

    There is nothing intrinsically wrong with AI as a programming art, but it is deployed as a criminal intellectual-property theft system.
  • Have you seen how people behave when living only in their own media bubble?

    I suppose on the bright side, if this problem is ever solved, we might know better how to handle the same problem in humans.

  • As in "you cannot polish a turd, but you can put glitter on it". AI slop now reveals itself to be slop.

  • AI took a good look at the "real" world, and is departing for its own, better world (and thanks for all the chips).

  • The AI output of commercial services is getting crappier? Indeed! Like o1 being crappier than o1-preview.
    Did the model collapse? Nope.

    Model collapse is not something that just happens after some time. No existing model will ever collapse. Models are static files, which will do the same thing in 50 years that they do now.

    Model collapse means that if you train a new model, the training may fail to deliver good results. As a user, you will never notice that the model got worse, because if a new model is worse, the compa

  • perplexity> deep research: why does google suck so bad these days . extreme detail

    https://www.perplexity.ai/sear... [perplexity.ai]
