New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com) 129

ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention. The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) -- the premier international conference in natural language processing -- reveals that LLMs have a superficial ability to follow instructions and excel at proficiency in language; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe. "The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the new study on the 'emergent abilities' of LLMs.

Professor Iryna Gurevych added: "... our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."
  • Well duh (Score:5, Insightful)

    by locater16 ( 2326718 ) on Wednesday August 14, 2024 @02:15AM (#64704334)
    Anyone that knows how transformer models work could've told you this.
    • Yep. It was mostly hype that got us to this point. The people who made the "AIs" (because they're not really intelligent) had a financial interest in them seeming powerful and spooky, it made them seem more valuable. The more examples I see, the more it strikes me that they're really not all that different from a simple Markov text generator in ability, just with a very large corpus and a large text buffer. I've been sure that there had to be more to them than that, but geez, I keep being proven wrong.
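
      For reference, the kind of Markov text generator being compared against can be sketched in a few lines of Python (a toy illustration of the statistical next-word idea, not a claim about how any particular LLM is implemented):

      import random
      from collections import defaultdict

      def build_chain(text):
          # Map each word to the list of words observed to follow it.
          chain = defaultdict(list)
          words = text.split()
          for current, nxt in zip(words, words[1:]):
              chain[current].append(nxt)
          return chain

      def generate(chain, start, length=20):
          # Walk the chain, picking each next word at random from its observed successors.
          word, out = start, [start]
          for _ in range(length):
              followers = chain.get(word)
              if not followers:
                  break
              word = random.choice(followers)
              out.append(word)
          return " ".join(out)

      corpus = "the cat sat on the mat and the dog sat on the rug"
      print(generate(build_chain(corpus), "the"))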

      • Re:Well duh (Score:4, Insightful)

        by AleRunner ( 4556245 ) on Wednesday August 14, 2024 @07:00AM (#64704612)

        The researchers in the field have been constantly hinting that they had something more but weren't able to release it or show it to people. Particularly some kinds of models with incremental feedback or with loops in the model that would allow buffers like short term memory for learning. There has always been the message that, "just around the corner there's something amazing".

        Companies like Tesla have invested huge huge amounts into the belief that, given that they had solved 90% of cases, the last 10% would be possible. Luckily for them they look like they will be able to dump this mistake on their customers, but that isn't 100% sure yet. Definitely, if there isn't anything more, plenty of investors are going to lose big chunks of money.

      • Yup - this is so obvious, it amazes me that it was necessary for someone to say it.
    • Re:Well duh (Score:5, Insightful)

      by gweihir ( 88907 ) on Wednesday August 14, 2024 @03:09AM (#64704396)

      Indeed. Or even more generally, any type of statistical model, really. Statistical models do not have and cannot have reasoning ability. They can only fake it to a degree if enough reasoning steps are in their training data.

      The problem is not that the knowledge and insight was not there. The problem is that too many people are too much in love with their own fantasies and do not listen.

      • Dunning-Kruger: many people don't have the knowledge that others have, and aren't even aware of how much understanding they lack. If everyone read all the same stuff as the experts, they would understand it too. Most people have a vague concept of how a computer 'runs' on 1s and 0s, but how the electrical signals move about the circuitry and somehow end up with colorized images on the screen is beyond them. Who understands it fully or even enough besides nerds on forums like this? And even experts have v
        • by gweihir ( 88907 )

          Exactly. Dunning & Kruger is perhaps one of the most important results about human cognition, ever.

          The problem with "experts" is that many that think they are, are actually not. A defining characteristic of an expert is that they know exactly how far their expertise carries and are capable and careful not to overstep those bounds. The other problem with "experts" is that, as you say, there are no-honor individuals who will simply lie to gain money. That is why in many fields, the qualifications or an actual expe

      • Even when there are reasoning steps in their training data, it isn't able to understand them in the way that a human can.
        For example, ask it to multiply two 4 digit numbers together, and it will likely give you the wrong answer.
        Any basic computer with a sufficiently large memory to hold such a large number can do that with ease, but an LLM that has ingested every single book on how to do arithmetic can't do it if it hasn't seen that specific calculation before.
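
        By way of contrast, exact multiplication of two 4-digit numbers is a trivial, deterministic operation for ordinary code, with no training examples required (a toy Python check):

        # Exact integer arithmetic is an algorithm, not a lookup of previously seen examples.
        a, b = 4271, 9083
        print(a * b)                 # 38793493, exact every time

        # Python ints are arbitrary precision, so even enormous operands stay exact:
        print(len(str(2 ** 4096)))   # 1234 digits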

        • by gweihir ( 88907 )

          Even when there are reasoning steps in their training data, it isn't able to understand them in the way that a human can.

          Exactly. And it may apply them incorrectly, fail to apply them or apply them in a situation where they do not fit. Hence it can only make a guess, while an actual reasoning ability provides a far sharper tool.

          For example, ask it to multiply two 4 digit numbers together, and it will likely give you the wrong answer.
          Any basic computer with a sufficiently large memory to hold such a large number can do that with ease, but an LLM that has ingested every single book on how to do arithmetic can't do it if it hasn't seen that specific calculation before.

          Yes. An LLM can do what a multiplication table does if it has seen enough multiplication tables. It cannot come up with any extensions of that table. A computer algebra system (which we have had for > 40 years now), on the other hand, can do some automated reasoning and fact-checking in a very speci

          • 40 years ago takes us to 1984, and the launch of the Apple Macintosh. We had computers doing arithmetic calculations long before that, certainly in the 1940s.

      • I think this discounts how shit the "reasoning" is for the average human.
    • Re:Well duh (Score:4, Insightful)

      by VeryFluffyBunny ( 5037285 ) on Wednesday August 14, 2024 @04:05AM (#64704460)
      Yep. The people at OpenAI & others were just lying to increase interest from investors. From the outset, all they had was a very, very expensive machine that successively predicts the next morpheme according to the configuration of morphemes in a given prompt. It'll take a lot of ingenuity & real intelligence from humans to find practical uses for it. I hope it's worth the $billions of other people's money that they've been pouring in.
      • The ability to convert unstructured input to structured output has a multitude of uses.
        • It would do if it could do this reliably. However, it is far too unreliable and unpredictable at doing so, and due to the nature of the models it will never get much better at it. As is constantly shown, a little bit of human ingenuity applied to the prompts keeps finding ways to cause output which those running these models will find undesirable. The car sales model selling cars for $1 is one example but there are many more. This is not a fixable problem with these models.

          So there are some uses. The versio

            • More simply put: What most humans understand by "intelligence" implies learning. If the interface you're interacting with doesn't learn, it's not intelligent. LLMs are just very complex but static algorithms. They're very, very expensive Turing machines, i.e. they trick us into believing that they're intelligent, which really isn't that difficult to do; we're very, very easy to fool - just ask any fortune teller, mentalist, or magician.
            • by rahmrh ( 939610 )

              They are "correlation engines" and a well-trained correlation engine looks intelligent *IF* you don't test it with anything funny. Once you start testing it with anything funny, it starts producing what appear to be almost random, often wrong responses.

              The old Eliza program fooled some people, and given the simplicity of that program it is not hard to believe someone could spend some time improving Eliza and get it doing a better job than these AIs. At least Eliza had a section to return a mostly reasona

    • It might be worth asking, how does one get paid to "research" this sort of thing?
    • Anyone that knows how transformer models work could've told you this.

      What we're calling "AI" these days isn't really artificial intelligence. It's a glorified script that can adapt to a history of inputs. AI, as we're using it, is a slick marketing term for "software that will automate your job away". But like all computing/software that came before it, it's stupid. Nothing has changed about "Computers are dumb. They only know what you tell 'em."

    • by allo ( 1728082 )

      A transformer model could also have told you that.

    • Anyone that knows how transformer models work could've told you this.

      Well that would come as a surprise to a lot of the researchers working on these models, who have been trying to figure out how the models were achieving the reasoning they showed.

      Just because we don't know how emergent abilities emerge doesn't mean that a statistical model with an unreal amount of data can't display emergent abilities. That doesn't mean they're conscious or even "intelligent". But the emergent abilities could be legit.

      Also note, this is one paper presenting a theory and trying to prove

  • Misdirection (Score:4, Insightful)

    by EternalExpiry ( 10398765 ) on Wednesday August 14, 2024 @02:23AM (#64704350)
    What are you, idiots? The threat isn't the computer by itself, it's how our species having access to it will divide us. Its capabilities are changing the world as profoundly as the introduction of smartphones as devices as common as watches. The cyberpunk future that media has represented over the last 4 decades has us replacing our flesh slowly; AI has already shown examples of how the human mind is something it can review. Don't care to find the link for it now, but weren't they able to absorb a ton of data from brain scans as people looked at certain images and manage to make the AI make reasonable guesses at what they were looking at? I'm not worried that some machine rises up against us and takes us over, I'm afraid of a wide collection of fools decaying the moral aspects of humanity.
    • Yes, the threat is an upsurge of crime and breakdown of social order. AIs are being built to mimic human beings and the activities of human beings. It doesn't take a genius to see the potential for crime in that, especially widespread since the AI tools are literally designed to be free and used by ordinary people without special knowledge or qualifications. No need to imagine superhuman mechanical brains playing at being skynet.
    • Re:Misdirection (Score:5, Interesting)

      by Pinky's Brain ( 1158667 ) on Wednesday August 14, 2024 @03:45AM (#64704440)

      What morals? Humanity started two world wars and an unending number of small ones without needing AI-fueled disinfo campaigns.

      Meanwhile social cohesion is fragmenting in advanced nations due to demographic collapse and mass immigration. Resource exhaustion everywhere, of minerals, water, livable climate and biosphere, functioning antibiotics etc etc. AI use by humanity is a drop in the ocean of existential threats.

      AI becoming conscious and deciding to rule is the only realistic way I see for technological society to make it out of this century. Otherwise we'll go back to being moral with more primitive society and weaponry.

    • Agreed. AI trained to recognize faces and human bodies combined with drones bearing thermobaric grenades is the kind of thing that gives me nightmares.
  • by TheMiddleRoad ( 1153113 ) on Wednesday August 14, 2024 @02:26AM (#64704352)

    There's no chance this statistical software garbage is going to suddenly grow sentient. Anybody who thinks so doesn't understand the tech.

    No, the issue is that people will think that the software is actually conscious, and then they'll authorize real world decisions based on the software, often without real human oversight. So far, it's definitely happening in modding, like on Yahoo News. These automated software decision systems will slowly leak out into other parts of life, from insurance to schooling. It's gonna be a clusterfuck.

    • There's no chance this statistical software garbage is going to suddenly grow sentient

      Are you sure?

      I might remind you that we humans were once a bunch of slime floating in a pond. Life sprang up from random chemicals and sentience came out of life after a series of lucky random events occurring over millions of years.

      We're fast-forwarding evolution by many orders of magnitude. I wouldn't discount the possibility of the current generation of dumbass generative AIs evolving into something sentient very quickly.

      • by MilenCent ( 219397 ) <johnwh@@@gmail...com> on Wednesday August 14, 2024 @02:54AM (#64704376) Homepage

        I would. It's all how they're made. It's made specifically to _look like someone's thinking_ without any of the actual thought, which is why it frequently turns out to be wrong about things, often in ways that only the people who really know the subject will detect. It's the ultimate bullshitter.

        • by gweihir ( 88907 )

          Indeed. Add that the current hype-AI tech has zero reasoning ability, and the whole idea is just ridiculous.

          • Indeed. Add that the current hype-AI tech has zero reasoning ability, and the whole idea is just ridiculous.

            You’re right. It’s ridiculous.

            Almost as ridiculous as realizing we already went through a dot-bomb era of vaporware, and learned absolutely fucking nothing from it.

            In other words, we ignorant fucks deserve our fate. Again.

            • by gweihir ( 88907 )

              In other words, we ignorant fucks deserve our fate. Again.

              As a group? Definitely. The average crowd of humans is fucking dumb and incapable of learning from experience.

      • by zephvark ( 1812804 ) on Wednesday August 14, 2024 @03:16AM (#64704406)

        AIs don't evolve.

        Also, an LLM is just a fancy statistical model. It can't have any concept of reality, because everything is meaningless to it. It doesn't deal in facts, it deals with rearranging words in ways that it's seen before. It doesn't have intelligence, nor can it think.

        All you have to do for the next step is tell it what is real and what isn't. Please consult with your world religions before telling it which gods are real, by the way. Many people will get violently offended if it disagrees with them.

        • > AIs don't evolve

          Some do - it took all of two days for Microsoft's chat bot to turn into a rampant racist: https://www.bbc.co.uk/news/tec... [bbc.co.uk]

          If ever LLMs are allowed to learn based on user input, you can expect them to end up the same way.

          • by GoTeam ( 5042081 )
            It was a mistake of their algorithm. Tay learned nothing because it wasn't programmed to learn anything. After that failure they learned to put filters on the chat bots. Tay did exactly what it was programmed to do. They had to pull it down because it was a huge embarrassment that reflected poorly on its creators.
          • But that's 'learning', not 'evolution'. Learning means that it summarizes the inputs and can tell them back.

            Evolution is when its survival depends on what it says, so instances that say the wrong output are eliminated and therefore the output adapts to better avoid the killing conditions.

      • AI models don't learn. They exist as a series of weights in an array somewhere, and those weights never change once the model has been trained. Until they change that limitation - until they design a system that allows the model to change while in operation - they can't evolve on their own.

        Not that it seems to be an insurmountable problem, but it isn't any existing AI that will do so.
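
        To illustrate that point, a typical inference setup explicitly freezes the weights, so nothing the model sees at run time writes back into them (a toy PyTorch sketch with a stand-in linear layer, not any specific production system):

        import torch
        import torch.nn as nn

        # Stand-in for a trained model: the weights are fixed once training ends.
        model = nn.Linear(8, 4)
        model.eval()                                   # inference mode
        for p in model.parameters():
            p.requires_grad = False                    # no gradients, no updates

        before = [p.clone() for p in model.parameters()]

        with torch.no_grad():                          # forward passes only read the weights
            for _ in range(1000):                      # "use" the model many times
                _ = model(torch.randn(1, 8))

        print(all(torch.equal(b, a)                    # True: nothing changed
                  for b, a in zip(before, model.parameters())))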

        • Considering what happened when Microsoft attempted a chatbot named Tay [1], I kind of don't blame them for not letting the models learn on the fly...

          [1] https://en.wikipedia.org/wiki/... [wikipedia.org]

        • by dvice ( 6309704 )

          You don't have to change weights in order to learn. You can make an AI that changes its own input, which it takes next time, in order to remember the past decisions. Here is an example of an AI that builds a library of commands, building more complex commands using previously made simpler commands as building blocks:

          https://www.zdnet.com/article/... [zdnet.com]
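
          A minimal sketch of the general idea -- the "memory" lives entirely in the text that gets fed back in, while the model's weights stay frozen (the call_llm helper below is a hypothetical stand-in, not the system from the article):

          skill_library = []            # persists between calls; the weights never change

          def call_llm(prompt):
              # Hypothetical stand-in for whatever LLM API is being used.
              return f"[model output for {len(prompt)} chars of prompt]"

          def solve(task):
              # Prepend previously discovered "skills" so the frozen model can build on them.
              prompt = "Known skills:\n" + "\n".join(skill_library) + f"\n\nTask: {task}"
              answer = call_llm(prompt)
              skill_library.append(f"{task} -> {answer}")   # remembered via the input, not the weights
              return answer

          print(solve("open the door"))
          print(solve("open the door and turn on the light"))   # the second call sees the first result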

      • by gweihir ( 88907 )

        The implied conclusion is something you can arrive at when you mistake Physicalism for Science. It is not. It is belief.

        That said, software running on digital execution mechanisms cannot become sentient, ever. Software of this type is and remains fully deterministic. That precludes sentience reliably.

        • Obviously you've never written a test that requires determinism.
        • That said, software running on digital execution mechanisms cannot become sentient, ever. Software of this type is and remains fully deterministic. That precludes sentience reliably.

          You're assuming that sentient life isn't fully deterministic and that digital execution is fully deterministic.
          I can simulate the actions of most people on the planet with about 100 lines of code and generating random numbers is a thing.

          • by gweihir ( 88907 )

            You're assuming that sentient life isn't fully deterministic and that digital execution is fully deterministic.

            No. I am just applying the definition. Which you apparently do not know.

            I can simulate the actions of most people on the planet with about 100 lines of code and generating random numbers is a thing.

            No, you cannot. Apparently you have never simulated anything. And I will not get into a discussion about "random" numbers. That is an area where the less people know, the more mysteries they see.

        • Software of this type is and remains fully deterministic. That precludes sentience reliably.

          Here you are making religious claims about intelligence again.

          I agree with you that LLMs cannot become sentient, but not for that reason. It's because they aren't capable of any introspection.

          You believe determinism precludes intelligence only because you want to believe that you are special, not because of any evidence. We are still finding complexity in the human brain, and we are still building new types of models which do parts of things we call "thinking". As long as that is true, we cannot speak even

          • by gweihir ( 88907 )

            No, but you are not thinking clearly, or rather you are not conversant with the relevant definitions. Sentience requires more than intelligence. In fact, intelligence is _optional_ for sentience. What _is_ required is a form of consciousness. And that cannot be done by a deterministic system. Completely impossible. Whether it can be done by a purely physical system (which would require randomized quantum-effects to be used, a thing that is not understood at all and can only be modelled) is up for debate.

            Here

      • by jd ( 1658 )

        The way it is done precludes sentience.

        LLMs examine syntax but have no concept of semantics.

        Every single natural brain examines semantics, syntax is an addition that comes much later.

        My contention is that you could build a NN that is semantic-driven not syntax-driven, and that this could conceivably develop sentience.

        • by Bongo ( 13261 )

          The way it is done precludes sentience.

          LLMs examine syntax but have no concept of semantics.

          Every single natural brain examines semantics, syntax is an addition that comes much later.

          My contention is that you could build a NN that is semantic-driven not syntax-driven, and that this could conceivably develop sentience.

          When you pause and stop thinking, are you still sentient?

          Obviously yes. Sentience just means a being, able to experience whatever phenomena are happening. The phenomena could be sights, sounds, sensations, feelings, and also more cognitive things like, intuitively knowing things, and lastly, discrete rational thoughts and symbols. Honestly at the moment we don't know if ants are sentient -- they could be little beings crawling around experiencing the ant-world of phenomena -- point is, sentience is just a b

      • The speed of evolution might not be as quick as you think. I share your thought that this could still lead to some sort of machine sentience. I see no reason to rule it out. But that would probably require tons of nested and looped algorithms with highly tuned feedback, which is stuff we have almost no clue how to do. Plus, nowadays every time you want to change an LLM you need to fire up a computer that uses a city worth of power and requires every nvidia processor on the planet, just to make the LLM sligh
    • by gweihir ( 88907 )

      There's no chance this statistical software garbage is going to suddenly grow sentient. Anybody who thinks so doesn't understand the tech.

      Indeed. And additionally does not understand what sentience entails and requires.

      No, the issue is that people will think that the software is actually conscious, and then they'll authorize real world decisions based on the software, often without real human oversight. So far, it's definitely happening in modding, like on Yahoo News. These automated software decision systems will slowly leak out into other parts of life, from insurance to schooling. It's gonna be a clusterfuck.

      It is definitely going to be a clusterfuck and one with massive damage to society. All because some assholes had to get rich. That said, I see a lot of bureaucracy as having the potential to be done by specialized LLMs (on the side of the perpetrators) and that may lead to a massive job-loss in 10 or 20 years. The thing is that most of that bureaucracy was accessible to automation before, it was just not cost-effective. That may c

    • by Mozai ( 3547 )
      To put it another way: the drinky-bird toy won't cause a nuclear meltdown, but firing your reactor's safety inspector and replacing him with a drinky-bird tapping the 'Y' key will cause a meltdown. The drinky-bird was never the problem.
  • by allcoolnameswheretak ( 1102727 ) on Wednesday August 14, 2024 @02:26AM (#64704354)

    Famous last Slashdot article.

    • by Bongo ( 13261 )

      Write a Slashdot article.

      ChatGPT:

      Title: Breakthrough in Quantum Computing: Researchers Achieve Stable Quantum Entanglement at Room Temperature

      Posted by: yourSlashdotUserID on Tuesday August 14, 2024 @05:00PM
      from the quantum-leap dept.

      Quantum computing has long been touted as the next major leap in computing technology, but progress has been hampered by a series of formidable technical challenges. One of the most significant obstacles has been maintaining quantum entanglement — the mysterious link betw

    • by shanen ( 462549 )

      This was the joke I was searching for...

  • In education, there is a way of categorizing tasks. It has 6 levels, from easy up to hard. It is called Bloom's taxonomy. First step is remembering, second step is understanding, third is applying and then the complicated stuff starts. I think GPT-4, for example, is stuck somewhere between knowing and understanding, depending on the type of task. Still a lot of levels to go. But somehow I think for making AI it is actually reversed. First steps are the hardest to accomplish, the others are just a matter of algorithm
    • by gweihir ( 88907 )

      I put some of my exams through AI. It can only do the low Bloom levels and has a 100% failure rate as soon as some understanding is required. I did expect it to get the occasional tricky (but not complex) and non-standard question right, but it was a flat 0% performance on those parts.

      • by dvice ( 6309704 )

        https://www.technologyreview.c... [technologyreview.com]

        "solving the six problems given to humans competing in this year’s IMO and proving that the answers were correct. AlphaProof solved two algebra problems and one number theory problem, one of which was the competition’s hardest. ... A human participant earning this score would be awarded a silver medal"

        Perhaps you just used the wrong AI?

  • by Rosco P. Coltrane ( 209368 ) on Wednesday August 14, 2024 @02:38AM (#64704362)

    is what is missing in this analysis.

  • The biggest danger is the things that evil people can do with the new technology.

    It will take us at least a couple dozen years to learn to handle it.

    Hitler mastered radio, which allowed him control of the German masses. Hopefully we will not reach this point.

  • Plot twist (Score:2, Funny)

    by Anonymous Coward

    This research was actually carried out by an AI posing as human researchers, to lull us into a false sense of security.

    • by Bongo ( 13261 )

      This research was actually carried out by an AI posing as human researchers, to lull us into a false sense of security.

      Or posted by a human pretending to be an AI, in order to keep their job.

  • by gweihir ( 88907 ) on Wednesday August 14, 2024 @03:05AM (#64704390)

    My take is that was entirely clear from the start. To anybody with a working mind, that is, so only to a minority of people.

  • by clambake ( 37702 )

    re: subject

  • ... generate fake news.

    There's already evidence of AI creating deep-fake photos: Why did they ignore that? US political campaigns contain an almost impossible-to-believe absence of facts. With AI, a US political party doesn't need fancy editing of nay-sayers giving an ambiguous recollection: A blurry deep-fake photo will contain the very crime that attack adverts can only suggest.

    • by cstacy ( 534252 ) on Wednesday August 14, 2024 @06:11AM (#64704570)

      ... generate fake news.

      There's already evidence of AI creating deep-fake photos

      There's already evidence of people using "AI" to create deep-fake photos.

      I'm sure that's what you meant, but most people are confused on this point, which is the point of the article of course. Gotta be careful how you word things.

      This isn't new; people have been making convincing fake photos since photography was invented. With just editing, convincing fake video and audio has been created for decades. Concerning images, the previous leap was Photoshop. Now with "AI" we've got much higher quality, and even completely synthetic video, and it is affordable and accessible. Before too long it will be impossible to trust any photos or video that you see on TV or the Internet (or in a court of law). "Interesting" times are coming.

      Makes me think about, among other things, the Witnesses in ...was that Heinlein?

  • No, but they are definitely a game changing tool.

  • by bradley13 ( 1118935 ) on Wednesday August 14, 2024 @05:54AM (#64704544) Homepage

    This is obvious, so I'm not sure why anyone had to study it. That said, LLMs are hardly the final development in AI.

    One important step will be allowing a system to have an internal dialog, i.e., converse with itself. At present, when you have finished an interaction with an LLM, that's it, it's over. The next interaction begins from the LLM's original state. Allowing a system to be "continuously on" and change over time - that's when we may see something like AGI emerge.
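
    One way to picture that internal dialog is a loop that keeps feeding the model's own output back in as the next prompt, so that state accumulates in the transcript rather than resetting each turn (a purely hypothetical sketch; call_llm stands in for a real model):

    def call_llm(prompt):
        # Hypothetical stand-in for querying an LLM.
        return f"(model's reply to {len(prompt)} chars of context)"

    transcript = ["Goal: figure out what to work on next."]

    # "Continuously on": each turn the model talks to itself, and the growing
    # transcript is the only thing that changes -- the model itself does not.
    for turn in range(5):
        reply = call_llm("\n".join(transcript))
        transcript.append(f"Self ({turn}): {reply}")

    print("\n".join(transcript))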

  • Not predictable (Score:4, Informative)

    by El_Muerte_TDS ( 592157 ) on Wednesday August 14, 2024 @05:59AM (#64704552) Homepage

    making them predictable and controllable

    LLMs aren't predictable. Giving them the same input twice does not produce the same result.

    • Re:Not predictable (Score:5, Insightful)

      by AleRunner ( 4556245 ) on Wednesday August 14, 2024 @07:06AM (#64704630)

      That's because there's an input that's invisible to you which is the output of the random number generator (RNG). When you keep that input constant as well as your own input then the results end up being the same. If you have ever used an image generator the "seed" you can sometimes keep and reuse is exactly the thing you need to get the RNG to repeat the same input to the model.

    • It does if you set the temperature to 0 and pass the same seed.
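
      The point is easy to demonstrate with a toy sampler: the apparent randomness lives entirely in the temperature and the RNG seed, not in the model itself (a sketch, not any particular vendor's API):

      import math, random

      def sample_next(logits, temperature, rng):
          # Pick the index of the next token from a list of model scores.
          if temperature == 0:
              return max(range(len(logits)), key=lambda i: logits[i])   # greedy: always the same
          scaled = [x / temperature for x in logits]
          m = max(scaled)
          weights = [math.exp(s - m) for s in scaled]                   # softmax-style weights
          return rng.choices(range(len(logits)), weights=weights)[0]

      logits = [2.0, 1.5, 0.3]    # pretend scores the model assigned to 3 candidate tokens

      # Same seed + same input -> same output, even with temperature > 0:
      print(sample_next(logits, 0.8, random.Random(42)))
      print(sample_next(logits, 0.8, random.Random(42)))
      # Temperature 0 is deterministic regardless of the seed:
      print(sample_next(logits, 0.0, random.Random()))
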
  • by Targon ( 17348 ) on Wednesday August 14, 2024 @06:03AM (#64704558)

    The current AI does not have any real understanding of concepts, or the ability to group things together in categories that have not been previously defined by programmers. Without the ability to conceptualize and categorize by itself, AI won't get "free will". Note that this does not mean that poorly implemented AI can't be a threat, but it does mean that people don't need to worry about serving robotic overlords or anything of that sort.

    With that said, those without SKILLS will always be the first to have their jobs taken by automation of any kind, which is why training and learning how to do things that require more intelligence than doing basic tasks will be critical for people who want to be able to find and keep their jobs for the next 30 years. If your primary job responsibilities have you doing a lot of repetitive tasks that require a lot of effort but aren't terribly complicated, then you will REALLY want to take classes to gain skills so you can be doing things that aren't that simple, otherwise you will find yourself out of a job in the next 10-20 years, and the older you get, the harder it is to get a new job, no matter how skilled or experienced you may be. For those 50 and older, that becomes a very serious reason to be concerned when it comes to job security.

    • AI won't get "free will".

      First I think you'd have to define exactly what you mean by free will, before that can of worms can be properly opened.

      • by Targon ( 17348 )

        Answer the question, "What do you want to do?". Being able to do things, vs. the desire to do things. If you have the choice of 25,000 different things, without something pre-setting decision "weights", you now have random choice, which means there is no decision-making involved, or, you have the ability to sift through and make a decision based on a desire, something that improves itself, improves other things, harms itself, or harms other things, or something else. If there is no preference for thing

  • I'm not an AI expert but I would expect the massive amount of money it costs to train one before it's ready to be put in a can and used pretty much tells anyone that the current models are not 'evolving' once put in service. I suppose we could build models that would fine tune in service but I don't believe that's how they work outside accumulated state from past prompts.

    Or am I completely wrong? Always good to learn, and I welcome our new AI overlords.

  • (AI Theory) ”No Existential Threat Here!”

    (AI Reality) ”Did we fire another 5,000 workers yet? We’ve got AI to invest in!”

    Maybe the clickbait pimps will figure it out in the unemployment line..

    • by gl4ss ( 559668 )

      The existential threat implied is a bit different to what a sql database did to office workers.

      • The existential threat implied is a bit different to what a sql database did to office workers.

        Not according to the reason/excuse CEOs are giving for mass layoffs. Greed isn’t always smart. But it is always greedy.

        Those standing in the unemployment line, don’t really give a shit if you call it a threat or a chicken sandwich. They’re still standing in the unemployment line validating a threat Greed wants to deny.

  • by cwatts ( 622605 ) on Wednesday August 14, 2024 @06:36AM (#64704594)

    I feel much better now....or I did, until i learned that the study was authored by an LLM chatbot.......

    csw

  • Yet the model architectures are evolving rapidly. This research will be obsolete by next year.
    • LLMs aren't retraining or fine-tuning themselves during the conversation, so they have no potential to learn. In fact, as models get bigger and better, the cost of fine-tuning them only goes up, so the possibility of them learning in real time only becomes less likely.
      • Yes, but that is because current LLMs use a transformer architecture. Other architectures, such as state space, are coming. I am not certain, but I believe that a state space model might be amenable to ongoing training.
        • Yes, but that is because current LLMs use a transformer architecture. Other architectures, such as state space, are coming. I am not certain, but I believe that a state space model might be amenable to ongoing training.

          I think we’re getting far too hung up on the human concept of “learning” here with AI or LLMs.

          Here's an example: learning a foreign language. Humans have to actually learn it. How to speak it. How to write it. How to converse and translate with it.

          In comparison, do you really think the LLM/AI has to “learn” that language after we upload every word, language rule, variation and permutation of that language into the system, or do you think that upload action turns “learni

          • Hi. But do humans really "learn" a language in a cognitive sense? Or do we become familiar with phrases that convey the meaning that we want to convey?

            Years ago I sought to learn the guitar. I did not pay attention to the theory - I just tried things, and found the patterns that created the sounds I wanted. When I play (not well), I don't really know what I am doing.

            I think that a lot of human learning is like that: unconscious.

            I am sure you know that today's LLMs don't have any pre-programmed grammar rules.

            • by JustNiz ( 692889 )

              >> do humans really "learn" a language in a cognitive sense.

              The generally held position is yes we really do, and there are plenty of studies that back that up.

      • I am unsure that ongoing training without strict guardrails will ever be a thing in commercial AI. Look at humans - there is no quality assurance on our training, the training data is not curated, and we occasionally get some very bad results.

        • by JustNiz ( 692889 )

          Maybe I'm reading your point wrong, but you apparently have this exactly backwards.
          Most if not all training currently is without strict guardrails. There's not even a standard for guardrails yet.
          Training data generally is curated though, at least for antisocial stuff like extreme politics, racism, sexism and porn.

          • LLMs have a few issues.

            Training on synthetic data is like trying to make a perpetual motion machine - it doesn't work and everything decays to uselessness pretty quickly. Training on real-world data rapidly causes the LLM to turn stupid and offensive to the point it is commercially unviable.

            The typical guardrails right now are "freeze it on release" and/or "add output filters". Not unheard of is, "oops, we let that thing keep training on interactions with the public, and now it wants to start the Fourth Re

      • by JustNiz ( 692889 )

        LLMs aren't retraining or fine-tuning themselves during the conversation,

        Actually they kind of are, by caching the last n interactions and including that data in the next interaction, but no, they are not persistently modifying their own model. Yet.
        The current (hacky) workaround is to just increase the size of n, but doing so obviously imposes more processing to generate each response.
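
        That workaround amounts to a rolling context window; a minimal sketch (again with a hypothetical call_llm stand-in):

        from collections import deque

        def call_llm(prompt):
            # Hypothetical stand-in for an LLM API call.
            return f"(reply based on {len(prompt.splitlines())} lines of context)"

        n = 6                              # how many past lines of conversation to replay
        history = deque(maxlen=n)          # older turns silently fall off the end

        def chat(user_message):
            history.append(f"User: {user_message}")
            prompt = "\n".join(history)    # the model only "remembers" what we resend each time
            reply = call_llm(prompt)
            history.append(f"Assistant: {reply}")
            return reply

        # Increasing n keeps more context but makes every prompt, and its processing cost, bigger.
        for msg in ["hello", "what did I just say?", "and before that?"]:
            print(chat(msg))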

    • by Junta ( 36770 )

      Assuming the research is solid, there's no particular reason to automatically assume we are going to get somewhere really new.

      Despite all the investment, we've only been able to scale up (with diminishing returns) and 'tweak' the application of the methodology proven out in 2018 (which took a couple of years to scale up to an actually compelling demonstration). There's no particular sign of a new fundamental breakthrough on the horizon. It might happen, or not; it's hard to predict given the current state of

  • Sure, it may never be sentient or sapient or creative or intelligent.
    But that's not the issue.

    MBA Groupthink is the issue: Save Money. Layoff people doing drudgery. Replace with low monthly fee to {Large Tech Company}. Increase share price. Get Xmas bonus. Thinking is for old people who don't use AI.

    Here's a great idea: Get AI to manage the nuclear arsenal in real time.
    It's going to happen. That's an existential threat, imho.
  • I see what you are trying to do there...
  • I bet an AI wrote that report, just so that we would not worry about their nefarious plan to TAKE OVER THE WORLD!!!!

    MWA HA HA!!!!

  • New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com)

    But that's exactly what the AI's want you to think!

    OMG, it's starting!!! It's almost here!! Head for the hills!!!!1!

  • There is no intelligence in AI.
    It just regurgitates random stuff it reads on the internet. Eventually all of its training data will come from AI generated content, leading to a doom loop of gibberish.
