AI

Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find (mit.edu) 131

Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are implicitly learning some general truths about the world, that isn't necessarily the case. The recent paper showed that Large Language Models and game-playing AI implicitly model the world, but the models are flawed and incomplete.

An example study showed that a popular type of generative AI model accurately provided turn-by-turn driving directions in New York City without having formed an accurate internal map of the city. Though the model navigated effectively under normal conditions, its performance plummeted when the researchers closed some streets and added detours. And when they dug deeper, the researchers found that the New York maps the model implicitly generated had many nonexistent streets curving between the grid and connecting far away intersections.


Comments Filter:
  • No kidding (Score:5, Insightful)

    by alvinrod ( 889928 ) on Sunday November 10, 2024 @05:41PM (#64935477)
    LLMs don't have an understanding of anything. They can only regurgitate derivations of what they've been trained on and can't apply that to something new in the same ways that humans or even other animals can. The models are just so large that the illusion is impressive.
    • by gweihir ( 88907 )

      That nicely sums it up. When you pool billions of details with simple connections, you get the illusion of a model. You do not get a model, because that requires abstraction and that requires actual intelligence.

      • Or dogs.
        We may be doing much the same thing. All we are actually good at is 3D navigation and language. Everything else, like say logic, math, science, is super hard. We actually rely on models precisely because these go beyond intuition. But when it comes to symbolic reading, we exploit our ability to use language. I think AIs are doing the same thing. They just lack the intuitive 3D navigation training that our brains evolved with. But they clearly have the language part. So they can reason but mayb

    • I've been saying this for two years now. AI is GIGO: garbage in, garbage out. There's been nearly ZERO effort put into accurate model creation.

    • Exactly. I have no idea where or why people even got the idea that our current version of AI has ANYTHING to do with the concept of "understanding". Are we living in crazy world here?

      I just realized that most people never examine themselves and their own thinking. It is creeping me out, living with automatons.

    • I realized recently that what passes for insight in LLMs is really just "the wisdom of the crowds". It's well known that if you average together enough people's faulty guesses of quantitative information, you often get something close to the truth. (This is also known as the Delphi process.)

      This is essentially what LLMs do. Instead of thinking independently about something, they apply a fancy statistical estimate of what the average person on the internet would say about it. Th
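      A toy illustration of that averaging effect, with made-up numbers (the "true" value, the number of guessers, and the noise model are all hypothetical):

```python
import random

random.seed(0)
true_value = 1200          # e.g. the actual number of jellybeans in a jar
num_guessers = 10_000

# Each guess is the truth plus large individual error (here, +/- 50% noise).
guesses = [true_value * random.uniform(0.5, 1.5) for _ in range(num_guessers)]

crowd_estimate = sum(guesses) / len(guesses)
print(f"average of {num_guessers} noisy guesses: {crowd_estimate:.0f} (truth: {true_value})")
# With roughly zero-mean noise, the average lands close to 1200 even though
# individual guesses are frequently off by hundreds.
```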

  • by ls671 ( 1122017 ) on Sunday November 10, 2024 @05:41PM (#64935479) Homepage

    Seriously, did we need a MIT study to know that?

    • And if it was MIT, why didn't they first try it out on a map of Boston [app.goo.gl]?
      • And if it was MIT, why didn't they first try it out on a map of Boston [app.goo.gl]?

        Probably because the city of NY (Manhattan island especially) is much simpler road-wise than Boston.

          • Funny enough, I spoke to somebody from Boston who knows the story of how the roads were basically first laid out by cows, and says that somehow the cows managed to do a better job of planning the roads than whoever designed the ones here in Los Angeles. I still remember when I first moved here, one of my first WTF moments was seeing a green right-turn arrow just above a no-right-turn sign. Though admittedly that came some time after I realized that they only put the freeway exit number signs after the actual exit.

          • by flink ( 18449 ) on Sunday November 10, 2024 @11:16PM (#64935947)

            Lifetime Bostonian here. This is a common myth. The roads aren't cow paths; they are people paths. The roads follow natural contours that were most convenient for people to walk: ridge lines of elevation or the contour of the shoreline. Over the years, however, a lot of that elevation was shoveled into the sea to make new land, so neither the original hills nor the original shore that the roads followed are still around, leaving a seemingly nonsensical layout. If you look back at old maps, the roads make a great deal of sense for a person traveling under their own power. The newer areas that were reclaimed from the sea have straighter roads and simpler layouts. There is even a grid in Back Bay, with alphabetical street names, where there was a large land-reclamation project.

              Here's a decent little video [youtu.be] on the subject.

        • Even MIT researchers still haven't figured out how Boston roads work except as a four dimensional non-Euclidean space.

    • by narcc ( 412956 ) on Sunday November 10, 2024 @07:38PM (#64935681) Journal

      Apparently. There are a surprising number of people, even in the field, who have what I can only describe as religious beliefs about emergence. It's disturbing.

    • Well, probably yes. It won't be enough though to convince the true believers in marketing who want AI to be everywhere. You also don't need MIT to prove the earth isn't flat, and yet we have a thriving flat earther movement arising from what was a vanishing handful of kooks.

    • Still, a lot of businessmen are jumping and dancing around their new AI god. I think it needs more than MIT to get them to wind down. So eager to replace those disobedient, clumsy humans. They will never learn.
      Hey, I just got an idea! Let's replace them with an AI!
    • by Touvan ( 868256 )

      I came here to say the same. It's obvious based on even a shallow understanding of how the technology works that it doesn't "understand" anything - it's just predicting tokens based on a previous body of text, in a way that generates something that has the appearance of intelligence. It's true "artificial intelligence" in the old sense. I don't understand why so many highly technical people keep missing that.

      • by ls671 ( 1122017 )

        I call it a sophisticated Bayesian filter, similar to what SpamAssassin uses to determine whether an email is spam or not. Bayesian filters use tokens too.
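        For anyone who hasn't seen one, a bare-bones sketch of that kind of token-based Bayesian scoring (the tiny corpus and messages are invented for illustration; SpamAssassin's real Bayes plugin is far more elaborate):

```python
import math
from collections import Counter

# Tiny hand-made training corpus (hypothetical).
spam = ["win cash now", "cheap pills now", "win win win"]
ham  = ["lunch at noon", "project status now", "cash your paycheck"]

def token_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

spam_counts, ham_counts = token_counts(spam), token_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())

def spam_score(message, alpha=1.0):
    """Log-odds that `message` is spam, assuming equal priors and
    Laplace smoothing `alpha` for unseen tokens."""
    vocab = set(spam_counts) | set(ham_counts)
    score = 0.0
    for tok in message.split():
        p_spam = (spam_counts[tok] + alpha) / (spam_total + alpha * len(vocab))
        p_ham  = (ham_counts[tok] + alpha) / (ham_total + alpha * len(vocab))
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham

print(spam_score("win cash"))       # positive: looks spammy
print(spam_score("project lunch"))  # negative: looks like ham
```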

    • Seriously, did we need a MIT study to know that?

      I'm typing this over lunch, so sorry for any typos:

      Half of my brain agrees with you about asking whether we need such studies. For us, it's obvious LLMs do not understand anything at all.

      However, we reach our (correct) conclusion just by reasoning from our own understanding of how things work. We argue our conclusions, but we do not demonstrate them, for the obvious reason that it's hard.

      And it is not wrong of us to simply argue rather than demonstrate. How often do we demonstrate that 2 + 2 is indeed four, or that th

  • understanding? (Score:5, Insightful)

    by dfghjk ( 711126 ) on Sunday November 10, 2024 @05:41PM (#64935483)

    More anthropomorphizing neural networks. They don't have "understanding" at all, much less "coherent" understanding.

    • Moreover, neural networks hate to be anthropomorphized.

      the New York maps the model implicitly generated had many nonexistent streets curving between the grid and connecting far away intersections.

      BTW, does anybody have a link to the report from the researchers MIT sent to New York to check out the glitches in the matrix discovered by this neural network?

    • by gweihir ( 88907 )

      "Understanding" is a synonym for "deduction abilities" here, a term too advanced for many people. And yes, LLMs and neural networks have no deduction abilities.

  • ...CaptObviousGPT

    Very few experts ever claimed it had common sense-like reasoning, and those who did usually added caveats to their claims.

  • by ClickOnThis ( 137803 ) on Sunday November 10, 2024 @05:49PM (#64935499) Journal

    I have met lots of people who don't have a coherent understanding of the world. This week I watched them ... oh, never mind.

    • These studies really would be vastly more interesting if they tested humans on the same tasks.
      • Re: (Score:3, Insightful)

        by ClickOnThis ( 137803 )

        These studies really would be vastly more interesting if they tested humans on the same tasks.

        Yeah, not sure how you could do it though. It might not even be apples and oranges, more like apples and Apple computers. The models can handle enormous amounts of data and can be examined to determine their internal structure. To get the same thing from a human, you'd need to test behaviors and ask questions. Lots of questions.

        The interesting thing is that the models started to infer the existence of roads without seeing them. I suppose humans do the same thing! And the models' performance "plummeted" when

      • We have. People have managed to walk, and drive, around NYC for a very long time before the invention of in-car nav systems.
    • You could try to step out of your comfort zone and listen to podcasts of people who don't agree with you or read newspapers with more objective views of the world.

    • by gweihir ( 88907 ) on Monday November 11, 2024 @12:47AM (#64936051)

      True. Only about 20% of all people are accessible to rational argument. That pretty much means the rest have no understanding of how things actually work. For example, they do not understand what a fact is or how science works. They think their gut feeling is as good as or better than an expert analysis. They think physical reality cares about their wishes and can be changed by belief. And other utter crap. Or, to speak with Charles Stross: "The average person understands nothing."

      • This. When I hear someone say "do your own research" -- I cringe. It really means "ignore the experts who have spent their lives studying the problem."

        I fear a revolution is brewing, and those who embrace reason are not going to fare well.

        • Hitchens's razor: that which can be asserted without evidence can be dismissed without evidence.

          Whenever I hear anybody make a claim about anything and then attempt to substantiate that claim by insisting other people do their own research to arrive at the same conclusion, I immediately dismiss everything they have to say. If you cannot back up your claims with real, testable evidence, then your claims are without merit.

        • by gweihir ( 88907 )

          I fear a revolution is brewing, and those who embrace reason are not going to fare well.

          We are only in demand when we deliver weapons for the cavemen to kill each other or things for them to buy as personality-prostheses.

  • I'm wondering if it would be possible to hook it up to the likes of Cyc, a logic engine and common-sense-rules-of-life database. The engine could find the best match between the language model (text) and Cyc models, weighting to favor shorter candidates (smallest logic graph). Generating candidate Cyc models from language models may first require a big training session itself.

    I just smell value in Cyc's knowledge-base, there's nothing on Earth comparable (except smaller clones). Wish I could by stock in it

    • Corrections:

      "weighing to favor shorter candidates" [No JD jokes, please]

      "Wish I could buy stock in it"

      (Bumped the damned Submit too early)

    • It's amazing how many startups are out there just repeating the same LLM approach with more data, but none (afaik) are trying something like joining it with Cyc. If I were raising billions for an AI startup, I would consider at least trying that as a side project.
      • by Tablizer ( 95088 ) on Sunday November 10, 2024 @06:27PM (#64935585) Journal

        Indeed. With all the investing going into increasingly questionable AI projects you'd think somebody with money would zig when everyone else is zagging to try bagging a missed solution branch/category.

        Reminds me of the Shuji Nakamura story on the invention of a practical blue LED. Red and green LEDs were already commercially viable; blue was the missing "primary light color" needed to mix the full rainbow. Many big companies spent a lot of R&D on blue but kept failing. Their blue LEDs were just way too dim.

        Zinc selenide (ZnSe) had been the most productive LED technology up to that point, but gallium nitride (GaN) had some promising theoretical properties despite early failures. The rest of the industry felt ZnSe was clearly the safer bet, being easier to tame. Shuji decided GaN was worth exploring after so many ZnSe failures with blue. He found a promising incremental improvement and kept at it, working long hours without pay and pissing off his boss, but it eventually paid off, and he won a Nobel.

        Not having a PhD (yet) like most of his colleagues, he was often given the grunt work of repairing equipment. But that work also taught him how to tweak the crystal-growing machines for new variations. He eventually learned to "play the crystal-growing machine like a piano", getting almost anything he wanted.

        Shuji's a true underdog Nerd Hero.

        There is too much Me-Too-Ism in IT in general. Don't get me started about IT projects ruined by fad chasing, I won't shuddup.

      • Aside from being novelty-addled herd animals, I think there's a much stronger cultural affinity for the technology whose appeal is that you can sometimes get surprisingly plausible outputs from nescience so profound it would be anthropomorphizing to call it ignorance, than for the technology founded on the hope that if you systematically plug away at knowing enough you might eventually be rewarded with competent outputs.
    • I had a similar thought. Expert systems are good at some things; LLMs are good at others. We need to combine them. LLMs are superb at converting unstructured input into structured output. There has to be something there.
    • AlphaGeometry [wikipedia.org] works somewhat as you suggest: a LLM to suggest ideas, and an old-school symbolic inference engine to check if those make sense and lead to a solution.
    • by gweihir ( 88907 )

      My impression was that the Cyc project is mostly considered a failure at this time...

    • I wonder why LLMs don't have some kind of "this is true facts" database they can refer to. I know that the whole point of an LLM is that it contains all that data anyway, but I just think you lose a bit of information when you don't use full floating-point numbers. It doesn't feel like you're working with a full deck when you're using 16-bit floats :P
      • It doesn't feel like you're working with a full deck when you're using 16-bit floats :P

        Wow, wait until you realize that for optimization purposes, a lot of these models are rounding to 8-bit integer math (with surprisingly little drop-off in quality). https://en.wikipedia.org/wiki/... [wikipedia.org]
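        Roughly what that 8-bit rounding looks like for a single weight tensor (a minimal symmetric-quantization sketch with random stand-in weights, not any particular library's scheme):

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(0.0, 0.02, size=1000).astype(np.float32)  # fake fp32 weights

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize and measure how much was lost.
restored = q.astype(np.float32) * scale
print("max abs error: ", np.abs(weights - restored).max())
print("mean abs error:", np.abs(weights - restored).mean())
# The per-weight error is tiny relative to the weight magnitudes, which is
# part of why int8 inference often costs surprisingly little accuracy.
```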

  • Cool approach (Score:5, Interesting)

    by phantomfive ( 622387 ) on Sunday November 10, 2024 @05:58PM (#64935525) Journal
    Of course, everyone knows these models hallucinate. The question is, what is going on inside the model to make it hallucinate? (Or alternately, what is it doing to be right so often?). Once you can figure out what's going on inside the model, then you can improve it. Actually a lot of work has been done in this area, so they are just adding to it. From the article:

    'These results show that transformers can perform surprisingly well at certain tasks without understanding the rules. If scientists want to build LLMs that can capture accurate world models, they need to take a different approach, the researchers say.'

    The key thing here is they don't understand the rules. For example, an AI model might make legal chess moves every time, but if you modified the chess board [wikipedia.org] then it would suddenly make illegal moves with the knight. With current AI technology, you would try to "fix" this by including as many possible different chess boards as possible, but that's not how humans think. We know the rules of the knight and recognize that in a new situation, changing the board doesn't change the way the knight moves (but it might). If you wanted to clarify, you could ask someone, "Do all the pieces still move the same on this new board?" But that is what these researchers did (modified the map of NY with a detour), and it really confused the model.
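    To make the "knows the rule vs. memorized the boards" distinction concrete, here's a toy rule-based knight-move check; the rule is stated once and keeps working when the board dimensions change (the board sizes are arbitrary examples):

```python
def knight_moves(square, width=8, height=8):
    """All legal knight destinations from `square` on a width x height board.
    The rule (an L-shaped jump) is written once, independent of board size."""
    col, row = square
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(col + dc, row + dr) for dc, dr in jumps
            if 0 <= col + dc < width and 0 <= row + dr < height]

print(knight_moves((0, 0)))                       # standard 8x8 board
print(knight_moves((0, 0), width=10, height=10))  # modified board: same rule still applies
```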

    It is of course obvious that current LLMs do not have human intelligence because they are not Turing complete, but to understand what that means you'd need to have an internal understanding and mental model of what Turing machines are, and LLMs don't have that. :)

    • Re: (Score:3, Interesting)

      These things are really good at sussing out patterns in (seemingly) random data. It's an extremely useful feature/ability.

      But of course they're lost when the rules and patterns change because they mastered the initial patterns already. They have no ability to think and realize, "oh, this is a new thing" because they don't ever "realize" anything. Change the chess board, the fake-I fails.

      I assume the hallucinations come from finding patterns that really weren't there but only appeared to be based on prev

      • But of course they're lost when the rules and patterns change because they mastered the initial patterns already.

        That's part of it, but also they have no ability to recognize what changes are "important" and what are not. Change the color of the chess piece from white to ivory and it won't recognize it (unless it has ivory in its training set).

    • It is of course obvious that current LLMs do not have human intelligence because they are not Turing complete, but to understand what that means you'd need to have an internal understanding and mental model of what Turing machines are, and LLMs don't have that. :)

      You are correct that they don't have human intelligence, yet wrong about the reason why.

      "Memory Augmented Large Language Models are Computationally Universal"
      https://arxiv.org/pdf/2301.045... [arxiv.org]

      The key thing here is they don't understand the rules. For example, an AI model might make legal chess moves every time, but if you modified the chess board then it would suddenly make illegal moves with the knight. With current AI technology, you would try to "fix" this by including as many possible different chess boards as possible,

      Have you tried asking the AI to convert to a normal chess board before moving?

      but that's not how humans think.

      LLMs obviously don't work like humans.

      • "Memory Augmented Large Language Models are Computationally Universal"

        Kind of cool approach. There are a lot of ways to augment neural networks to make them Turing complete, but they don't work as well. In this case, the compute cycle in 2.3 (page 5) is actually doing the work of the Turing machine. Without it, the LLM is not a Turing machine. Also the cringe phrase "brute force proof" mentioned on page 12 is not a proof at all, but merely a few test cases. It is not at all rigorous, and almost certainly would fall down under more complete analysis (as mentioned, a lot of mo

        • Kind of cool approach. There are a lot of ways to augment neural networks to make them Turing complete, but they don't work as well. In this case, the compute cycle in 2.3 (page 5) is actually doing the work of the Turing machine. Without it, the LLM is not a Turing machine.

          Turing machines don't exist at all in the real world. All anyone can do is create a machine that, if you stipulate it lasts forever and has access to infinite external memory, can theoretically act as a Turing machine. Nothing lasts forever and there is no infinite anything. All 2.3 is doing is implementing the role of the interface to the external memory. The processing is being handled by the LLM.

          Try asking a human to run this machine in their head without access to a paper and pencil then report back t

          • Turing machines don't exist at all in the real world.

            Ok, but LLMs can't count parentheses. Fail. Turn your brain on.

    • by narcc ( 412956 )

      The question is, what is going on inside the model to make it hallucinate?

      So-called 'hallucinations' are not errors or mistakes; they are a natural and expected result of how these models function.

      The key thing here is they don't understand the rules.

      They don't understand anything. That's not how they work. They don't operate on facts and concepts, they operate on statistical relationships between tokens.

    • Re:Cool approach (Score:5, Interesting)

      by martin-boundary ( 547041 ) on Monday November 11, 2024 @02:18AM (#64936151)

      Of course, everyone knows these models hallucinate. The question is, what is going on inside the model to make it hallucinate? (Or alternately, what is it doing to be right so often?). Once you can figure out what's going on inside the model, then you can improve it. Actually a lot of work has been done in this area, so they are just adding to it.

      That's a fundamental misunderstanding of these models. The thing that makes them hallucinate is their very nature: they are non-uniform random generators of text. The hallucinations are randomly generated pieces of text. You cannot have an LLM speaking English without hallucinations, ever.

      What are they doing right? Nothing on their own. But when you couple the output with a human who is willing to interact with it until an acceptable result comes out, you get a bias towards acceptable results. But now you have a combined human-LLM, whereas the LLM itself is incapable of the same. And the output depends on how smart (and patient) the human is.

      The question of figuring out what is going on inside these models is scientifically very interesting indeed, but not for the reasons you think. It won't stop the hallucinations. For that you'd have to throw these models out and go back to something like Prolog (look it up if you've never heard of it).

      TL;DR. An LLM is a stochastic parrot. It's in its nature to hallucinate variations on stuff it found on the Internet. You can't get rid of the hallucinations.
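      To make "non-uniform random generator of text" concrete: at every step the model outputs a probability distribution over tokens and one token is drawn from it, roughly like this (the prompt and the probabilities are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng()

# Pretend the model, given "The capital of France is", assigns these
# next-token probabilities (made-up numbers).
tokens = ["Paris", "Lyon", "beautiful", "not"]
probs = np.array([0.85, 0.05, 0.07, 0.03])

def sample_next(temperature=1.0):
    # Temperature reshapes the distribution; it never turns sampling into lookup.
    logits = np.log(probs) / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(tokens, p=p)

print([sample_next() for _ in range(10)])
# Usually "Paris", but occasionally something else: the wrong continuations
# aren't bugs, they're draws from the same distribution as the right ones.
```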

      • That's a fundamental misunderstanding of these models. The thing that makes them hallucinate is their very nature: they are non-uniform random generators of text. The hallucinations are randomly generated pieces of text. You cannot have an LLM speaking English without hallucinations, ever.

        Given that we don't really know how humans or LLMs work, I think it's valid to say that you hallucinated this answer. I guess the important difference is that LLMs often hallucinate things that are obviously checkable f

        • We most certainly know how LLMs work. It's not how they work that's interesting, it's what the output looks like as a function of the training set and interactions that's interesting.

          Slashdot used to do car analogies, so here's one: we know how a car works in excruciating detail, but what's actually interesting is what can be done with it.

          Your hallucination analogy is false unfortunately. When human beings are given the exact same training set as an LLM, the outcomes are measurably different.

    • by vyvepe ( 809573 )

      It is of course obvious that current LLMs do not have human intelligence because they are not Turing complete, but to understand what that means you'd need to have an internal understanding and mental model of what Turing machines are, and LLMs don't have that. :)

      I think LLMs are likely Turing complete (well, in a sense, when the unbounded memory requirement is not strict). Notice that LLMs take their output as an input. So they have memory within their context window size. Neural networks can approximate any function, so they can approximate a Turing machine's state machine as well. One can emulate random-access memory with a log-based memory (like Log-Structured Filesystems do). You have a memory; you have a state machine. That indicates they are likely Turing complete sans unbounded memory requirement.

      • That indicates they are likely Turing complete sans unbounded memory requirement.

        Just for any less CS-oriented people in the audience, I'll point out that this is exactly the same as any other practical system that is considered "Turing complete". Your own computer is "Turing complete" and fully capable of emulating a Turing machine. You can even download Turing machine programs to run one and play with it, but in real life you have a limited amount of memory (e.g. probably 8 GB of RAM on your phone)

        Also "Turing complete" just means "is a normal computer able to solve the problems that w

      • Notice that LLMs take their output as an input. So they have memory within their context window size.

        That just means it's going to fail the parentheses matching problem [avikdas.com].

        • by vyvepe ( 809573 )
          Anything which has only a bounded memory will fail when checking some long enough sequence of parentheses. A better way to attack the idea would be to claim that encoding the DFA state (of a TM) in the output token sequence is cumbersome. But still, in that case, it is about efficiency and not about whether it can be done in principle.
          • Anything which has only a bounded memory will fail when checking some long enough sequence of parentheses.

            That's... such a horrible misunderstanding of the situation that I don't know how to respond to you. For a parenthesis-counting algorithm, all you need is an integer. We're not talking about a lot of memory. LLMs fail catastrophically at counting matching parentheses. This is well covered theoretically; you may be ignorant of that.
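            For reference, the counter in question really is this small (a standard textbook balance check, shown only to underline how little state the task needs):

```python
def parens_balanced(s: str) -> bool:
    """True if every '(' in `s` is matched by a later ')'.
    The only state needed is a single integer counter."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:   # a ')' with no matching '(' so far
                return False
    return depth == 0

print(parens_balanced("(()(()))"))    # True
print(parens_balanced("(()" * 1000))  # False, no matter how long the input gets
```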

    • by MobyDisk ( 75490 )

      With current AI technology, you would try to "fix" this by including as many possible different chess boards as possible

      First of all, that is one way to "fix" the problem, but certainly not what we would try first. Instead, you would explain the arrangement of the new board to the AI. The AI would probably still make mistakes sometimes, but that isn't the only way: I would solve this by telling the AI the rules. But it sounds like we would agree that the AI would still make mistakes. The trouble here is that humans do as well. All this example demonstrates is that humans and AIs think quite similarly.

      , but that's not how humans think

      Humans work two

      • I would solve this by telling the AI the rules. But it sounds like we would agree that the AI would still make mistakes.

        What exactly does this mean? How exactly would you tell the AI the rules? Introduce something in its training set?

        Turing completeness is not a measure of intelligence.

        The Chomsky hierarchy [wikipedia.org] shows what problems a system theoretically can solve, and what problems it provably can never solve, with a Turing machine being the most capable (known) at type-0.

  • Anyone who thought this was even possible doesn't have a coherent understanding of the world.

  • As humans that grow up in the world, we hardly have a coherent understanding of the world ourselves, let alone an LLM that is trained on huge data sets and forms patterns to mimic a form of intelligence.

    Sounds like the people that wasted their time on this don't have a coherent understanding of the world either. Marketing wank is just that; were they expecting actual intelligence?

  • by WaffleMonster ( 969671 ) on Sunday November 10, 2024 @07:15PM (#64935645)

    This is all rather interesting. People create systems inspired by how brains work, then turn around, get all upset that they aren't perfect, and criticize the system for its failure to magically compile and execute some kind of robust model of how the world works that would enable it to always generate infallible predictions.

    On one hand we have people who either hate with a passion or dismiss LLMs outright as cut and paste machines which don't even deserve to be called AI. On the other hand we have people running around comparing them to nuclear weapons and worry about the prospects for the world to be turned into paperclips.

    Personally, from my own experience, I've seen LLMs demonstrate the ability to generalize. I've uploaded documentation for things not in its training set and it was able to apply its experience to answer questions, even generating working code in a language it has never seen before. I've used LLMs to decode base64, perform language translation, and figure out simple ciphers, albeit sometimes they fuck up. This is demonstrably more than cut and paste.

    Humans are highly intolerant of incoherence... if you got home from work to find your sofa floating in the air, or unplugged the blender only for it to keep running, you would become highly agitated. People build a coherent understanding of how the world works, even if those models are fundamentally lacking or misguided, and they get highly agitated when presented with contradictory information. While LLMs don't appear to have any comparable mechanism, it doesn't mean they are simply cut-and-paste machines either.

  • LLM's will have achieved true intelligence when they answer the question with <Maine_accent>Ya' can't get thea' from hea'</Maine_accent>.

  • Announcing that the sky is, in fact, blue

    Brilliantly deducing that fire is hot.
  • Bummer. She seems pretty good at telling me all things I'd like to hear!

  • Generative AI doesn't even know how many R's are in strawberry.
    • It does now. As soon as a problem becomes publicized, it gets fixed. If you don't understand the problem, you won't be able to exploit it: you'll just be parroting information like an LLM.

      Here is a sample conversation I just had with ChatGPT:

      Me: how many Rs are in Strawberry?
      Chat: The word "strawberry" has three Rs.
      Me: How many Rs are in Srawberry?
      Chat: "Srawberry" has two Rs. If you meant "strawberry," then it has three Rs.

      Nice spellcheck.

    • by allo ( 1728082 )

      Sigh. Ever heard of tokenizing? An AI tokenizes "strawberry", for example, as st-raw-berry and sees something like 0x723-0x681-0x612. If it knows how many r's there are in 0x723-0x681-0x612, it memorized that; it didn't count them, since the token for "r" (e.g. 0x72) is not part of st-raw-berry. So "strawberry" has zero r's in the language of the AI.
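      A toy version of what the parent describes, using a made-up three-entry vocabulary (real tokenizers such as BPE are more involved, but the effect on letter counting is the same):

```python
# Hypothetical subword vocabulary: the model never sees letters, only these IDs.
vocab = {"st": 0x723, "raw": 0x681, "berry": 0x612}

def tokenize(word):
    ids, rest = [], word
    while rest:
        # Greedily take the longest vocabulary piece that matches the front.
        piece = next(p for p in sorted(vocab, key=len, reverse=True) if rest.startswith(p))
        ids.append(vocab[piece])
        rest = rest[len(piece):]
    return ids

print(tokenize("strawberry"))   # [1827, 1665, 1554], i.e. 0x723, 0x681, 0x612
print("strawberry".count("r"))  # 3 -- trivial at the character level
# The ID sequence contains no 'r' anywhere; a model working on those IDs can
# only "know" the letter count if it memorized it somewhere in training.
```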

  • In related news, researchers discover most humans don't have a coherent understanding of the world...
  • Been there; done that.
    The central AI in Logan's Run didn't know everything. Enough said.

  • by Beeftopia ( 1846720 ) on Monday November 11, 2024 @01:32AM (#64936095)

    An AI / LLM is a database you can talk to (query) using natural language. It is an amazing achievement, famously fooling one of Google's own software engineers (Blake Lemoine) into believing the machinery was sentient.

    But it's still a database. It's trained for weeks on a dataset to set the weights in the neural network. Tokens [openai.com] from prompts filter through the (static) neural network, repeatedly building a response ("inferring").

    The problem is as the tokens cycle through the neural network, building the response by filtering through the weights, it's impossible for a human to know exactly what it's doing - specifically how it's reasoning to come to its conclusions. That's where the field of Explainable AI [google.com] comes in.

    To help people get a handle on AI, here's how they're priced - based on tokens:
    https://learn.microsoft.com/en... [microsoft.com]
    https://help.openai.com/en/art... [openai.com]

    A bunch of weights in a neural network (the weights set by weeks of continuous training on a dataset); tokens extracted from prompts filter through the neural network, building the output. Is the possibility of sentience in there? Consciousness? What's the core action being taken? How exactly is a token response built?

    Here's a description from IBM [ibm.com]:

    During the training process, these models learn to predict the next word in a sentence based on the context provided by the preceding words. The model does this through attributing a probability score to the recurrence of words that have been tokenized— broken down into smaller sequences of characters. These tokens are then transformed into embeddings, which are numeric representations of this context.

    To ensure accuracy, this process involves training the LLM on a massive corpora of text (in the billions of pages), allowing it to learn grammar, semantics and conceptual relationships through zero-shot and self-supervised learning. Once trained on this training data, LLMs can generate text by autonomously predicting the next word based on the input they receive, and drawing on the patterns and knowledge they've acquired. The result is coherent and contextually relevant language generation that can be harnessed for a wide range of NLU and content generation tasks.
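    A stripped-down numerical sketch of that predict-the-next-word loop (random weights stand in for the trained ones, so the resulting probabilities are meaningless; the point is only the shape of the computation):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]
d_model = 8

# Stand-ins for what training actually produces: an embedding per token and
# an output projection back to vocabulary logits.
embeddings = rng.normal(size=(len(vocab), d_model))
output_proj = rng.normal(size=(d_model, len(vocab)))

def next_token_probs(context_ids):
    # Real models mix the context with attention layers; here we just average
    # the context embeddings to keep the sketch tiny.
    hidden = embeddings[context_ids].mean(axis=0)
    logits = hidden @ output_proj
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()          # softmax: one probability per vocab entry

context = [vocab.index(w) for w in ["the", "cat", "sat"]]
probs = next_token_probs(context)
print(dict(zip(vocab, probs.round(3))))
# Generation is just: pick (or sample) a token, append it to the context, repeat.
```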

    Once it's trained, it's trained. The neural network and its weights are set. Then it's time to query / prompt.

    It's amazing stuff. That the machinery can do this is astonishing. But it's feeding tokens through a trained, static neural network.

    Now... right now it's trained on binary data, audio, video, images, text. Prompts are tokenized from incoming text strings. The technology is in its infancy. Could you program something around this core system to be a decision making platform that could be placed in a robot which could navigate its environment, and make decisions about what to do? I think that's coming. That would require being able to tokenize the world around it. I suspect it would take a vast amount of training data that may be beyond current computing and electrical power capabilities. This is nascent technology and there's a long road ahead.

    On the other hand... quantum computing, fusion... these have promise, but engineering limitations limit the ability to realize those promises. So, one needs to have a balanced view. We're just at the beginning though. Relational databases were introduced in the early-mid 70s. This technology has been introduced just now, so who knows what it'll look like in 50 years.

    Disclaimer: I'm not remotely an LLM / AI expert. Reading the CACM, thinking about it, but there are lots of people out there programming these things [stackexchange.com], of which I'm not one. But I am interested and think I have it right.

    • > But it's still a database

      Yet we could train models to route under all sorts of restrictions, and then the AI would work it out. Of course, if you only train the model on the perfectly open map, you can't route around obstacles later.
    • How are you different than a database? What is your brain doing that a database can't?
      • How are you different than a database? What is your brain doing that a database can't?

        The first big difference between an LLM and a human is that the LLM is set in stone after training. It is a static filter. It gains no more information; the structure is set. A human, by contrast, can immediately update its own personal knowledge store (database) with new information, and can autonomously decide to do so. "Berries taste good. Red berry taste good, make Gorok feel good. Blue berry make Gorok feel b

  • by Visarga ( 1071662 ) on Monday November 11, 2024 @02:22AM (#64936153)
    Navigating a city with dynamic traffic conditions is hard for humans as well. We don't easily route around problem areas in our heads. Maybe an experienced taxi driver would, but not someone who just goes home-to-work on a standard route; they don't form a detailed city-level model.
    • Only because humans don't have access to enough knowledge about the traffic conditions. Knowing the existing conditions, a human can easily route around the problem areas. Most have driven their routes for years and know all the shortcuts that work and don't work.
    • NYC, at least north of Greenwich Village, is based on a grid system of Avenues running north-south, and Streets running east-west. You'd have to be a moron not to be able to route around any road closures.

  • That's what the AIs want us to think........

  • by RobinH ( 124750 ) on Monday November 11, 2024 @07:11AM (#64936395) Homepage
    Go on YouTube and find the AI-generated Minecraft videos. This is a project that uses AI to generate real-time Minecraft based on mouse and keyboard input, trained on actual gameplay. You see very quickly that the output is only generated based on the last frame or series of frames, not on an actual internal representation of the game world. It's kinda trippy, and a lot like a dream. Very odd. It's worth a watch.
    • by allo ( 1728082 )

      But keep in mind, that's a proof of concept for one technique (using n frames + user input to generate frame n+1) and not meant to be a full game.
      If you'd like to build something on that, you could couple it, for example, with a simple memory for at least your position in the world. The Counter-Strike demo (look it up, it is cool as well) would benefit from an ammo counter and enemy positions. Not all of them need to be neural networks, you would just have as input (n frames, user input, ammo count, enemy positio

  • Did we really need someone to tell us this?
    • It's how the scientific method works.

      "It's self evident/obvious" is how we represent feelings and intution. Neiher have anything to do with measureable data.

      There exists a large cohort of ignorant individuals out there claiming that "AI is becoming self-aware". We don't combat ignorance with more ignorance. We do so with facts.

      Unless you have some other magical method for doing this?

  • All this AI stuff is just copy and paste. There is nothing intelligent going on.

    Even a parrot is far ahead of any AI regurgitation machine.
  • by ledow ( 319597 )

    "AI lacks inference and is found to be nothing more than a statistical machine."

    Same as every "AI" since the 60's. Except now you no longer have the excuse of not enough training data / not enough processors / not long enough to train on / lack of funding.

    Seriously - where are you going to go from here, where we spend billions and years to train on the entire Internet with millions of processors?

    Maybe back to the drawing board to make something actually capable of intelligence, I hope.

  • REALLY disappointed in Slashdot for missing this obvious joke. A clever version would wrap it around some version of the Turing Test. Perhaps saying humans now pass the test because their coherent understanding of the world is even worse than ChatGPT's?

    Or maybe a joke working it into the context of the recent election? Incoherent candidate wins again?

    Or some kind of quantum mechanical joke on the coherent bit? My reality function collapsed and killed (and ate?) my dogma?
