Top Physicist Says Chatbots Are Just 'Glorified Tape Recorders' (cnn.com) 216

In an interview with CNN, Michio Kaku, professor of theoretical physics at City College of New York and CUNY Graduate Center, said chatbots like OpenAI's ChatGPT are just "glorified tape recorders." From the report: "It takes snippets of what's on the web created by a human, splices them together and passes it off as if it created these things," he said. "And people are saying, 'Oh my God, it's a human, it's humanlike.'" However, he said, chatbots cannot discern true from false: "That has to be put in by a human." According to Kaku, humanity is in its second stage of computer evolution. The first was the analog stage, "when we computed with sticks, stones, levers, gears, pulleys, string." After that, around World War II, he said, we switched to electricity-powered transistors. It made the development of the microchip possible and helped shape today's digital landscape. But this digital landscape rests on the idea of two states like "on" and "off," and uses binary notation composed of zeros and ones.

"Mother Nature would laugh at us because Mother Nature does not use zeros and ones," Kaku said. "Mother Nature computes on electrons, electron waves, waves that create molecules. And that's why we're now entering stage three." He believes the next technological stage will be in the quantum realm. Quantum computing is an emerging technology utilizing the various states of particles like electrons to vastly increase a computer's processing power. Instead of using computer chips with two states, quantum computers use various states of vibrating waves. It makes them capable of analyzing and solving problems much faster than normal computers. But beyond business applications, Kaku said quantum computing could also help advance health care. "Cancer, Parkinson's, Alzheimer's disease -- these are diseases at the molecular level. We're powerless to cure these diseases because we have to learn the language of nature, which is the language of molecules and quantum electrons."
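The two-states-versus-quantum-states distinction in the summary can be sketched in a few lines. This is an illustrative numpy toy, not real quantum hardware: a qubit's state is a vector of complex amplitudes over the classical basis states, and measurement probabilities come from the squared amplitudes.

```python
import numpy as np

# A qubit state is a 2-vector of complex amplitudes over the basis |0>, |1>.
# Unlike a classical bit, it can sit in a superposition of both at once.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probs = np.abs(state) ** 2  # Born rule: squared amplitudes give probabilities

print(probs)  # [0.5 0.5] -- equal chance of reading out 0 or 1
```

The classical "on/off" picture is recovered only at measurement time; in between, the amplitudes carry more information than a single bit.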

This discussion has been archived. No new comments can be posted.

  • by NomDeAlias ( 10449224 ) on Tuesday August 15, 2023 @08:52PM (#63770814)
    He takes snippets from textbooks created by a human, splices them together and passes it off as if he created those things.
    • by LostMyBeaver ( 1226054 ) on Wednesday August 16, 2023 @01:06AM (#63771160)
      Haha... I was thinking the same.

      I'm not sure what a top physicist is... and uh, I work on the LHC. They all seem pretty normal to me. And most of them like to talk and play with their thoughts out loud. Lunch conversations are funny because it's never good enough to say "it was nice weather this weekend". There always has to be a discussion about what made it nice and theorizing what chain of natural events led up to the nice weather.

      The press should never be allowed to talk with these people. In physics, they are often brilliant, but it generally comes at the cost of being far less capable in areas like... communication.

      Nothing he said was profound or original. I will even say that the words I've written here were generated precisely the same way an LLM works.

      I use LLMs all day now. They save me massive amounts of time. Even when they're wrong, with some prodding they are often helpful. We're in pre-alpha testing of LLMs. Within a few years, I expect LLMs to be more accurate than humans. Sadly, that bar is not high. Just read articles like this one: it's clear the physicist has no idea how to speak on record, and the journalist, handed a physicist to talk with, made a headline that makes them both appear foolish.
    • by Whateverthisis ( 7004192 ) on Wednesday August 16, 2023 @09:04AM (#63771722)
      I get you don't like how he's denigrating the concept of AI, but:

      A) he's right. LLMs are not conscious, they just appear to be, are limited by the thousands of people who have to tag the data to make it work in sweatshop conditions [techgoing.com], and are getting worse [fortune.com]. Over time they should get better, but they're not much more than a glorified language parser.

      B) Michio Kaku is a significant contributor in quantum mechanics and theoretical physics. The text books you refer to? He wrote many of them [wikipedia.org].

    • He takes snippets from textbooks created by a human, splices them together and passes it off as if he created those things.

      You are not wrong, but you are wrong.

      Every word you said is true, accurate, and correct... and yet the statement as a whole is incorrect. There is missing information.

  • Ok that is actually a decent argument as to why quantum computing is actually going to be useful. It is closer to the way that nature computes things, whereas binary is extremely limited. Although binary is simple to learn and implement and understand from a human brain perspective. However I disagree that rearranging existing ideas into new ones based on prior inputs is sub human. I really believe this is exactly what humans do, starting in the months before they are born. So a computer that learns the ex
    • by dfghjk ( 711126 )

      "Ok that is actually a decent argument as to why quantum computing is actually going to be useful."

      So we found the dumbass author's target audience.

      "I really believe this is exactly what humans do, starting in the months before they are born."

      Yes, and imagine modeling a computing technique around how the human brain works, perhaps even calling the technique a "neural network" as though the "extremely limited" binary device could somehow be made to emulate neurons. Maybe, if we are lucky, Elon Musk could in

    • by Kisai ( 213879 ) on Tuesday August 15, 2023 @09:53PM (#63770940)

      Please remember that Michio Kaku is someone who likes to be on camera. Same with Neil deGrasse Tyson. Both of these guys may be pretty smart, but they tend to speak authoritatively about things they aren't really authorities on.

      Like the greatest way to detect BS YouTube videos is to see if Kaku or Tyson are in them, because it's probably been taken out of context or misunderstood.

      And one of the reasons I kinda cringe when I see Kaku's name show up in things is because of his appearance in a pseudo-science film put out by a cult. I thought the film was kinda neat, but I certainly could tell where the production just used him to try and sound authoritative about their nonsense.

      You can be smart, and still do STUPID things. And yes, Kaku is right, chatbots are essentially just regurgitating text that's been said before, but the analogy of "auto-correct" is more apt, since auto-correct often guesses words by how you use them rather than just trying to find spelling errors.

      Like a more literal reason why chatGPT does what it does is simply because it looks for word relationship sequences. It simply doesn't "know" what things are, just that these words often are placed together in this sequence. It only looks like it's smart, but it's the kind of "looking smart" that you'd bluff your way through a job interview with because the HR person doesn't know any better, but the supervisor or manager certainly knows.
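The "word relationship sequences" idea above amounts to next-word statistics. A toy bigram model (a drastic simplification of what a real LLM does, shown here only to make the mechanism concrete) picks the word that most often follows the current one:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which, then predict the most
# frequent successor. Real LLMs learn far richer statistics, but the core
# idea -- "these words are often placed together in this sequence" -- is
# the same. The corpus below is made up for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    successors[w1][w2] += 1

def predict_next(word):
    # Pick the statistically most likely follower; no "knowing" involved.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

Nothing in this process models what a cat *is*; it only models which tokens co-occur, which is exactly the "looks smart" effect described above.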

      • Please remember that Michio Kaku is someone who likes to be on camera. Same with Neil deGrasse Tyson. Both of these guys may be pretty smart, but they tend to speak authoritatively about things they aren't really authorities on.

        Appears smart but tends to speak authoritatively about things they really aren't authorities on. Are you talking about Kaku or chatbots?

        Hmm, has anyone ever seen Kaku with a chatbot? Can anyone definitively prove Kaku isn't a chatbot?

      • by gtall ( 79522 )

        That's my impression as well. With Neil, sometimes I'll see him interviewing someone I care to hear from. I made the mistake of clicking on one of those. All I get is Neil opening his mouth every 15 seconds and most of that is repeating "context" so that the entire interview could have been reduced to 5 minutes of information and the rest devoted to a 45 minute advertisement for zephyrs from Neil's brain.

        Kaku strikes me as a salesman. He's realized he can monetize himself.

    • by Roger W Moore ( 538166 ) on Tuesday August 15, 2023 @11:40PM (#63771076) Journal

      Ok that is actually a decent argument as to why quantum computing is actually going to be useful. It is closer to the way that nature computes things

      Well, it would be a good argument if it were true. Claiming that "Nature does not use ones and zeroes", particularly in reference to electrons, is something that someone calling themselves a physicist should know better than to claim because an electron has two spin states as indeed does every fundamental fermion (quarks and leptons) in nature. Clearly, nature can, and does, use "ones and zeroes".

      Quantum computing relies on entangled states which are a mixture of the one and zero states but there is no clear evidence that the best computational device that nature has yet created, the human brain, uses quantum entanglement. It has been suggested that this is how our consciousness works but no evidence yet, just conjecture. Indeed, it is entirely possible that our brains operate on purely classical principles, we simply do not know.

      Quantum computers are extremely interesting and likely to be useful because they operate on completely different physical principles to existing computers which will give them different strengths, and weaknesses, compared to binary computers. However, they may well be as different to our own brains as binary computers are but the fact that they can run different algorithms very rapidly will mean that we can solve new classes of problems just like binary computers enabled us to easily solve problems that human brains find really hard.
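The entangled states mentioned above can be sketched numerically. This is an illustrative numpy toy (real quantum computers manipulate physical qubits, not state vectors): a Bell state is a superposition of the all-zeros and all-ones basis states, and its measurement outcomes are perfectly correlated.

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2): the canonical entangled
# superposition of the "one" and "zero" basis states. The 4 amplitudes
# correspond to the outcomes |00>, |01>, |10>, |11>.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)  # amplitude only on |00> and |11>

probs = np.abs(bell) ** 2
# Outcomes 01 and 10 never occur: measuring one qubit fixes the other.
print(probs)  # [0.5 0.  0.  0.5]
```

The state cannot be factored into two independent single-qubit states, which is what distinguishes entanglement from mere correlation of classical bits.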

      • so he should stick to his knitting.

        Also, the statistical-weighted representations/models that GPT arrives at are more likely to generate true statements than the average person. The reason is that they are trained on the statistics of the entities and associations that humans express most about. The many falsehoods and irrelevances in the vast corpus of human expression are likely to cancel out, because they are creatively different from each other (imagined, and often carelessly, and not constrained to be isomorphic to
    • Re: (Score:2, Insightful)

      Sorry, no, humans are capable of original thought and wild imagination created entirely from internal processes. Computers are not.

      Humans do learn from outside experiences, obviously, but are not limited to that. Computers are limited to outside input.

      The dumbest child is smarter than the most powerful computer ever built.

      • by vivian ( 156520 )

        Human original thought is actually a combination of previous experiences and random processes recombining those experiences in new ways, with the occasional flash of insight to do something a bit different. It is extremely difficult for someone like a guy who has spent his whole life in some remote village deep in the amazon jungle who has no exposure to say, science fiction, to suddenly come up with ideas and stories about starships and warp drives and teleportation - he's more likely to come up with stori

      • Computer outputs are a result of a) input, b) Turing-complete algorithm, and c) memory.

        Not just a simple result of inputs.
        • Re:Wrong. (Score:4, Insightful)

          by iAmWaySmarterThanYou ( 10095012 ) on Wednesday August 16, 2023 @06:38AM (#63771494)

          Yes, but those are unimportant details. Giving a computer 100000x the memory will not suddenly make it self-aware or intelligent, nor provide it the capacity to do so.

          Giving a computer a better algorithm just means it might take a little longer to realize you're dealing with a computer and not a person with a communications disorder.

          It also doesn't matter if you use a data model that contains all human knowledge, a billion custom built AI cpus and an algorithm from Star Trek. It is still a computer with no cognition or self awareness and is incapable of original thought or imagination.

          If we magicked up a computer using technology that doesn't exist and no one is working on then we're in sci-fi land so we can make up anything we want. In that case Star Trek's Commander Data becomes possible.

          It took me years after graduating to understand what one of my professors was saying about computers vs real life cognition. It's a very difficult topic and not necessarily immediately obvious that he was correct in this regard that computers are just compute boxes and always will be. But now that we can see them right there in our browser and "talk" with them, it becomes hard to see anything else. Computers aren't just dumb. They don't belong on the intelligence scale at all.

          You want smarter than any computer? Go buy a puppy. It can figure out its world without a hundred million trial and error training sessions and it's a lot cuter, too.

          • Re:Wrong. (Score:5, Insightful)

            by The Evil Atheist ( 2484676 ) on Wednesday August 16, 2023 @08:45AM (#63771680)

            You want smarter than any computer? Go buy a puppy.

            Puppies can't drive cars. Like, not at all.

            It can figure out its world without a hundred million trial and error training sessions

            It does. First, let's not neglect the trillions of life forms that died in the service of natural selection training what eventually became a dog brain.

            Then, let's not neglect the continual neuronal training of the dog... basically from the time the brain was developed enough in its mother's womb.

            it's a lot cuter

            That's the only correct thing you've said.

          • Re:Wrong. (Score:5, Insightful)

            by noodler ( 724788 ) on Wednesday August 16, 2023 @09:22AM (#63771790)

            It took me years after graduating to understand what one of my professors was saying about computers vs real life cognition. It's a very difficult topic and not necessarily immediately obvious that he was correct in this regard that computers are just compute boxes and always will be. But now that we can see them right there in our browser and "talk" with them, it becomes hard to see anything else. Computers aren't just dumb. They don't belong on the intelligence scale at all.

            You make such bad categorical errors it's hard to imagine you left uni with a degree in anything.
            For one, you seem to be immensely confused on the categorical differences between LLM's and AI in general. The fact that LLMs have certain properties does not mean all possible computer programs will have the same properties.
            You also don't seem to understand that a structure like a neural net (artificial or not) produces an informational system that is largely decoupled from the base that provides the facilities that make up that system. This means that such a neural net can, and in fact does, have properties that are not present in the components of the system. The neurons in your brain don't experience anything that you experience in your consciousness. They don't have that capability. Yet, when lots of them are connected in certain ways, your brain as a whole gains that capability. So it's kind of silly and dumb to assume that computers couldn't do the same on the basis that computers just shuffle bits around. Your brain does basically the same, just more complicated.

            It can figure out its world without a hundred million trial and error training sessions and it's a lot cuter, too.

            No, it couldn't "figure out its world without a hundred million trial and error training sessions". It has genes that are literally the result of billions of years of evolution and couldn't possibly be itself without this training. It is a highly specific organism that has been honed into its current form by billions upon billions of 'trial and error' experiments.

      • by noodler ( 724788 )

        Sorry, no, humans are capable of original thought and wild imagination created entirely from internal processes.

        For a thought to be completely original the person would need to exist in total isolation from the rest of the universe.
        No person that you can communicate with is in this situation and thus you cannot state that any one thought is 'created entirely from internal processes'. We humans are deeply rooted in the external information world.

        The dumbest child is smarter than the most powerful computer ever built.

        The most powerful computer doesn't post bullshit like you do so i'm afraid the point goes to computers.

      • Humans do learn from outside experiences, obviously, but are not limited to that. Computers are limited to outside input.

        The dumbest child is smarter than the most powerful computer ever built.

        The dumbest child has thousands of years of information stored in his dna.

      • The dumbest child is smarter than the most powerful computer ever built.

        I take it you haven't been around many kids?
        That's probably for the better, though.

  • by blue trane ( 110704 ) on Tuesday August 15, 2023 @09:02PM (#63770832) Homepage Journal

    Hasn't generative AI fulfilled Chomsky's dream of a generative context-sensitive grammar (the most comprehensive grammar), because it easily analyzes sentences linguistically at least as well as Chomsky can and much, much faster?

    If it is just rearranging snippets, how does it know how to make the grammatical changes that are often necessary?

    • by gweihir ( 88907 ) on Tuesday August 15, 2023 @09:10PM (#63770854)

      Nope. It is a partial, randomized, unreliable implementation of a context sensitive grammar only. You know, context-sensitive without all the assurances it would need to qualify.

      • If it uses grammar impeccably and is able to answer why it did so as plausibly as you, does it matter? If it even agrees with you, does that make it even more like you?

        • Re: (Score:3, Interesting)

          The key difference between the AI and gweihir is the AI is incapable of agreeing with anything.

          It has no thought process, zero capacity for imagination, zero self awareness, zero external awareness.

          It is a very clever pattern matching and response system, nothing more.

          Whereas despite occasional appearances to the contrary, gweihir is capable of true original thought and understanding, self and external awareness and so on.

          Gweihir is a real person, the computer is just faking it and rather poorly at that.

          It

          • by noodler ( 724788 )

            The key difference between the AI and gweihir is the AI is incapable of agreeing with anything.

            It has no thought process, zero capacity for imagination, zero self awareness, zero external awareness.

            Again, you're talking about AI when you should really be talking about this specific class of LLMs.

          • by gweihir ( 88907 )

            Thanks, I think. Maybe.

            The problem with ChatGPT/LLM type AIs is that they are way more convincing than I am, all the while having none of the insight or reasoning ability that I have.

            Apparently, most people tend to believe well-worded nonsense over poorly worded factual arguments and even the words of pretty people over the ones of ugly people. In other words, most people are shallow as hell and cannot fact-check for shit.

            Reference:
            https://news.rice.edu/news/200... [rice.edu]

        • If it uses grammar impeccably and is able to answer why it did so as plausibly as you, does it matter?

          It cannot do either of those things. Sometimes it might appear to do so, and the answers seem logically consistent. Other times the exact same process produces obvious bullshit. There's no "using grammar" occurring there, it's just shitting out statistically-possibly-valid text. Ask it for something that it wasn't trained to do and out will come something that maybe looks good, and has a plausible sounding explanation only if you don't know anything about the subject at hand.

          • by gweihir ( 88907 )

            AFAIK, there is some kind of "language beautification" layer that is employed in addition to make the answers sound better. No idea how much the actual raw model can do by itself.

    • Re: (Score:2, Insightful)

      by peterww ( 6558522 )

      > If it is just rearranging snippets, how does it know how to make the grammatical changes that are often necessary?

      Statistics.

      You can translate an alien language automatically by just having a large enough sample of it, and then comparing that statistical analysis to a sentence some alien says to you.

      The trouble is that natural languages are contextual. A phrase in Mandarin (or any language) translated literally does not tell you what the speaker really means.

      That's why ChatGPT is wrong all the time. It

      • Statistics gets grammar right 100% of the time now? Don't even textbooks it trained on have typos and grammar errors?

        • by narcc ( 412956 )

          gweihir's earlier reply to you is spot-on. It's not 100%. Given how models like this work, it's astonishing that it works as well as it does. It's a testament to the incredible power of statistics.

          Don't even textbooks it trained on have typos and grammar errors?

          Of course. Thankfully, the massive amounts of training data will naturally minimize the influence of those errors on the model. (Remember that the model is significantly smaller than the training data.) Once again, statistics saves the day!

          Why push back on this? Do you think you could produce a grammar witho
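The "statistics saves the day" point about typos can be shown with a toy majority vote over noisy spellings (the counts below are made up, purely for illustration):

```python
from collections import Counter

# Sketch of how sheer volume washes out typos: if most occurrences of a
# word in the training text are spelled correctly, the majority spelling
# dominates the statistics even though errors are present.
observed = ["receive"] * 97 + ["recieve"] * 2 + ["receeve"] * 1

consensus = Counter(observed).most_common(1)[0][0]
print(consensus)  # "receive" -- the errors are statistically drowned out
```

This is the same averaging effect that keeps occasional bad data in a huge corpus from dominating the learned model.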

        • If you read a book from a human and one from gpt, I'll bet it won't take you more than a short few minutes to tell which is which.

          And it isn't just text unaware. It is generally unaware. Did you see the D20 pictures they published? Wolves with human feet? Objects sticking out of other objects? Hands that were in weird contorted positions? No human would have done that for a D&D art book. The AI has zero real understanding of anything so it stuck human feet on the wolves because it's just a comput

          • Wolves with human feet? Objects sticking out of other objects? Hands that were in weird contorted positions? No human would have done that for a D&D art book. The AI has zero real understanding of anything so it stuck human feet on the wolves because it's just a compute engine.

            What's always fascinated me about visual models like stable diffusion is models seem to understand a metric fuckton of next level visual shit. Illumination, reflections, all kinds of material properties and artistic styles. When things are placed in water they make rippled disturbances. Scenes with water seem to have fairly accurate reflections of surrounding terrain. If you ask it to draw a room with a table and fireplace you can often see the table being illuminated by the fire and the effects differ

            • I agree that the technology is fascinating and often produces amazing results. My only point here is to the people who want to attribute human cognition of any sort to the AI or reduce humans to the AI level. In that regard we are not simply vastly superior. The AI isn't even in the game.

  • He is not wrong (Score:5, Insightful)

    by gweihir ( 88907 ) on Tuesday August 15, 2023 @09:08PM (#63770844)

    Of course, it is an averaging tape recorder and it can do some simplistic matching of questions to answers. But on the other hand, it gets this frequently wrong and it sometimes (not so rarely) starts hallucinating connections that may be there statistically, but are complete nonsense factually.

    • Why are you moving goalposts, while projecting?

      • by HBI ( 10338492 )

        This seems my tagline in action. I watched ...at this point literally decades... of people claiming crypto wasn't a scam on this site. This is similar.

    • Of course, it is an averaging tape recorder and it can do some simplistic matching of questions to answers. But on the other hand, it gets this frequently wrong and it sometimes (not so rarely) starts hallucinating connections that may be there statistically, but are complete nonsense factically.

      I don't think that's quite correct, it's doing some much more impressive stuff than that [medium.com]:

      Furthermore, when the drawing of the unicorn was described in code, a portion of the code responsible for drawing the horn was mirrored and removed. Then ChatGPT was asked to add the horn in the correct place, and it appended the code that generated the horn on the unicorn’s head.

      I mean the "consciousness" bit from that post is nonsense, but this is more than matching inputs and outputs, it's reasoning.

      Or you coul

      • by noodler ( 724788 )

        That's not the result of pattern matching, that's actual mathematical reasoning.

        It's neither. It's statistical balancing.
        The cow, the mathematics, all can be arrived at by weighting the statistics in the training data.
        What is maybe special is that these newer models have the capacity to 'hold' and 'consider' more relations at a time, which helps with accurately weighting various aspects of the question. To put it in another way, the model needs sufficient descriptions of a cow to be able to derive where the horns are. The fact that the horn should be on the head comes from the statisti

      • by gweihir ( 88907 )

        That is because you have no clue how ChatGPT works. ChatGPT cannot even do a simple addition of two arbitrary numbers, the model is simply incapable of doing something like that. What it can do is (with limited reliability) identify mathematical expressions via pattern matching and then hand them off to Wolfram Alpha, which is capable of selecting algorithms for arithmetic solving and running them on expressions.

        Of course, the expression you quoted is invalid (unbalanced parentheses) so if you actually got

        • That is because you have no clue how ChatGPT works. ChatGPT cannot even do a simple addition of two arbitrary numbers, the model is simply incapable of doing something like that.

          Well I don't know how ChatGPT in particular is doing math.

          It's traditionally tricky getting NNs to do math because the Neurons don't really handle numbers in that way (unless they made some custom neurons for that, I've heard people try that for a physics model, but I don't know if it worked well).

          It seems like it's developed some more generalized rules (which scale surprisingly well for addition and multiplication).

          What it can do is (with limited reliability) identify mathematical expressions via pattern matching and then hand them off to Wolfram Alpha,

          It can, but it didn't. The Wolfram Alpha plugin is a non-standard add-on [wolfram.com].

          Of course, the expression you quoted is invalid (unbalanced parentheses) so if you actually got an answer for that, then that answer is wrong.

          Further evidence ag

    • Of course, it is an averaging tape recorder

      Is there some way of telling what is an averaging tape recorder apart from what is not an averaging tape recorder?

      • by gweihir ( 88907 )

        There is. Activating your brain and stop being an ass. I do realize that is not within your capabilities though.

  • by narcc ( 412956 ) on Tuesday August 15, 2023 @09:09PM (#63770850) Journal

    He's up there with Neil Tyson as a top pop-sci guy and he's a qualified physicist, but I would think someone would need to first make significant contributions to their field to be called a "top physicist".

    • ...and he's a qualified physicist

      He may have been in the past but I have to question that when he says that "nature does not use zeroes and ones" when specifically referring to electrons that, like all fundamental fermions, have two spin states. He then goes on to state conjecture about how nature computes - which presumably refers to brains - as if it were scientifically established fact. This is not how any qualified physicist I know behaves.

      I am a physicist, and the more I hear Kaku speak, the less I believe he actually has a real PhD in Physics from any school that requires actual work!
    He is not wrong; however, the exact same can be said of a lot of experts and scientific opinions from people, too.
  • moron (Score:4, Insightful)

    by dfghjk ( 711126 ) on Tuesday August 15, 2023 @09:11PM (#63770858)

    This guy is an idiot regarding computing, he needs to stay in his lane.

    Neural networks may use "0's and 1's" but their results are not bound by two states. Perhaps this "professor of theoretical physics" should start with what a floating point number is, then progress to what CONVOLUTIONAL neural networks are and what kinds of results they generate. AI does not simply regurgitate snippets and splice them together.

    "Mother Nature would laugh at us because Mother Nature does not use zeros and ones,"

    Yes it does. All over the place.

    "Mother Nature computes on electrons, electron waves, waves that create molecules. And that's why we're now entering stage three."

    But "Mother Nature" encompasses FAR more than how it "computes", whatever that means.

    "Instead of using computer chips with two states, quantum computers use various states of vibrating waves."

    Almost like how binary processors use multi-bit words to represent more than two states. Hmmm, is this a SuperKendall physicist?

    Quantum computing's distinguishing feature is not merely that it represents more than two states. What kind of idiot would let his name get associated with that?

    • Michio Kaku is a fucking moron.
      I don't give a shit what his qualifications are, just listen to the dumb shit he says on [random bad science documentary here].

      Alright, I'll dial that back just a bit. He may not be a fucking moron. He may be very good at knowing what to say in front of people who grant publicity so that the publicity gravy train keeps rolling.
    • It's an analog world.

    • Re: (Score:3, Interesting)

      There is no "lane" for him to stay in. He is just a guy that likes to be on TV. He is extremely incompetent in any field he tries to have an opinion on, he glances over most obvious issues and concerns, doesn't have any in depth knowledge about any topic. Basically he is just a guy spewing bullshit about stuff he doesn't know anything about. How he gets any airtime is anyone's guess...
    • Yep. In general, the zeros and ones are manifestations of discrete math; in physics, the discrete units are called quanta, the same thing. Besides, neural networks are basically simulating an analogue system; once trained, they have even been run on analogue substrates, like glass optics.

  • After that, around World War II, he said, we switched to electricity-powered transistors.

    You skipped right over the relays and vacuum tubes!

  • On the other hand, I'm sure everyone knows a few people in their social or professional circles who would also fit the description of a walking tape recorder.

    If you don't know any such people...then there's nothing to worry about.

  • Yet laws rule our life. I agree that this isn't a revolution; quantum will be. But he's limiting his prediction to merely increasing the capacity of what already exists. That's not a revolution, that's an evolution. Quantum will change the way we interact with the world.
  • by FudRucker ( 866063 ) on Tuesday August 15, 2023 @09:47PM (#63770924)
    until cleverbot hears about this
  • by christoban ( 3028573 ) on Tuesday August 15, 2023 @09:48PM (#63770930)

    "In an interview with CNN, Michio Kaku"

    Two reasons to stop reading right there.

  • The past decade has shown that most humans are just glorified tape recorders.

    They consume and produce sentences that are understandable by humans, but those sentences mainly serve to confirm whatever they think the world SHOULD be, rather than looking at how the world ACTUALLY is.

    People read, believe, and then parrot back whatever FUD blogpost they've read, instead of thinking about what is actually happening in the messier physical world.
  • Humans are glorified tape recorders that can’t tell right from wrong.

  • The difference isn't in the material medium. This isn't to necessarily join the "humans aren't special" gang directly, because a distinction yet remains.

    If you handed a man, an employee, a crate of legos and said "Make this as tree-like as possible" you'd get a result.

    Then tell him to do it again. And again. And again. Give him some extra tools. Some materials. Give him years. A thousand years.

    Stick this man in a room and tell him to spend the next 10,000 years doing 10,000 iterations. The end result may in

  • because GPTs are more like glorified tape recorders connected to a random number generator
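    Mechanically, the "tape recorder plus RNG" quip amounts to sampling from a fixed next-token probability table with a random number generator. A toy sketch (the distribution and names are made up for illustration):

```python
import random
from collections import Counter

# Made-up toy next-token distribution: the "tape" part of the quip.
next_token_probs = {"cat": 0.5, "dog": 0.3, "tape": 0.2}

def sample(probs, rng):
    """Pick one token according to its probability (inverse-CDF sampling)."""
    r = rng.random()
    cum = 0.0
    for token, p in probs.items():
        cum += p
        if r < cum:
            return token
    return token  # guard against float rounding at the top of the CDF

rng = random.Random(42)                       # the "random number generator" part
counts = Counter(sample(next_token_probs, rng) for _ in range(10_000))
print(counts.most_common())
```

    Run it and the draw frequencies track the table, which is why the same prompt can give varied-looking output from a static model.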
  • Not clear why Fareed bothered asking a physicist about computers and algorithms. Perhaps next week he'll decide to ask a lawyer medical questions?

    More importantly, I don't know what to make of the tape-recorder meme asserting x is only regurgitating what y said in the past. There seems to be a general pool of excuses for why "chatbots" are merely worthless illusions, excuses that are entirely subjective, devoid of any objective means of evaluation, and all-around difficult to falsify.

    Let's say I feed a

    • Re: (Score:3, Insightful)

      by vivian ( 156520 )

      It would be more like asking a snooker player about nuclear physics and him saying the LHC is just a glorified pool table physicists use to do opening breaks.

  • Reality hits tech marketing in the face. There is no AI (as in intelligence); everything at this point is just automation. And not very good automation at that.
    But really great for killer robots! Detect motion or body heat, shoot!
  • But I have heard many other people say this. If he used ChatGPT himself for just a few hours and used it imaginatively (such as combining several subjects, concepts and genres into one query), he would realize it can't possibly work by stitching together text. The combinatorial explosion is simply too big for there to be anything to stitch together. It shows it really does operate at a more conceptual level, with some level of understanding. Even more so for GPT-4. I wonder if it is a kind of denial. Peop
    • The combinatorial explosion is simply too big for there to be anything to stitch together. It shows it really does operate at a more conceptual level, with some level of understanding. Even more so for GPT-4. I wonder if it is a kind of denial.

      Yes, your religious belief in AI is a kind of denial.

      It has been explained to you that there is no reasoning going on there. It's just generating stuff that makes sense based on the other stuff. That means that if enough examples of what you are asking for were used to train the model, you might get something valid-looking out. But a human could extrapolate from just a few examples.

      • But a human could extrapolate from just a few examples.

        Really? I'd be able to ask a toddler just starting to learn to read to extrapolate whole pages from a few examples of text?

        Whatever your position on AI, I wish people would stop putting humans on some imaginary perfect pedestal.

        Each human brain is trained from the day it is able to perceive things in its mother's womb, not to mention natural evolution weeding out the worse brains for hundreds of millions of years to get us to where we are. Some things that humans do are so specialized th

      • Yes, your religious belief in AI is a kind of denial.

        It has been explained to you that there is no reasoning going on there. It's just generating stuff that makes sense based on the other stuff. That means that if enough examples of what you are asking for were used to train the model, you might get something valid-looking out. But a human could extrapolate from just a few examples.

        I'm going to hold my breath and wait patiently for AI deniers to say something:

        1. Objective
        2. Falsifiable

        Until then I'm going to dismiss all of the evidence-free musings and increasingly conspiratorial notions whereby any and all evidence of understanding, reasoning, intelligence, etc. demonstrated by these systems is conveniently dismissed by merely asserting computers are just playing increasingly clever tricks and fooling us into thinking they are able to apply learned knowledge and concepts across a wide

  • ..were built with vacuum tubes. Transistors didn't exist yet.
    It would make one much more credible if one could get the basic historical facts straight.
    Don't totally disagree with him about AI chatbots though, but I'd say they're more like ELIZA programs, just running on much larger, faster hardware.
    • by pjt33 ( 739471 )

      There's an argument to be made for relays, as used in Konrad Zuse's Z3 and Z4. It depends on precisely where you draw the boundaries of the category.

  • And in terms of expertise: in AI research he is a physicist, in biology he is a physicist, and in computing he is a physicist.

    He probably knows more than the average person on these subjects, but he is far from an expert ...

  • ...was that LLMs are no more than stochastic parrots, simply predicting the most probable morphemes that follow those already selected according to the given context & prompt.

    What he actually described was a Markov chain generator, which has a narrower range of generative capability but can also be fun to play with, e.g. https://sebpearce.com/bullshit... [sebpearce.com] =D
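    For anyone who hasn't played with one, a first-order Markov chain generator fits in a few lines. A minimal sketch (the corpus and names are made up):

```python
import random
from collections import defaultdict

# Made-up toy corpus; a real generator would ingest a much larger one.
corpus = "the cat sat on the mat and the cat ran under the mat".split()

# First-order transition table: word -> list of observed successors.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, max_words, rng):
    """Walk the transition table, picking each next word at random."""
    word, out = start, [start]
    while len(out) < max_words:
        followers = table.get(word)
        if not followers:           # dead end: no observed successor
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the", 8, random.Random(0)))
```

    Every adjacent word pair in the output is a bigram seen in the corpus, which is exactly the "splice snippets together" behaviour Kaku described, and exactly what an LLM is not limited to.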
  • ...chatbots cannot discern true from false: "That has to be put in by a human."...

    When narcissism is considered a valued social trait and bullshit-riddled clickbait becomes a valid marketing strategy, I'd love for the professionals in the room to clarify how they still feel humans can discern true from false.

    Truth is whatever the ad says, because profits. False is merely a negative mind state, easily degradable. This is how feelings started becoming more important than facts, with predictable results.

  • You could argue all of computing is essentially glorified tape recorders. Doesn't mean that they can't be incredibly powerful.
  • by furry_wookie ( 8361 ) on Wednesday August 16, 2023 @09:20AM (#63771774)
    ChatGPT-like bots are all trained on HUMAN content. They produce none of their own. All they really are is fancy search engines, with advanced indexing, a topic-association matrix, and some code to output their results in predefined formats.

    The really interesting thought experiment is this.

    If the AIs are all trained on human work output, but then the AI replaces all the humans, who is going to create the original source material that the AI will be trained on? Or will all knowledge be frozen in time at the point where the AI replaces the humans? Will nothing new ever be created? Will no new programming languages ever be designed if AI is doing all the programming?
    • The fact that they find new chemical formulations or ways of doing things that do work out and which hadn't previously been spotted puts them on a level with humans in an important sense; indeed, far too many humans never show any sign of intelligent thought.

  • ...that we are all glorified chatbots made out of meat.
  • His remark about Mother Nature is especially funny because he, a physicist, knows that Nature is, at the lowest levels, in fact quantized. This is one reason (amongst several) why I have always disliked Kaku.
