In Experiment, AI Successfully Impersonates Famous Philosopher (vice.com)

An anonymous reader quotes a report from Motherboard: If the philosopher Daniel Dennett was asked if humans could ever build a robot that has beliefs or desires, what might he say? He could answer, "I think that some of the robots we've built already do. If you look at the work, for instance, of Rodney Brooks and his group at MIT, they are now building robots that, in some limited and simplified environments, can acquire the sorts of competences that require the attribution of cognitive sophistication." Or, Dennett might reply that, "We've already built digital boxes of truths that can generate more truths, but thank goodness, these smart machines don't have beliefs because they aren't able to act on them, not being autonomous agents. The old-fashioned way of making a robot with beliefs is still the best: have a baby." One of these responses did come from Dennett himself, but the other did not. It was generated by a machine -- specifically, GPT-3, or the third generation of Generative Pre-trained Transformer, a machine learning model from OpenAI that produces text from whatever material it's trained on. In this case, GPT-3 was trained on millions of words of Dennett's about a variety of philosophical topics, including consciousness and artificial intelligence.

A recent experiment from the philosophers Eric Schwitzgebel, Anna Strasser, and Matthew Crosby quizzed people on whether they could tell which answers to deep philosophical questions came from Dennett and which from GPT-3. The questions covered topics like, "What aspects of David Chalmers's work do you find interesting or valuable?" "Do human beings have free will?" and "Do dogs and chimpanzees feel pain?" -- among other subjects. This week, Schwitzgebel posted the results from a variety of participants with different expertise levels on Dennett's philosophy, and found that it was a tougher test than expected. [T]he Dennett quiz revealed how, as natural language processing systems become more sophisticated and common, we'll need to grapple with the implications of how easy it can be to be deceived by them. The Dennett quiz prompts discussions around the ethics of replicating someone's words or likeness, and how we might better educate people about the limitations of such systems -- which can be remarkably convincing at surface level but aren't really mulling over philosophical considerations when asked things like, "Does God exist?"


Comments Filter:
  • by marcle ( 1575627 ) on Tuesday July 26, 2022 @06:12PM (#62736590)

    'Such systems...aren't really mulling over philosophical considerations when asked things like, "Does God exist?"'

    I would say that most humans, when asked such a question, would typically review in their minds whatever previous statements on the subject that they could remember, and either quote those statements directly or paraphrase them. Most of what passes for "original thought" or "creativity" is simply pulling bits and pieces from our memory and rearranging them. How is this different from AI? I would argue that our "humanness" has more to do with our emotions and sense of self than our abilities to answer verbal questions.

    • Re: (Score:3, Insightful)

      by Catvid-22 ( 9314307 )
      Yes. A person who's just had breakfast will answer questions about the meaning of the world differently from a person who's gone hungry for days. Maybe even simply having a headache or being constipated would change your view of the world, at least temporarily.
    • I bought this mass-market tee shirt with a common slogan in order to express my individuality. So yes, people do regurgitate ideas rather than think them through themselves. Thinking things through leads to logical inconsistencies, and that hurts feelings. So "does God exist" is about faith, not logic, and faith means "stop thinking outside the box".

      You could create AI the same way, so that it tosses out the logic, or only uses logic within a fixed set of pre-programmed axioms, or the logic is used only t

      • ...or the logic is used only to find the correct words and grammar to string together an appropriate response to satisfy the tester.

        We have a well-oiled machine for churning out such respondents in job lots. We call them grad students.

    • But try to convince them that they are wrong... then you get to the marrow
    • by gweihir ( 88907 )

      Well, true. Most humans are not actually using whatever general intelligence they have when faced with such a question. That does not make the machine that does the same thing intelligent. It makes the humans that act like this _dumb_.

      As an observable supporting fact: Most humans are not acting intelligently most of the time, and a major fraction is not really able to do it at all.

      • You're in a hard place if you're asked to speak cogently about philosophy when you have no background in it. Similarly for any specialist discipline. The best you are going to be able to do is pull together stuff you remember.

        If you do try to express an original thought, the chances are high that you're covering well covered ground, but lack of knowledge of the existing theory means you don't know whether or not your thoughts are original.

        I see these language models as basically performing statistical party

        • by vivian ( 156520 )

          If one of these AI systems can get to the point where you can give it a new book it hasn't encountered before (and, importantly, hasn't read any other reviews or commentary on), then ask it questions about the book -- what it is about, what it thought of it, what it liked or disliked about it, what it made of various characters or story elements -- and get answers that seem reasonably intelligent, I think you'd have a pretty good argument that the

        • You're in a hard place if you're asked to speak cogently about philosophy when you have no background in it.

          This has always bothered me. I'm fairly sure one can speak cogently about philosophy with or without any background in the subject. You need the background to discuss the philosophy of others. Whether your thoughts on the matter are original is immaterial to their validity. If your musings form a coherent world view, it doesn't really matter whether or not they happen to coincide with those of some well-known person. This attitude leads to those annoying folks who, when confronted with an idea they can't

    • The difference is that the human will actually understand the question, our current AI tech will not.

  • by nospam007 ( 722110 ) * on Tuesday July 26, 2022 @06:29PM (#62736624)

    Anybody can fake that.

    I have a very stoic rock here.

    • Turns out producing deep sounding BS is easy.

      Which is a problem for many, given a lot of intellectual disciplines thrive in Naked Emperor dynamics trusting their own enshrined BS.
      • Turns out producing deep sounding BS is easy.

        Especially true if you just program the AI to just paraphrase the philosopher in question.

        • Exactly. The people programming these AI systems are specifically directing them toward producing the types of results they are looking for. If the results aren't what you expect, you tweak the AI (reprogram it, re-teach it, re-train it, feed it different source material, etc) until you get results you are looking for.

          I'm sure they could program an AI that would analyze a masters chess tournament and give a detailed breakdown of the strategy followed by each player.
          "White started out with a

      • Was just going to post the same thing. A large proportion of modern philosophers can be replaced by nothing more sophisticated than a Markov model trained on the works of other modern philosophers. Can't understand a word of it? It's because it's far too deep and meaningful for a rube like you. Shit, even the editors of the journals that publish the stuff can't tell the difference between a legit paper and a pile of gibberish submitted as a joke.
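        A word-level Markov model of the sort the comment describes really is only a few lines of Python. This is a minimal sketch, not anything from the actual experiment; the tiny stand-in corpus and the `order=2` choice are made up for illustration:

        ```python
        import random
        from collections import defaultdict

        def build_markov_model(text, order=2):
            """Map each tuple of `order` consecutive words to the words seen following it."""
            words = text.split()
            model = defaultdict(list)
            for i in range(len(words) - order):
                state = tuple(words[i:i + order])
                model[state].append(words[i + order])
            return model

        def generate(model, length=30, seed=0):
            """Random-walk the model, emitting up to `length` words."""
            rng = random.Random(seed)
            state = rng.choice(list(model.keys()))
            out = list(state)
            for _ in range(length - len(state)):
                followers = model.get(tuple(out[-len(state):]))
                if not followers:  # dead end: no observed continuation
                    break
                out.append(rng.choice(followers))
            return " ".join(out)

        # Tiny stand-in corpus; the joke version would use a philosopher's collected works.
        corpus = ("consciousness is not a thing but a process and the process "
                  "of consciousness is not reducible to the thing it describes")
        model = build_markov_model(corpus, order=2)
        print(generate(model, length=12))
        ```

        The output is locally grammatical but globally meaningless, which is exactly the "deep-sounding BS" effect the thread is mocking; GPT-3 differs in degree (much longer-range coherence), not in having any understanding.
        
        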
    • by phantomfive ( 622387 ) on Tuesday July 26, 2022 @08:30PM (#62736868) Journal

      Hand-picked, brief passages trick people.

      If your sentence is a single word, it's impossible to know whether it was generated by a human or computer.

      If the sentence is five words, then it's still very difficult. The longer the passage, the harder it is to trick people.

    • I too, have a very philosophical rock here on my bookshelf (SNNBB : Situation Normal, Need Bigger Bookshelf) which is a beautiful section through a polyphase barytes and galena vein cutting into a granite boulder. A good 5kg of heavy metals, with several decades of experience of listening to my problems after a few hundred million years of being inside a mountain range.

      Answers to "Fred" (or anything else). I really should get some snapshots up on my website. Ge0pron !

  • by Tony Isaac ( 1301187 ) on Tuesday July 26, 2022 @06:53PM (#62736684) Homepage

    I've never heard of Daniel Dennett. Had the study subjects heard of him? Did they know him well enough to be able to distinguish whether the answers to the questions came from Dennett, or from some random person, or from a computer?

    If you want to "successfully impersonate" someone, it's much easier to do so if the person is not well-known to those you want to fool.

    • From TFS:

      This week, Schwitzgebel posted the results from a variety of participants with different expertise levels on Dennett's philosophy

      From TFA:

      The experiment included 98 online participants from a research platform called Prolific, 302 people who clicked on the quiz from Schwitzgebel’s blog, and 25 people with expert knowledge of Dennett’s work who Dennett or Strasser reached out to directly.

      It turns out that there might actually be philosophers who are known to academia yet unknown to you. Who'da thunk it!

      • No, I don't claim to be knowledgeable in the world of philosophy, this guy might well be well-known in the field. I'm sure those 302 people who clicked on his quiz were aware that Dennett was a philosopher. But did they really grasp the principles enough to know how he would answer a question?

        Great teachers have commonly had an inner circle of followers who hung on every word the Great One spoke. And they were constantly amazed nonetheless.

        Being a follower of an intellectual is far from being an expert a

        • Probably not, but again:

          302 people who clicked on the quiz from Schwitzgebel’s blog, and 25 people with expert knowledge of Dennett’s work

          I get people not managing to read the article. I guess I can see attention spans not getting people into the second paragraph of the summary. It's really terrifying that we can't even get to the end of a goddamn sentence without giving up.

          • OOOOOHHHH I didn't notice they were "experts"! Well that settles it then, because experts are brilliant people who know what they are talking about. Well, except for some who can be deceived by a computer program.

            • From Webster:

              having, involving, or displaying special skill or knowledge derived from training or experience

              So, yes. I'm happy you were able to have this vocabulary lesson. You might note from the article that Dennett himself said that a fair amount of it was very much consistent with his work but, again, I realize that would have involved far more reading and comprehension than you were ever willing or able to put into this.

              • So THAT's what "expert" means! Yeah, I've worked with a few experts, and most of them are indeed "special."

      • You don't even need to be in academia in any significant sense. I've known of Dennett for ... at least a decade, because he's apparently an important philosopher of religion and/or evolutionary development (though that's obviously more the territory of mathematicians and biologists). Never bothered to more than skim reports of his work, because what is there of interest in religion? And mathematics doesn't pose any real philosophical problems either ("true" or "false" being the possible values of propositi
    • Perhaps we should ask the AI "Does Dennett exist?". And while we're at it "Does BeauHD exist?".

  • And here I thought this would be about Google LaMDA impersonating Blake Lemoine....

  • Just more bullshit (Score:5, Insightful)

    by gweihir ( 88907 ) on Tuesday July 26, 2022 @07:30PM (#62736764)

    Or rather lying by misdirection. This just shows that you can get pretty far with no-insight, no-understanding pre-packaged answers selected based on pattern matching. Ask a few subtle obscure questions or questions that need actual thinking to answer and see that GPT-3 is just a mechanical box with no actual general intelligence or understanding that just regurgitates stuff it was fed. Sure, many humans are not much better when it comes to things that require actual insight.

    • no-understanding pre-packaged answers selected based on pattern matching.

      Probably better to say "pattern interpolation", since it's a little more complicated than just pattern matching.

    • But it impressed some idiot journalist so that's all that matters!
      • You didn't even read it, did you? Just saw the first sentence and thought "yep, I know everything THIS article is going to say!" and joined the parent poster to talk about what a dumbass the guy is who is making the exact same argument you are.
    • What a wonderful restatement of the exact point the article you seem to be bitching about was making. Try reading the damn thing next time before flying down to the comments to go all nerd-hulk.
  • They say that imitation is the sincerest form of flattery, but machines aren't sincere. They're deepfaking it.

  • I wonder how many random sentences they had it generate before they could pick one that made sense.

    • Apparently at least some were no-gos.

      When asked what he thought about GPT-3’s answers, Dennett said, “Most of the machine answers were pretty good, but a few were nonsense or obvious failures to get anything about my views and arguments correct. A few of the best machine answers say something I would sign on to without further ado.”

      Frighteningly enough, it certainly doesn't sound like it was that many.

  • Then train the model on concepts and see if it "understands" them in a fashion similar to his designs.

  • Or any of the other philosophers when GPT-3 replied about their own ideas.

  • All philosophy sounds like gobbledygook to non-philosophers, so an AI will do an equally convincing job.

    Monty Python communist philosophers sketch. [youtube.com]

    • AI will never be equal to Monty Python.
    • I was thinking of Python too, but of the "Philosopher's Song" [youtube.com] (lyrics [musixmatch.com]).

      I doubt the results of "training" an AI to produce results indistinguishable from talking to a drunk in a bar would be particularly interesting in themselves. But the expenses claims would be an utter hoot. Especially if they actually got paid.

  • So which one of the motherfucking sentences came from the philosopher and which one from the AI?
  • I know this is /., but it's right in the second paragraph of the goddamn summary. The whole point of the article is that this is mimicry and not intelligence, and warns of confusing the two. I know it's fun to pop down to the comments and point out what idiots professional researchers are, but every now and then it's worth reading what they actually, you know, said, before you go trying to show how much smarter you are than them.
  • “The text isn't meaningful to GPT-3 at all, only to the people reading it,” she said.

    Our collective mythology of AI comes either from the story of Frankenstein (the creation that turns against its creator) or Pygmalion and Galatea -or Pinocchio for the youngsters (the marionette that wants to become human).

    However, the current state of AI is better described with the tale of the Wizard of Oz: the wondrous creature may look legit, but there is someone pulling the strings behind the curtain to ma

  • Soon it will need to pay taxes and receive healthcare.
  • In other news an AI has successfully impersonated a famous monk who took a vow of silence.
