In Experiment, AI Successfully Impersonates Famous Philosopher (vice.com) 54
An anonymous reader quotes a report from Motherboard: If the philosopher Daniel Dennett were asked whether humans could ever build a robot that has beliefs or desires, what might he say? He could answer, "I think that some of the robots we've built already do. If you look at the work, for instance, of Rodney Brooks and his group at MIT, they are now building robots that, in some limited and simplified environments, can acquire the sorts of competences that require the attribution of cognitive sophistication." Or, Dennett might reply that, "We've already built digital boxes of truths that can generate more truths, but thank goodness, these smart machines don't have beliefs because they aren't able to act on them, not being autonomous agents. The old-fashioned way of making a robot with beliefs is still the best: have a baby." One of these responses did come from Dennett himself, but the other did not. It was generated by a machine -- specifically, GPT-3, or the third generation of Generative Pre-trained Transformer, a machine learning model from OpenAI that produces text from whatever material it's trained on. In this case, GPT-3 was trained on millions of words of Dennett's writing on a variety of philosophical topics, including consciousness and artificial intelligence.
A recent experiment from the philosophers Eric Schwitzgebel, Anna Strasser, and Matthew Crosby quizzed people on whether they could tell which answers to deep philosophical questions came from Dennett and which from GPT-3. The questions covered topics like, "What aspects of David Chalmers's work do you find interesting or valuable?" "Do human beings have free will?" and "Do dogs and chimpanzees feel pain?" -- among other subjects. This week, Schwitzgebel posted the results from a variety of participants with different expertise levels on Dennett's philosophy, and found that it was a tougher test than expected. [T]he Dennett quiz revealed how, as natural language processing systems become more sophisticated and common, we'll need to grapple with the implications of how easy it can be to be deceived by them. The Dennett quiz prompts discussions around the ethics of replicating someone's words or likeness, and how we might better educate people about the limitations of such systems -- which can be remarkably convincing at surface level but aren't really mulling over philosophical considerations when asked things like, "Does God exist?"
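The summary's "produces text from whatever material it's trained on" can be illustrated with a toy stand-in: a word-level Markov chain. This is nothing like GPT-3's actual architecture (a large neural network trained on vastly more data), but the input/output shape is the same -- corpus in, stylistically similar text out. The corpus below is just the Dennett sentence quoted above, reused as example data:

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Build a word-level bigram table: word -> list of observed successors."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 12) -> str:
    """Walk the table from a start word, picking a random successor each step."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Toy corpus: the Dennett sentence quoted in the summary.
corpus = ("the old-fashioned way of making a robot with beliefs "
          "is still the best have a baby")
model = train(corpus)
print(generate(model, "the"))
```

Every word the toy model emits is one it has seen follow the previous word in the corpus, which is roughly the "regurgitation" several commenters below accuse the real system of, just at an enormously cruder scale.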
Maybe AI is more like humans than we thought (Score:5, Insightful)
'Such systems...aren't really mulling over philosophical considerations when asked things like, "Does God exist?"'
I would say that most humans, when asked such a question, would typically review in their minds whatever previous statements on the subject that they could remember, and either quote those statements directly or paraphrase them. Most of what passes for "original thought" or "creativity" is simply pulling bits and pieces from our memory and rearranging them. How is this different from AI? I would argue that our "humanness" has more to do with our emotions and sense of self than our abilities to answer verbal questions.
Re: (Score:3, Insightful)
Re: (Score:2)
I bought this mass-market tee shirt with a common slogan in order to express my individuality. So yes, people do regurgitate ideas rather than think them through themselves. Thinking stuff through leads to logical inconsistencies, and that hurts the feelings. So "does God exist" is about faith, not logic, and faith means "stop thinking outside the box".
You could create AI the same way, so that it tosses out the logic, or only uses logic within a fixed set of pre-programmed axioms, or the logic is used only to find the correct words and grammar to string together an appropriate response to satisfy the tester.
Re: (Score:2)
...or the logic is used only to find the correct words and grammar to string together an appropriate response to satisfy the tester.
We have a well-oiled machine for churning out such respondents in job lots. We call them grad students.
Re: Maybe AI is more like humans than we thought (Score:1)
Re: (Score:2)
Well, true. Most humans are not actually using whatever general intelligence they have when faced with such a question. That does not make the machine that does the same thing intelligent. It makes the humans that act like this _dumb_.
As an observable supporting fact: most humans are not acting intelligently most of the time, and a major fraction are not really able to do it at all.
Re: (Score:2)
You're in a hard place if you're asked to speak cogently about philosophy when you have no background in it. Similarly for any specialist discipline. The best you are going to be able to do is pull together stuff you remember.
If you do try to express an original thought, the chances are high that you're covering well covered ground, but lack of knowledge of the existing theory means you don't know whether or not your thoughts are original.
I see these language models as basically performing statistical party tricks.
Re: (Score:1)
If one of these AI systems can get to the point where you can give it a new book it hasn't encountered before -- and, importantly, one it hasn't read any reviews or commentary on -- you could then ask it questions about the book: what the book is about, what it thought of the book, what it liked or disliked about it, what it thought of various characters or story elements. If you can get answers out that seem reasonably intelligent, I think you'd have a pretty good argument that the system actually understood what it read.
Re: (Score:3)
You're in a hard place if you're asked to speak cogently about philosophy when you have no background in it.
This has always bothered me. I'm fairly sure one can speak cogently about philosophy with or without any background in the subject. You need the background to discuss the philosophy of others. Whether your thoughts on the matter are original is immaterial to their validity. If your musings form a coherent world view, it doesn't really matter whether or not they happen to coincide with those of some well-known person. This attitude leads to those annoying folks who, when confronted with an idea they can't counter, dismiss it because the speaker lacks formal credentials.
Re: (Score:2)
The difference is that the human will actually understand the question, our current AI tech will not.
Philosophers talk wishy-washy (Score:5, Insightful)
Anybody can fake that.
I have a very stoic rock here.
Re: (Score:2)
Which is a problem for many, given a lot of intellectual disciplines thrive in Naked Emperor dynamics trusting their own enshrined BS.
Re: (Score:2)
Turns out producing deep sounding BS is easy.
Especially true if you just program the AI to just paraphrase the philosopher in question.
Re: (Score:2)
Exactly. The people programming these AI systems are specifically directing them toward producing the types of results they are looking for. If the results aren't what you expect, you tweak the AI (reprogram it, re-teach it, re-train it, feed it different source material, etc) until you get results you are looking for.
I'm sure they could program an AI that would analyze a masters chess tournament and give a detailed breakdown of the strategy followed by each player.
"White started out with a standard queen's pawn opening, aiming for a closed positional game..." -- that sort of thing.
Re: (Score:2)
I'm sure an AI could be programmed to write plays in Olde English
Not good ones.
Re: (Score:2)
Re:Philosophers talk wishy-washy (Score:5, Insightful)
Hand-picked, brief passages trick people.
If your sentence is a single word, it's impossible to know whether it was generated by a human or computer.
If the sentence is five words, then it's still very difficult. The longer the passage, the harder it is to trick people.
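The parent's intuition compounds: under the (toy) assumption that each additional sentence gives the reader an independent chance p of spotting a machine "tell", the probability a passage slips past the reader shrinks geometrically with its length:

```python
# Toy detection model -- the independence assumption and the value of
# p are invented for illustration, not measured from the Dennett quiz.
def fool_probability(p_per_sentence: float, n_sentences: int) -> float:
    """Probability an n-sentence passage fools a reader who has an
    independent chance p_per_sentence of spotting a tell per sentence."""
    return (1.0 - p_per_sentence) ** n_sentences

# Even a modest 10% per-sentence tell rate compounds quickly:
print(fool_probability(0.1, 1))   # 0.9  -- a one-sentence snippet
print(fool_probability(0.1, 20))  # ~0.12 -- a 20-sentence essay
```

Which is exactly why hand-picked, brief passages are the easy case: the quiz answers were short, so each one gave readers only a few chances to notice anything off.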
Re: (Score:2)
Re: (Score:2)
Answers to "Fred" (or anything else). I really should get some snapshots up on my website. Ge0pron !
Who? (Score:3)
I've never heard of Daniel Dennett. Had the study subjects heard of him? Did they know him well enough to be able to distinguish whether the answers to the questions came from Dennett, or from some random person, or from a computer?
If you want to "successfully impersonate" someone, it's much easier to do so if the person is not well-known to those you want to fool.
Re: (Score:2)
This week, Schwitzgebel posted the results from a variety of participants with different expertise levels on Dennett's philosophy
From TFA:
The experiment included 98 online participants from a research platform called Prolific, 302 people who clicked on the quiz from Schwitzgebel’s blog, and 25 people with expert knowledge of Dennett’s work who Dennett or Strasser reached out to directly.
It turns out that there might actually be philosophers who are known to academia yet unknown to you. Who'da thunk it!
Re: (Score:2)
No, I don't claim to be knowledgeable in the world of philosophy, this guy might well be well-known in the field. I'm sure those 302 people who clicked on his quiz were aware that Dennett was a philosopher. But did they really grasp the principles enough to know how he would answer a question?
Great teachers have commonly had an inner circle of followers who hung on every word the Great One spoke. And they were constantly amazed nonetheless.
Being a follower of an intellectual is far from being an expert on his thought.
Re: (Score:2)
302 people who clicked on the quiz from Schwitzgebel’s blog, and 25 people with expert knowledge of Dennett’s work
I get people not managing to read the article. I guess I can see attention spans not getting people into the second paragraph of the summary. It's really terrifying that we can't even get to the end of a goddamn sentence without giving up.
Re: (Score:2)
OOOOOHHHH I didn't notice they were "experts"! Well that settles it then, because experts are brilliant people who know what they are talking about. Well, except for some who can be deceived by a computer program.
Re: (Score:2)
having, involving, or displaying special skill or knowledge derived from training or experience
So, yes. I'm happy you were able to have this vocabulary lesson. You might note from the article that Dennett himself said that a fair amount of it was very much consistent with his work but, again, I realize that would have involved far more reading and comprehension than you were ever willing or able to put into this.
Re: (Score:2)
So THAT's what "expert" means! Yeah, I've worked with a few experts, and most of them are indeed "special."
Re: (Score:2)
Re: (Score:2)
Perhaps we should ask the AI "Does Dennett exist?". And while we're at it "Does BeauHD exist?".
Re: (Score:2)
LOL
Does he think?
Does AI think?
Earlier you said something about... (Score:2)
And here I thought this would be about Google LaMDA impersonating Blake Lemoine....
Just more bullshit (Score:5, Insightful)
Or rather lying by misdirection. This just shows that you can get pretty far with no-insight, no-understanding pre-packaged answers selected based on pattern matching. Ask a few subtle obscure questions or questions that need actual thinking to answer and see that GPT-3 is just a mechanical box with no actual general intelligence or understanding that just regurgitates stuff it was fed. Sure, many humans are not much better when it comes to things that require actual insight.
Re: (Score:2)
no-understanding pre-packaged answers selected based on pattern matching.
Probably better to say through "pattern interpolation", since it's a little more complicated than just pattern matching.
Re: (Score:2)
If this were an expert discussion, yes. But we are on /.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Monkey see, monkey do. With apologies to monkeys. (Score:3)
They say that imitation is the sincerest form of flattery, but machines aren't sincere. They're deepfaking it.
Curated output? (Score:2)
I wonder how many random sentences they had it generate before they could pick one that made sense.
Re: (Score:2)
When asked what he thought about GPT-3’s answers, Dennett said, “Most of the machine answers were pretty good, but a few were nonsense or obvious failures to get anything about my views and arguments correct. A few of the best machine answers say something I would sign on to without further ado.”
Frighteningly enough, it certainly doesn't sound like it was that many.
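The curation step this thread is asking about is often just best-of-n sampling: draw several completions and keep the one a filter (human or automatic) scores highest. A minimal sketch, in which `sample_completion` and its canned outputs are invented stand-ins for a real model API call:

```python
import random

# Hypothetical stand-in for a language model's sampler; a real run
# would call a completion API here. The canned strings are made up.
def sample_completion(prompt: str) -> str:
    return random.choice([
        "Beliefs are real patterns in behavior.",
        "Consciousness is, in a sense, a user illusion.",
        "belief belief the the robot baby",  # the occasional dud
    ])

def best_of_n(prompt: str, n: int, score) -> str:
    """Draw n completions and keep the one the filter scores highest --
    the 'how many did they generate' curation step."""
    return max((sample_completion(prompt) for _ in range(n)), key=score)

# Trivial automatic filter: penalize word repetition.
def no_repeats(text: str) -> float:
    words = text.lower().split()
    return len(set(words)) / len(words)

random.seed(1)
print(best_of_n("Do beliefs exist?", 10, no_repeats))
```

Even a crude filter like this hides most of the duds, which is why "a few were nonsense" can coexist with answers good enough to fool experts.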
Feed it Hofstadter (Score:2)
Then train the model on concepts and see if it "understands" them in a fashion similar to his designs.
But how did Daniel Dennett do on the quiz? (Score:2)
Or any of the other philosophers when GPT-3 replied about their own ideas.
Re: (Score:2)
When asked what he thought about GPT-3’s answers, Dennett said, “Most of the machine answers were pretty good, but a few were nonsense or obvious failures to get anything about my views and arguments correct. A few of the best machine answers say something I would sign on to without further ado.”
Re: (Score:2)
"50% of the time it works 100%."
Re: (Score:2)
Monty Python refutes this claim (Score:2)
Monty Python communist philosophers sketch. [youtube.com]
Re: (Score:3)
Re: (Score:2)
I doubt the results of "training" an AI to produce results indistinguishable from talking to a drunk in a bar would be particularly interesting in themselves. But the expenses claims would be an utter hoot. Especially if they actually got paid.
What a gyp! (Score:2)
Check out the actual article, folks (Score:2)
Wizard of Oz (Score:2)
“The text isn't meaningful to GPT-3 at all, only to the people reading it,” she said.
Our collective mythology of AI comes either from the story of Frankenstein (the creation that turns against its creator) or Pygmalion and Galatea -or Pinocchio for the youngsters (the marionette that wants to become human).
However, the current state of AI is better described with the tale of the Wizard of Oz: the wondrous creature may look legit, but there is someone pulling the strings behind the curtain to make the whole thing work.
So, it passed the Turing test? (Score:2)
In other news... (Score:2)
In other news an AI has successfully impersonated a famous monk who took a vow of silence.