Humans Find AI-Generated Faces More Trustworthy Than the Real Thing (scientificamerican.com) 72

Scientific American reports on a new study published in the Proceedings of the National Academy of Sciences USA on the effectiveness of deep fakes.

"The results suggest that real humans can easily fall for machine-generated faces — and even interpret them as more trustworthy than the genuine article." "We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces," says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that "these faces could be highly effective when used for nefarious purposes." The first group did not do better than a coin toss at telling real faces from fake ones, with an average accuracy of 48.2 percent... The group rating trustworthiness gave the synthetic faces a slightly higher average rating of 4.82, compared with 4.48 for real people... Study participants did overwhelmingly identify some of the fakes as fake. "We're not saying that every single image generated is indistinguishable from a real face, but a significant number of them are," says study co-author Sophie Nightingale.... The authors of the study end with a stark conclusion after emphasizing that deceptive uses of deepfakes will continue to pose a threat: "We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits," they write. "If so, then we discourage the development of technology simply because it is possible."
Thanks to Slashdot reader Hmmmmmm for sharing the link!
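The summary's "did not do better than a coin toss" claim can be sanity-checked with a quick two-sided z-test against 50% accuracy. This is a minimal sketch, not the paper's analysis, and the 1,000-trial sample size below is a made-up illustration rather than the study's actual n:

```python
import math

def binom_z_test(successes: int, trials: int, p0: float = 0.5) -> float:
    """Two-sided z-test (normal approximation) of an observed
    proportion against a null proportion p0."""
    p_hat = successes / trials
    se = math.sqrt(p0 * (1 - p0) / trials)
    z = (p_hat - p0) / se
    # two-sided p-value from the standard normal CDF via erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 48.2% accuracy on a hypothetical 1,000 judgments: the p-value is
# well above 0.05, i.e. indistinguishable from coin flipping.
print(binom_z_test(482, 1000) > 0.05)
```

With these assumed numbers the deviation from chance is not significant, which is consistent with the study's "coin toss" framing; the real paper would use its actual per-participant trial counts.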
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I blame Disney/Pixar (Score:4, Interesting)

    by Jzanu ( 668651 ) on Sunday February 20, 2022 @04:15AM (#62285053)
    When you are conditioned to associate purity of intent and honesty with simulated faces, that bleeds over into your real life decision-making. It makes for great movies though.
    • Disney/Pixar? They are small fry in the grand scheme of things. I'm not sure about you but I grew up with talking animals, puppets, comics, a family of dinosaurs, and a shitton of "fake" entertainment not even slightly related to Disney or Pixar. That's before you consider computer games.

Mind you, we've just gone through 4 years of a man with the most horrible-looking spray tan and hair, the origins of which are the subject of much debate, lying to us, so if that is "real" I'm not surprised a far more normal

      • by Jzanu ( 668651 )
It is more relevant to consider what the influences on the sampled populations were. If you click through to the paper, each of the three experiments used a sample collected from Amazon Mechanical Turk workers. That does not define things explicitly, but it narrows the sample to people who are fairly comfortable using computers and who also have either limited time or career prospects such that they perform minimal-skill labor. My admitted assumption is that this suggests younger, mainly Americans and Europeans who
        • The trustworthiness is secondary to the fact that they could not distinguish the fakes from real photos.
          You couldn't tell them apart from the real thing, either. [slashdot.org]

Your hypothesis has no evidence in support of it. You can start collecting some anecdotal evidence, though, by saying you can tell them apart.

          But really, you can't.
          • Shit link. My bad.
            Use this one. [thisperson...texist.com]
          • by Jzanu ( 668651 )
Did you reply to the right post? What are you saying my claim is? My original claim here was that the people who viewed the simulation movies for any reason were trained by them to expect simulated trustworthiness in actions because of the depictions. The post you replied to was about the sampling used, simply that it was a narrow sample and one from which some assumptions could be made about who actually comprised it.
            • It's possible I'm misunderstanding what you're saying.

              I'll give you my interpretation.
              You're implying that highly simulated imagery has influenced how they perceive the trustworthiness of fake GAN images.
              Correct me if I'm wrong.
              • highly simulated doesn't describe it well...
                obviously simulated (i.e., Pixar CGI) is more what I'm trying to say.
      • Listen, gen-x liberal, it's 2022 now. Disney is a media empire. They dominate.

        Those puppets from your childhood were not AI-generated. Not even TRON had AI-generated faces. Honest

        I know the orange dude was distracting for you, but the real world is still here if you want to catch up. Here's a start:

        • - Amazon is the new Walmart, ruiner of small town mercantilism
        • - Google is now Alphabet, an evil greater than Micro$oft
        • - Bush and turd blossom and the neocon gang are all liberal buddies now
        • - the New York Times
        • Listen, gen-x liberal, it's 2022 now. Disney is a media empire. They dominate.

          Yes but later generations don't dominate the statistics. Millennials and below currently are still outnumbered by Gen-X and above, to say nothing of those below Millennials not being of an age where they are even relevant in this study.

          If you're going to ask a Gen-X about their view then what the following generation is currently experiencing is completely irrelevant.

          Those puppets from your childhood were not AI-generated.

          No shit Sherlock. We're talking about Disney here, how did you reply to my post and then immediately miss the point of the conversation at the

          • Now how do you rate the trustworthiness of an image? I'll give you a bonus point if you can come up with a way that wasn't taught to you by a Sunday morning cartoon.

            I'll give you a million dollars, and 5 bonus points if you can draw a coherent parallel between a GAN deep fake and Aladdin.

    • Are you trying to imply that the works of Pixar, et al are comparable to the shit AI is producing, a la Deep Fakes?
      • by Jzanu ( 668651 )
Frankly, yes [google.com]. A Pixar hero is distinguishable, and maybe even a GAN face at the size of a user profile pic on an internet forum, but do you think there will be many who can tell the latter apart from any actual user pics? Fake reviews, fake posts on news websites for propaganda dispensation, etc. There are avenues, risks, dangers that exist for this precise reason.
        • I can't tell if you're arguing with me or against me.

          I'm arguing that GAN is not distinguishable from the real thing (except in cases where it sucks, but obviously one wouldn't use those)

          As such, I don't see how clearly-CGI imagery factors in.
          • by Jzanu ( 668651 )
            I meant that the movies imparted subconscious training, i.e. conditioning, to susceptible minds in relaxed settings where exactly that kind of conditioning works. The training was that simulated characters were trustworthy due to their depicted characterizations. For example, all the heroes. Even the bad guys, because they can be reformed. Children's movies are simple like that.
            • I don't disagree with the training.
              I'm struggling to see how you think the training applies.

              The GAN images are indistinguishable from the real thing (or more accurately, appear more real to people than the damn real thing)
              Explain your logic path that leads to a connection between childhood training to think of CGI characters as trustworthy (ignoring that every hero has a villain) and more-real-than-real fakery?
              • by Jzanu ( 668651 )
                In casual glances many features of the generated faces are close enough to match real people. On closer inspection problems with eyeglasses, ear and eye level and orientation and other flaws become apparent. These subtle differences act on a subconscious level where similar features produce a sense of trustworthiness based on childhood training.
                • I think that's highly unlikely.

                  As a control, we can use things that fit squarely in the uncanny valley. There's no feeling of trust toward them.
                  There's no mistaking them as real.
                  These people are mistaking these as real.

                  I'm sorry, but your hypothesis, which could absolutely be correct- simply doesn't work with the results of this study.
                  You can't claim that we were trained on obviously fake shit (totally conceivable), and it has led to an inability to distinguish on really-good-fake-shit, but also subc
                  • by Jzanu ( 668651 )
                    You may think so but that kind of categorical objection comes across as defensive by itself.but instead it is better to explore ideas. you are designing an experinent rather than conducting an observationsl study. of that is better fir reluability but that is not wgat this study even trued to demonstrate.
                    • You may think so but that kind of categorical objection comes across as defensive by itself.

                      That wasn't a categorical objection. The reasons follow below.

                      but instead it is better to explore ideas. you are designing an experinent rather than conducting an observationsl study. of that is better fir reluability but that is not wgat this study even trued to demonstrate.

                      I'm not 100% sure what this says, but if it's something along the lines of "this study isn't great at showing what they want to show", then I agree with you 100%.
As much as I agree that it's not entirely clear it shows what they think it shows, I think it's even clearer that it definitely doesn't show what you say it shows, and that their conclusion is at least fit by the evidence, while yours requires some seriously magical leaps in logic- leaps

                    • by Jzanu ( 668651 )
                      I was using a phone, and typing while moving did not work well. If you are interested in these issues you should take a course in research methods. You are approaching what is called internal validity using an experimental design, but the main issue with this observational study is external validity due to problems in the sampling. No control will fix the convenience sampling used here.

                      Now, my comment before was a supposition, an idea, simply connecting the observed results here with other fields to creat
    • The Fuckin Study is bullshit.

      They did 3 "experiments".
      1: Can you tell a computer generated human face from a photo of a real face - "The average accuracy is 48.2%".
      2: Can you, "with training and trial-by-trial feedback", tell a computer generated human face from a photo of a real face - "The average accuracy improved slightly to 59.0%... Despite providing trial-by-trial feedback, there was no improvement in accuracy over time, with an average accuracy of 59.3%".

      I.e. Whoop-de-fuckin-doo.
      Experiment and people

    • It is funny that M Zuckerberg looks like an obvious fake image - from the shallow end of the gene pool.
  • by Baconsmoke ( 6186954 ) on Sunday February 20, 2022 @04:26AM (#62285077)
    Ads of the future will be far less expensive and likely more successful with AI generated spokesmodels. And knowing humanity we'll happily drink the kool-aid they'll be serving, because we're so damn easily manipulated.
Pretty much. Besides, the researchers sound a bit naive with their "We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits." This is like asking weapons manufacturers, nuclear bomb researchers, biological weapons lab owners, ransomware developers, sex trafficking operators, crack dealers, plantation owners, slave trade businessmen, dictators, psychopathic CEOs, deforesters, poaching hunters etc. to "consider whether the associated

    • Ads of the future

This isn’t “the future”. This is now. The “future” has been here for a while: https://generated.photos/faces [generated.photos]

      That site has millions of generated faces. You can even generate a face on the fly to fit your specs in a different part of the site. They’re generally free for personal use, but they offer commercial pricing as well that allows you to use them for things like ads. No need for pesky release forms or the threat of lawsuit from someone whose likeness you used in a cont

    • by ET3D ( 1169851 )

      I think that you overestimate the cost of people.

      The people who cost a lot in ads are celebs. The only reason they cost a lot is that they're recognisable. You can't replace that with an AI generated image. (You could have an AI celeb, but then it will still cost, for the license to use it.)

      For ads not featuring celebs, creating an AI person will likely cost at least as much, if not more, as a real person.

      So I can't see this as saving any real money for advertising.

      • by nasch ( 598556 )

I think you underestimate the cost of people, or overestimate the cost of the fakes. A batch of 1-20 synthetic photos of people who don't exist costs 3 dollars.

        https://generated.photos/prici... [generated.photos]

        So basically free. Add the cost of some basic compositing to put them into the scene you want, and you're still way below the cost of a photo shoot with a real model, photographer, and whoever else is involved.

  • Obligatory (Score:3, Informative)

    by Anonymous Coward on Sunday February 20, 2022 @04:35AM (#62285097)
    ThisPersonDoesNotExist.com
    • by ugen ( 93902 )

      Why do they all have such bad skin? (Well, I know - because it needs to render the skin tone approximately the same and it's easier to fill it with blemishes to achieve a uniform looking result)

  • by Petersko ( 564140 ) on Sunday February 20, 2022 @04:43AM (#62285109)

    Perhaps people are just so tired of the generally shitty nature of people that some subconscious part of their brain is relieved when presented the fake.

    • Perhaps people are just so tired of the generally shitty nature of people that some subconscious part of their brain is relieved when presented the fake.

Yes, let's be honest. People are lying pieces of shit. Companies have built their entire profit strategy on that, so be sure that a sweet-talking automated profile pic to ease your suspicions will be part of that lie....

    • No. AI generated facial images are not distinguishable as fake.
      The "finds them more trustworthy" is an aside. They found images that they thought were real, but were not, more trustworthy.
It simply means the AI has (inadvertently, or otherwise) been tuned to generate trustworthy-looking faces.
      • Re:An explanation (Score:4, Insightful)

        by ceoyoyo ( 59147 ) on Sunday February 20, 2022 @11:10AM (#62285671)

        The generative models mostly make faces that are pretty average. People like things that are predictable. We like/find attractive/trust average human faces too, including literal averages of stacks of photos.

        It's why talking heads on TV all look the same.
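The "literal averages of stacks of photos" point above can be sketched in a few lines of NumPy. The random arrays below are stand-ins for aligned face photos (real face-averaging pipelines align facial landmarks first, which this skips):

```python
import numpy as np

# Stand-ins for a stack of ten aligned 64x64 grayscale face photos.
rng = np.random.default_rng(0)
faces = rng.uniform(0.0, 1.0, size=(10, 64, 64))

# The "average face" is just the per-pixel mean over the stack.
average_face = faces.mean(axis=0)

# Averaging smooths out individual variation: the averaged image
# has much lower pixel variance than the stack of originals,
# which is one reason averaged (and GAN-typical) faces look
# smooth, symmetric, and unremarkable.
print(average_face.shape)
print(faces.std() > average_face.std())
```

The variance reduction is the whole effect: distinctive features (moles, asymmetries, blemishes) are exactly the high-variance pixels that the mean washes out.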

  • Would a simulated simulated face be more trustworthy than a simulated real face?

  • When you're faced with a machine, there's nothing to distrust.

    • But no wise (wo)man ever trusts a printer.
They are gateways to hell, and the devil uses them to stab his glowing red pitchfork up your butt.
Light is faster than sound; that is why some people appear trustworthy and intelligent until you hear them speak.

  • So after the uncanny valley, we now get an even more uncanny mountain.
This was a small study done on a few hundred Mechanical Turk workers, and it smacks of grad-student work. It was done using a simple self-report 1-7 rating. It hasn't been replicated. There was no A/B testing of different wordings or phrasings to compensate for differing understandings of the question. The question itself was relatively abstract, which widens the distribution of interpretations.

    TL;DR, this was a poorly designed, poorly funded, unreplicated survey and y

  • by ffkom ( 3519199 ) on Sunday February 20, 2022 @09:00AM (#62285399)
I guess this progress is very good news to all the totalitarian regimes out there. Now, to make sure the message of our AI-generated broadcasts is not diluted by people meeting other physical humans in real life, we only need some way of making people afraid to meet others. If only there was something like a virus or such...
  • by argStyopa ( 232550 ) on Sunday February 20, 2022 @09:29AM (#62285445) Journal

    Who writes a story about realistic ai faces AND THEN DOESN'T INCLUDE AN EXAMPLE PICTURE?

    I thought SciAm had only grown woker-than-thou, I didn't realize they had forgotten how to put together a basic story.

  • Humans find porn to be hotter than real sex.
    Humans find being told what they want to hear more believable than the real truth.
    Are either of these things really a surprise to anyone?
  • ... deemed more trustworthy than real faces ...

    It's called the Halo Effect. Humans value visual symmetry and balance. This was examined in the B-grade movie Looker, 1981. Beautiful models are given plastic surgery to make them perfect/trustworthy but it doesn't reach perfection. So a business creates a perfect deep-fake of the model and murders the real person: No licensing fees to pay. The story also involves a paralysing ray-gun that isn't examined too closely.

  • If anyone is basing trustworthiness on someone's face rather than their words and actions, they're an idiot in the first place.

  • Easily-manipulated mob of dopes don't know anything and lack critical thinking skills. Sortition of professionals is the way to go.
