White Faces Generated By AI Are More Convincing Than Photos, Finds Survey (theguardian.com)

Nicola Davis reports via The Guardian: A new study has found people are more likely to think pictures of white faces generated by AI are human than photographs of real individuals. "Remarkably, white AI faces can convincingly pass as more real than human faces -- and people do not realize they are being fooled," the researchers report. The team, which includes researchers from Australia, the UK and the Netherlands, said their findings had important implications in the real world, including in identity theft, with the possibility that people could end up being duped by digital impostors.

However, the team said the results did not hold for images of people of color, possibly because the algorithm used to generate AI faces was largely trained on images of white people. Dr Zak Witkower, a co-author of the research from the University of Amsterdam, said that could have ramifications for areas ranging from online therapy to robots. "It's going to produce more realistic situations for white faces than other race faces," he said. The team cautions that such a situation could also mean perceptions of race end up being confounded with perceptions of being "human," adding that it could also perpetuate social biases, including in searches for missing children, which can rely on AI-generated faces.
The findings have been published in the journal Psychological Science.

Comments Filter:
  • Wasn't that a Rolling Stones song?
    Or was it the Moody Blues' "Nights In White Faces"?

  • by Press2ToContinue ( 2424598 ) on Tuesday November 14, 2023 @10:15PM (#64006307)
    Next in AI breakthroughs: convincing everyone that The Matrix is just a documentary in disguise. But seriously, if AI-generated white faces are more 'real' than actual humans, does that mean we've achieved peak uncanny valley or just started a new side quest in 'Simulator Simulator'? And let's not forget the golden rule of AI: Garbage in, racially biased garbage out. In other news, the AI for generating faces of color is still waiting for its Windows 95 update. On a side note, can we expect AI-generated faces on driver's licenses soon? It'd certainly make for interesting traffic stops: 'Sir, this license says you're a 6-foot-tall AI with perfect cheekbones, but you're clearly human and stuck in traffic just like the rest of us.'
    • Show me your hands!
    • by AvitarX ( 172628 )

      Looking at the faces.

      They were some AI looking real faces.

    • But seriously, if AI-generated white faces are more 'real' than actual humans, does that mean we've achieved peak uncanny valley or just started a new side quest in 'Simulator Simulator'?

      The uncanny valley doesn't exist.

    • Re: (Score:2, Troll)

      by AmiMoJo ( 196126 )

      It could be down to having more white people in the training data. It could also be that the test subjects were looking less critically at white faces, subconsciously.

      • There's also the fact that darker faces reflect fewer photons back to the imaging device and display less internal light-intensity contrast in their features.

        This does not seem to get discussed or evaluated, perhaps for reasons of sensitivity, but it's just physics, so should be fine to discuss.
        On a technical level I'd be interested in an objective assessment of whether this is a significant factor in the differential performance of the tech on different people.
        • by AmiMoJo ( 196126 )

          Wouldn't that make it harder to detect fakes?

          • It could make it slightly harder to train neural nets, even given the same amount of training data.

            A way of visualizing this is to consider what sometimes happens to the resolution of your smartphone camera pictures in low-light conditions. Smartphone cameras these days are doing all kinds of tricks with pixel mixing, interpolation, and whatnot to sharpen up images, change contrast, etc., but they sometimes do a worse job of that kind of processing when there is not as great a dynamic range of input light.
            • by AmiMoJo ( 196126 )

              Cameras can see dark skin just fine. It simply needs to be lit properly, and the exposure set correctly.

              Of course, a lot of auto settings are calibrated for white skin.

              • Fair enough (no pun intended), but in the large collections of images that are available, brighter lighting may not be present often enough to fully compensate for differences in reflectivity. Someone should actually study whether this is a factor in the image training datasets that are reasonably available today. They would probably get crucified for attempting such a study, mind you. I'm simply saying it may be a factor, from first-principles physics.
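The photon point in this sub-thread can be made concrete with a back-of-the-envelope simulation. This is only a sketch under assumed numbers (the reflectance values, photon budget, and shadow depth below are illustrative, not measurements), and real cameras add read noise, gamma encoding, and denoising on top of the pure shot noise modelled here:

```python
import numpy as np

rng = np.random.default_rng(0)

def shadow_snr(albedo, photons_at_full_light=2000, shadow_fraction=0.5, trials=100_000):
    # Shot-noise-limited SNR of a shadowed patch: mean / std of a Poisson photon count.
    mean_photons = photons_at_full_light * albedo * shadow_fraction
    counts = rng.poisson(mean_photons, trials)
    return counts.mean() / counts.std()

for albedo in (0.60, 0.15):  # illustrative "lighter" vs "darker" reflectances
    print(f"albedo {albedo}: shadow SNR ~ {shadow_snr(albedo):.1f}")
```

Because shot-noise SNR grows roughly with the square root of the photon count, the darker patch here ends up with about half the SNR in its shadows, which is the kind of difference that could plausibly propagate into training-data quality.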
      • by Anonymous Coward

        They used security cam footage from the 2020 riots to train the models. It was predominantly... err mostly... uhhh... almost completely... umm ...

    • by mjwx ( 966435 )

      Next in AI breakthroughs: convincing everyone that The Matrix is just a documentary in disguise. But seriously, if AI-generated white faces are more 'real' than actual humans, does that mean we've achieved peak uncanny valley or just started a new side quest in 'Simulator Simulator'? And let's not forget the golden rule of AI: Garbage in, racially biased garbage out. In other news, the AI for generating faces of color is still waiting for its Windows 95 update. On a side note, can we expect AI-generated faces on driver's licenses soon? It'd certainly make for interesting traffic stops: 'Sir, this license says you're a 6-foot-tall AI with perfect cheekbones, but you're clearly human and stuck in traffic just like the rest of us.

      Are they more realistic... or just more aesthetically pleasing?

      Humans have flaws, and imperfections tend to be more noticeable in photographs. I know a few girls who look absolutely stunning in person, but you tend to notice skin imperfections, gauntness, etc. in photographs. Photos of celebs are so heavily Photoshopped that the pictures the paps grab can be almost unrecognisable.

      So are they just more liked because they are closer to the idea of a perfect white face?

  • TFA mentions that one possible reason the AI-generated faces of people of color were less convincing is that the training data included fewer people of color. That pretty much tracks with AI turning racist, sexist and creepy [cnn.com] if it picked up those things from its training data.

    I guess people still like to anthropomorphize AI, but it's not capable of creativity or truly understanding social issues. It's just a complicated algorithm designed to process a dataset in a way that appears to mimic thought, but it isn't actually thinking.

    • Just crank up the temperature until it starts acting up. It's like alcohol for LLMs.
      • It's like coming to you and saying "hey, don't say the thing you really want to say or think is most likely correct, but pick the third or fourth thing out of your list of things you could possibly say right now and say that instead."
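For anyone wondering what "temperature" does mechanically, here is a minimal sketch of softmax sampling with a temperature knob. The logits are made-up toy scores, and real LLM samplers usually combine this with top-k or top-p truncation:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Softmax sampling: higher temperature flattens the distribution, so lower-ranked
    # tokens (the "third or fourth thing on the list") get picked more often.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [5.0, 3.0, 2.0, 1.0]  # toy scores for four candidate tokens
for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(10_000)]
    print(f"temperature {t}: pick frequencies {np.bincount(picks, minlength=4) / 10_000}")
```

At temperature 0.2 the top token is chosen almost every time; at 2.0 the lower-ranked candidates get a substantial share, which is the "acting up" described above.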
    • by dgatwood ( 11270 )

      TFA mentions that one possible reason the AI-generated faces of people of color were less convincing is that the training data included fewer people of color.

      That's one possible reason. But even with a pile of training data, there's no guarantee that it would work as well. In all likelihood, training AIs on low-contrast images is harder than training them on higher-contrast images. Darker skin means smaller differences between shadowed areas and non-shadowed areas, so there's less for the AI to work with. I have a suspicion that if they switched to, say, 12-bits-per-channel color for their training data, they would get better results. With 8-bit-per-channel color, those small differences just don't leave many distinct values to learn from.
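The bit-depth point can be illustrated with a toy quantization sketch. The reflectance values are invented for illustration, and the encoding here is naive linear light; real pipelines apply gamma, which hands more code values to shadows and so narrows (but does not erase) the gap:

```python
import numpy as np

def distinct_levels(albedo, bits, samples=1000):
    # Count distinct quantized code values across a shadow-to-highlight ramp.
    illumination = np.linspace(0.5, 1.0, samples)  # same lighting ramp for every face
    luminance = albedo * illumination              # darker skin -> smaller absolute range
    codes = np.round(luminance * (2**bits - 1))    # naive linear quantization, no gamma
    return len(np.unique(codes))

for albedo in (0.60, 0.15):  # illustrative "lighter" vs "darker" reflectances
    print(albedo, "8-bit:", distinct_levels(albedo, 8), "12-bit:", distinct_levels(albedo, 12))
```

Under these toy numbers the darker ramp spans only about 20 distinct 8-bit values, while 12 bits restores roughly 300, which is the intuition behind the suggestion above.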

    • As I see it, A.I. being better at drawing white people affects white people worse than people of color.

      Seriously, why do I want people to be able to fake me better? So someone can frame me for a crime, or more easily replace me in acting jobs?

      The only advantage I can see is that if I don't want to attend an online meeting, I can get an A.I. to pretend it is me.

  • Places like Midjourney gatekeep a LOT.
    They obviously have to have a list of banned terms, but the ones they choose control what can be output. "White" is ok. "Black" will get you banned. "Slim" and "Skinny" are ok, but "full-figured" will get you banned. "Heterosexual" is ok but "gay" will get you banned.
    Also, they keep the list of banned words secret, so you have no idea why you get banned or when. You keep it to white, straight, skinny vanilla females or GTFO. Don't even try to create people of color.

    • This is Slashdot. Grab Automatic1111, run your own models, and stop complaining about the corporate stuff you have no control over.
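If you do want to go the run-it-yourself route, a minimal sketch using Hugging Face diffusers (one alternative to the Automatic1111 web UI named above) looks roughly like this; the checkpoint name and prompt are just example values, and you still inherit whatever safety filter the checkpoint ships with:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load any locally cached Stable Diffusion checkpoint; this name is only an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # float16 assumes a CUDA GPU; drop it for CPU-only use
)
pipe = pipe.to("cuda")

image = pipe("photorealistic portrait photo, studio lighting").images[0]
image.save("portrait.png")
```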

    • Re: (Score:3, Insightful)

      by Baron_Yam ( 643147 )

      There's the problem that if you don't ban those things, every 'edgy' white teen (in addition to the actual racists) is going to use them to be as offensive as possible, and make it a lot more likely your company will get negative press.

      • So you are suggesting the answer to stop empowering potential offensive use of AI by white supremacists is to simply enforce a pure, 100% Aryan AI output? Seriously? Seems like that plan only benefits white supremacists.

        • Kind of like how places are now trying to fight racism with segregation.
        • So you are suggesting the answer to stop empowering potential offensive use of AI by white supremacists is to simply enforce a pure, 100% Aryan AI output? Seriously? Seems like that plan only benefits white supremacists.

          I genuinely tried to understand this when what's now known as "Pearl Milling Company" removed their previous mascot from the box. I figured since I'm a gay man, would I be offended my minority group was being used in a stereotypical/cliched way to promote a product? For example, if it was instead called "Cabana Boy Pancake and Waffle Mix" with a cute blonde twink on the front of the box.

          Nah, I wouldn't be offended in the least. I'd probably even buy it if it made halfway decent pancakes.

        • Nope. I'm saying companies have legal and economic incentives to do that.

      • Free speech isn't a problem.
      • There's the problem that if you don't ban those things, every 'edgy' white teen (in addition to the actual racists) is going to use them to be as offensive as possible, and make it a lot more likely your company will get negative press.

        The company gets negative press? You mean by the (State-sponsored) 'edgy' employee being paid to manufacture that negative press?

        This ain't just for racists and nerds picked on in high school. Real problems require real scenarios.

    • "Heterosexual" is ok but "gay" will get you banned.

      First Florida and now an AI with its own "don't say gay" law? This timeline keeps getting weirder and weirder.

    • The study did not use Midjourney to generate images.
    • Places like Midjourney gatekeep a LOT.
      They obviously have to have a list of banned terms, but the ones they choose control what can be output. "White" is ok. "Black" will get you banned. "Slim" and "Skinny" are ok, but "full-figured" will get you banned. "Heterosexual" is ok but "gay" will get you banned.

      I looked at the gallery and I didn't see anything that could imply banned terms. https://legacy.midjourney.com/... [midjourney.com]

      What I did see are plenty of images of what look like photos out of advertisements. There were plenty of photos of people "in uniform". What do I mean by "in uniform"? I noticed that in advertisements there's a fairly narrow range of what the models/actors may look like, perhaps to not offend the target audience and/or not have people with such an unusual appearance that they distract from the featured product.

    • by ruddk ( 5153113 )

      OpenAI does it as well, and the problem is that you often don't know why things stopped working and you get an error.
      When you try to make your own GPT, you might at some point feed it information that triggers something and everything breaks, and it is not clear what offended it.

  • Here are some AI and Human faces [sagepub.com], in case you want to figure it out for yourself. I notice that the photos chosen as AI seem to look like they were filtered, whereas the ones chosen as human have fine detail.
    • I have to agree. The ones picked to be AI generally seem to be a bit blurry, while the ones judged real seem to have realistic shadows to them.

      Human males 37 and 47, for example: the lighting seems odd, and their features just "don't add up" in that particular shot. Overall, they're the ones with "smoother" skin, or the shot is particularly symmetrical.

      With AI male 13, what triggers me is actually the background - the horizon doesn't match up.

      Keep in mind that if you take enough high-speed imagery of real humans, some frames will look just as "off" as these do.

  • One thing that likely skews the results is that the AI almost certainly had more white faces to be trained on, because the database of white faces is larger due to cultural and economic differences around the world.

    Another thing is that among white faces there will be more variation in face shape, eye color, and hair color, which could help fool humans into thinking an AI-generated face is real. This variety works against us humans because we have an idea in our heads of what a random human face should look like.

  • This made me think of two different things:

    1. Likely the training set for the AI contains a much larger sample of photos of white faces, so it is better trained on them. That will grant better success with this kind of image.

    2. We people are increasingly trained to see pictures of our fellow humans heavily filtered or straight-up photoshopped. Heck, if I take a photo with my phone, usually some "AI" will intervene and change the look in an uncontrollable way.

  • by bradley13 ( 1118935 ) on Wednesday November 15, 2023 @04:49AM (#64006721) Homepage

    Mentioning white vs. non-white recognition rates is bizarre. The study in this paper explicitly included only white faces, and the participants in the study were explicitly white. The only evidence they have concerning non-white faces comes from their "reanalysis" of someone else's study. This is a brief section, completely out of place, since it has nothing to do with the actual subject of the paper: the analysis of attributes leading to hyperrealism.

    I can only assume that the authors mention white vs. non-white in hopes of rousing anger and increasing the visibility of their paper. I.e., they are trying to become clickbait.

  • You'd think this confirms what people say about feeding in more data of a particular kind making the model better with that kind of data... Most of the pictures of people it has seen on the internet are probably pictures of white people, so it's better at drawing them. This is exactly what we would expect. If you want it to be better at drawing other people's faces, then feed it a bunch more of them in training.
  • The headline implies that they are "more real", but the article actually just talks about more people being convinced that they were real. That is a totally different thing and has nothing to do with the headline. Shame on you, Slashdot.
    • by skam240 ( 789197 ) on Wednesday November 15, 2023 @09:57AM (#64006975)

      No, shame on you. Straight from the article:

      "The results from 124 participants reveal that 66% of AI images were rated as human compared with 51% of real images."

      The article is actually pretty clear on this and alludes to this finding several times prior to giving the actual data here. You very clearly didn't read the article.

  • You know there's nothing malignant behind this, right?

    " the algorithm used to generate AI faces was largely trained on images of white people." ...is likely mostly do to with the simple realities of light, physics, and the ease with which a camera can draw distinctions on a light-colored face vs a dark-colored face.
    That, and the other reality that online digital photography has a population bias in favor of western, mostly white, rich countries.

  • You can't trust these real pale-faces.

  • by Mal-2 ( 675116 ) on Wednesday November 15, 2023 @11:57AM (#64007305) Homepage Journal

    I believe it's a shortage of training data, because I've received some incredibly realistic images of Black women when requesting things like "a paladin in armor, carrying a shield and sword". Then, after I got over the fact that these were some of the best images generated by far, I realized that every time I got a Black woman in the picture, it had one of three different faces on it. That was all it had. It understood them fine and could generate them quite convincingly, but it didn't have very many to pick from.

  • 315 participants, 800 total faces.

    Do not expect this result to be reproducible.
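Whether the headline gap could plausibly be small-sample noise is easy to sanity-check with a two-proportion z-test. The denominators below are assumed purely for illustration (the summary gives only the percentages, 66% vs. 51% judged "human", not the number of ratings behind them):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical rating counts per condition -- not reported in the summary above.
n_ai, n_real = 1000, 1000
p_ai, p_real = 0.66, 0.51

pooled = (p_ai * n_ai + p_real * n_real) / (n_ai + n_real)
se = sqrt(pooled * (1 - pooled) * (1 / n_ai + 1 / n_real))
z = (p_ai - p_real) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.1f}, two-sided p = {p_value:.1e}")
```

With anything like a thousand ratings per condition, the 15-point gap sits far outside chance, so the reproducibility concern is less about statistical power than about whether the specific set of stimulus faces generalizes.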
