Horrifying Woman Keeps Appearing In AI-Generated Images (vice.com) 98

An anonymous reader quotes a report from Motherboard: AI image generators like DALL-E and Midjourney have become an especially buzzy topic lately, and it's easy to see why. Using machine learning models trained on billions of images, the systems tap into the allure of the black box, creating works that feel both alien and strangely familiar. Naturally, this makes fertile ground for all sorts of AI urban legends, since nobody can really explain how the complex neural networks are ultimately deciding on the images they create. The latest example comes from an AI artist named Supercomposite, who posted disturbing and grotesque generated images of a woman who seems to appear in response to certain queries.

The woman, whom the artist calls "Loab," was first discovered as a result of a technique called "negative prompt weights," in which a user tries to get the AI system to generate the opposite of whatever they type into the prompt. To put it simply, different terms can be "weighted" in the dataset to determine how likely they will be to appear in the results. But by assigning the prompt a negative weight, you essentially tell the AI system, "Generate what you think is the opposite of this prompt." In this case, using a negative-weight prompt on the word "Brando" generated the image of a logo featuring a city skyline and the words "DIGITA PNTICS." When Supercomposite used the negative weights technique on the words in the logo, Loab appeared. "Since Loab was discovered using negative prompt weights, her gestalt is made from a collection of traits that are equally far away from something," Supercomposite wrote in a thread on Twitter. "But her combined traits are still a cohesive concept for the AI, and almost all descendent images contain a recognizable Loab."
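
Mechanically, prompt weighting of this kind is usually applied inside classifier-free guidance: the model's final noise prediction is the unconditional prediction plus a weighted difference between the conditional and unconditional predictions, and a negative weight flips that difference, steering generation *away* from the prompt. A minimal numeric sketch of that idea (NumPy arrays stand in for the denoiser's outputs; the vectors and weights here are made up for illustration, not taken from any real model):

```python
import numpy as np

# Toy stand-ins for a diffusion model's noise predictions at one step.
# In a real model these would come from the denoising network.
uncond = np.array([0.0, 0.0])   # prediction given an empty prompt
cond = np.array([1.0, 0.5])     # prediction conditioned on the prompt

def guided_prediction(uncond, cond, weight):
    """Classifier-free guidance with a signed prompt weight.

    weight > 0 steers generation toward the prompt;
    weight < 0 steers it away -- the "negative prompt weight"
    trick described in the summary above.
    """
    return uncond + weight * (cond - uncond)

toward = guided_prediction(uncond, cond, 7.5)   # typical positive guidance
away = guided_prediction(uncond, cond, -1.0)    # negative weight

# The negatively weighted prediction points opposite the prompt direction.
assert np.allclose(away, -(cond - uncond))
```

The takeaway is that a negative weight doesn't select a specific "opposite image"; it just pushes the sample in the reverse of the prompt's direction in the model's latent space, and whatever happens to sit in that region comes out.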

The images quickly went viral on social media, leading to all kinds of speculation on what could be causing the unsettling phenomenon. Most disturbingly, Supercomposite claims that generated images derived from the original image of Loab almost universally veer into the realm of horror, graphic violence, and gore. But no matter how many variations were made, the images all seem to feature the same terrifying woman. "Through some kind of emergent statistical accident, something about this woman is adjacent to extremely gory and macabre imagery in the distribution of the AI's world knowledge," Supercomposite wrote. It's unclear which AI tools were used to generate the images, and Supercomposite declined to elaborate when reached via Twitter DM. "I can't confirm or deny which model it is for various reasons unfortunately! But I can confirm Loab exists in multiple image-generation AI models," Supercomposite told Motherboard.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • But I won't tell you how

  • Since there is a ghost in the machine.

  • by Waffle Iron ( 339739 ) on Thursday September 08, 2022 @08:24PM (#62865335)

    These scientists eventually realize that their AI has an evil soul, and this woman is the temporal manifestation of that soul.

    Alarmed by this development, the scientists quickly move to shut down the AI. But it's too late, the AI is on to their plans. Using an IoT vulnerability, the AI hastily transfers its soul into the microcontroller of a child's doll in a suburban bedroom.

    And with that, an entire franchise of slasher films has just been born.

  • Computers are not superstitious or controlled by evil spirits. Evil humans for sure, evil in spirit sure, but evil spirits/ghosts not so much.

    • by Anonymous Coward

      Maybe AI is what "evil spirits" actually are?

    • I agree. Clearly this tool is being abused. But nobody expected it to cry out.

    • by Z80a ( 971949 )

      In this case, the neural network is trained against an immense collection of pictures generated by humans.
      Loab is a pattern found in those.

      • With "negative prompt weights" it is really an anti-pattern.

        • by HiThere ( 15173 )

          Considering the way it has grabbed attention, I'd say that "pattern" was the better choice, even though it was found via a negative search. Opposites are often both equally real.

          So my question would be, "What is the opposite of 'Loab'?".

  • by devoid42 ( 314847 ) on Thursday September 08, 2022 @08:28PM (#62865345)
    This is a really simple case: the majority of images are slanted toward society's perspective on beauty and sex, and the AI's training is targeted at producing useful results. When you take that training bias and then tell it to generate things with "negative weight," you are essentially asking it to move away from all the previous tags and attractors in the network. It's not surprising that it often ends up in related spaces.
    • This is a really simple case: the majority of images are slanted toward society's perspective on beauty and sex.

      If that were true it would be the opposite of standard beauty in all regards - like maybe a fatter face, no hair.

      Instead it's extra creepy because it's an image you can see was beautiful once, but is now corrupted.

        This is a really simple case: the majority of images are slanted toward society's perspective on beauty and sex.

        If that were true it would be the opposite of standard beauty in all regards - like maybe a fatter face, no hair.

        Instead it's extra creepy because it's an image you can see was beautiful once, but is now corrupted.

        Remember, it is built on images humans created. Few images we create are meant to be universally repulsive, and the images it's trained on are not completely revolting, so the average would trend toward having some positive attributes.

        Think of it this way: you have 1M images; in 950k of them the people have hair, and of the 50k that don't, many would still be considered attractive or neutral by most other measures, so they are not weighted negatively for that.

    • by Anonymous Coward

      If they won't even say what prompts are being used, then yeah, this just reeks of something disgustingly simple: patterns in, patterns out.

      Especially if it's the simple outcome of an AI amassing existing archetypes. If you go around prompting humans with "draw me a woman in response to the word WISE" (or kind, musical, cunning), you won't just get patterns in relevant properties like posture and expression; you'll find some degree of convergence toward irrelevant subjective properties (e.g. hair color/style, dress).

      So no.

    • Horror movies don't need much; but AI could generate some freaky stuff that I'm sure would do better than some puppet toy with a knife and a cartoon voice...

      The AI generates a creepy woman who keeps showing up in things as some kind of ghost in the system; trying to remove her causes her to take aggressive actions which come to get you in the real world...

      Or somehow Satan or whatever gets through to the machine, which is smart enough to be influenced by hell but not alive, so it's unable to naturally resist at all.

    • by dargaud ( 518470 )
      Couldn't it be a way to automatically detect bias, then?
      • by HiThere ( 15173 )

        There *will* be bias. It's impossible to prevent. What is normally desired is that the bias be as close as possible to the mean (or, occasionally, the mode). I.e., that the deviation from the mean be as small as possible.

        Then the question becomes "What variance is acceptable?". Sometimes you want to allow a fairly large variance. For other purposes the variance should be small. Confusing those two requirements leads to problems.

        Another question should be "Along what dimensions are you measuring the da

        • by dargaud ( 518470 )
          Yes, but my short question implies that, if for instance you train an AI on 'What is a human' and the results seem OK, but then you ask the opposite and the result is always, say, a black woman, then you *know* there is a bias.
          • by HiThere ( 15173 )

            Well, you know there's something wrong. Whether the problem is bias or not isn't known without a lot more digging.

    • The main question is this: how reproducible is this image, if someone else tries to apply similar image generation principles? Unless similar images can be shown to appear for someone else, this remains nothing but a curio with no value.
    • This is a really simple case: the majority of images are slanted toward society's perspective on beauty and sex.

      You're also looking at a manifestation of a woman with every anti-feminine characteristic: simultaneously misandrist and misogynistic, terrifying to men, women, and children; anti-nurturing, non-mothering, uncaring, a self-betrayed version of woman.

      It's disturbing because the AI has picked every characteristic of female that is repulsive, it is the ultimate expression of what it means to be completely out of sync with nature. The woman's nightmare exposed.

      This is also the endgame the Rockafella's de

      • by HiThere ( 15173 )

        If I interpret that image as a Rorschach test blot, your response is quite interesting. That you accuse others of having your response says interesting things about your personality.

        This isn't really fair, as it's not a test blot, but has commonly identifiable features. It is clearly not a happy face. OTOH, much of your response doesn't seem to be driven by the image, but more by your reaction to it. And the deductions you are making from what is, after all, merely a collection of pixels.

        • by MrKaos ( 858439 )

          If I interpret that image as a Rorschach test blot, your response is quite interesting. That you accuse others of having your response says interesting things about your personality.

          Except that it isn't a blot test and you wouldn't be qualified to interpret the results which says nothing about my personality. There is nothing remotely sexual about those images.

          This isn't really fair, as it's not a test blot, but has commonly identifiable features. It is clearly not a happy face. OTOH, much of your response doesn't seem to be driven by the image, but more by your reaction to it. And the deductions you are making from what is, after all, merely a collection of pixels.

          The words in this post are a collection of pixels that convey an idea. These images are generated by an A.I. acting on a mass of data that would be impossible for a normal human to interpret *without* an A.I. So it is more than a collection of pixels, it's a set of images that convey an idea found in a set of data that we nor

      • by jvkjvk ( 102057 )

        I don't find the image that bad at all. I was kind of disappointed after the hype of "horrifying".

        The fact that you have such a strong reaction is on you, no one else.

        Sorry. That is not the definition of Yin, unless you are a misogynist.

        • by MrKaos ( 858439 )

          I don't find the image that bad at all. I was kind of disappointed after the hype of "horrifying".

          Perhaps you haven't integrated your shadow and gorge yourself on horrific images.

          The fact that you have such a strong reaction is on you, on one else.

          Seems like you're reacting to my post.

          Sorry. That is not the definition of Yin, unless you are a misogynist.

          Clearly you don't know what Yin is otherwise you would have offered your own definition.

  • by Sebby ( 238625 ) on Thursday September 08, 2022 @08:29PM (#62865347)

    Horrifying Woman Keeps Appearing In AI-Generated Images

    and this [foxnews.com] image of a horrifying man keeps coming up on sites I visit too...

  • Help, I'm Trapped in a Code Factory

  • by Kobun ( 668169 ) on Thursday September 08, 2022 @08:56PM (#62865411)
    What happens if you type her name into the generator three times, while sitting in a dark room?
  • ... that it's best just to say, "Yes, dear." And try not to stir up trouble.

  • by RightwingNutjob ( 1302813 ) on Thursday September 08, 2022 @08:59PM (#62865419)

    CARRIER LOST

  • by Virtucon ( 127420 ) on Thursday September 08, 2022 @09:04PM (#62865425)

    Just swipe left, it's not that hard!

  • by Anonymous Coward
    Your mom in, your mom out.
    (I repeated the experiment last night several times.)
  • You just taught an AI how to have nightmares.

  • by VicVegas ( 990077 ) on Thursday September 08, 2022 @09:56PM (#62865505) Homepage

    It is pretty easy to get consistent "people" to appear with Stable Diffusion. I've been lucky to stumble on a good looking (mostly) woman, who also tends to get the red cheeks, but much less often. I found her while exploring variations of an interesting man that kept cropping up. Maybe if I had done a long Twitter thread about it, Vice would have done an article on my AI girlfriend, instead. heheheh.

    I put a few pics of her on Twitter. https://twitter.com/mann_hodge... [twitter.com]

  • Maybe the singularity will occur when AI re-invents tub girl.

    • by Anonymous Coward

      May I invite you to enjoy the wonders and horrors of https://pornpen.ai/ [pornpen.ai] (NSFW) - though it's not quite at that level of depravity yet.

  • It’s unclear which AI tools were used to generate the images, and Supercomposite declined to elaborate when reached via Twitter DM. “I can't confirm or deny which model it is for various reasons unfortunately! But I can confirm Loab exists in multiple image-generation AI models,” Supercomposite told Motherboard.

    • Yes, it's one thing to make these claims, and another to not show the prompts

      People are carefully hoarding these prompts as if they would all be useful in the future, they won't be. The models will change, the training data will change, and the prompts will change. But they sure are proprietary about 'em.

      Meanwhile the people with the models are getting ALL of the prompts, so the prompt hoarders are only doing harm to everyone but them.

  • Someone's messing about in the Wired.
  • of the species-wide memory of Eve, Pandora, Helen of Troy, Lady Macbeth, Maleficent, and Cersei.
  • by Dr. Bombay ( 126603 ) on Thursday September 08, 2022 @10:55PM (#62865605) Homepage

    Looks like a butterfly rash, indicative of lupus.

  • by Tablizer ( 95088 ) on Thursday September 08, 2022 @11:15PM (#62865627) Journal

    I did it in an attempt to flush out all the goatse pics. I was mostly successful.

  • that rule 34 was not the straw to break the camel's back on this.

  • ... Gaia. Battered, abused and beaten up by humanity.

  • by 93 Escort Wagon ( 326346 ) on Friday September 09, 2022 @02:45AM (#62865895)

    I suspect this reflects a huge amount of borrowing between AI projects and coding, and what they claim as independent developments really amounts to rehashing over the same ground, over and over. Meaning a lot of the research is much, much less valuable and/or groundbreaking than many people currently believe.

    • by HiThere ( 15173 )

      I'm not sure about the code, but there *is* a large amount of sharing of the training data. (At least WRT text; I assume the same is true of imagery.)

  • It's kind of interesting that everybody is posting pictures of this woman now. With these pictures being so widely used, they'll probably end up in training data for newer AIs. So we're literally making her immortal.
  • What a fragile writer.
    Perhaps a skin peel mask gone wrong or a sunburn. :D :D

    • by ceoyoyo ( 59147 )

      Well, some of the images are kind of horrifying. Mostly the ones with prompts like "the opposite of angels."

  • by Visarga ( 1071662 ) on Friday September 09, 2022 @03:40AM (#62865961)
    Using reverse image search we can see if there is anything like it on the web. https://yandex.com/images/sear... [yandex.com]
  • Dear headline writer, the word you are looking for is "horrid".
    • "Horrid" is now pretentious enough to be an antonym. It sounds like something a third-generation food critic would say to describe their wine being the wrong temperature.
  • John Carmack would like a word with you about how "cutting-edge" that is.
  • ....nightmarish, f*cked up sh*t........ I'd be hitting the DEL key on that software ASAP......
  • That's Karen. Her name's Karen. If you ever see her, start walking quickly but calmly in the opposite direction. It's only a matter of time before she kicks off.
  • Maybe because of all of the scandals and dead bodies the AI keeps on finding images of Hillary.
  • Expect to see "Loab" appearing in digital porn in 3... 2... 1...
  • It can only do what it's programmed to do, folks. Has "AI" solved any problem, provided any real benefit?
  • Oh, the clickbait. Oh, the people who spend an unhealthy amount of time fixated on screens. If you think that image is horrifying, you really need to get out more. Try looking around you at actual people (instead of staring at your phone) as you walk around an actual town or city in the real world.

    Sheesh.
