Researchers Create 'Psychedelic' Stickers That Confuse AI Image Recognition (techcrunch.com) 112

"Researchers at Google were able to create little stickers with 'psychedelic'-looking patterns on them that could trick AI image-classifying algorithms into misclassifying images of objects they would normally be able to recognize," writes amxcoder: The patterned stickers work by tricking the image recognition algorithm into focusing on, and studying, the little pattern on the small sticker -- and ignoring the rest of the image, including the actual object in the picture... The images on the stickers were created by the researchers using knowledge of the features, shapes, patterns, and colors that the image recognition algorithms look for and focus on.

These stickers were created so that the algorithm finds them 'more interesting' than the rest of the image and will focus most of its attention on analyzing the pattern, while giving the rest of the image content a lower importance, thus ignoring it or getting confused by it.

The technique "works in the real world, and can be disguised as an innocuous sticker," note the researchers -- describing them as "targeted adversarial image patches."
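In spirit, a patch like this is found by gradient ascent on the classifier's score for an attacker-chosen target class, optimizing only the sticker's pixels. Here is a minimal NumPy sketch using a toy linear "classifier" -- the model, image size, patch location, and target class are all illustrative assumptions, not the researchers' actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: one linear layer over a
# flattened 8x8 grayscale image, 10 output classes. (Illustrative
# only -- the real attack targets deep convolutional networks.)
D, C = 8 * 8, 10
W = rng.normal(size=(C, D))

def scores(image):
    return W @ image.reshape(-1)

def apply_patch(image, patch, y0, x0):
    out = image.copy().reshape(8, 8)
    out[y0:y0 + 3, x0:x0 + 3] = patch  # paste a 3x3 "sticker"
    return out

image = rng.uniform(size=(8, 8))
patch = rng.uniform(size=(3, 3))
target = 7  # class we want the classifier to report (e.g. "toaster")

# Gradient ascent on the target-class score with respect to the patch
# pixels only. For a linear model that gradient is simply the slice of
# W covering the patched pixels; a deep net would use backprop instead.
before = scores(apply_patch(image, patch, 2, 2))[target]
grad = W[target].reshape(8, 8)[2:5, 2:5]
for _ in range(50):
    patch = np.clip(patch + 0.1 * grad, 0.0, 1.0)
after = scores(apply_patch(image, patch, 2, 2))[target]
print(before, after)  # the target-class score rises as the patch is optimized
```

A printed sticker additionally has to survive scaling, rotation, and camera noise, which the toy version above ignores; it only shows the core idea of pushing the patch pixels up the target-class gradient.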
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Detail vs shape (Score:5, Interesting)

    by QuietLagoon ( 813062 ) on Sunday January 07, 2018 @12:38PM (#55880713)
    It looks as if the AI is concentrating on the area with the most detail, even though it is not really relevant. I've seen similar, ummmm, distractions confuse AI. For example, disguising a stop sign [globalnews.ca] so that a self-driving car is confused.
    • Re:Detail vs shape (Score:5, Insightful)

      by religionofpeas ( 4511805 ) on Sunday January 07, 2018 @12:46PM (#55880769)

      Humans have similar problems. Instead of stop sign, they sometimes concentrate on areas with the most detail, like a smartphone.

    • Look! A squirrel!

    • by Anonymous Coward

This is an important point. Trying to confuse a self-driving car is dangerous and stupid, but carrying these things around to confuse some marketing harvester is good fun. I bet I know how laws will get written if this becomes a thing, though...

      • Re: (Score:2, Insightful)

        by arth1 ( 260657 )

        Trying to confuse a self driving car is dangerous and stupid

        Not necessarily. It could be useful for sabotage against other countries, or for stopping/disabling a car that has lost its mind, so to speak.

    • by Anonymous Coward

      If we can just find the right impossible 3d shape, we can infect the collective with it and shut it down for good!

    • Or as another simpler example, my first Straight Talk phone not being able to correctly scan most UPCs. My second and current one does fine though.

    • by hawguy ( 1600213 )

      Of course humans can also be distracted by certain things:

      http://97x.com/a-naked-woman-s... [97x.com]

    • A human looks at that picture, sees the banana and "thing" are sitting on a flat surface, and decides they must be about the same distance so their size in their picture is their actual scale. The banana is a lot bigger, so the human decides it is more important than the "thing".

An AI looks at that picture, sees the banana and "thing", but crucially doesn't estimate distance. Since the "thing" has a lot more detail, the AI decides it must be further away, and its greater detail means it's the more impo
    • by mikael ( 484 )

That's why human vision works on segmentation: breaking down the scene into a collage of cut-out shapes of different textures, then using stereoscopic depth perception and occlusion to figure out where they are relative to each other, then using image classification to figure out what each object is. The downside is that you can camouflage anything simply by blurring the edges or by using the razzle-dazzle techniques used in World War II.

      https://upload.wikimedia.org/w... [wikimedia.org]

  • by fluffernutter ( 1411889 ) on Sunday January 07, 2018 @12:38PM (#55880715)
    Oh no! Our spying may be tampered with!
  • Retrain. (Score:2, Insightful)

    1. Add stickers to images.
    2. Retrain network
    3. Stickers useless.
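The retraining loop above can be demonstrated end to end on a toy problem. Below, a plain logistic-regression "classifier" (everything here is an illustrative sketch, not anyone's production system) is fooled by a fixed bright "sticker", then retrained on stickered images, after which the sticker stops working:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16               # tiny 16-pixel "images"
PATCH = slice(0, 8)  # the sticker covers the first 8 pixels

def make_data(n, patched=False):
    # class 0 = dim images (pixels in 0..0.4), class 1 = bright (0.6..1.0)
    y = rng.integers(0, 2, size=n)
    X = rng.uniform(size=(n, D)) * 0.4 + 0.6 * y[:, None]
    if patched:
        X[:, PATCH] = 1.0  # paste a maximally bright sticker on every image
    return X, y

def train(X, y, steps=3000, lr=0.2):
    # plain logistic regression via gradient descent, with a bias column
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(D + 1)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean(((Xb @ w) > 0) == y))

# 1. Train on clean images only; the bright sticker then fools the
#    classifier, because "bright" correlates with class 1.
Xc, yc = make_data(500)
w = train(Xc, yc)
Xa, ya = make_data(500, patched=True)
fooled_acc = accuracy(w, Xa, ya)

# 2. Retrain with stickered images mixed in; the classifier learns to
#    ignore the patched pixels, and the sticker stops working.
Xr, yr = np.vstack([Xc, Xa]), np.concatenate([yc, ya])
w2 = train(Xr, yr)
retrained_acc = accuracy(w2, Xa, ya)
print(fooled_acc, retrained_acc)
```

As the follow-up comments note, this only works against a *fixed* sticker: if attackers keep generating new patterns, the defender has to keep retraining, and the arms race continues.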

    • Adaptive entropy is fun! This is pure nerd stuff and will become a regular sport, we can hope.

    • 1. Add random stickers to images.
      2. Need to retrain network constantly.
      3. Network useless.

4. Kiddies make new patterns faster than researchers can learn them; it's whack-a-mole!

      • That's more like it. The stickers are acting like noise which makes the network useless.

That reminds me of a situation that's not really about learning networks, but there's a relation. I saw a post very recently from a guy who had posted white-noise videos on YouTube, and he got inundated with copyright notices because the automated copyright detection found all kinds of patterns in them.

    • Computer Chess (Score:3, Interesting)

      by bussdriver ( 620565 )

      With a similar enough network or access to the targeted network, simply create a network that learns to fool the other one. Loosely like two computers playing chess but more like a spam generator to defeat filters.

Adversarial network learning... just not an official use of it... The solution is to add this kind of learning to the network... except it won't be foolproof until the network is quite good, since the adversary can have as many variations of attack as the classifier has in recognition.

      If you c

  • Amazon will be selling hats and scarves with psychedelic looking patterns on them.
  • by iggymanz ( 596061 ) on Sunday January 07, 2018 @01:05PM (#55880891)

Remember the "world's ugliest t-shirt" in one of William Gibson's novels? All cameras in that book's world were compelled by their firmware to fill the image of the wearer of that shirt with background. One could laugh at such a notion, except... scanners won't do banknotes.

  • "I thought what I'd do was I'd pretend I was one of those deaf-mutes"

    Reminds me of Ghost in the Shell's Laughing Man calling card... His sticker would appear over people's faces in VR if they were infected.

  • ALPR? (Score:4, Interesting)

    by Ralgha ( 666887 ) on Sunday January 07, 2018 @01:16PM (#55880937)
    Would one of these stickers on the bumper of my car defeat the automated license plate readers?
    • by Jeremi ( 14640 )

      If you glue enough of them over the license numbers/letters, definitely.

    • Would one of these stickers on the bumper of my car defeat the automated license plate readers?

      Not really, no, because license plate photos are generally interpreted by humans, not AIs.

      • by Dog-Cow ( 21281 )

        Huh? Ever drive on a modern toll road? Those cameras send data to a system that mails you a bill. No humans involved.

      • Re: (Score:2, Insightful)

        by Dog-Cow ( 21281 )

        To add to my previous comment: I regularly use parking garages that read my plate to know that I already paid at the kiosk. Again, no humans involved. Sounds like you live in the 80s. Not sure if that's 1980s or 1880s.

    • I'm thinking that creating bumper stickers in a common license plate font would be enough. It would be fun to try.
  • Just waiting for manufacturers to start selling $10 stickers, shirts, hats, backpacks, luggage tags etc.

    When's the IPO?

  • by Anonymous Coward

    When the robots take over our jobs and then decide we aren't needed, we'll just get them addicted to these stickers. They'll soon get bored with theirs and go looking to trade each other for new ones. Then they'll begin their own industry of trippy stickers so they can get a better high. All day they'll sit and run their batteries dry. RIP to the bots that get stuck in a while loop.

If the image were a true unknown, then who would know whether the sticker or the banana is more "key"? Just need a time constraint for the AI to write off a weird thing as "weird thing" and move on to the next pattern in the image.

    Now that I think about it, I would be curious how the AI would handle a jumble photo, and be able to identify all the stuff in the picture?
  • Actual Intelligence (Score:3, Interesting)

    by DCFusor ( 1763438 ) on Sunday January 07, 2018 @02:08PM (#55881139) Homepage
Is not as easily fooled as this pattern-matching NN, grossly and incorrectly hyped as artificial intelligence. Just saying - hype is hype no matter how much you want to believe you've got the next big thing and innovation (and in this case, NN research and pattern-matching work go WAY back).
People are also easily fooled, but in different ways. Researchers will update their networks to be more robust against this kind of trickery, and we'll move on.

      • Not if the learning models are based on neural nets. They have a fundamental limitation (in how they are very different from actual neurons).
        • Neural nets can approximate arbitrary functions to arbitrary precision, so where's the fundamental limitation ?

1. Incomplete training sets - no NN can "expect the unexpected". 2. NNs alone are just pattern matchers - there is no underlying understanding. A picture of a truck is a truck. A real intelligence would perhaps notice the edges of the painting... crappy analogy, but hopefully it communicates. 3. Knowing when you don't know - some types of NN can have confidence estimates, key word, estimate. But still, a blue truck against a blue sky in an intersection in the desert where there's almost no intersection
          • The definition of that arbitrary function is not known in the design phase. Its behavior is not known. Its variability is not known. Its susceptibility to false alarms and false positives in the presence of random and structured noise is not known. As this research has shown, that susceptibility appears to be quite high, and while the hackers know why, the designers may not. In computer geek terms: it's full of zero-day vulnerabilities waiting to be discovered.

            This sea of ambiguity is in direct contrast wi
I'll give you a hint: uniform vs non-uniform convergence. Both converge. But only one of them implies the other. If you really don't get how this is relevant, I will gladly explain the difference for a measly fee of $500 million (just think of all the startups which don't have to be funded and fail, and all the savings). If you do get the implications, you are welcome. I will not explain further though.
  • Put up a topologist's-sine-curve-weighted gradient. If there is an AI which can discern it, it's either not refined enough or it's the next step. I guarantee that no neural net will ever handle it.
  • They seem to mess Bender [youtube.com] up a bit.

    • We're whalers on the moon
      We carry a harpoon
      But there ain't no whales so we tell tall tales
      And sing a whaling tune!

      Address all complaints to the Monsanto Corporation!

  • Our "real" human visual algorithms are distracted by bright, shiny objects in a similar way. It's not just AI that can be fooled.

    • by Dog-Cow ( 21281 )

      You may be projecting your own lack of intelligence. There's a difference between being distracted and thinking the distraction is important to classifying what one is looking at. Humans like shiny, but they aren't going to look at a sticker and not see it's pasted on a car or a sign or a tree.

      • by sinij ( 911942 ) on Sunday January 07, 2018 @03:33PM (#55881493)
Humans do suffer from a similar problem; however, we have compensatory mechanisms to correct visual errors.

        Ever glanced at something, seen something weird and had to do a double-take? This is exactly what happened to you. Quick neural nets misidentified something and you had to do full image processing to clear the confusion up.

The reason humans know to do a double-take is that we have many other neural nets sitting on top of the image-identification nets. So when our image identification malfunctions, other nets red-flag it and do error correction. Sometimes it takes a long time to process. Sometimes we decide it is just safer to get the hello out of there (e.g. seeing ghosts).
      • Setting aside your needless insult, why DO we tend to be attracted to shiny objects? Perhaps it's because at some level, our brains think it might be something important, or dangerous? Our brains have been trained to notice things that might be important to our survival and safety. Anything that is unusual or unexpected might be some sort of threat, leading us to be distracted unnecessarily.

        • Our brains have been trained to notice things that might have been important to our survival and safety in the world how it was thousands of years ago.

          FTFY

    • Re: (Score:2, Redundant)

      by arth1 ( 260657 )

      Our "real" human visual algorithms are distracted by bright, shiny objects in a similar way. It's not just AI that can be fooled.

      Not only bright shiny objects.

      https://www.youtube.com/watch?... [youtube.com]

  • 1. The AI assumes that it always sees only one object.
2. How can it classify this sticker as a toaster? It should be classified as unknown. I think they cheat by assuming that every image can be classified.
  • It will be useful when we're trying to fight SkyNet during the inevitable upcoming robot apocalypse.

  • While not exactly the same thing, in one of William Gibson's recent trilogies the characters wore clothing with specific patterns that were designed to render them invisible to surveillance cameras. The basic premise was that the even though the cameras recorded them, the computers monitoring the cameras did not realize that there were people in the images.

  • Can we expect to see this appearing as part of Captchas, then?
  • Meanwhile, Lisa Frank sticker sets see a huge sales growth!

  • So, they figured out how stoners' brains work.

  • I was hoping for something we could put on a cheek that would thwart facial recognition software. Or at least make me look like somebody better looking. (No, I wasn't looking for a full-face mask.)
