AI

The Flaw Lurking In Every Deep Neural Net

mikejuk (1801200) writes "A recent paper, 'Intriguing properties of neural networks,' by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, a team that includes authors from Google's deep learning research project, outlines two findings about the way neural networks behave that run counter to what we believed, and one of them is frankly astonishing. Every deep neural network has 'blind spots' in the sense that there are inputs very close to correctly classified examples that are misclassified. To quote the paper: 'For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.' To be clear, the adversarial examples look to a human like the originals, but the network misclassifies them. You can have two photos that look not only like a cat but like the same cat, indeed the same photo, to a human, yet the machine gets one right and the other wrong. What is even more shocking is that the adversarial examples seem to have some sort of universality: a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set. You might be thinking, 'So what if a photo that is clearly a photo of a cat is recognized as a dog?' Change the situation just a little, though, and ask: what does it matter if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road? There is also the philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask whether the same result applies to biological networks. Put more bluntly, 'Does the human brain have similar built-in errors?' If it doesn't, how is it so different from the neural networks that are trying to mimic it?"
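
To get a concrete feel for how such adversarial examples can be constructed, here is a minimal, hypothetical sketch in PyTorch. The paper finds its minimal perturbations with box-constrained L-BFGS; this sketch instead takes a single gradient-sign step on a toy, untrained classifier, purely to illustrate the mechanism of perturbing the input (rather than the weights) until the prediction changes. The model, input, and epsilon below are stand-ins, not the authors' setup.

    # Hypothetical sketch: nudge an input in the direction that increases the
    # network's loss, producing a nearby input the same network may label differently.
    # With a trained network and a real image, the perturbed copy typically looks
    # identical to a human yet can be classified differently.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Toy classifier standing in for "any deep neural net".
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

    x = torch.rand(1, 784)                   # stand-in for a correctly classified image
    original_class = model(x).argmax(dim=1)  # whatever the network currently predicts

    # Ascend the loss with respect to the *input*, not the weights.
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), original_class)
    loss.backward()

    epsilon = 0.05                           # perturbation budget; visually negligible
    perturbed = (x + epsilon * x_adv.grad.sign()).clamp(0, 1)

    print("original prediction: ", original_class.item())
    print("perturbed prediction:", model(perturbed).argmax(dim=1).item())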
  • Errors (Score:5, Insightful)

    by meta-monkey ( 321000 ) on Tuesday May 27, 2014 @09:37AM (#47098837) Journal

    Of course the human brain has errors in its pattern matching ability. Who hasn't seen something out of the corner of their eye and thought it was a dog when really it was a paper bag blowing in the wind? The brain makes snap judgments because there's a trade-off between correctness and speed. If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all the information to be in before making a 100% accurate decision. This is the basis of intuition.

    I don't think a computer AI will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human. When you call a business and get one of those automated answering systems and it asks you, "Now please, tell me the reason for your call. You can say 'make a payment,' 'inquire about my loan,'" and so on, we get really pissed off when we say 'make a payment' and it responds, "You said 'cancel my account,' did I get that right?" But when a human operator doesn't hear you correctly and asks you to repeat what you said, we say "Oh, sure," and repeat ourselves without a second thought. There's something about it being a machine that makes us demand perfection in a way we'd never expect from a human.

  • by Anonymous Coward on Tuesday May 27, 2014 @09:38AM (#47098841)

    A neural network is not by any stretch of the imagination a simulation of how the brain works. It incorporates a few principles similar to brain function, but it is NOT an attempt to re-build a biological brain.

    Anybody relying on "it's a bit like how humans work lol" to assert the reliability of an ANN is a fucking idiot, and probably trying to hawk a product in the commercial sector rather than in academia.

  • by gstoddart ( 321705 ) on Tuesday May 27, 2014 @09:38AM (#47098849) Homepage

    If a deep neural network is biologically inspired we can ask the question, does the same result apply to biological networks? Put more bluntly, 'Does the human brain have similar built-in errors?'

    Aren't optical illusions pretty much something like this?

    And, my second question: just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake, and/or are we seeing a limitation in the technology?

    Maybe the problem isn't with the biology, but the technology?

    Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)

  • Re:Errors (Score:1, Insightful)

    by Anonymous Coward on Tuesday May 27, 2014 @09:44AM (#47098891)

    Show me a machine that listens to me say "make a payment" and then says "sorry I didn't hear that right, can you repeat it?"

    And then show me a human that hears "cancel my account" when you say "make a payment". A human might hear "fake a payment" but unlike these crappy voice recognition systems they don't confuse things that don't at least rhyme a bit.

  • by jgotts ( 2785 ) <jgotts&gmail,com> on Tuesday May 27, 2014 @09:45AM (#47098895)

    The human brain has multiple neural nets and a voter.

    I am face blind and completely non-visual, but I do recognize people. I can because, while the primary way we recognize people is by encoding a schematic image of the face, many other nets are in play. For example, I use hair style, clothing, and height. So does everybody, though; for most people those cues just add extra confidence.

    Conclusion: Neural nets in your brain having blind spots is no problem whatsoever. The entire system is highly redundant.
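
    As a rough illustration of this "several nets plus a voter" idea, here is a minimal Python sketch. The cue names, weights, and identities are invented; the point is only that a weighted vote over redundant cues can tolerate one unreliable recognizer (here, a face net that returns no useful answer).

        # Hypothetical sketch: each recognizer votes on an identity from one cue,
        # and a weighted tally decides, so no single blind spot settles the outcome.
        from collections import Counter

        def vote(predictions, weights):
            """predictions: {cue: identity guess}; weights: {cue: confidence}."""
            tally = Counter()
            for cue, identity in predictions.items():
                tally[identity] += weights.get(cue, 1.0)
            return tally.most_common(1)[0][0]

        guess = vote(
            predictions={"face": "unknown", "hair": "alice", "clothing": "alice", "height": "bob"},
            weights={"face": 0.2, "hair": 1.0, "clothing": 0.8, "height": 0.5},
        )
        print(guess)  # "alice": the redundant cues outvote the unreliable face signal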

  • Re:Errors (Score:5, Insightful)

    by Anonymous Coward on Tuesday May 27, 2014 @09:54AM (#47098959)

    Actually, not only is this common in humans, but the "fix" is the same for neural networks as it is for humans. When you misidentify a paper bag as a dog, you only do so for a split second. Then it moves (or you move, or your eyes move; they constantly vibrate so that the picture isn't static!), and you get another, slightly different image milliseconds later, which the brain does identify correctly (or at least registers as "wait a minute, there's a confusing exception here, let's turn the head and try a different angle").

    The neural network "problem" they're talking about arises when identifying a single image frame. In the context of a robot or autonomous car, the same process a human goes through above would correct the issue within milliseconds, because confusing and/or misleading frames (at the level we're talking about here) are rare. Think of it as a realtime error detection algorithm.
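
    As a toy illustration of that realtime error-correction idea, here is a minimal Python sketch: smooth the per-frame labels with a rolling majority vote so a single confusing frame cannot flip the decision by itself. The classify function, labels, and window size below are placeholders, not anything from the paper.

        # Hypothetical sketch: a rolling majority vote over recent frame classifications.
        from collections import Counter, deque

        def smoothed_labels(frames, classify, window=5):
            recent = deque(maxlen=window)
            for frame in frames:
                recent.append(classify(frame))               # raw, possibly wrong, per-frame label
                yield Counter(recent).most_common(1)[0][0]   # label most recent frames agree on

        # One misleading frame in the middle does not change the smoothed output.
        labels = list(smoothed_labels(
            frames=["f1", "f2", "f3", "f4", "f5"],
            classify=lambda f: "clear road" if f == "f3" else "pedestrian",
        ))
        print(labels)  # ['pedestrian', 'pedestrian', 'pedestrian', 'pedestrian', 'pedestrian']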

  • Re:Errors (Score:4, Insightful)

    by ponos ( 122721 ) on Tuesday May 27, 2014 @10:13AM (#47099093)

    I don't think a computer ai will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human.

    But we do expect certain levels of performance, even from humans. You have to pass certain tests before you are allowed to drive a car or do neurosurgery. So we do need some relatively tight margins of error before a machine is acceptable for certain tasks, like driving a car. An algorithm that has provable bias and repeatable failures is much less likely to be acceptable.

    The original article also mentions the great similarity between the inputs. We expect a human to misinterpret voice in a noisy environment or misjudge distance and shapes on a stormy night. However, we would be really surprised if "child A" is classified as a child while similar-looking "child B" is misclassified as a washing machine. Under normal conditions, humans don't make these kinds of errors.

    Finally, even an "incomplete" system (in a Gödelian sense) can be useful if it is stable for 99.999999% of inputs. So, fuzzy and occasionally wrong is OK in real life. However, this will have to be proven and carefully examined empirically. We can't just shrug this kind of result away. Humans have been known to function a certain way for thousands of years. A machine will have to be exhaustively documented before such misclassifications are deemed functionally insignificant.

  • by dinfinity ( 2300094 ) on Tuesday May 27, 2014 @10:55AM (#47099429)

    Your model of the brain as multiple neural nets and a voter is a good and useful simplification.

    So the 'voter' takes multiple inputs and combines these into a single output?

    Only if you have no idea how a neural network works is it a useful simplification. The 'multiple nets' in the example given by GP mainly describe multiple input features.

  • Re:Errors (Score:5, Insightful)

    by jellomizer ( 103300 ) on Tuesday May 27, 2014 @12:49PM (#47100413)

    Don't forget the issue that men seem to have. We can be looking for something, say a bottle of ketchup, and stare right at it in the fridge for minutes before we find it. Often the problem is that something about the bottle doesn't match what our imagination says we are looking for. It was a plastic bottle, but you were expecting glass. The bottle was upside down; you were expecting it to be right side up. Sometimes these simple little things trick your mind, and you just don't see what is right in front of your face. It almost makes you wonder how much other stuff we are not seeing because we just don't expect to see it, or don't want to see it.

  • Re:Errors (Score:5, Insightful)

    by radtea ( 464814 ) on Tuesday May 27, 2014 @01:33PM (#47100793)

    The slightly surprising part is that the misclassified images seem so close to those in the training set.

    With emphasis on "slightly". This is a nice piece of work, particularly because it is constructive--it both demonstrates the phenomenon and gives us some idea of how to replicate it. But there is nothing very surprising about demonstrating "non-linear classifiers behave non-linearly."

    Everyone who has worked with neural networks has been aware of this from the beginning, and in a way this result is almost a relief: it demonstrates for the first time a phenomenon that most of us suspected would be lurking in there somewhere.

    The really interesting question is: how dense are the blind spots relative to the correct classification volume? And how big are they? If the blind spots are small and scattered then this will have little practical effect on computer vision (as opposed to image processing) because a simple continuity-of-classification criterion will smooth over them.
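
    One crude, hypothetical way to probe that density question: sample random perturbations of a fixed radius around a correctly classified input and count how often the predicted class flips. A near-zero flip rate would support the intuition that the blind spots are thin pockets that random noise almost never hits, which is what a continuity-of-classification criterion assumes. The toy model, input, radius, and trial count below are stand-ins, not an experiment from the paper.

        # Hypothetical sketch: estimate how often random nearby inputs change the prediction.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)

        # Toy stand-ins; in practice these would be a trained network and a real image.
        model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
        x = torch.rand(1, 784)

        @torch.no_grad()
        def misclassification_rate(model, x, radius=0.05, trials=1000):
            """Fraction of random perturbations within `radius` that change the predicted class."""
            base = model(x).argmax(dim=1).item()
            flips = 0
            for _ in range(trials):
                noise = torch.empty_like(x).uniform_(-radius, radius)
                flips += int(model((x + noise).clamp(0, 1)).argmax(dim=1).item() != base)
            return flips / trials

        print(misclassification_rate(model, x))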
