The Flaw Lurking In Every Deep Neural Net

mikejuk (1801200) writes "A recent paper, 'Intriguing properties of neural networks,' by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, a team that includes authors from Google's deep learning research project, reports two findings about the way neural networks behave that run counter to what we believed, and one of them is frankly astonishing. Every deep neural network has 'blind spots': inputs that are very close to correctly classified examples yet are misclassified. To quote the paper: 'For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.' To be clear, the adversarial examples looked to a human like the originals, but the network misclassified them. You can have two photos that look not only like a cat but like the same cat, indeed the same photo, to a human, yet the machine gets one right and the other wrong. What is even more shocking is that the adversarial examples seem to have some sort of universality. That is, a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set. You might be thinking, 'So what if a cat photo that is clearly a photo of a cat is recognized as a dog?' Now change the situation just a little: what does it matter if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road? There is also a philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask whether the same result applies to biological networks. Put more bluntly, does the human brain have similar built-in errors? If it doesn't, how is it so different from the neural networks that are trying to mimic it?"
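The paper finds these adversarial inputs by searching for a tiny perturbation of the input pixels (a box-constrained optimization). Purely as a hedged illustration of the idea, and not the authors' exact procedure, the sketch below assumes PyTorch and uses a hypothetical find_adversarial helper: it nudges an image along the sign of the loss gradient with respect to the pixels until the classifier's prediction flips, while keeping the change small.

```python
# Illustrative sketch only (not the paper's exact box-constrained optimization):
# push an image up the loss gradient until the prediction flips, keeping the change tiny.
import torch
import torch.nn.functional as F

def find_adversarial(model, image, true_label, step=1e-3, max_iters=100):
    """Try to return a perturbed copy of `image` that `model` misclassifies."""
    model.eval()
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(max_iters):
        logits = model(adv.unsqueeze(0))                      # add a batch dimension
        if logits.argmax(dim=1).item() != true_label:
            return adv.detach()                               # prediction has flipped
        loss = F.cross_entropy(logits, torch.tensor([true_label]))
        grad, = torch.autograd.grad(loss, adv)
        # Take a small step that increases the loss, then clamp back to valid pixel range.
        adv = (adv + step * grad.sign()).clamp(0.0, 1.0).detach().requires_grad_(True)
    return None                                               # nothing found within the budget
```

Feeding such a perturbed image to a differently trained network is how one would probe the transfer effect the summary describes.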

Comments:
  • by James Clay ( 2881489 ) on Tuesday May 27, 2014 @09:44AM (#47098893)
    I can't speak to what the car manufacturers are doing, but Google's algorithms do not include a neural network. They do use "machine learning", but neural networks are just one form of machine learning.
  • by sqlrob ( 173498 ) on Tuesday May 27, 2014 @09:54AM (#47098961)

    A dynamic non-linear system [wikipedia.org] has some weird boundary conditions. Who could ever have predicted that? </s>

    Why wasn't this assumed from the beginning, and then shown not to be an issue?

  • by biodata ( 1981610 ) on Tuesday May 27, 2014 @10:03AM (#47099025)
    Neural networks are only one way to build machine learning classifiers. Everything we've learnt about machine learning tells us not to rely on a single method, and that we will consistently get better results by taking the consensus of multiple methods. We just need to make sure that a majority of the other methods we use have blind spots different from the ones the neural networks have. (A minimal sketch of this kind of consensus voting appears after the comment thread.)
  • by doti ( 966971 ) on Tuesday May 27, 2014 @10:21AM (#47099153) Homepage

    SoylentNews is the replacement for /.

    reddit is of another kind.

  • by wanax ( 46819 ) on Tuesday May 27, 2014 @10:36AM (#47099295)

    This is a well-known weakness of back-propagation-based learning algorithms. In the learning stage it's called catastrophic interference [wikipedia.org]; in the testing stage it manifests itself as misclassification of similar inputs.

  • by peter303 ( 12292 ) on Tuesday May 27, 2014 @10:46AM (#47099361)
    NN technology is 60 years old. Some A.I. pundits disliked it from the beginning, such as Minsky in his 1969 book Perceptrons. Many of these flaws have been long known.
  • Re:Errors (Score:4, Informative)

    by rgmoore ( 133276 ) <glandauer@charter.net> on Tuesday May 27, 2014 @11:46AM (#47099859) Homepage

    Show me a machine that listens to me say "make a payment" and then says "sorry I didn't hear that right, can you repeat it?"

    My phone does something like that with its voice command stuff. If it can't make out what you say, it will say "Sorry, I didn't get that. Could you repeat it?" On some kinds of ambiguous input it will say "I think you asked for X. Is that correct?"

  • by presidenteloco ( 659168 ) on Tuesday May 27, 2014 @01:17PM (#47100633)

    When analyzing a still picture or scene, your eye moves its high-resolution central area (the fovea) around the low-level visual features of the image. Thus the image is processed over time as many different images.
    The images in that time sequence land at slightly different locations on the visual sensor array (the visual field) and at slightly different angles, and each one trains considerably different pixel resolution on each part of the scene.

    So that would almost certainly give some robustness against these artifacts (unlucky particular images) fooling the system.

    Time and motion are essential in disambiguating a 3D/4D world with 2D imaging.

    Also, I would guess that learning algorithms that preferentially try to encode a wide diversity of low-level feature types would also protect against being fooled, even by a single image, and particularly over a sequence of similar but not identical images of the same subject. (A small sketch of classifying over several jittered views follows the comment thread.)
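Purely as a hedged sketch of the preceding comment's point that a scene is processed over time as many slightly different images (assuming PyTorch; the classify_with_jitter helper and its parameters are made up for illustration), a classifier can average its predictions over several slightly shifted copies of the same input:

```python
# Hedged sketch: average predictions over several jittered views of one image.
import torch

def classify_with_jitter(model, image, num_views=8, max_shift=2):
    """Return the class chosen by averaging predictions over shifted copies of `image`."""
    model.eval()
    views = []
    for _ in range(num_views):
        dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
        views.append(torch.roll(image, shifts=(dy, dx), dims=(-2, -1)))  # small translation
    with torch.no_grad():
        probs = torch.softmax(model(torch.stack(views)), dim=1)
    return probs.mean(dim=0).argmax().item()  # consensus over the jittered views
```

A single unlucky image then has to fool the network in several slightly different positions at once, not just one.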
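And purely as a hedged sketch of the consensus idea in the earlier comment about combining methods with different blind spots (assuming scikit-learn; the dataset and classifier choices are illustrative), a majority vote across a small neural network, a random forest, and a linear model only fails when most of the methods are fooled at once:

```python
# Hedged sketch: majority vote across classifiers with different inductive biases.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),  # a small neural net
        ("forest", RandomForestClassifier(random_state=0)),    # a tree ensemble
        ("logreg", LogisticRegression(max_iter=1000)),         # a linear model
    ],
    voting="hard",  # majority vote across the three methods
)
ensemble.fit(X_train, y_train)
print("consensus accuracy:", ensemble.score(X_test, y_test))
```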

"God is a comedian playing to an audience too afraid to laugh." - Voltaire

Working...