The Flaw Lurking In Every Deep Neural Net

mikejuk (1801200) writes "A recent paper, 'Intriguing properties of neural networks,' by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, a team that includes authors from Google's deep learning research project, outlines two findings about the way neural networks behave that run counter to what we believed, and one of them is frankly astonishing. Every deep neural network has 'blind spots,' in the sense that there are inputs very close to correctly classified examples that are misclassified. To quote the paper: 'For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.' To be clear, the adversarial examples looked to a human like the originals, but the network misclassified them. You can have two photos that look not only like a cat but like the same cat, indeed the same photo, to a human, yet the machine gets one right and the other wrong. What is even more shocking is that the adversarial examples seem to have some sort of universality: a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set. You might be thinking, 'So what if a photo that is clearly a photo of a cat is recognized as a dog?' Change the situation just a little, though, and ask what it matters if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road. There is also a philosophical question raised by these blind spots. If a deep neural network is biologically inspired we can ask the question, does the same result apply to biological networks? Put more bluntly, 'Does the human brain have similar built-in errors?' If it doesn't, how is it so different from the neural networks that are trying to mimic it?"
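For readers curious what generating such an adversarial example actually involves, here is a minimal sketch of the general idea (a simple gradient-step version in PyTorch, not the paper's box-constrained L-BFGS method; `model` is assumed to be any classifier mapping an image tensor to class logits):

import torch
import torch.nn.functional as F

def adversarial_example(model, image, true_label, step=0.005, steps=10):
    # Nudge the pixels in the direction that most increases the network's loss,
    # keeping each change far too small for a human to notice.
    x = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x.unsqueeze(0)), true_label.unsqueeze(0))
        loss.backward()
        with torch.no_grad():
            x += step * x.grad.sign()   # tiny step up the loss surface
            x.clamp_(0.0, 1.0)          # keep pixel values in a valid range
        x.grad.zero_()
    return x.detach()                   # looks like `image` to a human, yet may be misclassified

The surprise reported in the paper is not that such a search succeeds, but that the perturbation it finds can be visually imperceptible and still flip the classification.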
  • Errors (Score:5, Insightful)

    by meta-monkey ( 321000 ) on Tuesday May 27, 2014 @08:37AM (#47098837) Journal

    Of course the human brain has errors in its pattern-matching ability. Who hasn't seen something out of the corner of their eye and thought it was a dog when really it was a paper bag blowing in the wind? The brain makes snap judgments because there's a trade-off between correctness and speed. If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all information to be in before making a 100% accurate decision. This is the basis of intuition.

    I don't think a computer AI will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human. When you call a business and get one of those automated answering things and it asks you, "Now please, tell me the reason for your call. You can say 'make a payment,' 'inquire about my loan...'" etc., we get really pissed off when we say 'make a payment' and it responds, "You said 'cancel my account,' did I get that right?" But when a human operator doesn't hear you correctly and asks you to repeat what you said, we say, "Oh, sure," and repeat ourselves without a second thought. There's something about it being a machine that makes us demand perfection in a way we'd never expect from a human.

    • Re:Errors (Score:5, Insightful)

      by Anonymous Coward on Tuesday May 27, 2014 @08:54AM (#47098959)

      Actually, not only is this common in humans, but the "fix" is the same for neural networks as it is in humans. When you misidentify a paper bag as a dog, you only do so for a split second. Then it moves (or you move, or your eyes move - they constantly vibrate so that the picture isn't static!), and you get another slightly different image milliseconds later which the brain does identify correctly (or at least, which tells your brain "wait a minute, there's a confusing exception here, let's turn the head and try a different angle").

      The neural network "problem" they're talking about was while identifying a single image frame. In the context of a robot or autonomous car, the same process a human goes through above would correct the issue within milliseconds, because confusing and/or misleading frames (at the level we're talking about here) are rare. Think of it as a realtime error detection algorithm.

      • Re:Errors (Score:5, Interesting)

        by TapeCutter ( 624760 ) on Tuesday May 27, 2014 @09:56AM (#47099443) Journal
        A NNet is basically trying to fit a curve; the problem of "overfitting" manifests itself as two almost identical data points being separated because the curve has contorted itself to fit one data point (see the toy curve-fitting sketch at the end of this comment). So yes, a video input would likely help. The really interesting bit is that it seems all NNets make the same misclassification, even when trained with different data. What these guys are saying is "that's odd". I think mathematicians will go nuts trying to explain this, and it will probably lead to AI insights.

        The AI system in an autonomous car is much more than a Boltzmann machine running on a video card. The problem for man or machine when driving a car is that its "life" depends on predicting the future, and neither man nor machine can confirm their calculation before the future happens. If the universe fails to co-operate with their prediction, it's too late. What's important from a public safety POV is who gets it right more often; if cars killing people were totally unacceptable, we wouldn't allow cars in the first place.
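        A toy illustration of that curve-fitting point, in plain numpy (nothing to do with the paper's actual networks): give the fit enough freedom and two almost identical inputs can be pulled far apart.

        import numpy as np

        # Two nearly identical inputs; noise pushes one of their target values away.
        x = np.array([0.0, 0.2, 0.4, 0.500, 0.501, 0.7, 0.9, 1.0])
        y = np.sin(2 * np.pi * x)
        y[3] += 0.1

        smooth = np.polynomial.Polynomial.fit(x, y, deg=3)           # too stiff to chase the noise
        wiggly = np.polynomial.Polynomial.fit(x, y, deg=len(x) - 1)  # enough freedom to hit every point

        print(smooth(0.500) - smooth(0.501))  # essentially zero: nearby inputs, nearby outputs
        print(wiggly(0.500) - wiggly(0.501))  # roughly the injected 0.1: the curve contorts to separate them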
      • Even worse, think of all those optical illusions you see places that are based on pointing out errors with our visual processing systems. Those don't go away even if you move your eyes around.

      • Re:Errors (Score:5, Interesting)

        by dinfinity ( 2300094 ) on Tuesday May 27, 2014 @10:46AM (#47099857)

        The neural network "problem" they're talking about was while identifying a single image frame

        Yes, and even more important: they designed an algorithm to generate exactly the images that the network misperformed on. The nature of these images is explained in the paper:

        Indeed, if the network can generalize well, how can it be confused by these adversarial negatives, which are indistinguishable from the regular examples? The explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near virtually every test case.

        A network that generalizes well correctly classifies a large part of the test set. If you had the perfect dog classifier, trained with millions of dog images and tested with 100% accuracy on its test set, it would be really weird if the given 'adversarial negatives' still existed. Considering that the networks did not generalize 100%, it isn't at all surprising that they make errors on seemingly easy images (humans would probably have very little problem getting 100% accuracy on the test sets used). That is just how artificial neural networks are currently performing.

        The slightly surprising part is that the misclassified images seem so close to those in the training set. If I'm interpreting the results correctly (IANANNE), what happens is that their algorithm modifies the images in such a way that the feature detectors in the 10 neuron wide penultimate layer fire just under the required threshold for the final binary classifier to fire.

        Maybe the greatest thing about this research is that it contains a new way to automatically increase the size of the training set with these meaningful adversarial examples (roughly sketched in code at the end of this comment):

        We have successfully trained a two layer 100-100-10 non-convolutional neural network with a test error below 1.2% by keeping a pool of adversarial examples a random subset of which is continuously replaced by newly generated adversarial examples and which is mixed into the original training set all the time. For comparison, a network of this size gets to 1.6% errors when regularized by weight decay alone and can be improved to around 1.3% by using carefully applied dropout. A subtle, but essential detail is that adversarial examples are generated for each layer output and are used to train all the layers above. Adversarial examples for the higher layers seem to be more useful than those on the input or lower layers.

        It might prove to be much more effective in terms of learning speed than just adding noise to the training samples as it seems to grow the test set based on which features the network already uses in its classification instead of the naive noise approach. In fact, the authors hint at exactly that:

        Already, a variety of recent state of the art computer vision models employ input deformations during training for increasing the robustness and convergence speed of the models [9, 13]. These deformations are, however, statistically inefficient, for a given example: they are highly correlated and are drawn from the same distribution throughout the entire training of the model. We propose a scheme to make this process adaptive in a way that exploits the model and its deficiencies in modeling the local space around the training data.
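        My rough reading of that training scheme, as a sketch (hypothetical make_adversarial and loader helpers; the paper also generates adversarial examples per layer, which is omitted here):

        import random
        import torch

        def train_with_adversarial_pool(model, loader, make_adversarial, optimizer,
                                        loss_fn, pool_size=1000, refresh=64, mix=32):
            pool = []  # (adversarial image, label) pairs, continuously refreshed
            for images, labels in loader:
                # Replace part of the pool with adversarial versions of the current inputs.
                pool.extend((make_adversarial(model, x, y), y)
                            for x, y in zip(images[:refresh], labels[:refresh]))
                if len(pool) > pool_size:
                    pool = random.sample(pool, pool_size)

                # Mix a random subset of the pool into the clean batch before the usual update.
                adv_x, adv_y = zip(*random.sample(pool, min(mix, len(pool))))
                batch_x = torch.cat([images, torch.stack(adv_x)])
                batch_y = torch.cat([labels, torch.stack(adv_y)])

                optimizer.zero_grad()
                loss_fn(model(batch_x), batch_y).backward()
                optimizer.step()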

        • Re:Errors (Score:5, Insightful)

          by radtea ( 464814 ) on Tuesday May 27, 2014 @12:33PM (#47100793)

          The slightly surprising part is that the misclassified images seem so close to those in the training set.

          With emphasis on "slightly". This is a nice piece of work, particularly because it is constructive--it both demonstrates the phenomenon and gives us some idea of how to replicate it. But there is nothing very surprising about demonstrating "non-linear classifiers behave non-linearly."

          Everyone who has worked with neural networks has been aware of this from the beginning, and in a way this result is almost a relief: it demonstrates for the first time a phenomenon that most of us suspected was lurking in there somewhere.

          The really interesting question is: how dense are the blind spots relative to the correct classification volume? And how big are they? If the blind spots are small and scattered then this will have little practical effect on computer vision (as opposed to image processing) because a simple continuity-of-classification criterion will smooth over them.

      • We (humans) can classify stills of cats pretty well, no? So your argument does not hold.

        It's true that there is more information in video data, but the problem described in the article is certainly not caused by the restriction to stills.

        • by presidenteloco ( 659168 ) on Tuesday May 27, 2014 @12:17PM (#47100633)

          When analyzing a still picture/scene, your eye moves its high-resolution central area around the low-level visual features of the image. Thus the image is processed over time as many different images.
          The images in that time sequence fall on slightly different locations of the visual light-sensor array (visual field) and at slightly different angles, and each image has considerably different pixel resolution trained on each part of the scene.

          So that would still almost certainly give some robustness against these artifacts (unlucky particular images) being able to fool the system.

          Time and motion are essential in disambiguating a 3D/4D world with 2D imaging.

          Also, I would guess that having learning algorithms that preferentially try to encode a wide diversity of different kinds of low level features would also protect against being able to be fooled, even by a single image, but particularly over a sequence of similar but not identical images of the same subject.

      • Re:Errors (Score:4, Interesting)

        by tlhIngan ( 30335 ) <slashdot&worf,net> on Tuesday May 27, 2014 @11:33AM (#47100247)

        Actually, not only is this common in humans, but the "fix" is the same for neural networks as it is in humans. When you misidentify a paper bag as a dog, you only do so for a split second. Then it moves (or you move, or your eyes move - they constantly vibrate so that the picture isn't static!), and you get another slightly different image milliseconds later which the brain does identify correctly (or at least, which tells your brain "wait a minute, there's a confusing exception here, let's turn the head and try a different angle").

        The neural network "problem" they're talking about was while identifying a single image frame. In the context of a robot or autonomous car, the same process a human goes through above would correct the issue within milliseconds, because confusing and/or misleading frames (at the level we're talking about here) are rare. Think of it as a realtime error detection algorithm.

        For some humans, it's a smack in the head, though.

        The human wetware is powerful but easy to mislead. For example, the face-recognition bit in human vision is extremely easy to fool - which is why we see a face on the Moon, a face on a rock on Mars, or Jesus on toast, a potato chip, or whatever.

        Human vision is especially vulnerable - see optical illusions. The resolution of the human eye is quite low (approx. 1MP concentrated in a tiny area of central vision, and another 1MP for peripheral vision); however, the vision system is coordinated with the motor system controlling the eye muscles, so the eyeball moves ~200 times a second to build a higher-resolution image from a low-resolution camera (which results in an image that is approximately 40+MP over the entire visual field).

        But then you have blind spots which the wetware interpolates (to great amusement at times), and annoying habits: unidentifiable objects that are potentially in our way can lead to target fixation while the brain attempts to identify them.

        Hell, humans are very vulnerable to this - the brain is wired for pattern recognition, and seeing patterns where there are none is a VERY common human habit.

        Fact is, the only reason we're not constantly making errors is that we do just that - we take more glances and more time to look closer to give more input to the recognition system.

        Likewise, an autonomous vehicle would have plenty of information to derive recognition from, including a history of frames. These vehicles will have a history of the images they received and processed, and the new anomalous ones could be temporally compared with images before and after.
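        As a toy sketch of that temporal idea (my own illustration, not anything from the paper; classify_frame stands in for whatever single-frame classifier you have): smooth the per-frame labels over a short history and flag frames that disagree with the recent consensus.

        from collections import Counter, deque

        class TemporalClassifier:
            def __init__(self, classify_frame, history=5):
                self.classify_frame = classify_frame  # any single-frame classifier
                self.recent = deque(maxlen=history)

            def __call__(self, frame):
                self.recent.append(self.classify_frame(frame))
                label, votes = Counter(self.recent).most_common(1)[0]
                suspicious = votes <= len(self.recent) // 2  # no clear majority across recent frames
                return label, suspicious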

    • Re: (Score:3, Interesting)

      by Anonymous Coward
      Ok, I need to share a story about my boss. Hope it is relevant.

      My boss was a hardware engineer and had a total blind spot for software. We involved him many times in discussions to make sure he understood the different layers of software, but all in vain.

      It used to create funny situations. For example, one of the developers was working on a UI and had a bug in his code. Unfortunately, he had been stuck for an hour when my boss happened to ask him how he was doing. After hearing the problem, he jumped and said the

      • by hubie ( 108345 )
        That's funny because in my experience, the hardware guys usually blame it on the software and the software guys blame it on the hardware.
    • Re:Errors (Score:4, Insightful)

      by ponos ( 122721 ) on Tuesday May 27, 2014 @09:13AM (#47099093)

      I don't think a computer AI will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human.

      But we do expect some level of performance, even from humans. You have to pass certain tests before you are allowed to drive a car or do neurosurgery. So we do need some relatively tight margins of error before a machine can be acceptable for certain tasks, like driving a car. An algorithm that has provable bias and repeatable failures is much less likely to be acceptable.

      The original article also mentions the great similarity between inputs. We expect a human to misinterpret a voice in a noisy environment or misjudge distance and shapes on a stormy night. However, we would be really surprised if "child A" is classified as a child, while similar-looking "child B" is misclassified as a washing machine. Under normal conditions, humans don't make these kinds of errors.

      Finally, even an "incomplete" system (in a Gödelian sense) can be useful if it is stable for 99.999999% of inputs. So fuzzy and occasionally wrong is OK in real life. However, this will have to be proven and carefully examined empirically. We can't just shrug this kind of result away. Humans have been known to function a certain way for thousands of years. A machine will have to be exhaustively documented before such misclassifications are deemed functionally insignificant.

      • Under normal conditions, humans don't make these kinds of errors.

        In this case however, it should be noted that the humans are ALSO in error. They see both images as the same, when the images are in fact not the same.

        With this realization, there is no remaining controversy here. Both the wetware and the software use different methodologies, so it's no surprise that they have different error distributions.

        A far better method of comparing the two systems with regard to "accuracy" would be to throw many family photo albums at both the wetware and software and have b

        • In this case however, it should be noted that the humans are ALSO in error. They see both images as the same, when the images are in fact not the same.

          Ah, but here's a question for you: Are the humans in error, or have the humans applied a better threshold for "same enough"? Possibly one with some evolutionary advantage?

          Say I set my camera to continuously take pictures while holding the shutter button, and I take 10 frames of something.

          The delta between those frames could be very small, or very large depe

    • Like when you are walking behind a guy with long hair and think she might be kinda hot. Doh!
    • by TapeCutter ( 624760 ) on Tuesday May 27, 2014 @09:23AM (#47099175) Journal
      "Sure it's possible that computers may one day be as smart as humans, but who wants a computer that remembers the words to the Flintstones jingle and forgets to pay the rent?"
    • If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all information to be in before making a 100% accurate decision.

      As someone whose brain does err on the side of tiger regularly, and there are no tigers, I'd like to point out that it's not nearly as harmless as you may think.

    • Neural nets work on stimulus and feedback. Large cats think of primates as "preferred" food and work on "feedback." As time went by, fewer and fewer primates that could NOT recognize large cats from, let's say, anything else survived and reproduced.
    • The thing is, the mistakes made by a computer usually appear obvious, as in "even an idiot wouldn't make that mistake". For example, a human would have to have really big problems with hearing or language to hear "make a payment" as "cancel my account". If the sound quality is bad, the human would ask me to repeat what I said, and I would say it slower or say the same thing in other words.

      Same thing with cars, people can understand the limits of other people (well, I guess I probably wouldn't be able to avoid th

      • Also, to err is human (or so the saying goes), but a machine should operate without mistakes or it is broken (the engine of my car runs badly when it is cold - but that's not because the car doesn't "want" to go or doesn't "like" cold, it's just that some part in it is defective (most likely the carburetor needs cleaning and new seals)).

        I guess that's what I'm saying. If you make a computer "mind" that can think like a human, a lot of what lets us recognize patterns is intuition. The ability of the brain to fill in gaps in incomplete information. But that basically requires that we're going to be wrong some of the time, because the information is by definition incomplete. Your car example is for a simple machine. I'm talking about a pattern matching engine that functions as well (or better) than the human mind. A machine that thinks like a

    • This sounds so reminiscent of things like the Mandelbrot set, where there are always adjacent points with different outcomes, no matter how far down you go. Who knows if it really is related?

      • This sounds so reminiscent of things like the Mandelbrot set, where there are always adjacent points with different outcomes, no matter how far down you go. Who knows if it really is related?

        Good point, and yes, it probably is.

    • Re:Errors (Score:5, Insightful)

      by jellomizer ( 103300 ) on Tuesday May 27, 2014 @11:49AM (#47100413)

      Don't forget the issue that men seem to have. We can be looking for something, say a bottle of ketchup, and stare right at it in the fridge for minutes before we find it. Often the problem is that there is something different about the bottle that doesn't match what our imagination says we are looking for: it was a plastic bottle but you were expecting glass; the bottle was upside down and you were expecting it right side up. Sometimes these simple little things trick your mind, and you just don't see what is right in front of your face. It almost makes you wonder how much more stuff we are not seeing because we just don't expect to see it, or don't want to see it.

    • by Belial6 ( 794905 )
      A lot of that comes from the fact that we know there is no way to communicate with the machine other than the very specific sounds it is looking for. With a human, you can speak slower. You can over-emphasize the words they are missing. You can spell the word they can't understand, and say things like "B... as in BOY." With a human, you can even ask them to pick up their handset because their earphone is causing the sound to distort. All of the ways that you can make the communication clearer w
  • by Anonymous Coward on Tuesday May 27, 2014 @08:38AM (#47098841)

    A neural network is not by any stretch of the imagination a simulation of how the brain works. It incorporates a few principles similar to brain function, but it is NOT an attempt to re-build a biological brain.

    Anybody relying on "it's a bit like how humans work lol" to assert the reliability of an ANN is a fucking idiot, and probably trying to hawk a product in the commercial sector rather than in academia.

  • by gstoddart ( 321705 ) on Tuesday May 27, 2014 @08:38AM (#47098849) Homepage

    If a deep neural network is biologically inspired we can ask the question, does the same result apply to biological networks? Put more bluntly, 'Does the human brain have similar built-in errors?'

    Aren't optical illusions pretty much something like this?

    And, my second question, just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake and/or we're seeing a limitation in the technology?

    Maybe the problem isn't with the biology, but the technology?

    Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)

    • If a deep neural network is biologically inspired we can ask the question, does the same result apply to biological networks? Put more bluntly, 'Does the human brain have similar built-in errors?'

      And, my second question, just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake and/or we're seeing a limitation in the technology?

      Maybe the problem isn't with the biology, but the technology?

      Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)

      You're just repeating the question asked in the summary.

      • You're just repeating the question asked in the summary.

        No, I'm saying "why would we assume a similar flaw in a biological system because computer simulations have a flaw".

        I think jumping to the possibility that biological systems share the same weaknesses as computer programs is a bit of a stretch.

        • I'm saying "why would we assume a similar flaw in a biological system because computer simulations have a flaw".

          Nobody's assuming; scientists are asking a question.

          I think jumping to the possibility that biological systems share the same weaknesses as computer programs is a bit of a stretch.

          I've not come across the phrase "jumping to the possibility" before. If I 'jump' to giving this a possibility of 2%, is that a 'stretch'?

  • Deep neural networks are implicitly generating dynamic ontologies. The 'mis-categorisation' occurs when you only have one functional exit point. The fact is that if you are inside the network itself, the adversarial examples are held in-frame alongside other possibilities, and the network only tilts towards one when the prevailing system requires it through external stimulus. From the outside it will look like an error (because we already decided that it is), but internally each possible interpretation is valid.
  • by James Clay ( 2881489 ) on Tuesday May 27, 2014 @08:44AM (#47098893)
    I can't speak to what the car manufacturers are doing, but Google's algorithms do not include a neural network. They do use "machine learning", but neural networks are just one form of machine learning.
    • by Gibgezr ( 2025238 ) on Tuesday May 27, 2014 @10:00AM (#47099485)

      Just to back up what James Clay said: I took a course from Sebastian Thrun (the driving force behind the Google cars) on programming robotic cars, and no neural networks were involved, nor mentioned with regard to the Google car project. As far as I can tell, if the LIDAR says something is in the way, the deterministic algorithms attempt to avoid it safely; if you can't avoid it safely, you brake and halt. That's it. Maybe someone who actually worked on the Google car can comment further?
      Does anyone know of any neural networks used in potentially dangerous conditions? This study, www-isl.stanford.edu/~widrow/papers/j1994neuralnetworks.pdf, states that accuracy and robustness issues need to be addressed when using neural network algorithms, and gives a baseline of more than 95% accuracy as a useful performance metric to aim for. That makes neural nets useful for things like auto-focus in cameras and handwriting recognition for tablets, but it means that using a neural network as the primary decision-maker to drive a car is perhaps something best left to video games (where it has been used to great success) rather than real cars with real humans involved.

      • If I recall correctly, there are neural networks being used in medical diagnostics. There is a recognition that they have flaws, but then again, so do human beings.

        Of course, they are supposed to inform the doctor, not be blindly followed. Which means in N years, they will be blindly followed.

  • by jgotts ( 2785 ) <<moc.liamg> <ta> <sttogj>> on Tuesday May 27, 2014 @08:45AM (#47098895)

    The human brain has multiple neural nets and a voter.

    I am face-blind and completely non-visual, but I do recognize people. I can because the primary way that we recognize people is by encoding a schematic image of the face, but many other nets are in play. For example, I use hair style, clothing, and height. So does everybody, though. But for most people that just gives you extra confidence.

    Conclusion: Neural nets in your brain having blind spots is no problem whatsoever. The entire system is highly redundant.

    • by bunratty ( 545641 ) on Tuesday May 27, 2014 @08:51AM (#47098933)
      More importantly, the human brain has feedback loops. All the artificial neural nets I've seen are only feed-forward; even in the training phase the signals flow only forward or only backward, never in loops. In effect, the human brain is always training itself.
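      A toy numpy contrast of the two ideas (illustrative only): a feed-forward pass uses each layer once and stops, while a feedback/recurrent pass keeps folding its own state back in.

      import numpy as np

      def feed_forward(x, weights):
          for W in weights:                      # signal flows one way, layer by layer
              x = np.tanh(W @ x)
          return x

      def with_feedback(x, W_in, W_rec, steps=10):
          h = np.zeros(W_rec.shape[0])
          for _ in range(steps):                 # the state is repeatedly fed back into itself
              h = np.tanh(W_in @ x + W_rec @ h)
          return h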
    • by ganv ( 881057 ) on Tuesday May 27, 2014 @08:56AM (#47098975)

      Your model of the brain as multiple neural nets and a voter is a good and useful simplification. I think we still know relatively little about how accurate it is. You would expect evolution to have optimized the brain to avoid blind spots that threatened survival, and redundancy makes sense as a way to do this.

      However, I wouldn't classify blind spots as 'no problem whatsoever'. If the simple model of multiple neural nets and a voter is a good one, then there will be cases where several nets give errors and the conclusion is wrong. Knowing what kinds of errors are produced after what kind of training is critical to understanding when a redundant system will fail. In the end, though, I suspect that the brain is quite a bit more complicated than a collection of neural nets like the ones this research is working with.

      • by dinfinity ( 2300094 ) on Tuesday May 27, 2014 @09:55AM (#47099429)

        Your model of the brain as multiple neural nets and a voter is a good and useful simplification.

        So the 'voter' takes multiple inputs and combines these into a single output?

        It is a useful simplification only if you have no idea how a neural network works. The 'multiple nets' in the example given by GP mainly describe many input features.

    • That makes sense. Rare errors will be screened out if instead of a single deterministic selection process you use a distribution of schemes and select based on the most probable outcome... I am wondering what our brain does with its minority reports...
    • by Anonymous Coward

      Indeed, remembering the experiments done in the 1960s by Sperry and Gazzaniga on patients who had a divided corpus callosum, there are clearly multiple systems that can argue with each other about recognising objects. Maybe part of what makes us really good at it, is not relying on one model of the world, but many overlaid views of the same data by different mechanisms.

    • It would be interesting to learn how these neural networks interact. Is it a single neural network? Are there several independent neural networks with points where they interact? Or are they interdependent neural networks, where some parts are fully independent and others mix with each other?

      • by Rich0 ( 548339 )

        It would be interesting to learn how these neural networks interact. Is it a single neural network? Are there several independent neural networks with points where they interact? Or are they interdependent neural networks, where some parts are fully independent and others mix with each other?

        The more I read, the more it looks like one big mess. There are areas with functional specialization, which is why a stroke in a certain part of the brain tends to impact most people in the same way. However, lots of operations that we might think of as simple involve many different parts of the brain working together.

        My sense is that the brain is a collection of many interconnected sub-networks. Each sub-network forms certain patterns during development, with major interconnections forming during development. The structure

    • by Urkki ( 668283 )

      Neural nets in your brain having blind spots is no problem whatsoever. The entire system is highly redundant.

      ..."no problem whatsoever" in the sense, that it doesn't kill enough people to have impact on human population size, and "highly redundant" also on the sense that there usually are many spare people to replace those killed/maimed by such brain blind spots.

    • While I share your view that expecting the mind to be explained as a single neural network (in the Comp. Sci. sense) is probably simplistic, I don't think modeling it as multiple neural nets and a voter fixes the problem. I am not quite sure about this, but isn't a collection of neural nets and a voter equivalent to a single neural net? Or, to put it a slightly different way, for any model that consists of multiple neural nets and a voter, there is a single neural net that is functionally identical? I am as

    • For example, I use hair style, clothing, and height

      And then one day they radically change their hair style and wear a new outfit, causing hilarity to ensue.

  • by Wolfier ( 94144 ) on Tuesday May 27, 2014 @08:47AM (#47098915)

    All neural nets try to predict, and predictions can be foiled.

    People can be fooled by optical illusions, too.

    • All neural nets try to predict, and predictions can be foiled.

      People can be fooled by optical illusions, too.

      The main difference being that optical illusions are designed to fool the human eye, and thus are intentional, whereas the computer in this case is being fooled by regular stuff, i.e. not intentional.

      If the human brain failed to recall unique individuals because of slight changes in their appearance, I doubt we'd have progressed much beyond living in caves and hitting stuff with cudgels.

  • This is indeed shocking, as everyone knows we all thought that we had perfected the art of artificial human intelligence and that there was no more room for improvement.

  • by bitt3n ( 941736 ) on Tuesday May 27, 2014 @08:53AM (#47098955)
    What if that supposed pedestrian really is no more than a clear stretch of road, and it is we who err in notifying the road's next of kin, who are themselves no more than a dirt path and a pedestrian walkway?
  • by sqlrob ( 173498 ) on Tuesday May 27, 2014 @08:54AM (#47098961)

    A dynamic non-linear system [wikipedia.org] has some weird boundary conditions. Who could ever have predicted that? </s>

    Why wasn't this assumed from the beginning, and then shown not to be an issue?

    • by ponos ( 122721 )

      The main advantage of learning algorithms like neural nets is that they can automagically generalise and produce classifiers that are relatively robust. I wouldn't be surprised at all if a neural net misclassified an extreme artificial case that could fool humans (say, some sort of geometric pattern generated by a complicated function or similar artificial constructs). Here, however, it appears that the input is really, really similar and simple for humans to recognize. Obviously the researchers have recreat

    • by wanax ( 46819 ) on Tuesday May 27, 2014 @09:36AM (#47099295)

      This is a well known weakness with back-propagation based learning algorithms. In the learning stage it's called Catastrophic interference [wikipedia.org], in the testing stage it manifests itself by mis-classifying similar inputs.

    • love the unbalanced sarcasm tag.

  • It is almost like the article is saying that something a computer did was not perfectly in line with human reasoning. We should stop being life-centric and realize that if the computer says two pictures of the same cat should not be classified in the same way, the computer is simply wiser than we are, and if we don't believe it the computer will beat our asses at chess and then we'll see who is smarter.
  • by biodata ( 1981610 ) on Tuesday May 27, 2014 @09:03AM (#47099025)
    Neural networks are only one way to build machine learning classifiers. Everything we've learnt about machine learning tells us not to rely on a single method/methodology, and that we will consistently get better results by taking the consensus of multiple methods. We just need to make sure that a majority of the other methods we use have different blind spots from the ones the neural networks have.
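    For what it's worth, a generic scikit-learn sketch of that consensus idea (not tied to the paper; X_train/y_train stand in for whatever labelled data you have): combine classifiers built on quite different principles and let them vote, so one family's blind spots have a chance of being outvoted.

    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    ensemble = VotingClassifier(
        estimators=[
            ("net", MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)),
            ("forest", RandomForestClassifier(n_estimators=200)),
            ("svm", SVC(probability=True)),
        ],
        voting="soft",  # average predicted class probabilities rather than hard labels
    )
    # ensemble.fit(X_train, y_train); ensemble.predict(X_test)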
    • by slew ( 2918 )

      OR, perhaps we use the same method but look at the data a different way (e.g., like a turbo code uses the same basic error correction code technology, but permutes the input data)... I suspect the brain does something similar to this, but I have no evidence...

    • by neiras ( 723124 )

      It seems that a mechanism to determine the "trustworthiness" of each method and thus weighting its individual influence in the vote would make sense. That way the system would weed out the models that produce incorrect results.

      Then we feed the system a steady diet of Fox News and watch it downvote the lonely "liberal" model.

      Man, this stuff makes me want to go back to school. Highly interesting.

  • When we ride a bicycle the brain constantly adjusts for error. We try to travel in a straight line but it really is a series of small curves as we adjust and keep trying to track straight. Processes such as vision probably do the same thing. As we quickly try to identify items it probably turns into a "this not that" series until the brain eventually decides we have gotten it right. Obviously this all occurs constantly and at rather high, internal, speeds.
  • We know there will be errors with the neural nets. There will be edge cases (like the one described with the cat), corner cases, bizarre combination of inputs that result in misclassifications, wrong answers and bad results. This happens in the real world too. People misclassify things, get things wrong, screw up answers.

    The lesson is not to trust the computer to be infallible. We have trusted the computer to do math perfectly. 1 + 1 = 2, always, but is not so for neural nets. It is one thing if the neura
  • The Probably Approximately Correct (PAC) learning model is what formally justifies the tendency of neural networks to "learn" from data (see Wikipedia).

    While the PAC model does not depend on the probability distribution which generates training and test data, it does assume that they are *the same*. So by "adversarially" choosing test data, the researchers are breaking this important assumption. Therefore it is in some ways not surprising that neural networks have this vulnerability. It sh

    • by ceoyoyo ( 59147 )

      The evil "left as an exercise for the reader" part of textbooks where the author shows you a bunch of examples then gives you a problem that's related to those, but just enough different in some small way that it's fiendishly difficult. Or, more generally, the trick question.

  • "does the same result apply to biological networks?"
    Of course. We just rely on other parts of our brain and use logic to throw these out. I once saw an old carpet rolled up on the side of the road, and OMG it looked like a rhino. But I knew this was not a rhino.

    • News from the future, rhinos find success adapting to suburban environments with discarded carpet camouflage, people slow to adapt.

  • by sackbut ( 1922510 ) on Tuesday May 27, 2014 @09:34AM (#47099275)
    This seems to be almost a form of cognitive bias as defined and studied by Tversky and Kahneman. I direct you to http://en.wikipedia.org/wiki/L... [wikipedia.org]. Or, as previously pointed out, optical illusions seem to be an equivalent.
  • I wonder how much it will pay the first person who sorts this one out. I wonder if this is happening in the human brain?
  • a long time ago..... If, say, a reef fish cannot distinguish a coral head from a barracuda, then it gets eliminated pretty quickly. There must be a flaw in the artificial neural nets.
    • by ceoyoyo ( 59147 )

      Yeah right. Brains make mistakes all the time, but natural selection has tuned them to err on the side of paranoia.

  • by peter303 ( 12292 ) on Tuesday May 27, 2014 @09:46AM (#47099361)
    NN technology is 60 years old. Some A.I. pundits disliked it from the beginning, such as Minsky in his 1969 book Perceptrons. Many of these flaws have been known for a LONG time.
  • > how is it so different from the neural networks that are trying to mimic it?

    These neural networks are not trying to mimic the brain.
  • by Jodka ( 520060 ) on Tuesday May 27, 2014 @10:04AM (#47099517)

    This sounds similar to the Napoleon Dynamite Problem, encountered in the Netflix Prize challenge of predicting user ratings for particular films. For most films, knowledge of an individual's preferences for some films was a good predictor of their preferences for other films. Yet preferences for some particular films were hard to predict, notably the eponymous Napoleon Dynamite.

    Neural network identification and automated prediction of individual film ratings are both classification tasks. Example sets for both of these problems contain particular difficult-to-classify examples. So perhaps this phenomenon of "adversarial examples" described in the Szegedy et al. article is more generally a property of datasets and classification, not an artifact of implementing classification using neural networks.

  • by jcochran ( 309950 ) on Tuesday May 27, 2014 @10:11AM (#47099553)

    incompleteness theorem. And as some earlier posters stated, the correction is simple: simply look again. The second image collected will be different from the previous one and, if the NN is correct, will resolve to the correct interpretation.

  • by gurps_npc ( 621217 ) on Tuesday May 27, 2014 @10:11AM (#47099557) Homepage
    "Optical illusion" is the term we use for errors in human neural networks. If you do a Google search for optical illusions you will find many examples. From pictures that look like they are 3D but are just 2D, to sizes that appear to change but don't, we make lots of errors. Not to mention the many, many cases where we think "THAT'S A FACE": whether it is Jesus on toast, a face on the Moon, or just some trees on a mountainside, we are hardwired to assume things are faces.
  • I think the example of mis-classifying pedestrians as clear road is over-reaching a bit to find a problem.

    On the other hand, the AI might end up in trouble when deciding to run over cats and avoid dogs.

    • So let's not overreach; we don't want a dog misidentified as a person when there is a choice to hit a tree, hit a person, or hit a dog. The solution is clearly to run over the beast.

  • Would it be possible to build a neural net that recognizes when one of these blind spots has been hit? If it's reliably misidentified across neural nets as they claim, there should be enough common attributes for a different neural net to train on.
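    Purely speculative, but such a detector might look something like this: a second network trained to tell clean inputs from adversarial ones generated against the first network (PyTorch sketch; make_adversarial is a hypothetical helper, and the inputs are assumed to be 28x28 grayscale).

    import torch
    import torch.nn as nn

    detector = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128), nn.ReLU(),
        nn.Linear(128, 2),          # 0 = looks clean, 1 = looks like a blind spot was hit
    )

    def detector_batch(model, make_adversarial, images, labels):
        # Build a balanced batch of clean and adversarial inputs for the detector.
        adv = torch.stack([make_adversarial(model, x, y) for x, y in zip(images, labels)])
        xs = torch.cat([images, adv])
        ys = torch.cat([torch.zeros(len(images), dtype=torch.long),
                        torch.ones(len(adv), dtype=torch.long)])
        return xs, ys  # train the detector on these with cross-entropy as usual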

  • This reminds me of the problems with perceptrons [webofstories.com] (an early, linear neural net), which caused AI scientists to lose interest in them, until neural nets came along.

  • What AI really needs is a wife that nags it if it f8cks up.

    Humans seem pretty subject to close-call foul-ups too. When proofreading my own writing, I often don't spot a problem because my mind translates the pattern as I intended it, not as I wrote it. For example, if I meant to write "Finding the Right Person for the Job..." but instead wrote it as "Finding the Right Pearson for the Job..." (note the "a"), there's a fairly high chance I'd miss it, because the pattern of what I meant clogs my objectivity, even

  • I bet this is a case of overfitting. The network is too "large" (at least in some dimensions) with respect to the data that it is required to approximate/classify.

  • If the misclassification only occurs on rare inputs then any random perturbation of that input is highly likely to be classified correctly.

    The fix therefore (likely what occurs in the brain) is to add noise and average the results. Any misclassified nearby input will be swamped by the greater number of correctly classified ones.
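    A quick sketch of that add-noise-and-average fix (my own illustration, assuming a model that returns class scores for a batch of images): classify several noisy copies of the input and take the majority label.

    import torch

    def smoothed_predict(model, image, samples=20, sigma=0.05):
        noisy = image.unsqueeze(0) + sigma * torch.randn(samples, *image.shape)
        votes = model(noisy.clamp(0.0, 1.0)).argmax(dim=1)  # one label per noisy copy
        return torch.mode(votes).values.item()              # the majority label wins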
