The Flaw Lurking In Every Deep Neural Net
mikejuk (1801200) writes "A recent paper, 'Intriguing properties of neural networks,' by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, a team that includes authors from Google's deep learning research project, outlines two pieces of news about the way neural networks behave that run counter to what we believed — and one of them is frankly astonishing. Every deep neural network has 'blind spots' in the sense that there are inputs that are very close to correctly classified examples yet are misclassified. To quote the paper: 'For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.' To be clear, the adversarial examples looked to a human like the original, but the network misclassified them. You can have two photos that look not only like a cat but like the same cat, indeed the same photo, to a human, yet the machine gets one right and the other wrong. What is even more shocking is that the adversarial examples seem to have some sort of universality: a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set. You might be thinking, 'So what if a cat photo that is clearly a photo of a cat is recognized as a dog?' But change the situation just a little: what does it matter if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road? There is also the philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask: does the same result apply to biological networks? Put more bluntly, 'Does the human brain have similar built-in errors?' If it doesn't, how is it so different from the neural networks that are trying to mimic it?"
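To make the 'blind spot' idea concrete: the paper finds its adversarial examples with a box-constrained L-BFGS search on real deep networks, but the gist can be sketched with a single gradient step on a toy linear softmax "network". Everything below (the weights, the data, the step size) is made up for illustration; it is not the authors' method.

```python
# Toy sketch: push an input a small distance in the direction that increases
# the classifier's loss for its current label. The model is a random linear
# softmax classifier, not a deep net, and the step rule is a simple signed
# gradient step rather than the paper's box-constrained L-BFGS search.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))           # 3 classes, 5 input features (made up)
b = np.zeros(3)
x = rng.normal(size=5)                # an input the model currently classifies
y = int(np.argmax(W @ x + b))         # ... and the label it assigns to it

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# For a linear model, the cross-entropy gradient w.r.t. the input is
# dL/dx = W^T (softmax(Wx + b) - onehot(y)).
grad_x = W.T @ (softmax(W @ x + b) - np.eye(3)[y])

eps = 0.25                            # small step, so x_adv stays "close" to x
x_adv = x + eps * np.sign(grad_x)     # step uphill on the loss for the current label

print("original label:      ", y)
print("perturbed prediction:", int(np.argmax(W @ x_adv + b)))
```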
Errors (Score:5, Insightful)
Of course the human brain has errors in its pattern matching ability. Who hasn't seen something out of the corner of their eye and thought it was a dog when really it was a paper bag blowing in the wind? The brain makes snap judgments because there's a trade-off between correctness and speed. If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all information to be in before making a 100% accurate decision. This is the basis of intuition.
I don't think a computer AI will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human. When you call a business and get one of those automated answering things and it asks you, "Now please, tell me the reason for your call. You can say 'make a payment,' 'inquire about my loan...'" etc., we get really pissed off when we say 'make a payment' and it responds "you said, cancel my account, did I get that right?" But when a human operator doesn't hear you correctly and asks you to repeat what you said, we say "Oh, sure," and repeat ourselves without a second thought. There's something about it being a machine that makes us demand perfection in a way we'd never expect from a human.
Re:Errors (Score:5, Insightful)
Actually, not only is this common in humans, but the "fix" is the same for neural networks as it is for humans. When you misidentify a paper bag as a dog, you only do so for a split second. Then it moves (or you move, or your eyes move - they constantly vibrate so that the picture isn't static!), and you get another, slightly different image milliseconds later which the brain does identify correctly (or at least flags with "wait a minute, there's a confusing exception here, let's turn the head and try a different angle").
The neural network "problem" they're talking about was while identifying a single image frame. In the context of a robot or autonomous car, the same process a human goes through above would correct the issue within milliseconds, because confusing and/or misleading frames (at the level we're talking about here) are rare. Think of it as a realtime error detection algorithm.
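A minimal sketch of that "realtime error detection" idea, assuming nothing more than some per-frame classifier (`classify_frame` below is a hypothetical stub): vote over the last few frames, so a single misleading frame is outvoted.

```python
# Sketch: majority vote over a sliding window of recent frames, so one
# misclassified frame cannot flip the overall decision on its own.
# classify_frame() is a stub standing in for any per-frame neural network.
from collections import Counter, deque

def classify_frame(frame):
    return frame["label"]            # stub: a real system would run the network here

class TemporalVoter:
    def __init__(self, window=5):
        self.recent = deque(maxlen=window)

    def update(self, frame):
        self.recent.append(classify_frame(frame))
        return Counter(self.recent).most_common(1)[0][0]   # majority label so far

voter = TemporalVoter(window=5)
stream = [{"label": "pedestrian"}] * 2 + [{"label": "clear road"}] + [{"label": "pedestrian"}] * 2
for frame in stream:
    decision = voter.update(frame)
print(decision)   # "pedestrian": the one confusing frame was outvoted
```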
Re:Errors (Score:5, Interesting)
The AI system in an autonomous car is much more than a Boltzmann machine running on a video card. The problem for man or machine when driving a car is that its "life" depends on predicting the future, and neither man nor machine can confirm the prediction before the future happens. If the universe fails to co-operate with the prediction, it's too late. What's important from a public safety POV is who gets it right more often; if cars killing people were totally unacceptable, we wouldn't allow cars in the first place.
Re: (Score:2)
Neural networks are Turing-complete analog computers programmed by setting the weights within them... Hash algorithms are programs that, given an input, return an output that is very similar to random, except for the fact that it's completely deterministic.
What kind of resemblance did you see between them?
Re: (Score:3)
Even worse, think of all those optical illusions you come across that are based on pointing out errors in our visual processing systems. Those don't go away even if you move your eyes around.
Re:Errors (Score:5, Interesting)
The neural network "problem" they're talking about was while identifying a single image frame
Yes, and even more important: they designed an algorithm to generate exactly the images that the network misperformed on. The nature of these images is explained in the paper:
Indeed, if the network can generalize well, how can it be confused by these adversarial negatives, which are indistinguishable from the regular examples? The explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near virtually every test case.
A network that generalizes well correctly classifies a large part of the test set. If you had the perfect dog classifier, trained with millions of dog images and tested with 100% accuracy on its test set, it would be really weird if these 'adversarial negatives' still existed. Considering that the networks did not generalize 100%, it isn't at all surprising that they made errors on seemingly easy images (humans would probably have very little problem getting 100% accuracy on the test sets used). That is just how artificial neural networks are currently performing.
The slightly surprising part is that the misclassified images seem so close to those in the training set. If I'm interpreting the results correctly (IANANNE), what happens is that their algorithm modifies the images in such a way that the feature detectors in the 10-neuron-wide penultimate layer fire just under the threshold required for the final binary classifier to fire.
Maybe the greatest thing about this research is that it contains a new way to automatically increase the size of the training set with these meaningful adversarial examples:
We have successfully trained a two layer 100-100-10 non-convolutional neural network with a test error below 1.2% by keeping a pool of adversarial examples, a random subset of which is continuously replaced by newly generated adversarial examples, and which is mixed into the original training set all the time. For comparison, a network of this size gets to 1.6% errors when regularized by weight decay alone and can be improved to around 1.3% by using carefully applied dropout. A subtle, but essential detail is that adversarial examples are generated for each layer output and are used to train all the layers above. Adversarial examples for the higher layers seem to be more useful than those on the input or lower layers.
It might prove to be much more effective in terms of learning speed than just adding noise to the training samples, as it seems to grow the training set based on which features the network already uses in its classification, instead of the naive noise approach. In fact, the authors hint at exactly that:
Already, a variety of recent state of the art computer vision models employ input deformations during training for increasing the robustness and convergence speed of the models [9, 13]. These deformations are, however, statistically inefficient, for a given example: they are highly correlated and are drawn from the same distribution throughout the entire training of the model. We propose a scheme to make this process adaptive in a way that exploits the model and its deficiencies in modeling the local space around the training data.
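A rough sketch of the training loop that passage describes: keep a pool of adversarial examples, keep replacing a random subset of it with freshly generated ones, and mix some of the pool into every training batch. The helpers below are stubs (they don't implement the paper's 100-100-10 network or its per-layer adversarial generation), so treat this as the shape of the algorithm, not a reproduction.

```python
# Sketch of an adversarial-pool training loop. make_adversarial() and
# train_step() are stubs; a real version would perturb inputs via the model
# and run an actual gradient update on the mixed batch.
import random

def make_adversarial(model, x, y):
    return (x + 0.1, y)               # stub perturbation

def train_step(model, batch):
    pass                              # stub gradient update

def train_with_adversarial_pool(model, data, epochs=3, pool_size=200,
                                refresh=20, mix=0.5, batch_size=8):
    pool = []                                         # pool of (x_adv, y) pairs
    for _ in range(epochs):
        fresh = [make_adversarial(model, x, y)
                 for x, y in random.sample(data, refresh)]
        if len(pool) >= pool_size:
            # continuously replace a random subset of the full pool
            for victim, example in zip(random.sample(range(len(pool)), refresh), fresh):
                pool[victim] = example
        else:
            pool.extend(fresh)

        for i in range(0, len(data), batch_size):
            batch = list(data[i:i + batch_size])
            n_adv = min(int(mix * len(batch)), len(pool))      # adversarial share of the batch
            batch = batch[:len(batch) - n_adv] + random.sample(pool, n_adv)
            train_step(model, batch)
    return model

# toy usage with fake scalar "images" and ten classes
data = [(float(i), i % 10) for i in range(100)]
train_with_adversarial_pool(model=None, data=data)
```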
Re:Errors (Score:5, Insightful)
The slightly surprising part is that the misclassified images seem so close to those in the training set.
With emphasis on "slightly". This is a nice piece of work, particularly because it is constructive--it both demonstrates the phenomenon and gives us some idea of how to replicate it. But there is nothing very surprising about demonstrating "non-linear classifiers behave non-linearly."
Everyone who has worked with neural networks has been aware of this from the beginning, and in a way this result is almost a relief: it demonstrates for the first time a phenomenon that most of us suspected was lurking in there somewhere.
The really interesting question is: how dense are the blind spots relative to the correct classification volume? And how big are they? If the blind spots are small and scattered then this will have little practical effect on computer vision (as opposed to image processing) because a simple continuity-of-classification criterion will smooth over them.
This is not entirely true (Score:2)
It's true that there is more information in video data, but the problem described in the article is certainly not caused by the restriction to stills.
Your eye moves over a still (Score:4, Informative)
When analyzing a still picture/scene, your eye moves its high-resolution central area (the fovea) around the low-level visual features of the image. Thus the image is processed over time as many different images.
The images in that time sequence occur at slightly different locations of the visual light-sensor array (visual field) and at slightly different angles and each image has considerably different pixel resolution trained on each part of the scene.
So that would still almost certainly give some robustness against these artifacts (unlucky particular images) being able to fool the system.
Time and motion are essential in disambiguating 3D/4D world with 2D imaging.
Also, I would guess that having learning algorithms that preferentially try to encode a wide diversity of different kinds of low level features would also protect against being able to be fooled, even by a single image, but particularly over a sequence of similar but not identical images of the same subject.
Re:Errors (Score:4, Interesting)
For some humans, it's a smack in the head, though.
The human wetware is powerful but easy to mislead. For example, the face-recognition bit in human vision is extremely easy to fool - which is why we see a face on the Moon, or a face on a rock on Mars, or Jesus on toast, a potato chip, or whatever.
Human vision is especially vulnerable - see optical illusions. The resolution of the human eye is quite low (approx. 1MP concentrated in a tiny area of central vision, and another 1MP for peripheral vision); however, the vision system is coordinated with the motor system to control the eye muscles, so the eyeball moves ~200 times a second to get a higher-resolution image from a low-resolution camera (which results in an image that is approximately 40+MP over the entire visual field).
But then you have blind spots which the wetware interpolates (to great amusement at times), and annoying habits: unidentifiable objects that are potentially in our way can lead to target fixation while the brain attempts to identify them.
Hell, humans are very vulnerable to this - the brain is wired for pattern recognition, and seeing patterns where there are none is a VERY common human habit.
Fact is, the only reason we're not constantly making errors is because we do just that - we take more glances and more time to look closer to give more input to the recognition system.
Likewise, an autonomous vehicle would have plenty of information to derive recognition from - including a history of frames. These vehicles will have a past history of the images they received and processed, and the new anomalous ones could be temporally compared with the images before and after.
Re: (Score:2)
This independent correctly identifies you as an asshat, and a[n anonymous] coward.
Re: (Score:3, Interesting)
My boss was a hardware engineer and had a total blind spot for software. We involved him many times in discussions to make sure he understood the different layers of software, but everything was in vain.
It used to create funny situations. For example, one of the developers was developing a UI and had a bug in his code. Unfortunately he was stuck for an hour when my boss happened to ask him how he was doing. After hearing the problem, he jumped and said the
Re: (Score:3)
Re:Errors (Score:4, Insightful)
I don't think a computer AI will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human.
But we do expect some levels of performance, even from humans. You have to pass certain tests before you are allowed to drive a car or do neurosurgery. So, we do need some, relatively tight, margins of error before a machine can be acceptable for certain tasks, like driving a car. An algorithm that has provable bias and repeatable failures is much less likely to be acceptable.
The original article also mentions the great similarity between inputs. We expect a human to misinterpret a voice in a noisy environment or misjudge distance and shapes on a stormy night. However, we would be really surprised if "child A" is classified as a child, while the similar-looking "child B" is misclassified as a washing machine. Under normal conditions, humans don't make these kinds of errors.
Finally, even an "incomplete" system (in a Gödelian sense) can be useful if it is stable for 99.999999% of inputs. So, fuzzy and occasionally wrong is OK in real life. However, this will have to be proven and carefully examined empirically. We can't just shrug this kind of result away. Humans have been known to function a certain way for thousands of years. A machine will have to be exhaustively documented before such misclassifications are deemed functionally insignificant.
Re: (Score:3)
Under normal conditions, humans don't make these kinds of errors.
In this case however, it should be noted that the humans are ALSO in error. They see both images as the same, when the images are in fact not the same.
With this realization, there is no remaining controversy here. Both the wetware and the software use different methodologies, so it's no surprise that they have different error distributions.
A far better method of comparing the two systems with regard to "accuracy" would be to throw many family photo albums at both the wetware and software and have b
Re: (Score:3)
Ah, but here's a question for you: Are the humans in error, or have the humans applied a better threshold for "same enough"? Possibly one with some evolutionary advantage?
Say I set my camera to continuously take pictures while holding the shutter button, and I take 10 frames of something.
The delta between those frames could be very small, or very large depe
Re:Errors, and then there are cringeworthies... (Score:3)
AI question I heard 30yrs ago... (Score:5, Funny)
Re: (Score:3)
If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all information to be in before making a 100% accurate decision.
As someone whose brain does err on the side of tiger regularly, and there are no tigers, I'd like to point out that it's not nearly as harmless as you may think.
Rustling bush tiger perception (Score:2)
not so good for your hunting buddies.
(or bed partner for that matter.)
Don't shoot til you see the golds of their eyes.
Re: (Score:2)
Re: (Score:2)
The thing is, usually the mistakes made by a computer appear obvious, as in "even an idiot wouldn't make that mistake". For example, a human would have to have really big problems with hearing or language to hear "make a payment" as "cancel my account". If the sound quality is bad, the human would ask me to repeat what I said, and I would say it more slowly or say the same thing in other words.
Same thing with cars, people can understand the limits of other people (well, I guess I probably wouldn't be able to avoid th
Re: (Score:2)
Also, to err is human (or so the saying goes), but a machine should operate without mistakes or it is broken (the engine of my car runs badly when it is cold - but that's not because the car doesn't "want" to go or doesn't "like" cold, it's just that some part in it is defective (most likely the carburetor needs cleaning and new seals)).
I guess that's what I'm saying. If you make a computer "mind" that can think like a human, a lot of what lets us recognize patterns is intuition: the ability of the brain to fill in gaps in incomplete information. But that basically requires that we're going to be wrong some of the time, because the information is by definition incomplete. Your car example is for a simple machine. I'm talking about a pattern matching engine that functions as well as (or better than) the human mind. A machine that thinks like a
Re: (Score:3)
This sounds so reminiscent of things like the Mandelbrot set, where there are always adjacent points with different outcomes, no matter how far down you go. Who knows if it really is related?
Re: (Score:3)
This sounds so reminiscent of things like the Mandelbrot set, where there are always adjacent points with different outcomes, no matter how far down you go. Who knows if it really is related?
Good point, and yes, it probably is.
Re:Errors (Score:5, Insightful)
Don't forget the issue that men seem to have. We can be looking for something... say a bottle of ketchup... and stare right at it in the fridge for minutes until we find it. Often the problem is that there is something different about the bottle that doesn't match what our imaginations say we are looking for. It was a plastic bottle but you were expecting glass. The bottle was upside down; you were expecting it to be right side up. Sometimes these simple little things trick your mind, and you just don't see what is right in front of your face. It almost makes you wonder how much more stuff we are not seeing because we just don't expect to see it, or don't want to see it.
Re:Errors (Score:5, Funny)
Just made me think of Hitchhiker's Guide to the Galaxy where they land the spaceship in the middle of London, and instead of using a cloaking device to hide it they surround it with a Somebody Else's Problem field.
Re: (Score:3)
Re:Errors (Score:4, Informative)
My phone does something like that with its voice command stuff. If it can't make out what you say, it will say "Sorry, I didn't get that. Could you repeat it?" On some kinds of ambiguous input it will say "I think you asked for X. Is that correct?"
For fuck's sake, it's 2013. (Score:3, Insightful)
A neural network is not by any stretch of the imagination a simulation of how the brain works. It incorporates a few principles similar to brain function, but it is NOT an attempt to re-build a biological brain.
Anybody relying on "it's a bit like how humans work lol" to assert the reliability of an ANN is a fucking idiot, and probably trying to hawk a product in the commercial sector rather than in academia.
Re:For fuck's sake, it's 2013. (Score:5, Funny)
No, it really is not 2013!
Optical illusions? (Score:4, Insightful)
Aren't optical illusions pretty much something like this?
And, my second question, just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake and/or we're seeing a limitation in the technology?
Maybe the problem isn't with the biology, but the technology?
Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)
Re: (Score:2)
And, my second question, just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake and/or we're seeing a limitation in the technology?
Maybe the problem isn't with the biology, but the technology?
Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)
You're just repeating the question asked in the summary.
Re: (Score:2)
No, I'm saying "why would we assume a similar flaw in a biological system because computer simulations have a flaw".
I think jumping to the possibility that biological systems share the same weaknesses as computer programs is a bit of a stretch.
Re: (Score:2)
I'm saying "why would we assume a similar flaw in a biological system because computer simulations have a flaw".
Nobody's assuming; scientists are asking a question.
I think jumping to the possibility that biological systems share the same weaknesses as computer programs is a bit of a stretch.
I've not come across the phrase "jumping to the possibility" before. If I 'jump' to giving this a possibility of 2%, is that a 'stretch'?
They are *not* errors... (Score:2, Interesting)
Re:They are *not* errors... (Score:5, Funny)
The fact is that if you are within the network itself, the adversarial are held in-frame alongside other possibilities, and the network only tilts towards one when the prevailing system requires it through external stimulus.
Tron? Is that you? Speak to me, buddy.
Google's algorithm is not a neural network (Score:5, Informative)
Re:Google's algorithm is not a neural network (Score:5, Interesting)
Just to back up what James Clay said, I took a course from Sebastian Thrun (the driving force behind the Google cars) on programming robotic cars, and no neural networks were involved, nor mentioned with regards to the Google car project. As far as I can tell, if the LIDAR says something is in the way, the deterministic algorithms attempt to avoid it safely; if you can't avoid it safely, you brake and halt. That's it. Maybe someone who actually worked on the Google car can comment further?
Does anyone know of any neural networks used in potentially dangerous conditions? This study: www-isl.stanford.edu/~widrow/papers/j1994neuralnetworks.pdf states that
accuracy and robustness issues need to be addressed when using neural network algorithms, and gives a baseline of more than 95% accuracy as a useful performance metric to aim for. This makes neural nets useful for things like auto-focus in cameras and handwriting recognition for tablets, but means that using a neural network as a primary decision-maker to drive a car is perhaps something best left to video games (where it has been used to great success) rather than real cars with real humans involved.
Re: (Score:3)
If I recall correctly, there are neural networks being used in medical diagnostics. There is a recognition that they have flaws, but then again, so do human beings.
Of course, they are supposed to inform the doctor, not be blindly followed. Which means in N years, they will be blindly followed.
Re: (Score:3)
Your knowledge is out of date. Support vector machines can replace shallow neural networks. The deep ones have serious, mathematically proven advantages over shallow ANNs and SVMs.
If you were taking a machine learning class a year ago that said nobody is using ANNs, then it was five to ten years out of date. Google has put quite a few resources into them, including buying (er, hiring) one of the pioneers of deep networks.
The brain has multiple neural nets (Score:5, Insightful)
The human brain has multiple neural nets and a voter.
I am face blind and completely non-visual, but I do recognize people. I can do so because, while the primary way that we recognize people is by encoding a schematic image of the face, many other nets are in play. For example, I use hair style, clothing, and height. So does everybody, though; for most people that just gives extra confidence.
Conclusion: Neural nets in your brain having blind spots is no problem whatsoever. The entire system is highly redundant.
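A toy sketch of the "multiple nets and a voter" picture, assuming a handful of independently trained recognizers (face, hair, height) whose outputs get combined by majority vote; the sub-recognizers here are trivial stubs, not real networks.

```python
# Sketch: several independent recognizers each vote, and a blind spot in any
# single one is usually outvoted by the rest. The sub-recognizers are stubs
# standing in for separately trained networks over different cues.
from collections import Counter

def face_net(person):   return person["face_guess"]
def hair_net(person):   return person["hair_guess"]
def height_net(person): return person["height_guess"]

def recognize(person, nets=(face_net, hair_net, height_net)):
    votes = [net(person) for net in nets]
    return Counter(votes).most_common(1)[0][0]

# the face net hits a blind spot, but the other cues still carry the vote
person = {"face_guess": "stranger", "hair_guess": "Alice", "height_guess": "Alice"}
print(recognize(person))   # -> "Alice"
```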
Re:The brain has multiple neural nets (Score:5, Interesting)
Re:The brain has multiple neural nets (Score:5, Interesting)
Your model of the brain as multiple neural nets and a voter is a good and useful simplification. I think we still know relatively little about how accurate it is. You would expect evolution to have optimized the brain to avoid blind spots that threatened survival, and redundancy makes sense as a way to do this.
However, I wouldn't classify blind spots as 'no problem whatsoever'. If the simple model of multiple neural nets and a voter is a good one, then there will be cases where several nets give errors and the conclusion is wrong. Knowing what kinds of errors are produced after what kind of training is critical to understanding when a redundant system will fail. In the end, though, I suspect that the brain is quite a bit more complicated than a collection of neural nets like those this research is working with.
Re:The brain has multiple neural nets (Score:4, Insightful)
Your model of the brain as multiple neural nets and a voter is a good and useful simplification.
So the 'voter' takes multiple inputs and combines these into a single output?
Only if you have no idea how a neural network works is it a useful simplification. The 'multiple nets' in the example given by GP mainly describe many input features.
Ensemble neural nets (Score:3, Interesting)
The brain has multiple neural nets (Score:3, Interesting)
Indeed, remembering the experiments done in the 1960s by Sperry and Gazzaniga on patients who had a divided corpus callosum, there are clearly multiple systems that can argue with each other about recognising objects. Maybe part of what makes us really good at it, is not relying on one model of the world, but many overlaid views of the same data by different mechanisms.
Re: (Score:2)
It would be interesting to learn how these neural networks interact. Is it a single neural network, or several independent neural networks that have points where they interact? Or are they interdependent neural networks, where some parts are fully independent and others mix with each other?
Re: (Score:2)
It would be interesting to learn how these neural networks interact. Is it a single neural network, or several independent neural networks that have points where they interact? Or are they interdependent neural networks, where some parts are fully independent and others mix with each other?
The more I read, the more it looks like one big mess. There are areas with functional optimization, which is why a stroke in a certain part of the brain tends to impact most people in the same way. However, lots of operations that we might think of as simple involve many different parts of the brain working together.
My sense is that the brain is a collection of many interconnected sub-networks. Each sub-network forms certain patterns during development, with the major interconnections forming then as well. The structure
Re: (Score:2)
Neural nets in your brain having blind spots is no problem whatsoever. The entire system is highly redundant.
..."no problem whatsoever" in the sense, that it doesn't kill enough people to have impact on human population size, and "highly redundant" also on the sense that there usually are many spare people to replace those killed/maimed by such brain blind spots.
Are they the same thing? (Score:3)
While I share your view that expecting the mind to be explained as a single neural network (in the Comp. Sci. sense) is probably simplistic, I don't think modeling it as multiple neural nets and a voter fixes the problem. I am not quite sure about this, but isn't a collection of neural nets and a voter equivalent to a single neural net? Or, to put it a slightly different way, for any model that consists of multiple neural nets and a voter, there is a single neural net that is functionally identical? I am as
Re: (Score:2)
For example, I use hair style, clothing, and height
And then one day they radically change their hair style and wear a new outfit, causing hilarity to ensue.
How shocking is that? (Score:3)
All neural nets try to predict, and predictions can be foiled.
People can be fooled by optical illusions, too.
Re: (Score:2)
All neural nets try to predict, and predictions can be foiled.
People can be fooled by optical illusions, too.
The main difference being that optical illusions are designed to fool the human eye, and thus are intentional, whereas the computer in this case is being fooled by regular stuff, i.e. not intentional.
If the human brain failed to recall unique individuals because of slight changes in their appearance, I doubt we'd have progressed much beyond living in caves and hitting stuff with cudgels.
Re: (Score:2)
Re: (Score:2)
Familiarity is one thing, not recognizing Tom because he's wearing glasses today is something completely different.
Shocking! (Score:2)
This is indeed shocking, as everyone knows we all thought that we had perfected the art of artificial human intelligence and that there was no more room for improvement.
how do we know the neural network is wrong? (Score:3, Funny)
Well what do you know (Score:4, Informative)
A dynamic non-linear system [wikipedia.org] has some weird boundary conditions. Who could ever have predicted that? </s>
Why wasn't this assumed from the beginning, and then shown not to be an issue?
Re: (Score:3)
The main advantage of learning algorithms like neural nets is that they can automagically generalise and produce classifiers that are relatively robust. I wouldn't be surprised at all if a neural net misclassified an extreme artificial case that could fool humans (say, some sort of geometric pattern generated by a complicated function or similar artificial constructs). Here, however, it appears that the input is really, really similar and simple to recognize for humans. Obviously the researchers have recreat
Re:Well what do you know (Score:4, Informative)
This is a well known weakness of back-propagation based learning algorithms. In the learning stage it's called Catastrophic interference [wikipedia.org]; in the testing stage it manifests itself by misclassifying similar inputs.
Re: (Score:2)
love the unbalanced sarcasm tag.
I don't believe it (Score:2)
Average across models (Score:5, Informative)
Re: (Score:2)
OR, perhaps we use the same method but look at the data a different way (e.g., like a turbo code uses the same basic error correction code technology, but permutes the input data)... I suspect the brain does something similar to this, but I have no evidence...
Re: (Score:2)
It seems that a mechanism to determine the "trustworthiness" of each method and thus weighting its individual influence in the vote would make sense. That way the system would weed out the models that produce incorrect results.
Then we feed the system a steady diet of Fox News and watch it downvote the lonely "liberal" model.
Man, this stuff makes me want to go back to school. Highly interesting.
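One way to sketch that "trustworthiness" weighting, under the assumption that each model's weight is simply its measured held-out accuracy (all the models and numbers below are made up):

```python
# Sketch: each model's vote counts in proportion to its held-out accuracy,
# so a model that is chronically wrong loses influence over time.
from collections import defaultdict

def weighted_vote(predictions, weights):
    # predictions: {model name -> label}, weights: {model name -> trust score}
    scores = defaultdict(float)
    for name, label in predictions.items():
        scores[label] += weights[name]
    return max(scores, key=scores.get)

weights = {"net_a": 0.95, "net_b": 0.90, "net_c": 0.55}   # assumed validation accuracies
predictions = {"net_a": "cat", "net_b": "cat", "net_c": "dog"}
print(weighted_vote(predictions, weights))   # -> "cat"
```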
The Curve (Score:2)
Errors, what do we do (Score:2)
The lesson is not to trust the computer to be infallible. We have trusted the computer to do math perfectly. 1 + 1 = 2, always, but that is not so for neural nets. It is one thing if the neura
pac learning model (Score:2)
The Probably Approximately Correct (PAC) learning model is what formally justifies the tendency of neural networks to "learn" from data (see Wikipedia).
While the PAC model does not depend on the probability distribution which generates training and test data, it does assume that they are *the same*. So by "adversarially" choosing test data, the researchers are breaking this important assumption. Therefore it is in some ways not surprising that neural networks have this vulnerability. It sh
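For reference, one standard textbook statement of the PAC guarantee (this is general background, not text from the comment above); the key point is that both the sample and the error of the learned hypothesis are measured under the same distribution D, which is exactly the assumption an adversarially chosen input violates:

```latex
% Realizable PAC guarantee, textbook form. Both the training sample S and the
% error of the learned hypothesis h_S are defined with respect to the SAME
% distribution D; adversarially chosen inputs are not drawn from D.
\[
\Pr_{S \sim D^m}\!\left[\operatorname{err}_D(h_S) \le \epsilon\right] \ge 1 - \delta,
\qquad
\operatorname{err}_D(h) = \Pr_{x \sim D}\!\left[h(x) \ne c(x)\right].
\]
```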
Re: (Score:2)
The evil "left as an exercise for the reader" part of textbooks where the author shows you a bunch of examples then gives you a problem that's related to those, but just enough different in some small way that it's fiendishly difficult. Or, more generally, the trick question.
Errors? (Score:2)
"does the same result apply to biological networks?"
Of course we just rely on other parts of our brain and use logic to throw these out. I once saw an old carpet rolled up on the side of the road and OMG it looked like a rhino. But I knew this was not a rhino.
Re: (Score:3)
News from the future, rhinos find success adapting to suburban environments with discarded carpet camouflage, people slow to adapt.
Cognitive bias (Score:3)
What's the incentive, and should we worry? (Score:2)
Natural selection elminated that flaw... (Score:2)
Re: (Score:2)
Yeah right. Brains make mistakes all the time, but natural selection has tuned them to err on the side of paranoia.
Minsky said this in 1969 (Score:4, Informative)
Not trying to mimic the brain. (Score:2)
Re: (Score:2)
The Napoleon Dynamite Problem (Score:3)
This sounds similar to the Napoleon Dynamite Problem, the problem encountered in the Netflix Prize challenge of predicting user ratings for certain particular films. For most films, knowledge of an individual's preferences for some films was a good predictor of their preferences for other films. Yet preferences for some particular films were hard to predict, notably the eponymous Napoleon Dynamite.
Neural network identification and automated prediction of individual film ratings are both classification tasks. Example sets for both of these problems contain particular difficult-to-classify examples. So perhaps this phenomenon of "adversarial examples" described in the Szegedy et al. article is more generally a property of datasets and classification, not an artifact of implementing classification using neural networks.
Sounds like a real world example of Gödel's (Score:4, Interesting)
incompleteness theorem. And as some earlier posters stated, the correction is simple: simply look again. The 2nd image collected will be different from the previous one and, if the NN is correct, will resolve to the correct interpretation.
Optical Illusion (Score:3)
Pedestrian (Score:2)
I think the example of mis-classifying pedestrians as clear road is over-reaching a bit to find a problem.
On the other hand, the AI might end up in trouble when deciding to run over cats and avoid dogs.
Re: (Score:2)
So let's not overreach; we don't want a dog misidentified as a person when there is a choice to hit a tree, hit a person, or hit a dog. The solution is clearly to run over the beast.
Idea (Score:2)
Would it be possible to build a neural net that recognizes when one of these blind spots has been hit? If it's reliably misidentified across neural nets as they claim, there should be enough common attributes for a different neural net to train on.
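A sketch of that idea, assuming scikit-learn is available: generate adversarial examples, label them as the positive class, and train a second, separate detector to flag them. The "images" below are random stand-ins, shifted just enough that the toy detector has something to learn.

```python
# Sketch: train a separate detector whose only job is to flag likely
# blind-spot inputs. Clean and "adversarial" inputs here are random
# stand-ins (the adversarial ones are shifted so the toy example is
# learnable); a real version would use actual generated adversarial images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(200, 20))
adversarial = clean + 0.5                      # toy shift standing in for real perturbations

X = np.vstack([clean, adversarial])
y = np.concatenate([np.zeros(len(clean)), np.ones(len(adversarial))])

detector = LogisticRegression(max_iter=1000).fit(X, y)
print(detector.predict_proba(adversarial[:3])[:, 1])   # estimated P(adversarial) per input
```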
Inspired by != identical to (Score:2)
This reminds me of the problems with perceptrons [webofstories.com] (an early, linear neural net), which caused AI scientists to lose interest in them, until neural nets came along.
It needs an incentive to not screw up (Score:2)
What AI really needs is a wife that nags it if it f8cks up.
Humans seem pretty subject to close-call-foul-ups too. When proof-reading my own writing, often I don't spot a problem because my mind translates the pattern as I intended, not as I wrote it. For example, if I meant to write "Finding the Right Person for the Job..." but instead wrote it as "Finding the Right Pearson for the Job..." (note the "a"), there's a fairly high chance I'd miss it because the pattern of what I meant clogs my objectivity, even
overfitting anyone ?? (Score:2)
I bet this is a case of overfitting. The network is too "large" (at least in some dimensions) with respect to the data that it is required to approximate/classify.
Add noise to fix (Score:2)
If the misclassification only occurs on rare inputs then any random perturbation of that input is highly likely to be classified correctly.
The fix therefore (likely what occurs in the brain) is to add noise and average the results. Any misclassified nearby input will be swamped by the greater number of correctly classified ones.
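A sketch of that fix, assuming only a per-input classifier returning class scores (`classify` below is a made-up stub): classify many noisy copies of the input and average, so a rare misclassified point is swamped by its correctly classified neighbours.

```python
# Sketch: average the classifier's scores over many noisy copies of the
# input, so an isolated blind spot near x is outvoted by its neighbourhood.
# classify() is a stub returning made-up two-class scores.
import numpy as np

def classify(x):
    s = float(np.clip(x.mean(), 0.0, 1.0))
    return np.array([s, 1.0 - s])               # stub "softmax" output

def smoothed_classify(x, n_samples=100, sigma=0.1, seed=2):
    rng = np.random.default_rng(seed)
    scores = [classify(x + rng.normal(0.0, sigma, size=x.shape))
              for _ in range(n_samples)]
    return np.mean(scores, axis=0)               # average over the noisy neighbourhood

x = np.full(16, 0.3)                             # toy "image"
print("single shot:   ", classify(x))
print("noise-averaged:", smoothed_classify(x))
```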
Re: (Score:3)
Re: (Score:3)
Great, everyone is going to start having moles on their cheeks.
Re: (Score:2)
Plus there's the wee matter of the halting problem, where it's not possible in general to prove whether a program will output something, never mind to prove what it will output.
Never mind the problem of bugs in the logic of your program correctness proof.
I prefer to just issue a disclaimer, for example:
Imagine this in all caps:
The user and/or purchaser/lessee/licensee of this software agrees with the proposition that software is too complex to be warranted for safety or fitness for use or purpose or sale.
T
Re: (Score:2)
Re: (Score:2)
Re:The Flaw Lurking Deep in Slashdot Beta (Score:5, Informative)
SoylentNews is the replacement for /.
reddit is of another kind.
Re: (Score:2)
Then go to reddit, you fucking whiner.
You mean jump from the toilet to the cesspit?
Why not just keep Slashdot nice: get rid of beta, quit posting sports stories [slashdot.org] to a geek news site, and maybe actually fix things like Unicode support and SSL (as in, keep the cert up to date for the login at least), rather than bone the site's UI into Yet.Another.Identical.Aggregator.
Re: (Score:2)
Oops, double posted, probably due to the shitty wifi timing me out for the last half hour.
Re: (Score:2)
Deep networks automatically learn to recognize image elements. That's one of their most interesting features.
Re: (Score:2)
That's not really true. ANNs, particularly deep ones, share a lot of features with specific parts of the brain, especially primary sensory processing areas. It's quite reasonable to ask whether emergent properties of deep ANNs are also found in the analogous systems in the brain (and vice versa). Some, such as visual processing done by simple, then complex cells, are already known. An ANN isn't a simulation of any part of the brain, but it's an analogous system that does share some properties and might