Training classifiers require "rejectable" samples (Score:2)
The DNN examples were apparently trained to discriminate between members of a labeled set. That only works when you have already cleaned up the input stream (a priori) and can guarantee that every image is an example of one of the classes.
These classifiers were never trained on samples from outside the target set, so every input forces a choice: given this random-dot image, which of the classes has the highest confidence? Iterate on the image until that confidence is sufficiently high, and you have a forgery with exactly the features the classifier is looking for.
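A minimal sketch of that iterate-until-confident loop, assuming a hypothetical classifier(image) function that stands in for any trained net and returns a vector of per-class confidences (simple random-mutation hill climbing here; evolutionary search or gradient ascent would do the same job faster):

    import numpy as np

    def forge(classifier, target, shape=(28, 28), steps=20000, threshold=0.99):
        # classifier is a hypothetical stand-in: image array -> per-class confidences.
        img = np.random.rand(*shape)             # start from random dots
        best = classifier(img)[target]
        for _ in range(steps):
            trial = img.copy()
            y, x = np.random.randint(shape[0]), np.random.randint(shape[1])
            trial[y, x] = np.random.rand()       # mutate a single pixel
            p = classifier(trial)[target]
            if p > best:                         # keep mutations that raise confidence
                img, best = trial, p
            if best >= threshold:
                break                            # a "forgery" the net is sure about
        return img, best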
For example, the digit training set (0, 1, 2, ..., 9) would need to be augmented with pictures of 'A', 'D', a smiley face, a doodle of a tree, a silhouette of Alfred Hitchcock, and some spider webs. The resulting classifier would be more robust: the target classes (0, 1, 2, ..., 9) would be counterbalanced by a null class (everything else). Looking inside the receptive fields of a robust image classifier is rather satisfying: you will find eigenimages that project back to image structures that are human-recognizable, too.
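A sketch of that augmentation using scikit-learn's MLPClassifier, with synthetic placeholder arrays; in practice digits_x/digits_y would be the real digit set and junk_x the out-of-class pictures:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    digits_x = rng.random((1000, 784))        # stand-in for the digit images
    digits_y = rng.integers(0, 10, 1000)      # labels 0..9
    junk_x = rng.random((500, 784))           # stand-in for letters, doodles, spider webs

    NULL = 10                                 # the eleventh, "everything else" class
    x = np.vstack([digits_x, junk_x])
    y = np.concatenate([digits_y, np.full(len(junk_x), NULL)])

    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=50).fit(x, y)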
The lesson in training your classifier: either verify your assumption (every incoming sample really is a member of one of the chosen classes) or expose your classifier to out-of-class samples during training.
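If verifying that assumption upstream isn't possible, a common stand-in (not from the original comment) is to reject at inference time whenever the model isn't confidently in a target class. A sketch reusing the fitted clf from the snippet above; the 0.9 threshold is an arbitrary assumption:

    import numpy as np

    def classify_or_reject(clf, sample, null_class=10, threshold=0.9):
        # predict_proba gives scikit-learn's per-class probability estimates
        p = clf.predict_proba(sample.reshape(1, -1))[0]
        k = int(np.argmax(p))
        if k == null_class or p[k] < threshold:
            return None    # rejectable: not confidently one of the target classes
        return k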