Comment Re:Lessons Learned 20 Years Ago at JPL/NASA (Score:1)
The research we were doing was in fact prompted by the well-documented success of neural networks in other nonlinear problems. One of the very first good examples of an applied adaptive neural network was in the standard modem of the time, which used a very small neural network to optimize the equalizer settings on each end.
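The modem equalizer mentioned above is a good illustration of small-scale adaptation. The post describes it as a very small neural network; the simplest related scheme is a single adaptive FIR filter with LMS tap updates, which captures the flavor. A toy sketch (my own illustration, not the actual modem design), where the "channel" is just a gain of 2 and the equalizer learns to undo it:

```python
import random

def lms_equalize(received, desired, n_taps=4, mu=0.02):
    """Adapt FIR tap weights w so the filtered received signal tracks
    the known training signal (toy LMS sketch, not a real modem design)."""
    w = [0.0] * n_taps
    for i in range(n_taps - 1, len(received)):
        window = received[i - n_taps + 1 : i + 1][::-1]  # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, window))    # equalizer output
        e = desired[i] - y                               # error vs. training symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]  # stochastic gradient step
    return w

# Hypothetical channel: received = 2 * transmitted, so the ideal
# equalizer is a single tap of 0.5 with the rest near zero.
random.seed(0)
desired = [random.choice([-1.0, 1.0]) for _ in range(300)]
received = [2 * v for v in desired]
w = lms_equalize(received, desired)
```

Because an exact zero-error tap vector exists here, the weights converge geometrically; a real equalizer faces noise and intersymbol interference, but the update rule is the same.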
Neural nets have much more success constructing nonlinear maps from subsets of R^n to R^m when n and m are relatively small. Vision is not such a case, since the input dimension n is very large. Once n and m get large you need an exponentially large number of training samples, with an increased risk of falling into local minima (mitigated by simulated annealing or tunneling). In addition, if there is any inherent linearity in the problem, an old-school Kalman filter may be less sexy but more useful.
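To make the Kalman point concrete: for a linear-Gaussian problem the optimal estimator is closed-form and needs no training data at all. A minimal 1D sketch (my own illustrative example), estimating a constant from noisy measurements:

```python
def kalman_1d(measurements, meas_var, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a static state: fuse noisy measurements
    of a constant value. x is the estimate, p its variance."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + meas_var)   # Kalman gain: trust of new measurement
        x = x + k * (z - x)      # pull estimate toward measurement
        p = (1 - k) * p          # estimate variance shrinks every step
    return x, p

# Noisy readings of a true value of 1.0, measurement variance 0.01
est, var = kalman_1d([1.1, 0.9, 1.05, 0.95], meas_var=0.01)
```

No gradient descent, no local minima: four arithmetic updates give the minimum-variance estimate, which is exactly why a linear problem dressed up as a neural-net problem wastes effort.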
Many of the success stories of neural nets are really of the "Stone Soup" variety, in which the neural network is the "Stone" and the meat-and-potatoes real work is in how to preprocess the data to reduce the dimensions n and m. One of the most amazing (non-neural) pattern-recognition apps that I have seen recently is the Shazam technology, which can identify a recorded song from a noisy 30-second snippet. Their dimension-reducing logic involves hashes of spectrogram peak pairs. No neural nets to be seen, but absolutely brilliant, and it points to ways that similar things could be done in the visual domain.