
Comment The Exact Value... (Score 1) 80

is most likely transcendental. According to Efimov's original paper, the magic value "22.7" (which we shall call M) is given exactly by e^(pi / |s0|), where s0 is a pure imaginary solution (very close to i) of an equation (9) he derives earlier, related to the three-body problem (there are also an infinite number of real solutions s1, s2, ...). If you define s0 = i x where x is real, then (by converting from trigonometric to hyperbolic functions) it can be shown that M = e^(pi / x), where x is the positive real solution to:

0 = (8 / sqrt(3)) sinh( pi * x / 6) - x * cosh( pi * x / 2)

If you copy/paste the right-hand side into the WolframAlpha website, you will see that the curve has exactly two non-zero roots, approximately ±1.00624. You can ask for more digits, and it will give you x = 1.0062378251027814891..., which gives M = 22.694382595... The equation above is a transcendental equation whose non-zero solutions are neither rational nor algebraic, and very likely M itself is also transcendental. Proving these sorts of things, however, is very difficult. The best we can hope for is that the number can be expressed as an infinite expansion whose terms have a nice form and which converges rapidly. A few more clicks on WolframAlpha and I'm sure someone will work it out.
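
If you'd rather not lean on WolframAlpha, a few lines of Python reproduce the numbers above (this is just my quick check, not anything from Efimov's paper — the bracket [1.0, 1.1] comes from eyeballing the sign change):

```python
import math

def f(x):
    # Efimov's eq. (9) rewritten in hyperbolic form; its positive real
    # root is x = |s0|, the magic value discussed above.
    return (8 / math.sqrt(3)) * math.sinh(math.pi * x / 6) - x * math.cosh(math.pi * x / 2)

def bisect(fn, lo, hi, tol=1e-12):
    # Plain bisection: fn must change sign on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fn(lo) * fn(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x = bisect(f, 1.0, 1.1)       # f(1.0) > 0, f(1.1) < 0
M = math.exp(math.pi / x)
print(x, M)                   # x ≈ 1.0062378251..., M ≈ 22.694382...
```

Any root-finder (Newton, Brent) will do; bisection is just the one you can verify by hand.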

Comment Re:Lessons Learned 20 Years Ago at JPL/Nasa (Score 1) 98

I was referring to our own work on using neural networks for pattern recognition in images.

The research we were doing was in fact prompted by the well-documented success of neural networks in other nonlinear problems. One of the very first good examples of an applied adaptive neural network was in the standard modem of the time, which used a very small neural network to optimize the equalizer settings on each end.
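
The equalizer adaptation in those modems boils down to the classic LMS (least-mean-squares) tap-weight update — essentially an adaline, about the simplest "neural" adaptive element there is. Here's an illustrative sketch (my own toy, not actual modem firmware) identifying an unknown 3-tap channel from input/output pairs:

```python
def lms_identify(xs, ds, n_taps=3, mu=0.05, passes=50):
    # Learn tap weights w so that w · (recent inputs) ≈ desired output ds[t].
    w = [0.0] * n_taps
    for _ in range(passes):
        for t in range(n_taps - 1, len(xs)):
            window = xs[t - n_taps + 1 : t + 1][::-1]    # newest sample first
            y = sum(wi * xi for wi, xi in zip(w, window))
            e = ds[t] - y                                 # instantaneous error
            w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

# Synthetic data: outputs of an "unknown" channel h = [0.5, 0.3, 0.1].
xs = [1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0] * 4
h = [0.5, 0.3, 0.1]
ds = [sum(h[k] * xs[t - k] for k in range(3) if t - k >= 0) for t in range(len(xs))]
print([round(wi, 2) for wi in lms_identify(xs, ds)])  # recovers [0.5, 0.3, 0.1]
```

Note there is nothing nonlinear here — which is rather the point of the Kalman-filter remark below: when the problem is linear, a linear adaptive filter is all you need.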

Neural nets appear to have much more success constructing nonlinear maps from subsets of R^n to R^m with n and m relatively small. Vision is not such a case, as the input dimension n is very large. Once n and m get large, you require an exponentially large number of training samples, with an increased risk of falling into local minima (mitigated by simulated annealing or tunneling). In addition, if there is any inherent linearity in the problem, an old-school Kalman filter may be less sexy but more useful.

Many of the success stories of neural nets are really of the "Stone Soup" variety, in which the neural network is the "Stone" and the meat-and-potatoes real work is in how to preprocess the data to reduce the dimensions n and m. One of the most amazing (non-neural) pattern-recognition apps that I have seen recently is the Shazam technology, which can identify a recorded song from 30 seconds of a (noisy) snippet. Their dimension-reducing logic involves hashes of spectrogram peak pairs. No neural nets to be seen, but absolutely brilliant and points to ways that similar things could be done in the visual domain.
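
To make the "stone soup" point concrete, the peak-pair idea can be sketched in a few lines. This is a toy of my own devising — the real system's constants, hashing, and peak-picking are far more careful — but it shows how a huge spectrogram collapses to a handful of small, noise-robust hashes:

```python
def fingerprints(peaks, fan_out=3, max_dt=50):
    # peaks: list of (time_bin, freq_bin) spectrogram peaks, sorted by time.
    # Pair each anchor peak with a few nearby target peaks and hash the
    # combination (f1, f2, dt); (hash, anchor time) is the fingerprint.
    prints = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            dt = t2 - t1
            if 0 < dt <= max_dt:
                prints.append((hash((f1, f2, dt)), t1))
    return prints

peaks = [(0, 40), (3, 55), (7, 42), (12, 60)]   # made-up peak constellation
print(fingerprints(peaks))
```

Matching is then just counting fingerprint hash collisions between the snippet and the database, offset-aligned by the anchor times — no learning anywhere in the loop.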

Comment Lessons Learned 20 Years Ago at JPL/Nasa (Score 5, Interesting) 98

Back in 1987 I was working in the Multimission Image Processing Lab at JPL/NASA, and the whole neural network fad was in its second resurgence, after Rosenblatt's original "perceptron" work (and Minsky and Papert's famous critique of it) caught some attention in the early 70's. The group that I was in was mostly interested in mapping, and in particular in identifying features (e.g. soil types, vegetation, etc.) from both spatial and spectral signatures. As usual, the military was also interested in this, and so DARPA was throwing a lot of loose money around, some of which fell our way.

I spent a lot of time on this project: writing a lot of neural net simulations, supervised and unsupervised learning, back-prop, Hopfield nets, reproducing a lot of Terry Sejnowski's and Stephen Grossberg's work, taking trips over to Caltech to see what David Van Essen and his crew were doing with their analysis of the visual cortex of monkey brains, and trying to understand how "wetware" neural nets can so quickly identify features in a visual field, resolve 3D information from left-right pairs, and the like. For the most part, all of the neural net models were really programmable dynamical systems, and the big trick was to find ways (steepest descent, simulated annealing, Lagrangian analysis) of computing a set of synaptic parameters whose response minimizes an error function. That, and figuring out the "right" error function and training sets (assuming you are doing supervised learning).
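
The whole "minimize an error function over synaptic parameters" game fits in a few lines for the simplest possible case. This is a throwaway sketch (certainly not our JPL code): a single sigmoid neuron learning logical AND by steepest descent on squared error:

```python
import math, random

random.seed(0)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(3)]   # two synapses + bias

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

for epoch in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
        grad = (y - target) * y * (1 - y)   # d(squared error)/d(pre-activation)
        w[0] -= 0.5 * grad * x1             # steepest-descent update, rate 0.5
        w[1] -= 0.5 * grad * x2
        w[2] -= 0.5 * grad

print([round(sigmoid(w[0] * a + w[1] * b + w[2])) for (a, b), _ in data])  # [0, 0, 0, 1]
```

Everything hard about the real problem — the error function, the training set, the architecture — is trivialized here, which is exactly why toy demos like this one transferred so poorly to vision.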

Bottom line was, not much came of all this beyond a few research grants and published papers. The one thing that we do know now is that real biological neural networks do not learn by backward error propagation. If they did, learning would be so slow that we would all still be no smarter than trilobites, if that. Most learning does appear to be "connectionist": it is stored in the synaptic connections between nodes, and those connections are strengthened when the nodes they connect often fire simultaneously. There is some evidence now of "grandmother cells", hypothetical cells that fire when, e.g., your grandmother comes into the room. But other than that, most of the real magic of biological vision appears to be in the pre-processing stages of the retinal signals, which are hardcoded, layer upon layer, with edge detectors, color mapping, and some really amazing slicing, dicing, and Fourier-like transforms of the original data into pieces small enough and simple enough that the cognitive part of the brain can make sense of the information.
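
The edge-detector part of that pre-processing is easy to illustrate in software (the biology, of course, does nothing so tidy — this is just my toy analogy): convolve a tiny image with a horizontal-gradient kernel, and the response peaks exactly at the edge:

```python
def convolve3x3(img, k):
    # Valid-interior 3x3 convolution over a 2D list-of-lists image;
    # border pixels are left at zero.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(k[a][b] * img[i + a - 1][j + b - 1]
                            for a in range(3) for b in range(3))
    return out

sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
img = [[0, 0, 0, 1, 1, 1]] * 4                    # dark left half, bright right half
print(convolve3x3(img, sobel_x)[1])               # peaks at the dark/bright boundary
```

Stack enough of these fixed filters and the downstream "cognitive" stage only ever sees a small, pre-digested feature vector — which is the dimension reduction the back-prop nets of the day never got for free.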

It's pretty easy to train a small back-prop net to "spot" a silhouette of a cat and distinguish it from a triangle and a doughnut. It is not so easy to write a network simulation that can pick out a small cat in a busy urban scene, curled up beneath a dumpster, batting at a mouse....

To do that, you need a dog.

Comment SpaceWeather.Com (Score 3, Informative) 361

Check out SpaceWeather.com. It has been around for some time, and has some excellent aurora galleries. Besides summarized ACE data, the website also features the techie-cool far-side views of the sun from SOHO, computed using helioseismic holography. For the truly worried, they offer a for-fee email solar-flare alert service, which also comes in handy if you just want to know when to go out to look for auroras. Anyway, most of the site is non-subscription, and it's worth a look.
