
Comment Systematicity, and Fodor & Pylyshyn (Score 5, Informative) 90

I have a recent PhD in neural computation, though from a functional cognitive and language modeling perspective rather than a neuroanatomical modeling perspective -- so it may be a different area than you're interested in. From a high-level perspective, neural computation has moved a lot in scale in the past two decades (simulations can have millions of nodes), and it has moved a lot in modeling the processes of individual neurons and neurochemistry. Very high-level functional mapping work has also moved a good deal, with fMRI, EEG, and MEG becoming relatively inexpensive and very common techniques in cognitive experiments.

One area that, in my opinion, has moved very little in the past 20 years is the ability of neural networks to learn non-trivial, domain-general representations and processes, and to generalize from those representations and processes to novel (untrained) instances. In the late 80s, after connectionism had made a return with Rumelhart and McClelland's popularization of the backpropagation algorithm and demonstration of its utility on a number of tasks earlier in the decade, a good deal of the literature demonstrated very basic limitations and failures of these systems to generalize to untrained instances, or to move beyond toy problems. Fodor and Pylyshyn's "Connectionism and Cognitive Architecture" is a classic paper from that era, and Pinker wrote many language-specific criticisms as well. Stefan Frank has the most recent long-standing research program in this area that I'm aware of, and his earlier papers have good literature reviews that can help guide one's background reading. There have been some limited demonstrations of systematicity with different architectures (like echo state networks), and comparatively little work on storing representations and processes simultaneously in a network; so far these remain long-standing and fundamental issues that need revitalization. When convincing demonstrations do arise, they'll likely not need more than a desktop to run, since they will be demonstrations of learning algorithms and architectures, not scale.

For non-neural folks: classical neural network architectures are essentially very good at pattern matching and classification (e.g. being trained on handwriting and classifying each letter as one of a set of known letters (A-Z) that the network has seen many hundreds of instances of before), or at things that involve a large set of specific rules (if X then Y). They're much less good at things that involve domain-general computation -- things that involve learning both representations and processes and storing them in the same system (e.g. reading a paragraph and summarizing it, answering a question about it, or writing a sentence describing a simple scene). That's not to say that you couldn't build a neural system that did this -- you could sit down and hard-code an architecture that looked something like a von Neumann CPU and program it to play chess or be a word processor, if you really wanted -- but the goal is to develop a learning algorithm that, by virtue of exposure to the world, crafts such an architecture on its own. The idea is that, after years of exposure, the world progressively "programs" the computational/representational substrate that is the brain to recognize objects, concepts, and words, put them together into simple event representations, and do simple reasoning with them, much like an infant. I hope that helps.
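To make the generalization problem concrete, here's a minimal sketch of the kind of combinatorial probe discussed above. The task, network sizes, and numbers are my own toy invention (plain numpy, not taken from any particular paper): the network must swap a pair of symbols, a trivially systematic rule, but all pairs with one particular symbol in the first slot are held out of training.

# Toy systematicity probe: learn to swap symbol pairs (a, b) -> (b, a),
# holding out every pair with symbol 4 in the first slot. All values
# here are illustrative assumptions, not from any published experiment.
import numpy as np

rng = np.random.default_rng(0)
V = 5                                  # vocabulary size
def encode(a, b):                      # concatenated one-hots for (a, b)
    x = np.zeros(2 * V); x[a] = 1.0; x[V + b] = 1.0
    return x

pairs = [(a, b) for a in range(V) for b in range(V)]
train = [p for p in pairs if p[0] != 4]    # symbol 4 never in slot 1
held  = [p for p in pairs if p[0] == 4]    # the untrained combinations

X = np.array([encode(a, b) for a, b in train])
Y = np.array([encode(b, a) for a, b in train])   # target: swapped roles

# One hidden layer of sigmoid units, plain backprop on squared error.
H = 16
W1 = rng.normal(0, 0.5, (2 * V, H)); W2 = rng.normal(0, 0.5, (H, 2 * V))
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
for step in range(5000):
    h = sig(X @ W1); yhat = sig(h @ W2)
    d2 = (yhat - Y) * yhat * (1 - yhat)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d2; W1 -= 0.5 * X.T @ d1

def accuracy(ps):
    ok = 0
    for a, b in ps:
        y = sig(sig(encode(a, b) @ W1) @ W2)
        ok += (y[:V].argmax(), y[V:].argmax()) == (b, a)
    return ok / len(ps)

print("trained pairs:", accuracy(train))   # typically ~1.0
print("held-out pairs:", accuracy(held))   # typically near zero:
# the weights from the never-active "symbol 4 in slot 1" input unit
# receive no gradient, so the net has no basis for a swap it never saw.

The point of the sketch is that the swap rule is perfectly systematic, yet a localist network trained this way has no mechanism for extending it to the untrained combinations -- essentially the "John loves Mary" vs. "Mary loves John" style argument.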
Of course, all of this is written by someone interested in developmental knowledge representation and language processing, so it may be a completely different question than the one you wanted answered. Best wishes.

Comment Re:What's this for? (Score 4, Informative) 41

I realize that not everyone is familiar with spectroscopy, so I'll try to outline the contributions this project makes -- which are principally in terms of size and cost.

Useful chemical classification can occur with an instrument containing as few as one spectral channel (i.e. a narrow band-pass filter). Colorimeters use three spectral channels, like a conventional camera, to determine the concentration of analytes. How similar the spectral features of your compounds of interest are determines the spectral resolution needed for a given application: in some cases you may need 10, 100, or 1000 spectral channels, and in other applications many more.
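As a toy illustration of why even a single channel can be chemically useful, here's a sketch of single-channel colorimetry using the Beer-Lambert law, A = epsilon * l * c. The absorptivity, path length, and detector readings below are made-up numbers for illustration only:

# Single-channel concentration estimate via Beer-Lambert (A = eps*l*c).
# Assumes dark-corrected intensity readings; all numbers are invented.
import math

def absorbance(I_sample: float, I_reference: float) -> float:
    """A = -log10(I / I0): attenuation through the sample at one channel."""
    return -math.log10(I_sample / I_reference)

def concentration(A: float, epsilon: float, path_cm: float) -> float:
    """Invert A = epsilon * l * c for the analyte concentration (mol/L)."""
    return A / (epsilon * path_cm)

A = absorbance(I_sample=412.0, I_reference=1023.0)   # hypothetical ADC counts
c = concentration(A, epsilon=5.4e3, path_cm=1.0)     # made-up absorptivity
print(f"A = {A:.3f}, c ~ {c:.2e} mol/L")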

The architecture used for many contemporary slit spectrometers was invented by Fraunhofer in the early 19th century, using a diffraction grating, a slit, and some relay optics. There are different architectures that improve upon this design (like coded aperture spectroscopy, which increases the SNR) or access different spectral regions (such as interferometer-based designs like FTIR for infrared spectroscopy), but unless you're getting really fancy, for visible spectroscopy the Fraunhofer architecture is the familiar 200-year-old design that many folks build in a high school science class, and it works rather well for a variety of applications. This spectrometer also uses (more or less) this architecture.
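The dispersion behind the Fraunhofer design comes down to the grating equation: for normal incidence, d * sin(theta_m) = m * lambda. A quick sketch (the 1000 lines/mm grating is just an illustrative choice, not this project's part):

# Where the grating sends each wavelength, first order (m = 1),
# normal incidence. Grating pitch is an illustrative assumption.
import math

lines_per_mm = 1000.0
d_nm = 1e6 / lines_per_mm                    # groove spacing in nm

for wavelength_nm in (400, 550, 700):        # blue, green, red
    theta = math.degrees(math.asin(wavelength_nm / d_nm))
    print(f"{wavelength_nm} nm diffracts to {theta:.1f} deg")
# The angular spread between these wavelengths is what the relay
# optics image onto the detector as separate spectral channels.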

Spectrometers are generally big, and many are bench-sized instruments. Currently, an inexpensive visible-range (350-1000nm) USB lab spectrometer with around 500 spectral channels costs around $2k and is about the size of several iPhones stacked on top of each other -- so it's not at all suitable for embedding in a tiny handheld device (like an open source science tricorder). Compared with the commercial mini-spectrometers I'm aware of, this open mini spectrometer has a similar number of detector pixels, a similar spectral range, and a similar size. The current spectrograph on the open mini spectrometer appears to have a FWHM that's about two times worse than those systems, and its SNR is certainly lower, but it also costs an order of magnitude less. It's also completely open, and you're free to improve the spectrograph design to increase the performance, or potentially use signal processing techniques to increase its effective resolution.
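For anyone wanting to measure the FWHM figure mentioned above on their own build, here's a minimal sketch of estimating it from a measured emission or laser line by interpolating the half-maximum crossings. The Gaussian "measurement" is synthetic stand-in data, and the code assumes the peak sits well inside the scan range:

# FWHM estimate from a sampled spectral line (numpy only; synthetic data).
import numpy as np

def fwhm(wavelengths_nm: np.ndarray, counts: np.ndarray) -> float:
    half = counts.max() / 2.0
    above = np.where(counts >= half)[0]
    lo, hi = above[0], above[-1]
    def cross(i, j):   # linear interpolation at a half-max crossing
        x0, x1 = wavelengths_nm[i], wavelengths_nm[j]
        y0, y1 = counts[i], counts[j]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)
    return cross(hi, hi + 1) - cross(lo - 1, lo)

# Synthetic narrow line: 10 nm FWHM Gaussian sampled on a 1 nm grid.
wl = np.arange(500.0, 600.0, 1.0)
sigma = 10.0 / 2.3548                 # FWHM = 2*sqrt(2*ln 2)*sigma
line = np.exp(-0.5 * ((wl - 550.0) / sigma) ** 2)
print(f"estimated FWHM: {fwhm(wl, line):.2f} nm")   # ~10 nm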

It's not easy to compare this to something like an iPhone with a spectrometer attachment, because it's intended as an inexpensive but complete spectrometer module rather than a complete spectrometer with a display. The audience is different: it aims to enable makers and young scientists to build instruments and incorporate these devices in places they otherwise couldn't. If you want to make the comparison anyway: I'm not sure what the FWHM and effective spectral resolution would be for an iPhone with a spectrograph attachment (that depends on the spectrograph you're using, of course), but even without a huge spectrograph hanging off of it, the phone alone is about 10 times larger than this module, and for the same price you could probably put together 50 of these.

Submission + - Tricorder Project releases prototype open source 3D printable spectrometer (tricorderproject.org)

upontheturtlesback writes: As part of developing the next open source science tricorder model, Dr. Peter Jansen of the Tricorder project has released the source for an inexpensive 3D-printable visible spectrometer prototype intended for the next science tricorder, but also suitable for Arduino or other embedded electronics projects for science education. With access to a Makerbot-class 3D printer, the spectrometer can be built for about $20 in materials. The source files, including hardware schematics, board layouts, Arduino/Processing sketches, and example data, are available on Thingiverse, and potential contributors are encouraged to help improve the spectrometer design.

Submission + - Canadian Grad Student releases open source Star Trek Tricorder (tricorderproject.org)

upontheturtlesback writes: "Another example of Star Trek technology becoming a reality. In light of the recent Tricorder X-Prize announcement, Dr. Peter Jansen has openly released the designs for a series of Science Tricorders that he developed while a graduate student at McMaster University. The Science Tricorders are capable of sensing a variety of atmospheric, electromagnetic, and spatial phenomena. Where the Science Tricorder Mark 1 is a relatively easy-to-build proof of concept, the Science Tricorder Mark 2 runs linux and resembles a cross between a Nintendo DS and scientific instrument with dual OLED touch displays. An exciting video shows them in action, and describes the project goal of creating general scientific tools for learning about and visualizing the world, as well as their importance for science education by helping kids ground abstract concepts like magnetism or polarization visually. The hardware schematics, board layouts, and firmware source are freely available on the Tricorder project website under various open licenses."
