
Comment Machine Learning? (Score 2, Interesting) 311

I think that anyone who has dabbled in machine learning would not be too shocked (whether by Hume's version or this post's). It's the error term in machine learning, adaptive filtering, etc. that really drives the learning. As a stupid but simple example: Least Mean Squares in adaptive filtering (essentially gradient descent over the error surface).
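As a minimal sketch of that example, here's LMS identifying an unknown FIR filter from its noisy output; the filter, step size, and signal lengths are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown system to identify: a short FIR filter (hypothetical taps).
true_w = np.array([0.5, -0.3, 0.2])

# Input signal and noisy desired output of the unknown system.
n_samples = 5000
x = rng.standard_normal(n_samples)
d = np.convolve(x, true_w)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

# LMS: stochastic gradient descent over the instantaneous squared error.
mu = 0.01            # step size
w = np.zeros(3)      # adaptive filter taps
for n in range(2, n_samples):
    x_n = x[n::-1][:3]      # the 3 most recent input samples, newest first
    e = d[n] - w @ x_n      # error term: this is what drives the learning
    w += mu * e * x_n       # gradient step on the error surface

print(np.round(w, 2))       # should approach true_w
```

With white input, the error-surface gradient estimate is unbiased, so the taps walk downhill to the true filter up to a small misadjustment set by mu and the noise.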

Comment Re:Medical... (Score 1) 727

Yes, although looking through the electronics in some of these (depending on the severity and type of hearing loss, obviously), some use rather sophisticated adaptive filtering methods and feedback loops to remove the noise that comes from having the system so compact and rigidly affixed to the ear. Getting all that so small and power-efficient isn't as easy as it might seem. The book I'm using as my source here is "Digital Hearing Aids" by James Kates (http://www.amazon.com/Digital-Hearing-Aids-James-Kates/dp/159756317X/ref=sr_1_4?ie=UTF8&s=books&qid=1268529085&sr=1-4).
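A hedged sketch of the kind of loop involved: estimating a (hypothetical, made-up) acoustic feedback path with normalized LMS and subtracting the predicted leakage. This is a toy, not the actual algorithms Kates describes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: the speaker output leaks back into the microphone
# through an unknown acoustic feedback path (rigid coupling to the ear).
feedback_path = np.array([0.0, 0.3, -0.1, 0.05])
n = 4000
speaker = rng.standard_normal(n)                  # signal sent to the speaker
leak = np.convolve(speaker, feedback_path)[:n]    # what re-enters the mic
voice = 0.2 * rng.standard_normal(n)              # the sound we actually want
mic = voice + leak

# Normalized LMS: adapt an estimate of the feedback path, then subtract
# the predicted leakage from the microphone signal.
w = np.zeros(4)
mu = 0.1
cleaned = np.zeros(n)
for i in range(3, n):
    u = speaker[i::-1][:4]              # recent speaker samples, newest first
    e = mic[i] - w @ u                  # residual after cancellation
    cleaned[i] = e
    w += mu * e * u / (u @ u + 1e-8)    # normalized gradient step
```

The residual `cleaned` is what's left once the estimated leakage is removed; the taps `w` should settle near `feedback_path`, with the wanted voice acting as noise in the identification.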

Comment Re:Why not... (Score 1) 206

That's pretty interesting, because pure compressed sensing uses an L0-constrained minimization (min ||a||_0 such that ||x - D*a||_2^2 <= epsilon). The L1 minimization (min ||a||_1 such that ||x - D*a||_2^2 <= epsilon) is a not-so-trivial equivalent problem, provided a is sparse enough. Although I do think they knew it was equivalent before the rigorous proof was established.
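For illustration, the L1 problem is often attacked in its Lagrangian form, min_a (1/2)||x - D*a||_2^2 + lambda*||a||_1, e.g. by iterative soft-thresholding (ISTA). A rough numpy sketch, with dimensions, sparsity level, and lambda all picked arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random dictionary and a sparse code to recover (toy sizes).
m, k = 50, 200                          # measurements, dictionary atoms
D = rng.standard_normal((m, k)) / np.sqrt(m)
a_true = np.zeros(k)
support = rng.choice(k, 5, replace=False)
a_true[support] = rng.uniform(1, 2, 5) * rng.choice([-1.0, 1.0], 5)
x = D @ a_true

# ISTA: proximal gradient on (1/2)||x - D a||_2^2 + lam * ||a||_1.
lam = 0.01
L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
a = np.zeros(k)
for _ in range(5000):
    g = a + (D.T @ (x - D @ a)) / L                         # gradient step
    a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
```

Because `a_true` is sparse enough relative to the 50 random measurements, the L1 solution lands essentially on the L0 one, up to a small shrinkage bias from lambda.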

Comment Re:A Novelty At Best (Score 1) 261

There are actually methods being looked into for analyzing art with certain unsupervised learning techniques. So far this has been done for paintings (http://news.bbc.co.uk/2/hi/technology/8440142.stm, BBC News; http://www.math.dartmouth.edu/~dgraham/hughes_pnas.pdf, the pdf of the actual paper), but similar mathematical models exist for auditory coding and might be applied to characterizing the various styles of music without researchers fiddling with all the knobs. Will this type of analysis be useful going forward? Maybe, maybe not. But if there's enough success in distinguishing paintings by certain artists from well-made fakes, why not try to turn the model into a constructive model that might generate art in the style of a certain artist (or music in the style of a certain musician)?

I do have to agree (as a lover of music) that it will not be a complete replacement by any means, but it will definitely be amusing to see how close models of artists can be to the real thing based solely on their art.

Comment Re:Why not... (Score 3, Interesting) 206

Amusingly enough, the idea behind compressed sensing (to rephrase for clarity: that minimal sampling suffices for high-dimensional data which, at any given time, can be described in a much smaller subspace) has been used to describe neural processes in the visual cortex (V1). [See the Redwood Center for Theoretical Neuroscience, https://redwood.berkeley.edu/] The lingo used is a bit different from the CS community's, but the math is essentially the same. The point being that compressed sensing could lead to answers far more natural for human perception than simply canceling out high frequencies.

Also, the point is that CS leads to [near] perfect reconstruction for signals of a certain nature, rather than the fuzziness that comes from other algorithms that do not take the inherent sparsity of the signal into account.
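A toy numpy illustration of that contrast (just the reconstruction half of the story, not compressed sensing proper, since there's no sub-Nyquist sampling; the signal and cutoff are made up): a signal that is sparse in frequency is reconstructed perfectly by keeping its few active bins, while a naive low-pass simply discards one of its components.

```python
import numpy as np

n = 256
t = np.arange(n)
# A signal that is 3-sparse in frequency (plus conjugate bins).
x = (np.sin(2 * np.pi * 5 * t / n)
     + 0.5 * np.sin(2 * np.pi * 40 * t / n)
     + 0.3 * np.sin(2 * np.pi * 100 * t / n))

X = np.fft.fft(x)

# Naive "cancel high frequencies": zero every bin from 50 up.
lp = X.copy()
lp[50:n - 49] = 0                       # kills the 100-cycle component
x_lp = np.fft.ifft(lp).real

# Sparsity-aware: keep only the 6 largest-magnitude bins (3 conjugate pairs).
sp = np.zeros_like(X)
idx = np.argsort(np.abs(X))[-6:]
sp[idx] = X[idx]
x_sp = np.fft.ifft(sp).real

print(np.linalg.norm(x - x_lp))         # large: a whole component is lost
print(np.linalg.norm(x - x_sp))         # ~0: all three components survive
```

The low-pass result is the "fuzzy" reconstruction; the sparsity-aware one is exact (to floating point) because all of the signal's energy lives in those six bins.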
