Stick News organizations on there too. There's a place for accountability for what you say and do, but that's not nearly the entire internet.
Or Steve Jobs, isn't he the new bully?
Yea, not like there was ever a potato famine or anything...
I think that anyone who has dabbled in machine learning would not be too shocked (whether by Hume's version or this post). It's the error term in machine learning, adaptive filtering, etc. that really drives the learning. As a stupid but simple example: least mean squares in adaptive filtering (essentially gradient descent over the error surface).
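To make that concrete, here's a toy LMS sketch (all sizes, step sizes, and signals are arbitrary choices of mine, not anything from the post): the filter weights are nudged by the error at each step, which is exactly gradient descent on the instantaneous squared error.

```python
# Toy LMS adaptive filter identifying an unknown FIR system.
# The error e drives the update: w += mu * e * u is a gradient-descent
# step on the instantaneous squared error e^2.
import numpy as np

rng = np.random.default_rng(0)

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Adapt an n_taps FIR filter so that w @ recent_inputs tracks d."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
        e = d[n] - w @ u                     # error term
        w += mu * e * u                      # gradient step driven by e
    return w

# Made-up "unknown" system the filter should converge to.
h_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[: len(x)]
w = lms_identify(x, d)
```

With a noiseless desired signal like this, `w` converges to `h_true`; the whole point is that no learning happens when the error is zero.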
I'm sure there's a significant way this differs from 50% of 4th grade science projects...
But that means removing Facebook almost completely... but wait, that's a good thing, so yes, they should be fair!
But it's not too late to stop giving them any more data!
Yes, although looking through the electronics in some of these (depending on the severity and type of hearing loss, obviously), there are some that use rather sophisticated adaptive filtering methods and feedback loops to remove the noise that comes from having the system so compact and rigidly affixed to the ear. Getting all that so small and power-efficient isn't as easy as it might seem. The book I'm using as my source here is "Digital Hearing Aids" by James Kates (http://www.amazon.com/Digital-Hearing-Aids-James-Kates/dp/159756317X/ref=sr_1_4?ie=UTF8&s=books&qid=1268529085&sr=1-4).
That's pretty interesting, because pure compressed sensing uses an L0-constrained minimization (min ||a||_0 such that ||x - D*a||_2^2 <= epsilon). The L1 minimization (min ||a||_1 such that ||x - D*a||_2^2 <= epsilon) is an equivalent problem, though not trivially so, given that a is sparse enough. Although I do think that they knew it was equivalent before the rigorous proof was established.
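As a toy illustration of sparse recovery (my own sketch, not anyone's actual method): orthogonal matching pursuit is a greedy proxy for the L0 problem, and for sufficiently sparse a with a random Gaussian dictionary it recovers the true coefficients exactly. All dimensions and the sparsity level below are arbitrary.

```python
# Toy sparse recovery: x = D @ a with a k-sparse a, recovered greedily
# by orthogonal matching pursuit (a stand-in for the L0 minimization).
import numpy as np

rng = np.random.default_rng(1)

def omp(D, x, k):
    """Pick k atoms of D greedily, least-squares fitting x on the chosen set."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    a_hat = np.zeros(D.shape[1])
    a_hat[support] = coef
    return a_hat

m, n, k = 30, 100, 3                          # 30 measurements, 100 atoms, 3 nonzeros
D = rng.standard_normal((m, n)) / np.sqrt(m)  # columns ~ unit norm on average
a = np.zeros(n)
a[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = D @ a
a_hat = omp(D, x, k)
```

The interesting (and non-obvious) part is that the convex L1 relaxation finds this same sparse solution under the right conditions, which is what makes compressed sensing tractable in practice.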
Interesting. Were they learning kernels or using random ones?
There are actually methods being looked into for analyzing art with certain unsupervised learning methods. Right now this has been done for paintings (http://news.bbc.co.uk/2/hi/technology/8440142.stm, BBC News; http://www.math.dartmouth.edu/~dgraham/hughes_pnas.pdf, the PDF of the actual paper), but similar mathematical models exist for auditory coding and might be applied to characterizing the various styles of music without researchers fiddling with all the knobs. Will this type of analysis be useful going forward? Maybe, maybe not. But if there's enough success in distinguishing paintings by certain artists from well-made fakes, why not try to turn the model into a constructive one that might generate art à la a certain artist (or music à la a certain musician)?
I do have to agree (as a lover of music) that it will not be a complete replacement by any means, but it will definitely be amusing to see how close models of artists can be to the real thing based solely on their art.
Amusingly enough, the idea behind compressed sensing (to rephrase for clarity: that a minimal number of samples suffices for high-dimensional data which can, at any given time, be described in a much smaller subspace) has been used to describe neural processes in the visual cortex (V1). (See the Redwood Center for Theoretical Neuroscience, https://redwood.berkeley.edu/.) The lingo used is a bit different from the CS community's, but the math is essentially the same. The point being that compressed sensing could lead to answers a lot more natural for human perception than simply canceling out high frequencies.
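For reference (my summary, not from their pages): the sparse-coding objective typically used in that line of work can be written as

```latex
% Sparse coding: represent x with a few active coefficients a over an
% (often overcomplete) dictionary D, with D itself being learned.
\min_{D,\,a} \; \lVert x - D a \rVert_2^2 + \lambda \lVert a \rVert_1
```

which is just the Lagrangian form of the L1-constrained problem from compressed sensing, with the dictionary D additionally being learned from data rather than fixed.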
Also, the point is that CS leads to [near-]perfect reconstruction for signals of a certain nature, rather than the fuzziness that comes from other algorithms that do not take the inherent sparsity of the signal into account.
The picture does make it look like it has a mouth (that seems to be a lookout point?)
haha, I don't know why that went up anonymously. I'm not exactly ashamed about my views on the probability of finding bacteria on the moons of Saturn.
> that live off of solar power
crap, I meant thermal power!