AFAIK, the best current guess is that the brain is a neural net, not a Turing machine. Neural nets are not Turing machines and do not solve problems analytically by executing an explicit algorithm - they produce "best guess" solutions from a network of weighted connections and probabilistic processes, usually developed by 'learning'. They can 'solve' ill-defined or even uncomputable problems in the sense that they produce a very reliable guess: you don't actually solve a differential equation every time you catch a ball.
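To make the "reliable guess instead of analytic solution" point concrete, here's a toy sketch (my own illustration, not a model of the brain): a tiny one-hidden-layer network is trained on examples of sin(x) and ends up producing good approximations without ever being given the formula. All sizes, learning rate, and iteration counts are arbitrary choices for the demo.

```python
# Toy example: a 1 -> 16 -> 1 tanh network learns sin(x) from examples.
# It never "solves" anything analytically; after training it just emits
# a reliable guess - the ball-catching point in miniature.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(200, 1))  # training inputs
Y = np.sin(X)                                  # target outputs

# Random initial weights and zero biases.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(10000):                      # plain full-batch gradient descent
    H = np.tanh(X @ W1 + b1)                   # hidden activations
    P = H @ W2 + b2                            # network's current guess
    err = P - Y
    # Backpropagate the mean squared error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Ask the trained net for sin(1.0) - it answers with a learned guess,
# not an evaluation of the sine function.
guess = float(np.tanh(np.array([[1.0]]) @ W1 + b1) @ W2 + b2)
print(guess)
```

The interesting part is that nothing in the code "knows" what sine is; the approximation lives entirely in the learned weights.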
Are you sure that neural nets are not Turing machines? I mean, if a neural net really computed something no Turing machine could, there would be no way of implementing one on an ordinary computer - yet we do exactly that. So what exactly is it that is not implementable? "Best guess" solutions are generally produced by statistical methods, which I know are not currently on par with the human brain. But again, if there is some innate property of the brain's "best guess" functionality that is not implementable, what would that be?
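This is the crux of the implementability point: a forward pass through any finite neural network is just a sequence of multiply-adds and function evaluations, i.e. an ordinary algorithm that a Turing machine (or this Python interpreter) can execute step by step. A minimal sketch, with made-up weights purely for illustration:

```python
# Any finite neural net reduces to plain arithmetic like this, so it is
# Turing-computable by construction. Weights below are arbitrary.
import math

def forward(x, layers):
    """Run input vector x through a list of (weights, biases) layers."""
    for W, b in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

layers = [
    ([[0.5, -1.0], [1.2, 0.3]], [0.0, 0.1]),   # 2 -> 2 hidden layer
    ([[1.0, 1.0]], [0.0]),                     # 2 -> 1 output layer
]
print(forward([1.0, 2.0], layers))
```

So whatever separates brains from computers, it can't be the mere network-of-weighted-connections structure - that part is straightforwardly algorithmic.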
Here's me guessing that "Self-awareness" is not a computable problem...
I kind of agree with you on this one, not necessarily because self-awareness is non-computable but because it would be difficult to prove that any implementation of self-awareness is conscious. My slightly uneducated guess is that it is in fact impossible to prove in any reliable way that a machine is consciously experiencing its surroundings the same way that I am. But this is something that, IMHO, is not restricted to machines. I think the notion of qualia and the zombie argument go a long way toward showing that no conscious individual can prove that there are other conscious beings in its surroundings. The best anyone can do is to assume that this is the case based on observing behaviour similar to one's own. And if that's the case, some kind of sophisticated version of the Turing test would probably be our best metric for determining whether an artificially created being is in fact a conscious, self-aware being.