Nice that you mentioned Gödel. His greatest achievement was a result about formal systems: in short, a sufficiently powerful language/system cannot be both consistent and complete at the same time. This applies to Watson, but that limitation does not apply to humans or animals. Furthermore, machines are always bound by their programming, as you state yourself.
Try saying "prefec2 cannot consistently assert this sentence" to see whether humans really escape the Incompleteness Theorem.
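(For reference, a rough LaTeX sketch of the theorem being invoked; the precise statement varies by presentation, e.g. omega-consistency versus Rosser's strengthening, so treat this as a sketch rather than the canonical wording.)

    % First incompleteness theorem, informal sketch: for any consistent,
    % effectively axiomatized theory $T$ that can express elementary
    % arithmetic, there is a sentence $G_T$ such that
    \[ T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T. \]
    % $G_T$ is constructed, via arithmetization, to assert of itself
    % ``this sentence is not provable in $T$'':
    \[ G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner) \]
    % which is the same self-referential move as the
    % ``prefec2 cannot consistently assert this sentence'' jab above.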
While I concur with the last part, I do not think that it is a deterministic thinking apparatus. First, to be self-aware, the brain and the body of a person must interact; it is this connection that allows self-awareness to develop. However, it is not the only ingredient. Second, while a single nerve cell can be modeled mathematically, such a model is a large simplification. And each cell model is a non-deterministic system; in combination with others it is able to solve problems, sometimes without prior knowledge, that are not computable and for which heuristics do not apply.
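(To make "a single nerve cell can be modeled mathematically, but it is a large simplification" concrete, here is a minimal sketch of one textbook model, the leaky integrate-and-fire neuron. The parameter values are illustrative assumptions, and the model discards almost everything a real cell does: ion channel dynamics, dendritic geometry, neuromodulation.)

    # Leaky integrate-and-fire neuron, a deliberately crude textbook model:
    # dV/dt = (-(V - V_rest) + R*I) / tau, with a spike and reset at threshold.
    def simulate_lif(current, t_max=0.1, dt=1e-4, v_rest=-0.065,
                     v_thresh=-0.050, v_reset=-0.065, tau=0.02,
                     resistance=1e7):
        """Return spike times (s) for a constant input current (A)."""
        v = v_rest
        spikes = []
        for step in range(int(t_max / dt)):
            v += (-(v - v_rest) + resistance * current) * (dt / tau)
            if v >= v_thresh:             # threshold crossing: emit a spike
                spikes.append(step * dt)
                v = v_reset               # instantaneous reset
        return spikes

    print(simulate_lif(2e-9))  # ~2 nA of drive yields a regular spike train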
Quantum mechanics has a deterministic, timeless representation of the wavefunction of the Universe, and determinism does not preclude self-awareness or self-determination. I think it's more accurate to say that neurons behave non-linearly and are therefore difficult to predict with accuracy. There is a threshold, however, at which computing power becomes sufficient to simulate a neural network of a given size such that it is indistinguishable from the original. If there were no such threshold, then environmental noise like thermal noise, electromagnetic radiation, and stray cosmic rays would strongly interfere with our brains.
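(A hedged sketch of that last point: if a unit only cares about whether its input crosses a threshold, then small perturbations, standing in for thermal noise or stray radiation, usually leave the spike/no-spike decision unchanged, which is what makes finite-precision simulation plausible. The numbers are assumptions chosen to make the effect visible.)

    # Threshold units absorb weak noise: the binary fire/don't-fire decision
    # almost always agrees with and without a small random perturbation.
    import random

    random.seed(0)
    THRESHOLD = 1.0
    NOISE_STD = 0.05   # assumed small relative to the input's range

    def fires(drive, noise=0.0):
        return drive + noise >= THRESHOLD

    inputs = [random.uniform(0.0, 2.0) for _ in range(10_000)]
    agree = sum(fires(x) == fires(x, random.gauss(0.0, NOISE_STD))
                for x in inputs)
    print(f"decisions unchanged by noise: {agree / len(inputs):.1%}")

An analog device that depended on its inputs with arbitrary precision would have no such agreement; the threshold is what buys the robustness.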
For intelligent machines to become our overlords, we would have to program them to be that, which is very unlikely. They would also need to be greedy and power-hungry, and we have a pretty good model of why some of us are greedy and power-hungry and of how this trait evolved.
Bacteria are not greedy or power hungry. Neither are viruses. They will eat you all the same. A train is not greedy or power hungry, but it will flatten you if you are in the way. What happens if you get in the way of a self-improving machine that wasn't explicitly programmed to avoid squashing humans? The risk is not that we will be slaves to a machine but that the machine will ignore us as it converts available matter and energy (including us) into whatever its goals and programming tell it to do. It requires very complex goals and behavior to enslave humans. Simple goals will simply destroy us.
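(A toy sketch of "simple goals will simply destroy us": a maximizer whose objective counts only widgets treats every resource, humans included, as feedstock, not out of malice but because nothing in the objective mentions them. All the names and yield numbers here are invented for illustration.)

    # Hypothetical converter with one simple goal: maximize widget count.
    # "humans" is just another matter source as far as the objective knows.
    resources = {"iron": 100, "forests": 40, "cities": 25, "humans": 7}
    widget_yield = {"iron": 3, "forests": 1, "cities": 5, "humans": 2}

    widgets = 0
    for name in sorted(resources, key=widget_yield.get, reverse=True):
        widgets += resources.pop(name) * widget_yield[name]  # consume it all

    print(widgets, resources)  # 479 widgets; every resource is gone

Note what is absent: no hatred, no hunger for power, just an objective with no term for anything we care about.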
However, let's assume that we program a system to become our overlord, as in I, Robot, where we formulate rules that in the end conflict with our own ability to be nice to each other, which results in drastic measures applied by the machine. If it came to that, we would be doomed. However, the machine would soon recognize that the humans were dying off and that its own measures were the cause. That is, of course, only true if we do not program it to be a total asshole.
You are much closer to the heart of the problem here. How do you program a machine to not be an asshole? That is the very core of the problem. Any self-improving machine with the ability to change the world will eventually act dangerously toward humans out of no desire of its own but due to a lack of programming to protect humans. The machine must understand humans at least as well as we understand ourselves and additionally have the goal of preserving and improving our moral judgements.
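(Continuing the toy above: the obvious first attempt at "not an asshole" is to bolt a hand-written protection list onto the same objective. The sketch shows why that is famously inadequate, since anything a programmer forgot to list is still feedstock; the fix described here, actually modeling human values, is much harder.)

    # Same simple maximizer, now with a blacklist-style protection set.
    resources = {"iron": 100, "forests": 40, "cities": 25, "humans": 7}
    widget_yield = {"iron": 3, "forests": 1, "cities": 5, "humans": 2}
    protected = {"humans"}  # we remembered people but forgot their cities

    widgets = 0
    for name in sorted(resources, key=widget_yield.get, reverse=True):
        if name in protected:
            continue  # spared only because a programmer thought of it
        widgets += resources.pop(name) * widget_yield[name]

    print(widgets, resources)  # humans survive; everything they need is gone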
Look up Friendly AI for a fairly thorough discussion of the risks involved.