Comment Re:Nothing but calculators (Score 1) 90
Many years ago, I was talking about this with a friend with a degree in Math, who'd actually studied this. He told me that, as an example, the arithmetic of all integers between zero and one million, inclusive, would not be sufficiently complex. I don't remember if he explicitly said that any system that couldn't take infinity into account wasn't enough for Goedel's theorem to apply, but that's the impression I came away with.
Yes, he's correct in this. If you have a system which only contains the integers 0 to n, then it doesn't satisfy the Peano axioms: one axiom says every number has a successor, but in a bounded system n has no successor inside the system.
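To make that concrete, here's a minimal sketch (function name is my own, not from any textbook) checking whether the successor operation stays inside a bounded set of integers. It illustrates the point rather than proving it formally:

```python
# The Peano successor axiom requires that every number have a successor.
# In a bounded system {0, ..., n}, succ(n) = n + 1 falls outside the set,
# so the axiom fails. (Illustrative check, not a formal proof.)

def successor_closed(n: int) -> bool:
    """True iff succ(k) = k + 1 stays inside {0, ..., n} for every k in it."""
    universe = set(range(n + 1))
    return all(k + 1 in universe for k in universe)

print(successor_closed(10))  # False: succ(10) = 11 is not in {0, ..., 10}
```

The check fails for every n, which is exactly why no finite initial segment of the integers models the Peano axioms.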
Yes, and no. I didn't mean that we can handle infinity in the sense that computers can handle floating point numbers. I meant that the Arithmetic of Natural Numbers can express any finite integer, and that the concept of infinity is part of our tool box, if you will. Again, I'm writing as a layman, so my understanding may be, and probably is, as incomplete as math itself.
Infinity isn't a free-standing concept in mathematics; it's what we call an entire class of sets. If a set S cannot be put into a one-to-one correspondence with any finite number (a number is itself a set: n = {0, 1, ..., n-1}), then we say that S is infinite.
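Here's a small sketch of that definition of "finite" (names are mine): a set is finite precisely when you can exhibit a one-to-one correspondence between it and some natural number n, viewed as the set {0, ..., n-1}:

```python
# Sketch of the set-theoretic definition of "finite": S is finite iff it
# biject with some natural number n = {0, 1, ..., n-1}. Python sets are
# finite by construction, so we can always exhibit the witness.

def finiteness_witness(S):
    """Exhibit a bijection between S and the natural number n = {0, ..., n-1}."""
    pairing = {x: i for i, x in enumerate(S)}          # one-to-one pairing
    n = len(pairing)
    assert sorted(pairing.values()) == list(range(n))  # onto {0, ..., n-1}
    return n

print(finiteness_witness({"a", "b", "c"}))  # 3
```

An infinite set is one for which no such witness exists, for any n; that existence claim is what the Axiom of Infinity supplies.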
The existence of such sets is given by the "Axiom of Infinity." This is all just part of the formalisation of set theory, and such a formalisation is simply a set of "beliefs." A theorem prover is simply an AI which has the axioms (or, synonymously, the beliefs) of set theory at its core.
What I'm trying to get at is that there is no difference between the way we produce proofs and the way a computer does. Just as we can never truly realise an infinite process (because we'd be going on forever), we can formalise one and then prove results about it. The point is that a theorem prover is given the knowledge that "infinity" exists, because at the core all you have is a set of beliefs, and your proofs are simply formal derivations from those beliefs.
In actual fact, the Turing machine notion of computability was created precisely to understand the process of mathematical proof. The fundamental difference is that a Turing machine has unbounded memory; but any single program which terminates uses only a finite amount of tape. A proof, by definition, is a finite "program", so this idea of mechanised theorem proving goes back to Hilbert's programme in the 1920s (and Turing's work in the 1930s). So, given any proof, we can allocate a finite amount of memory for it and run a checker to confirm that the proof is indeed correct.
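That last point can be sketched directly: because a proof is a finite object, a checker needs only a bounded amount of memory to verify it. Below is a toy checker of my own devising for a system whose only inference rule is modus ponens (from P and P -> Q, infer Q); real proof checkers are far richer, but the shape is the same:

```python
# Toy proof checker: each proof line must either already be derived
# (a premise or earlier result) or follow by modus ponens from a derived
# implication ("P", "->", "Q") whose antecedent is also derived.
# The checker visits each finite line once, using only finite memory.

def check_proof(premises, proof):
    derived = set(premises)
    for line in proof:
        if line in derived:
            continue  # restating a premise or an earlier result
        ok = any(
            isinstance(step, tuple) and len(step) == 3 and step[1] == "->"
            and step[0] in derived and step[2] == line
            for step in derived
        )
        if not ok:
            return False  # line doesn't follow from anything derived so far
        derived.add(line)
    return True

premises = ["P", ("P", "->", "Q"), ("Q", "->", "R")]
print(check_proof(premises, ["Q", "R"]))  # True
print(check_proof(premises, ["R"]))       # False: R needs Q derived first
```

Note that the checker never needs to "realise" anything infinite: it trusts the axioms it was handed and mechanically verifies that each step follows, which is all a human referee does too.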
Hope this clarifies things!