The parent's point, though, is that the halting problem, and the diagonalization argument, are inherently infinite.
If you have a finite amount of memory, say, n bits, then there is a finite number of states the memory can be in (2^n).
No matter what the architecture of the processor, you can generalize it as something which reads the current state of memory, and other inputs, and produces a new state of memory (and possibly output).
Setting aside for a moment the infinite possibilities peripheral input introduces, consider your example, that of a fractal. Since that computation takes place entirely in memory, we can safely ignore input.
If you have finite memory, there are only a finite number of states the memory can be in. Furthermore, because we are ignoring user input and the processor is deterministic, any given state of memory always leads to the same next state of memory.
As a process runs, it will either
a) halt
or
b) visit a state it has already visited.
Simple proof: you have a state machine with n possible states. (In a binary computer with a finite memory of m bits, n = 2^m.) Iterate it n+1 times. If it hasn't halted by then, it MUST have visited some state more than once (pigeonhole principle). And since the machine is deterministic, revisiting a state means it is stuck in a loop and will never halt.
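That argument is directly executable. Here's a minimal sketch of a brute-force halting decider for a deterministic finite-state machine; the `step` function and the integer states are hypothetical stand-ins for "processor reads memory state, produces next memory state", with `None` meaning the machine halted:

```python
def halts(step, start, n_states):
    """Decide halting for a deterministic machine with at most n_states
    states. step(s) returns the next state, or None when the machine halts."""
    state = start
    seen = set()
    for _ in range(n_states + 1):  # pigeonhole: n+1 steps are enough
        if state is None:
            return True            # machine halted
        if state in seen:
            return False           # revisited a state: deterministic, so it loops forever
        seen.add(state)
        state = step(state)
    return False                   # n+1 steps without halting => some state repeated
```

For example, a counter that stops at 3 halts, while a counter mod 3 cycles forever:

```python
halts(lambda s: s + 1 if s < 3 else None, 0, 4)  # True
halts(lambda s: (s + 1) % 3, 0, 3)               # False
```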
Cantor's argument is counterintuitive, but true, as are the conclusions drawn about computability and the halting problem. The thing to realize, though, is that they apply to infinite sets.
In finite sets, brute force trumps.