It seems their definition requires the ability to reflect on oneself. If so, it does not take a great deal of intelligence for a program to do this. For instance, consider the following Turing machine, call it M, which performs the following function:
1. Receive an input N, which is a string encoding of a TM.
2. Accept if N is the string encoding of M.
3. Reject otherwise.
Such a TM would be easy to implement in any programming language. It would, of course, have to contain within itself some way of comparing its input to its own encoding, so in a very literal sense it can reflect on itself. In fact, how many of us, presented with a complete map of our own neurons, could successfully say "Yes, that's my brain" or "No, it is not"?
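To make the "easy to implement" claim concrete, here is one possible sketch in Python. It is not M itself (the post doesn't fix an encoding), just an illustration of the standard quine trick: store a template of the source as data, then rebuild the full source by substituting the template into itself. The names `self_recognizer` and `accepts` are my own inventions for this sketch.

```python
def self_recognizer():
    # 'template' plays the role of M's own encoded description.
    # Formatting the template with its own repr() reproduces the
    # exact source text of this function.
    template = 'def self_recognizer():\n    template = {!r}\n    return template.format(template)\n'
    return template.format(template)

def accepts(candidate):
    # Accept iff the candidate string is exactly this program's
    # reconstruction of its own source; reject otherwise.
    return candidate == self_recognizer()
```

The comparison step is just string equality; all the "self-knowledge" lives in the quine construction, which the recursion theorem guarantees is always possible for Turing machines.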
Does that mean that the TM has a soul? Does it mean that we do not?
Perhaps I've been writing too many computation theory proofs of late, and need to play more video games, but it is still interesting, don't you think?