Comment Re: What I don't like about Dawkins (Score 1) 393
I think it's an argument about both. If you can't define what consciousness is, how can you say something doesn't have one?
Vertebrates in general are conscious beings, and said consciousness runs on an analog pattern recognition device. An LLM, on the other hand, is a digital pattern recognition device. With digital computers being "Turing complete", is it truly infeasible that with significant software development and computing precision/power (and of course, a precise understanding of how the brain works), a supercomputer could emulate a human brain? If a computer can successfully emulate a human brain, would that brain not be conscious? Could consciousness occur without all the complicated wetware emulation, and therefore appear in other computer programs?
As AI gets more and more sophisticated, I do think we will get to a point where we cannot truly rule out a computer program being conscious.