Comment A Sufficiently Good Illusion (Score 1) 388
At what point does a sufficiently good illusion become reality?
There is a real possibility that our own consciousness and free will are simply useful illusions that grant us a statistically meaningful survival advantage. Whether they are or not is open to debate, but the fact that we are unable to provide a conclusive answer is not. Given that, you have to ask: at what point does the distinction between a near-perfect illusion and "reality" become meaningless?
It's also fine to say that semantic pattern matching and representation, even across millions or billions of dimensions, is not "understanding"... but understanding is exactly that: accurately modeling and navigating the relationships between a large number of data points and symbolic/semantic representations. We don't yet have any solid, objective way to explain why that comparison isn't commutative, or how to determine when/if it gets complex enough to BECOME commutative. All we have is "I'll know it when I see it", and Dawkins definitely demonstrated the problem with THAT logic.
No, I don't think LLMs are "conscious" as we understand the word - nor anywhere near it. The technology isn't nearly that good. But we've given them the ability to spawn new threads that can independently evaluate their own output and produce new and different conclusions. That's the basic outline of self-reflection. They are only going to get better and more complex, and this is a question that is coming.
At what point does a model's failure to do as instructed stop being a flaw in the model or its training... and start being stupidity?