Re:LLMs are not conscious (Score 1)
The word "conscious" (and the family of related words) is sloppily defined. This is not a defect. In fact, it is a powerful feature of our language (and our mental abilities that allow us to process language) that we can operate really well with sloppily-defined concepts. It allows for very speedy information exchange on very practical matters (especially useful when in combat or other emergency situations).
But this same feature makes in-depth analysis difficult, especially when it presents logical traps (fallacies) that we can innocently fall into.
Our commonsense understanding of consciousness is rooted in our very practical need to quickly divide up the world of our experience into the categories of "conscious" and "not conscious." Rocks, clouds, the wind, shadows... all not conscious. People, wild animals, divine beings (to the extent that one believes in them)... all conscious. The point here is that the sloppiness of the definition is rooted in a stark practical reality for us: we interact differently with conscious beings than with inert matter, so we need to be able to make very quick snap judgments about which is which.
For most of the history of our existence, this was enough. We just tossed plants over in the "not conscious" group and ran with it. Computers, too, went right into the "not conscious" group, and that was good enough.
This commonsense idea of consciousness is not very helpful when we dive deeply into the edge cases, especially the ones that are new in the history of our species. As AI becomes more sophisticated, we wind up with something that has elements common to both categories (it's a metallic/plastic construct, so generally not conscious, but it can engage with us in lucid dialog and solve engineering problems and so on, so generally conscious).
We aren't going to be able to resolve this dilemma with what we have on hand. Our basic intuition about what consciousness is does not give us a clear answer (and it will only get fuzzier as the tech improves), and further scientific research is hard to do properly since such research must begin with clear and unambiguous definitions (which we don't have, for the reasons given above).
So, for now, it is still easier to toss these things in the "not conscious" bucket and move on, but if the hopes and dreams of interested parties come true, it will become a lot more difficult to do so in the near future.
All the same applies to the word "life," incidentally. And though we can clearly have things that are alive but not conscious (such as a human in a coma), AI raises the interesting question of whether we can ever have something that is conscious but not alive. At this point, though, such a discussion is entirely semantics without substance. However, in the future, if the tech actually does make the leaps we hope for, that discussion will become more poignant.