Comment Re:The AIs are not sentient (Score 1)
In order to understand LLMs, one needs, it seems to me, an appropriate amazement that the inventors of this technology were able to structure and store information in such a way that stimulating it with prompts yields coherently structured information back.
Getting there takes huge amounts of energy: ingesting giant curated datasets and structuring them in the way required to yield coherent information when queried/prompted/stimulated.
AFTER the data is structured and stored, perhaps it is a mere stochastic parrot (a term coined by Dr. Emily Bender and her coauthors in the well-known 2021 paper "On the Dangers of Stochastic Parrots," which addresses bias in LLMs, whether they use too much energy, and whether larger models are actually beneficial; discussion of it here and here). At one point in the video panel discussion, she vehemently rejected the idea that humans might be doing the same sort of thing LLMs do, because it's "dehumanizing." She goes on to say, "I will not engage in discussions with people who don't acknowledge my humanity," which seemed very... aggressively advocacy-focused. I mean, we don't know how we digest food, in the sense that we are not consciously directing it, and we likewise don't know how we retrieve information; we "just do it."
My point here is that this is a new tool, and no one yet knows everything it can do. Knowing it's a next-token predictor is the most basic level of understanding of these software constructs, and that framing can lead people to underestimate the tasks they can accomplish with tuning and optimization.
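For the curious, here is a minimal sketch in Python of what "next-token predictor" actually means. It is a deliberately toy model: the hand-written BIGRAM_WEIGHTS table and the next_token/generate names are all hypothetical, invented for illustration, and a real LLM replaces the lookup table with a neural network scoring the entire context window. The autoregressive loop, where each sampled token is fed back in as the next input, is the shape the term refers to.

```python
import random

# Toy "model": maps the current token to weighted candidates for the next one.
# A real LLM computes this distribution with a neural network over the whole
# context; this hand-made table is a stand-in so the loop below can run.
BIGRAM_WEIGHTS = {
    "<s>":      {"the": 3, "a": 1},
    "the":      {"cat": 2, "model": 1},
    "a":        {"cat": 1, "dog": 1},
    "cat":      {"sat": 3, "ran": 1},
    "dog":      {"ran": 3, "sat": 1},
    "model":    {"predicts": 4},
    "predicts": {"tokens": 4},
    "sat":      {"</s>": 1},
    "ran":      {"</s>": 1},
    "tokens":   {"</s>": 1},
}

def next_token(token: str) -> str:
    """Sample one next token from the model's distribution."""
    candidates = BIGRAM_WEIGHTS.get(token, {"</s>": 1})
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(max_len: int = 10) -> list[str]:
    """Autoregressive loop: each sampled token becomes the next input."""
    out, tok = [], "<s>"
    for _ in range(max_len):
        tok = next_token(tok)
        if tok == "</s>":          # end-of-sequence token stops generation
            break
        out.append(tok)
    return out

print(" ".join(generate()))        # e.g. "the model predicts tokens"
```

The point of the sketch is only that "predicts the next token" describes the interface, not the sophistication of whatever is doing the predicting.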