Comment Re:The cycle (Score 2) 178
It's mimicked intelligence. You're absolutely right that - under the hood - there's not the sort of cognitive processing we'd traditionally consider intelligence. But that can be a distinction without a difference if the output is the same, and for quite a lot of tasks, the output is becoming indistinguishable.
I think the real challenge for LLMs specifically is the training data. Between the technical and legal limits on what can be collected, the malicious poisoning already happening, and the breakdown of function seen when LLM-generated content is repeatedly fed back into training (i.e., "model collapse"), the supply of good data is under real pressure. However, I also think that by the time we start to see major effects of this, the LLMs of today will have evolved to largely work around this limitation, and the underlying process for generating output will be far less susceptible to the problems seen today. Time will tell whether that's overly optimistic, but there's a ton of development in this space toward better approaches.
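The model-collapse effect is easy to see in a toy setting. The sketch below (my own illustration, not from any particular paper's code) fits a Gaussian to data, then repeatedly retrains on samples drawn from the previous fit - a stand-in for training on model-generated content. The estimated spread of the distribution degenerates over generations:

```python
import random
import statistics

def fit_and_resample(samples, rng):
    # "Train" a model (fit a Gaussian to the current data), then
    # produce the next generation's training set entirely from the
    # model's own output.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [rng.gauss(mu, sigma) for _ in samples]

rng = random.Random(0)

# Generation 0: genuine data from N(0, 1).
data = [rng.gauss(0.0, 1.0) for _ in range(20)]

# Each generation trains only on the previous generation's output.
for _ in range(5000):
    data = fit_and_resample(data, rng)

# The fitted spread has collapsed toward zero - the "model" now
# produces nearly identical outputs, having lost the tails of the
# original distribution.
print(statistics.stdev(data))
```

The mechanism is that each refit introduces sampling error, and the errors compound with no fresh real data to correct them - the same dynamic argued to threaten LLMs trained on a web increasingly full of LLM output.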