Comment Re:quiet part out loud (Score 1)
Yeah, "AI" right now, outside of whatever is making the ghosts hunt pacman, generally means LLMs right now. LLMs are simply things that pick words based upon a multidimensional statistical analysis after being given an input. They're tuned and designed to give things that look like answers.
How on Earth does emotion or guilt fit into this? It really doesn't. People are fooled because they attribute the positives and negatives to some kind of intelligence underneath, but if those positives and negatives aren't a function of language, they won't come through. You can train a model to be polite, or rude, or manipulative, because polite, rude, and manipulative language exists to learn from.
But guilt isn't a function of language. You can express guilt through language, sure, as in "Gosh, I feel so bad I did that", but you can't actually make decisions based upon a feeling of guilt by reading things written by people who feel guilty.
So guilt will never be reflected in a statistical analysis of which words are associated with which other words, and it will never come out.
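(To make that concrete, here's the degenerate version of the same idea, a toy bigram model; the two-sentence "corpus" is obviously invented. It happily emits guilt-shaped text, and there is plainly no guilt anywhere inside a table of word counts:)

    from collections import Counter

    # Word-association model: bigram counts from text written by people who
    # felt guilty. All the model "knows" is which word tends to follow which.
    corpus = "i feel so bad i did that . i feel so guilty about it .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    def most_likely_next(word):
        # Return the word most often seen after `word` in the corpus.
        candidates = {b: c for (a, b), c in bigrams.items() if a == word}
        return max(candidates, key=candidates.get)

    w, out = "i", ["i"]
    for _ in range(4):
        w = most_likely_next(w)
        out.append(w)
    print(" ".join(out))  # prints "i feel so bad i", with no feeling involved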
Now, maybe the people making these suggestions know all that and are talking about some future where AI actually reflects a reasoning, analytical, logical "brain" communicating. I'm as glad they are as I'm glad Asimov created the Three Laws of Robotics. But for now I suspect this is going to be read by policy makers who think LLMs can do more than they actually can, who'll believe they can solve actual problems, and we're a long way from that.