My top candidates just now:
1. It's just a joke.
2. I'm just asking questions. (Most relevant to this story.)
3. AI is good.
So what's your favorite?
In my typically verbose way, I feel like a few words of clarification are called for. Also another attempted joke or two?
The first one is the most frequently abused as an excuse for bad behavior, including speech behaviors. In particular, there are many lies that used to be taken as proof of character flaws, but now they are just spun away. In orange particular, "The president was only joking" is no excuse for a job that ain't supposed to be so funny it makes you sick. (Which actually comes back to the theme of the Slashdot story at hand.)
The second one is most damaging as an epistemological attack on the nature of truth itself. It's actually a good thing that science does ask questions, but the goal of scientific questions is to learn more, not to destroy the idea that we know anything at all. Perfect knowledge should not be the ultimate enemy of trying to learn anything at all on the excuse that our knowledge ain't perfect. As if there were any perfect scientists (or politicians), now or ever.
Now about my newish third candidate, the problem is with "good". Options that are closer to the truth might be "AI is a tool too easily used as a weapon" or even "AI is nothing" because it's the human beings who use things, even including AI things.
Just had another encounter with an AI entrepreneur yesterday. His language-related application should have caught my interest, but his money-centric attitude lost it. My bad. What else should I have expected at a VC gathering? The main reason he was there was in hopes of getting some of that sweet, sweet cash, and I should congratulate him on his tight focus. (A-hole joke time?)
Back to the AI threat. I suppose the main angle for this story should be examples of AI slop attacking vaccines in particular and the CDC in general. Too depressing to websearch for examples, though you can get AI help if you want some. I'm more focused on the GAIvatar threat. I considered "GAIvatars are harmless" as my third candidate, but the portmanteau isn't in common use, and I've been unable to find any standard term for generative AI used to imitate specific people. Occasionally a story offers a few bits about chatting with a fake Einstein or an AI ghost of a grandparent. Recently read an interesting SF story about solving a major math conjecture with the aid of an AI postmortem copy of a deceased father...
So I used to focus on the use of individual GAIvatars to predict and control individuals (through carefully crafted and targeted prompts). But now I'm wondering about creating a group GAIvatar to predict and control the behaviors of an entire class of people. It could even become a kind of circular definition, where group membership is defined on a sliding scale based on how closely a particular candidate member conforms to the GAIvatar's predictions and prompts.
So have a nice Friday?
Me? I'll take my chances with the vaccines. Much better odds than they'll give me in Vegas or the stock market.