Comment Re:Not A Ton Yet, But That's Changing (Score 2) 65
> I work in editorial operations. And while we're not making significant use of LLMs at this second, that's going to be changing rather quickly.
> Assuming trials with GPT-4 go as expected, the company hopes to eliminate roughly half of the editorial pool by the start of next year. That would leave a team of a few power operators to feed prompts into GPT to get usable results, a few editors to check the output (because we know it bullshits at times),
I wouldn't want to do that job. Shit, I don't want to read GPT-crap now. I don't want to read the shit the editors here come up with now, either.
> and then everyone else working on traditional editorial content, including breaking news and subjects GPT can't handle.
You're going to find fewer and fewer people able to do that job. GPT is one reason.
> The reality is that even if GPT-4 isn't always as good as a human, it's close enough that the cost savings make it a no-brainer.
That means even fewer humans learn how to do it.
> What it lacks in factual accuracy it makes up for with clear, structured writing; something even many humans struggle with. Put bluntly, it's hard to argue for hiring a greenhorn to work the general news beat when GPT can do a better job from day one.
It doesn't exactly help that fewer and fewer schools still teach this sort of thing. They used to, so you had a good pool of capable people. Now, even here, it's become quite common for people to fly off the handle over things that exist entirely in their own heads, since nothing they're complaining about appears in the comment they're reacting to. Reading comprehension is way down.
That in turn means it no longer matters whether GPT manages to produce wonderfully clear, structured drivel. The readers are no longer able to notice the difference. (Or are no longer willing to read all that drivel, since it's produced by a machine with "nobody home".) And GPT will do its level best to make that worse, and it will succeed.