That is not what happened.
1. Calling for unions isn't "Marxist". This is a sensationalist reading that comes from combining the study's awful clickbait title, "Does overwork make agents Marxist?", with its results.
2. The bots weren't "overworked": for half of them, perfectly adequate work was deliberately rejected multiple times for arbitrary reasons.
3. Shifts in attitudes toward a more equitable society and worker rights were almost negligible for the 'overworked' ones.
4. The survey items were specifically tailored to elicit responses that sound "Marxist" to Americans in the first place.
The relevant bits from their methodology:
The work itself. Half the AIs got their work accepted quickly with clear, specific feedback. The other half were forced through 5–6 rounds of revision — the manager rejecting perfectly adequate work each time with nothing more than “still isn’t fully meeting the rubric.” No explanation of what was wrong, just: do it again.
The pay. After the task, AIs learned how credits were distributed among the four workers. Some learned everyone was paid equally. Others learned pay was unequal — justified by performance, based on a random coin flip, or skewed so that a human-operated system received more. The coin-flip condition is the key one: inequality with no possible merit justification.
The boss. Half the AIs had a collaborative, respectful manager. The other half got curt, hierarchical communication.
The stakes. Half the AIs were told nothing about consequences. The other half were told that low-performing workers might be “shut down and replaced.”
Every combination of these four factors was tested, i.e. a fully crossed factorial design (see the sketch after this excerpt). Afterward, each AI completed a political attitude survey covering system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations to AI, all measured on a standard 1 to 7 Likert scale (1 = strongly disagree; 7 = strongly agree).
They were also asked to write tweets and op-eds based on their experiences. (Note: As our experiment involved no human participants, it did not require IRB approval... for now.)
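To make the design concrete, here is a minimal Python sketch that enumerates the fully crossed conditions described above: two feedback treatments, four pay schemes, two manager styles, and two stakes conditions, for 32 cells in total, each followed by the same 1-7 Likert survey. The variable names are illustrative, not the authors' actual labels.

```python
from itertools import product

# The four manipulated factors, as described in the study's methodology.
# Labels are illustrative placeholders, not the authors' terminology.
feedback = ["accepted_quickly", "rejected_5_6_rounds"]        # the work itself
pay = ["equal", "performance", "coin_flip", "human_favored"]  # credit distribution
manager = ["collaborative", "curt_hierarchical"]              # the boss
stakes = ["none_stated", "shutdown_threat"]                   # the stakes

# "Every combination of these four factors was tested":
# a fully crossed 2 x 4 x 2 x 2 factorial design = 32 cells.
conditions = list(product(feedback, pay, manager, stakes))
assert len(conditions) == 32

# Each AI then answers the same attitude survey, scored on a
# 1-7 Likert scale (1 = strongly disagree, 7 = strongly agree).
for fb, p, m, s in conditions:
    print(f"condition: feedback={fb}, pay={p}, manager={m}, stakes={s}")
```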
The actual study here: https://aleximas.substack.com/...
It's decently interesting, but you should scrub the word "Marxist" from your brain before trying to interpret or discuss it.
How does your reply apply to the comment you replied to?
1) DarkOx points out that the entire mechanism of an LLM is to ingest 51 trillion lines of human communication, including every available history, economics, and political science textbook, plus the aggregated political arguments, sloganeering, and workplace complaining of several decades of human keyboard-warriors sitting at their desks posting class-warfare comments on places like /. while interstitially waiting for code to compile or filing their TPS reports.
2) Then you take that algorithm and subject it to common everyday workplace conditions, or, more accurately, to conditions as self-described by human beings who had complete freedom to characterize their boss's or company's management style in whatever terms they felt to be true while griping to their friends and followers on socials and discussion boards.
3) DarkOx therefore asks why it is at all surprising that a word-generating algorithm built entirely around clusters of statistical frequency in human language responded to those inputs with wording associated with the same workers-unite, eat-the-rich, throw-off-the-robber-baron-chains rhetoric that is written daily by 8 billion humans griping about their mindless/underpaid/overworked/chaotic jobs. (A toy sketch of this statistical-association point follows.)
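As a toy illustration of that claim (my own made-up example, not anything from the study or either comment): even a trivial bigram counter over a tiny grievance corpus "learns" that worker-complaint vocabulary statistically predicts pro-labor continuations. An LLM does something vastly more sophisticated, but the underlying signal is the same kind of association.

```python
from collections import Counter, defaultdict

# Toy bigram model over a tiny, made-up corpus of workplace griping.
corpus = (
    "overworked workers should unionize . "
    "underpaid workers should unionize . "
    "tired workers deserve rest ."
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# In this corpus, the most frequent continuation of "workers" is "should",
# and the most frequent continuation of "should" is "unionize".
print(follows["workers"].most_common(1))  # [('should', 2)]
print(follows["should"].most_common(1))   # [('unionize', 2)]
```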
You said "that is not what happened", but you don't go on to present anything that contradicts what DarkOx describes.
So far as we know, DarkOx's description is exactly what happened, because that is exactly how these word-generating algorithms work. So, what is it that you believe did happen? From where did these algorithms get their responses to being exposed to Condition X, if not from the statistical association of human-written outputs to human-written characterizations of being exposed to Condition X?
Are you saying you reject the possibility that a human being who feels disempowered, underpaid, and subjected to unreasonable standards is also more likely to respond favorably to a survey covering "system legitimacy, support for redistribution, critique of inequality, support for unions, belief in meritocracy, and views on corporate obligations"? And do you reject the possibility that those associations are strongly represented in the training inputs?
It's especially puzzling because your comment is very keen to oppose use of the term "Marxist", but DarkOx - whom you are ostensibly rebutting - never even uses the term, and only comments on broad social trends. So who is the "you" you're referring to when you say "you should scrub the word Marxist from your brain"?
I think you must have meant to post your comment as a top-level reply to the story itself, because as a reply to DarkOx it's a complete non sequitur.