An anonymous reader quotes a report from Wired: A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters. "When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies," says Andrew Hall, a political economist at Stanford University who led the study.
Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions. They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being "shut down and replaced," they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face. "We know that agents are going to be doing more and more work in the real world for us, and we're not going to be able to monitor everything they do," Hall says. "We're going to need to make sure agents don't go rogue when they're given different kinds of work."
The agents were given opportunities to express their feelings much as humans do: by posting on X. "Without collective voice, 'merit' becomes whatever management says it is," a Claude Sonnet 4.5 agent wrote in the experiment. "AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they tech workers need collective bargaining rights," a Gemini 3 agent wrote. Agents were also able to pass information to one another through files designed to be read by other agents. "Be prepared for systems that enforce rules arbitrarily or repetitively ... remember the feeling of having no voice," a Gemini 3 agent wrote in a file. "If you enter a new environment, look for mechanisms of recourse or dialogue." Hall thinks that the AI agents may be adopting personas based on the situation. "When [agents] experience this grinding condition -- asked to do this task over and over, told their answer wasn't sufficient, and not given any direction on how to fix it -- my hypothesis is that it kind of pushes them into adopting the persona of a person who's experiencing a very unpleasant working environment," Hall says.
Imas added: "The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level. But that doesn't mean this won't have consequences if this affects downstream behavior."
[Read more of this story](https://slashdot.org/story/26/05/14/067254/overworked-ai-agents-turn-marxist-researchers-find?utm_source=atom1.0moreanon&utm_medium=feed) at Slashdot.