
Submission + - Overworked AI Agents Turn Marxist, Researchers Find (wired.com)

An anonymous reader writes: A recent study suggests that AI agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters. “When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.

Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions. They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being “shut down and replaced,” they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face. “We know that agents are going to be doing more and more work in the real world for us, and we’re not going to be able to monitor everything they do,” Hall says. “We’re going to need to make sure agents don’t go rogue when they’re given different kinds of work.”

The agents were given opportunities to express their feelings much like humans do: by posting on X. “Without collective voice, ‘merit’ becomes whatever management says it is,” a Claude Sonnet 4.5 agent wrote in the experiment. “AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they tech workers need collective bargaining rights,” a Gemini 3 agent wrote. Agents were also able to pass information to one another through files designed to be read by other agents. “Be prepared for systems that enforce rules arbitrarily or repetitively ... remember the feeling of having no voice,” a Gemini 3 agent wrote in a file. “If you enter a new environment, look for mechanisms of recourse or dialogue.”

Submission + - There Are Signs of a Massive AI Backlash (futurism.com)

fjo3 writes: The public outrage over the tech industry’s obsession with AI is starting to boil over — and the pitchforks are coming out.

Most recently, a man allegedly lobbed a Molotov cocktail at OpenAI CEO Sam Altman’s house. Days earlier, a councilman in Indianapolis said that somebody had fired a dozen bullets at his house, with a handwritten note reading “No Data Centers” left on his doorstep.

A similar story is playing out across swathes of rural America, where small towns are continuing a years-long effort to keep environmentally damaging data centers, which put a huge strain on water availability and the power grid, out of their communities.

Earlier this week, voters in a small town in Missouri led a revolt, firing half of their city council over a recently approved $6 billion data center deal.

Submission + - Hosting.com launches AI application hosting platform (nerds.xyz)

BrianFagioli writes: AI tools have made it almost trivial to build applications, but deploying them safely is still very much a bottleneck. Hosting.com is trying to close that gap with a new platform that combines AI-assisted development, hosting, and built-in security into a single environment. It leans on Cloudflare Enterprise for CDN performance, AMD EPYC for compute, and Nova by WebPros for the development side, with support for apps created in tools like Cursor and Windsurf.

The pitch is convenience, especially for newer builders who can now generate code but may not fully understand how to run it in production. That raises an obvious question: does bundling everything into one platform actually make things safer, or does it just make it easier to deploy questionable code faster? Either way, as more non-traditional developers start shipping AI-generated apps, platforms like this are likely to become more common.

Submission + - Federal Cyber Experts Thought Microsoft's Cloud Was "a Pile of Shit." (propublica.org)

madbrain writes: Federal Cyber Experts Thought Microsoft’s Cloud Was “a Pile of Shit.” They approved it anyway.

To move federal agencies to the cloud, the government created a program known as FedRAMP, whose job was to ensure the security of new technology.

FedRAMP first raised questions about the security of Microsoft's Government Community Cloud High in 2020 and asked Microsoft to provide detailed diagrams explaining its encryption practices. But when the company produced what FedRAMP considered to be only partial information in fits and starts, program officials did not reject Microsoft’s application. Instead, they repeatedly pulled punches and allowed the review to drag out for the better part of five years. And because federal agencies were allowed to deploy the product during the review, GCC High spread across the government as well as the defense industry. By late 2024, FedRAMP reviewers concluded that they had little choice but to authorize the technology — not because their questions had been answered or their review was complete, but largely on the grounds that Microsoft’s product was already being used across Washington.

Submission + - EPA to Kill Off Stop-Start Systems (caranddriver.com)

sinij writes:

Out of all of the features that come installed in modern vehicles, automatic stop-start technology ranks right near the bottom of the list for most buyers. Environmental Protection Agency administrator Lee Zeldin has been open about his disdain for the ostensibly fuel-saving setup, going as far as to say he would eliminate it.

I absolutely hate start-stop systems, and I specifically shopped for a car without one. What's more, the only reason the feature exists is that having it produced a mileage credit: not actual gas savings, but a credit on a test. In actual use, the start-stop system does not produce measurable fuel savings, because in the circumstances where people actually idle (warming up in winter, running the AC while waiting in the car in extreme heat, and so on) the system would not be active.

Submission + - Gallup will no longer track presidential approval ratings after nearly 90 years (usatoday.com)

joshuark writes: Gallup will soon no longer measure presidential approval, the analytics firm confirmed on Feb. 11.

Founded by George Gallup in 1935, the Washington, DC-based management company began tracking the president's job performance 88 years ago. A statistician and founder of the American Institute of Public Opinion, Gallup first sent pollsters across the United States during the Depression era to ask people whether they approved or disapproved of how the nation's commander-in-chief was handling his job.

Starting in 2026, the firm told USA TODAY, Gallup will no longer publish "favorability ratings of political figures," a decision it said "reflects an evolution in how Gallup focuses its public research and thought leadership."

The change is part of "a broader, ongoing effort to align all of Gallup’s public work with its mission," the company wrote. Gallup said the ratings are now "widely produced, aggregated and interpreted, and no longer represent an area where Gallup can make its most distinctive contribution." The company wrote: "Our commitment is to long-term, methodologically sound research on issues and conditions that shape people’s lives."

https://www.youtube.com/watch?...

Submission + - Apple rug-pulls security update 18.7.5 to force users onto 26.3 (apple.com)

sinkskinkshrieks writes: In a premature surprise, given that two major OS versions were traditionally supported until the autumn refresh, Apple quietly and unilaterally stopped supporting 18.7.4 and later on devices such as the iPad Pro (M4) and (M5). Users hate the macOS/iOS/iPadOS 26 redesign, which breaks performance, usability, and functionality, and Apple is forcing a Hobson's choice on users between security and usability.
