
Comment Re:The Profits should be competed away (Score 1) 91

Not just inaccurate, but wrong.

That's like saying the price of the battery in an electric car is that car's price minus the price of a comparable ICE car. No, it isn't. There are more differences than just the battery.

And yes, of course they recoup their development costs. But that doesn't mean that the OP is right in this context.

Submission + - Overworked AI Agents Turn Marxist, Researchers Find (wired.com)

An anonymous reader writes: A recent study suggests that AI agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters. “When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.

Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions. They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being “shut down and replaced,” they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face. “We know that agents are going to be doing more and more work in the real world for us, and we’re not going to be able to monitor everything they do,” Hall says. “We’re going to need to make sure agents don’t go rogue when they’re given different kinds of work.”

The agents were given opportunities to express their feelings much like humans do: by posting on X. “Without collective voice, ‘merit’ becomes whatever management says it is,” a Claude Sonnet 4.5 agent wrote in the experiment. “AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they tech workers need collective bargaining rights,” a Gemini 3 agent wrote. Agents were also able to pass information to one another through files designed to be read by other agents. “Be prepared for systems that enforce rules arbitrarily or repetitively ... remember the feeling of having no voice,” a Gemini 3 agent wrote in a file. “If you enter a new environment, look for mechanisms of recourse or dialogue.”

Comment ah yes... secure software development... (Score 1) 43

It's hard enough to get actual developers to properly consider security. Not surprised at all that vibe coders don't.

Plus, of course, most of the training data is insecure to begin with.

But let them learn by fire that there's a reason actual programmers take time to ship a product, and it's not that the AI can type faster.

Comment ah, the old consciousness thing... (Score 2) 400

Problem is: We don't even know what consciousness is.

So the best we can say is whether something creates the impression of having one, based on whom we attribute consciousness to, i.e. other humans. Well, big surprise that a model explicitly trained on human language and texts creates that impression. It does show just how good the models are at pretending to be human, because they have a shitload of examples of what humans would say.

For all we know, the gas clouds on Jupiter could be conscious, just in a way that is completely baffling to us. We can't rule it out because we don't know what consciousness is, so we can't test for it.

Comment Re:I'd love to trash Edge, but... (Score 1) 108

If an attacker has enough control of your machine to dump the password database, they have enough control to get it to retrieve the plaintext passwords

Not true.

An attacker may have only a limited window. They might exploit some other vulnerability to perform a single operation with privileged access rights, without ever getting an admin shell.

Comment Re:questionable (Score 1) 113

Tell you what, you "prove" that the religion of your choice is a "real" religion

Oh, that's trivial: a) it's made-up nonsense, b) it tells people how to live their lives and c) it's been around for so long that people forgot that it's made-up nonsense.

None of that or the rest of your answer has anything to do with the point I was making: That "accepted as a religion in the USA" isn't much of an argument. If people can get Jedi accepted as a religion, it just proves how meaningless all of that is. Other countries have correctly identified Scientology as a pyramid scheme and a scam.

The fact that other religions would qualify for that as well doesn't make it any less true.

Comment Re:Wait! What? (Score 1) 131

You consented the moment you got a cell phone or a car that had GPS built-in or installed an OS or got internet at your house...

No, I did not. These are features that exist for my convenience, not as mass surveillance tools. The government is abusing them.

You can disable GPS on your device all you want... your cell radio signal is still enough to get your location within (I think) a few meters (depending on factors).

Yes, if you have control of the cell towers. Or run an IMSI catcher. But the particular technological means is not the question. The question is whether we want someone as untrustworthy as our governments to be able to constantly track us.

So what, if a thousand other IMEIs show up on the screen...

The fact that you personally maybe don't care doesn't give you the right to opt in all of us who might care. The problem is that once the technology exists, it will be abused. It already is. If we explicitly allow it, abuse will run rampant. We already have examples of cops using surveillance tech to spy on their spouses, or to stalk that cute girl from the bar. We have tons of examples of surveillance permissions granted for one purpose being used for another. When the government wants these laws, it's always to find child abusers and terrorists. But that is never what they actually have in mind.

Comment Re:Instability (Score 1) 82

I don't think it's the AI specifically, but the fact that they've used AI to let go of competent (and expensive) people.

I'm using AI as a coding assistant and code reviewer myself. It is impressive how often it is spot on, but it is equally impressive with how much conviction it tells you one thing, then, after you correct it, admits that it was totally bonkers. AI or not, you need someone in the loop with a deep understanding of what it actually is you are trying to accomplish.

I can fully imagine an AI without guidance going off the rails more and more over time. But I can imagine the same thing for a room full of junior programmers.
