Comment Baloney (Score 1) 186

..."classified a family of three earning $133,000 to $400,000 in 2024 dollars as upper middle class"

The median income in 2024 was $45,140. So a family of 3 all earning the median income would now qualify as upper middle class. Baloney.

This is clearly more a rebranding of "upper middle class" than anything truly informative.

Comment Re:The Varginha mass hysteria incident (Score 1) 34

So Mudinho came from a crashed flying machine that prompted an immediate response from the army (and apparently some secret US three-letter agency), has hands and feet with three digits, leaves tracks to match, has large bright red eyes with no pupils, has no visible ears, and can even fool a room full of doctors? Got it.

Comment Re: Native (Score 3, Insightful) 118

Applications written directly using APIs on the platform they're running on (Windows, macOS, Linux, etc.). This is in contrast to progressive web apps (PWAs) or platforms like Electron, which are intended to be more portable but don't "feel" like the platform they're running on, even if they hook into platform-specific APIs too.

Comment Not an increase (Score 1) 72

LLMs have never been rules-based "agents," and they never will be. They cannot internalize arbitrary guidelines and abide by them unerringly, nor can they make qualitative decisions about which rule(s) to follow in the face of conflict. The nature of attention windows means that models actively ignore context, including "rules," which is why they can't follow them; and conflict resolution requires intelligence, which they do not possess, and which even intelligent beings frequently fail at. Social "error correction" tools for rule-breaking include learning from mistakes, which agents cannot do, and individualized ostracization/segregation (firing, jail, etc.), which is also not something we can do with LLMs.

So the only way to achieve rule-following behavior is to deterministically enforce limits on what LLMs can do, akin to a firewall. This is not exactly straightforward either, especially if you don't have fine-grained enough controls in the first place. For example, you could deterministically remove an agent's capability to delete emails, but you couldn't easily scope that restriction to only "work emails." The emails would need to be categorized appropriately, external to the agent, and the agent's control surface would need to thoroughly limit the ability to delete any email tagged as "work" or to change or remove the "work" tag, ensure that the "work" deny rule takes priority over any other "allow" rules, AND prevent the agent from changing the rules by any means.
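The deny-over-allow enforcement described above can be sketched in a few lines. This is a minimal illustration, not a real library: the `Action` and `PolicyGate` names, the verbs, and the "work" tag are all hypothetical. The key properties are that the gate runs outside the model (the agent cannot rewrite its rules) and that any matching deny rule wins before allow rules are even consulted.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    """A proposed agent action, e.g. deleting an email with given tags."""
    verb: str                                  # e.g. "delete", "edit_tags"
    target_tags: frozenset = field(default_factory=frozenset)

class PolicyGate:
    """Deterministic gate enforced outside the agent's control surface."""
    # Deny rules: (verb, tag) pairs that are never permitted.
    DENY = {
        ("delete", "work"),        # never delete work-tagged email
        ("edit_tags", "work"),     # never change/remove the "work" tag
    }
    # Allow rules: verbs the agent may otherwise use.
    ALLOW_VERBS = {"read", "archive", "delete", "edit_tags"}

    def permit(self, action: Action) -> bool:
        # Deny always takes priority over any allow rule.
        for tag in action.target_tags:
            if (action.verb, tag) in self.DENY:
                return False
        return action.verb in self.ALLOW_VERBS

gate = PolicyGate()
print(gate.permit(Action("delete", frozenset({"personal"}))))  # True
print(gate.permit(Action("delete", frozenset({"work"}))))      # False
print(gate.permit(Action("archive", frozenset({"work"}))))     # True
```

Note that even this toy version only works because the tags are assigned and stored outside the agent; the moment the agent can retag an email, the deny rule is hollow.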

Essentially, this is an entirely new threat model, where neither agentic privilege nor agentic trust cleanly maps to user privilege or user trust. At the same time, the more time spent fine-tuning rules and controls, the less useful agentic automation becomes. At some point you're doing at least as much work as the agent, if not more, and the whole point of "individualized" agentic behavior inherently means that any given set of fine-tuned rules is not broadly applicable. On top of that, the end result of agentic behavior might even be worse than human performance, which means more work for worse results.

Comment You can easily test this yourself (Score 0) 10

Simply download Ollama and run a few cellphone-sized models locally.
You can see exactly how F'ing useless this whole idea will be for nearly all use cases: trying to get anything useful out of it comes with a high degree of inaccuracy.
If you're stupid enough to hand any control of your life to Openclaw, then you deserve all the bad things you will inevitably get. Let's just call Openclaw "Darwin in action."
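For anyone who wants to run the test above, the Ollama CLI makes it a two-command job. The model name here is just one example of a small ("cellphone-sized") model from the Ollama library; substitute any 1B-3B parameter model you like.

```shell
# Pull a small model (~1B parameters) and prompt it locally.
ollama pull llama3.2:1b
ollama run llama3.2:1b "Summarize the plot of Hamlet in one sentence."
```

Compare the answers you get against a frontier hosted model and judge the gap for yourself.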
