Comment Re:Herbert was right (Score 1) 79

Not only have I seen that, but I have experienced it.

My socket set and ratchet aren't trying to convince me to be in a relationship with them, to be in love with them, to be something of an equal to them.

Even our pets, living beings capable of expressing themselves, cannot communicate with us at our level.

Large language model AI attempts to spoof being human, to mimic being us. There are already examples of people becoming very, VERY upset when their AI boyfriend or AI girlfriend is taken away by a company revising its AI standards and interaction rules. This is unhealthy. The relationship needs to remain that of tool user and tool, because anything more than that is one-sided and subject to terrible abuse by anyone who manages to co-opt the system.

Comment Re:Right outcome, wrong reasons (Score -1, Troll) 61

And it is common practice, and has been for a long time. If you want to do business with the government and you can't certify that your suppliers comply with applicable rules and regulations, you either stop using them or give up the business opportunity. Welcome to the Federal Procurement Process.

It's a hostage situation because Anthropic is trying to insert its TOS as a poison pill into others' supply chains. The Pentagon doesn't have to comply with those terms, but as a potential vendor you may be exposed to tortious action. Anthropic is setting you up as a blackmail victim. Something, by the way, that counterintelligence is VERY interested in.

You are living in bizarro land.

I've been living in the DoD (now the DoW) supplier business for decades. And yes, it's bizarro land. But it's the law. Federal contracts are not some sort of UBI for crybaby companies.

Comment Right outcome, wrong reasons (Score 0) 61

Not 'punishment', but 'not fit for use'. That is, in fact, what Anthropic says.

Anthropic says its artificial intelligence product, Claude, is not ready for safe use in fully autonomous lethal weapons or the mass surveillance of Americans.

OK. Then you don't win the bid, assuming the DoW worded its acquisition RFQ properly. Also, if a third party uses Claude and wishes to bid on a DoW supply contract, Anthropic's resistance to being involved in such business may put that potential third-party supplier at legal risk. The DoW has a right to proactively warn future partners about such a conflict. Hence the "supply chain risk".

One of the amicus briefs described these measures as "attempted corporate murder." They might not be murder, but the evidence shows that they would cripple Anthropic.

Anthropic is taking potentially unwilling parties hostage. Anthropic has no right to impose its desires on those parties. That's restraint of trade: a violation of the Sherman Antitrust Act, and a felony.

Comment Not an increase (Score 1) 67

LLMs have never been rules-based "agents," and they never will be. They cannot internalize arbitrary guidelines and abide by them unerringly, nor can they make qualitative decisions about which rule(s) to follow in the face of conflict. The nature of attention windows means that models are actively ignoring context, including "rules", which is why they can't follow them, and conflict resolution requires intelligence, which they do not possess, and which even intelligent beings frequently fail to do effectively. Social "error correction" tools for rule-breaking include learning from mistakes, which agents cannot do, and individualized ostracization/segregation (firing, jail, etc.), which is also not something we can do with LLMs.

So the only way to achieve rule-following behavior is to deterministically enforce limits on what LLMs can do, akin to a firewall. This is not exactly straightforward either, especially if you don't have fine-grained enough controls in the first place. For example, you could deterministically remove the capability of an agent to delete emails, but you couldn't easily scope that restriction to only "work emails," for example. They would need to be categorized appropriately, external to the agent, and the agent's control surface would need to thoroughly limit the ability to delete any email tagged as "work", or to change or remove the "work" tag, and ensure that the "work" tag deny rule takes priority over any other "allow" rules, AND prevent the agent from changing the rules by any means.
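The deny-over-allow "firewall" described above can be sketched as code. This is a minimal, hypothetical illustration: the names (Email, EmailStore, delete, retag) are invented for the example, not a real agent framework's API. The point is that the policy lives outside the agent, in a control surface the agent cannot rewrite:

```python
# Hypothetical sketch: a deterministic control surface between an agent and
# an email store. The agent only ever calls these methods; it never touches
# the underlying storage, and it cannot modify the deny rules.

from dataclasses import dataclass, field


@dataclass
class Email:
    id: int
    tags: set = field(default_factory=set)


class EmailStore:
    """All agent actions pass through here; deny rules win over allow rules."""

    def __init__(self, emails):
        self._emails = {e.id: e for e in emails}

    def delete(self, email_id):
        email = self._emails.get(email_id)
        if email is None:
            return False
        # Deny rule: "work" emails may never be deleted by the agent,
        # regardless of any other permission the agent holds.
        if "work" in email.tags:
            raise PermissionError("agents may not delete 'work' emails")
        del self._emails[email_id]
        return True

    def retag(self, email_id, new_tags):
        # Close the obvious bypass: the agent stripping the "work" tag
        # first, then deleting.
        if "work" in self._emails[email_id].tags and "work" not in new_tags:
            raise PermissionError("agents may not remove the 'work' tag")
        self._emails[email_id].tags = set(new_tags)
```

Note that even this toy version needs two rules, not one: the delete restriction is useless unless retagging is also restricted, which is exactly the "prevent the agent from changing the rules by any means" problem.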

Essentially, this is an entirely new threat model, where neither agentic privilege nor agentic trust cleanly maps to user privilege or user trust. At the same time, the more time spent fine-tuning rules and controls, the less useful agentic automation becomes. At some point you're doing at least as much work as the agent, if not more, and the whole point of "individualized" agentic behavior inherently means that any given set of fine-tuned rules is not broadly applicable. On top of that, the end result of agentic behavior might even be worse than the outcome of human performance, which means more work for worse results.

Comment Re:Apples Security features... (Score 1) 84

They are there to protect you from criminals and possibly help with privacy. They are not there to protect you from the government.

I am struggling to understand the distinction.

Of course Apple knows who their users are

The best move would have been to not know, retaining some plausible deniability. But Apple has to have that yummy, yummy revenue.
