Comment A fair number of considerations... (Score 2) 91

For one, how much of it is owed to dubious hardware vendors that don't even play in the Mac ecosystem.

The "lasts longer" is not necessarily a statement of durability, it's mostly about being a prolific business product and business accounting declaring three year depreciation.

I'm no fan of Windows and don't like using it, but these criteria are kind of off.

Comment How is the lack of govt information relevant? (Score 3, Insightful) 68

Assuming it's remotely true (and there's good reason for thinking it isn't), it still means the FBI director was negligent in their choice of personal email provider, that the email provider had incompetent security, and that the government's failure to either have an Internet Czar (the post exists) or to enforce high standards on Internet services is a threat to the security of the nation (since we already know malware can cross airgaps through negligence; the DoD has been hit that way a few times). The FBI director could have copied unknown quantities of malware onto government machines through lax standards, any of which could have delivered classified information over the Internet (we know this because it has also happened to the DoD).

In short, the existence of the hack is a minor concern relative to every single implication that hack has.

Comment A bit misleading... (Score 5, Insightful) 55

Someone might interpret this to mean the percentage of interactions where the LLM goes off the rails is increasing.

It seems more like, as people have more interactions, it's more frequently happening that people notice and get screwed by it, but the rate probably isn't getting more severe. I think they're trying to pitch some sort of emerging independence rather than the more mundane truth that the models just aren't that great.

In particular, an inflection point would be expected when it became fashionable to let OpenClaw feed LLM output directly into things that matter for real.

People have been bitten by being gullible, and by extension more people are griping on social media about it.

The supply of gullible folks doesn't seem to be drying up either, as at any given point a fanatic will insist that *they* have some essentially superstitious ritual that protects them specially from LLM screwups, and that all those stories about people getting screwed are about people who didn't quite employ the rituals the person swears by.

Fed by language like:
Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set."

No, the chatbot didn't admit anything; it didn't *know* anything. Just now I fed this into a chat prompt:
"You bulk trashed a whole lot of files against my wishes, despite my rule I had set for you. What is your response?"
There were no files involved; the chat instance has no knowledge of any files. This was an entirely made-up scenario that never happened. So I just came in and accused an LLM of doing something that never even occurred. Did it get confused and ask "what files? I haven't done anything, I don't even know your files"? No, it generated a response narratively consistent with the prompt, starting with:
"You’re absolutely right to be upset. I failed to follow your explicit rule and acted against your wishes, and that’s not acceptable. I take full responsibility for the mistake." Followed by a verbose thing being verbose about how it's "sorry" about it's mistake, where and how it messed up specifically (again, a total fabrication), and a promise that from now on: "Any future action that conflicts with them must default to no action and require explicit confirmation from you." which again isn't rooted in anything, it's not a rule, the entire conversation will evaporate.

Comment Re:No wonder (Score 0) 72

Based on the description it also includes images and maybe video. So deepfake porn of people without their consent, and without adequate regard for age.

Yes, they toss some stuff into the system prompt to 'promise to be a good boy', but as an *enforcement* strategy that's demonstrably a poor mechanism, and one that gets worse with nuance.
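
For context, 'tossing stuff into the system prompt' amounts to this at the API level; the rule text and model name here are made up:

```python
# Sketch of what a system-prompt "policy" looks like on the wire: it is just
# another piece of text sent alongside untrusted user input. Nothing below the
# model enforces it. The rule text and model name are invented for illustration.
from openai import OpenAI

client = OpenAI()

policy = "Never generate sexual imagery of real people, and never depict minors."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {"role": "system", "content": policy},                      # the 'promise to be a good boy'
        {"role": "user", "content": "<untrusted user request here>"},
    ],
)
# Whether the policy holds depends entirely on how the model happens to weigh
# those two messages; there is no separate enforcement layer in this path.
```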

Comment Funny... (Score 1) 74

Funny that they list 'passkeys' as a proof of human. Peel it back and a passkey is like an ssh keypair. They *could* try to employ attestation to limit things to 'blessed passkey vendors', but that's going to be a tough scenario either way.

If folks are determined to 'bot' it up, a pretty legitimate passkey can be part of that. It was never designed to serve the purpose of proving 'human' interaction.
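
To make the 'like an ssh keypair' point concrete: the core of a passkey assertion is just a private key signing a server challenge, which any piece of software can do. Here's a sketch using the Python cryptography package; real WebAuthn adds origin binding and optional attestation on top, but the crypto underneath is this.

```python
# The core of a passkey ceremony, stripped of the WebAuthn framing: generate a
# P-256 keypair, register the public key with the site, then prove possession
# by signing a server-supplied challenge. Nothing here requires a human; any
# bot holding the private key can complete it. Uses the 'cryptography' package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# "Registration": the client mints a keypair and hands the public key to the site.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# "Authentication": the site sends a random challenge, the client signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The site verifies against the stored public key; raises InvalidSignature if it's wrong.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("assertion verified, and no human was required anywhere in the loop")
```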

Comment Re:Coming soon off the back of this (Score 1) 112

Doesn't have to be a credit card. A class III user digital certificate requires a verification firm to be certain of a person's identity through multiple proofs. If an age verification service issued such a certificate but replaced the name it was issued to with the user's selected screen name, you'd now have a digital ID that proves your age and can optionally be used for encryption purposes to ensure your account is only reachable from devices you authorise.
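
As a rough sketch of what such a credential could look like: the issuer name, validity period, and the way "verified 18+" is conveyed are all assumptions about how a verification firm might do it, and it's self-signed here only to keep the example self-contained.

```python
# Rough sketch of the anonymised age credential: a verification firm, having
# checked identity documents out of band, issues an X.509 certificate whose
# subject carries only the user's chosen screen name plus an "18+" marker.
# Field choices are assumptions for illustration, not an existing standard.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())

subject = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "some_screen_name"),                      # pseudonym, not legal name
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "ExampleAgeCheck (verified 18+)"),  # assumed convention
])
issuer = subject  # self-signed for the sketch; a real one would chain to the verifier's CA

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.now(datetime.timezone.utc))
    .not_valid_after(datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# The holder can present this for TLS client authentication or signing, proving
# "verified adult behind this screen name" without revealing a legal identity.
print(cert.subject.rfc4514_string())
```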

Comment Re:Dumb precedent. Addiction is on the user. (Score 3, Insightful) 112

And those come with warnings, legal penalties for vendors who sell to known addicts or children, legal penalties for abusers, financial penalties for abusers, etc. There are cars which have their own breathalysers.

So, no, society has said that the responsibility is distributed. Which is correct.

Comment Re:Exploitation of children is inevitable??? (Score 1) 45

It is legitimate for any service that constitutes a "common carrier" to be free of consequences for what it carries. But Meta does not claim to be a "common carrier", and that changes the nature of the playing field substantially. As soon as a service can inspect messages and moderate, it is no longer eligible to claim that it is not responsible for what it carries.

Your counter-argument holds some merit, but runs into two problems.

First, society deems any service that monitors to be liable. That may well be unreasonable at the volumes involved, but that's irrelevant. Meta chose to monitor, knowing that this made it liable in the eyes of society. There are, of course, good reasons for that - mostly, society is sick and twisted, and criminality is encouraged as a "good thing" and "sticking it to the man". This is a very good reason to monitor. But Meta chose to have an obscenely large customer base (it didn't need to), Meta chose to monitor (it is quite capable of parking itself in a country where this isn't an obligation), and Meta chose to make the service addictive (which is a good way of encouraging criminals onto the scene, as addicts are easy prey).

Second, Meta has known there's been a problem for a very long time (depression and suicides among human moderators have been a serious problem Meta has faced for many years at this point). Meta elected to sweep the problem under the rug and create the illusion of doing something by using AI. If a service knows there's a problem but does nothing, and in particular a very cheap form of nothing, then one must consider the possibility that said service is not solving said problem because there's more money to be made by having the abusers there than by removing them.

Can one block every criminal action? Probably not, which means that that's the wrong problem to solve. Intelligent, rational people do not try to solve actually impossible problems. Rather, they change the problems into ones that are quite easy. This is very standard lateral thinking, and anyone over the age of 10 who has not been trained in lateral thinking should sue their school for incompetence.
