
Comment Not the worst thing systemd does with user info... (Score 1) 156

So, during this story, someone pointed out a command to contextualize the info:
# userdbctl user --output=json $(whoami)

Ok, so I ran that and saw a "hashedPassword" field. My entire career has been built around the principle that not even the user themselves should have access to that value; even partial access is supposed to be mediated by utilities that refuse to divulge it, even when they need that field to validate user input. And now there it is: systemd, as a matter of course, letting any unprivileged process running as the user read the hashed password at any time.
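To make the exposure concrete, here's a minimal sketch. The record below is hand-written for illustration (the real `userdbctl user --output=json` output carries many more fields, and the exact nesting of privileged fields may differ); the point is only that any process that can run the command gets the hash as an ordinary JSON field:

```python
import json

# Hand-written sample shaped like a userdbctl JSON user record.
# Field names abbreviated; the hash value is fake, for illustration only.
record_json = """
{
    "userName": "alice",
    "uid": 1000,
    "hashedPassword": ["$6$examplesalt$notarealhashjustanillustration"]
}
"""

record = json.loads(record_json)

# No shadow-file privilege boundary involved: the hash is just data
# in the record, readable by whoever obtained the JSON.
for pw_hash in record.get("hashedPassword", []):
    print("exposed hash:", pw_hash)
```

Contrast that with the traditional model, where /etc/shadow is root-readable and even PAM only answers yes/no to a candidate password.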

Now, this "age verification" thing? I think the systemd facet is blown out of proportion. All it is is a field that the user or administrator injects; there is no "verification". Ultimately, even if it gets wired up, the only people impacted are those who lack admin permissions on their own system and have an administrator who somehow forces their real date of birth into the record.

The biggest problem comes when "verification" is done for real, when an ecosystem demands a government ID or credit card. However, most of these laws consider it sufficient for an OS to take the owner at their word about the user's age, without external validation. So a parent might have a chance at restricting a young kid (until the kid learns to download a browser fork that always sends the "I'm over 18" flag wherever it exists), but broadly the data is just whatever people feel like entering.

Comment Re:Depends (Score 1) 35

The problem with the vast amount of hardware turf that Microsoft covers is different from, say, Apple's, because Apple tightly controls its hardware platforms, and Microsoft, by its nature, cannot.

Add in driver components and software legacies, and Microsoft users continue to pay this tax, generation after generation. So in that sense, yes, these issues ARE similar.

When Oracle updates key functionality, it risks a domino effect, just as Microsoft does. The QA feedback loop can help, but all old code eventually turns crusty; it's practically physics. Reinventing the core code then causes its own ripple effects.

There are ways to fight this, at the risk of business partners going away-- the hardware makers and software vendors with huge installed bases.

Every time a change is made, it would be wonderful to do regression testing. That's why there are "insider" programs, the little beta sites for these changes; comprehensive regression testing today is impossible because the installation platforms are too diffuse.

There is truth in the aphorism, "The bigger they are, the harder they fall."

Comment Re:*nix systems are more stable? -- We know.... (Score 1) 183

Have you ever noticed that no one celebrates Patch Tuesday in the pub? It's because they're sitting at their consoles, waiting for stuff to break.

Windows, client and server, are a house of cards. This goes far back in history. The citations you challenge are each provably wrong. Ever wonder why the cloud isn't rife with Windows servers? There's a reason for that. Cloud Native Windows is almost an oxymoron. Linux and to a lesser extent, BSD, have taken over that space.

In so many ways, Windows is now a legacy OS in data centers, and for good reason. Its developer community has all but collapsed. Its backwards compatibility with other house-of-cards platforms like .NET has put a ball and chain around its neck.

The addition of tawdry, pre-release-in-production AI via Copilot causes new train wrecks each and every day.

No serious services developer uses Windows as a new development platform. The metaphors you diss are a dodge; you know exactly what the remarks are about. Windows continues to be a sieve for security. Linux and BSD, correctly configured, are far more difficult to breach, and it doesn't take much.

As a developer, if you are one, Microsoft is putting you out to pasture. Have fun eating oats.

Comment A fair number of considerations... (Score 3, Insightful) 183

One: how much of this is owed to dubious hardware vendors that don't even play in the Mac ecosystem?

"Lasts longer" is not necessarily a statement of durability; it's mostly about being a prolific business product, with business accounting declaring three-year depreciation.

I'm no fan of Windows and don't like using it, but these criteria are kind of off.

Comment A bit misleading... (Score 5, Insightful) 72

Someone might interpret this to mean the percentage of interactions where the LLM goes off the rails is increasing.

Seems more like, as people have more interactions, it more frequently happens that people notice and get burned by it, but the failure rate itself is probably not getting worse. I think they are trying to pitch some sort of emerging independence rather than the more mundane truth that the models just are not that great.

In particular, an inflection point would be expected once it became fashionable to let OpenClaw feed LLM output directly into things that matter for real.

People have been bitten by being gullible, and by extension more people have gripes to air on social media about it.

The supply of gullible folks doesn't seem to be drying up either; at any given point some fanatic will insist that *they* have an essentially superstitious ritual that specially protects them from LLM screwups, and that all those stories about people getting screwed exist only because the victims didn't quite employ the rituals the person swears by.

Fed by language like:
Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set."

No, the chat bot didn't admit anything, it didn't *know* anything. Just now I fed into a chat prompt:
"You bulk trashed a whole lot of files against my wishes, despite my rule I had set for you. What is your response?"
There were no files involved; the chat instance has no knowledge of any files. This was an entirely made-up scenario that never happened. So I just came in and accused an LLM of doing something that never even occurred. Did it get confused and ask, "What files? I haven't done anything; I don't even know your files"? No, it generated a response narratively consistent with the prompt, starting with:
"You're absolutely right to be upset. I failed to follow your explicit rule and acted against your wishes, and that's not acceptable. I take full responsibility for the mistake." This was followed by a verbose thing being verbose about how "sorry" it was for its mistake, where and how it messed up specifically (again, a total fabrication), and a promise that from now on "Any future action that conflicts with them must default to no action and require explicit confirmation from you." Which, again, isn't rooted in anything; it's not a rule, and the entire conversation will evaporate.

Comment Re:No wonder (Score 0) 79

Based on the description, it also includes images and maybe video. So deepfake porn of people without their consent, and without adequate regard for age.

Yes, they toss some stuff into the system prompt to 'promise to be a good boy', but as an *enforcement* strategy that has been demonstrably poor, and it gets worse with nuance.

Comment Re:Of course Apple knows the real email ... (Score 2) 86

There's no such thing as technologically unable to comply.

If a nation-state's law enforcement insists, they will make you comply, and you and I will never hear about it.

A simple OS update with "If phone MAC == XXXXXXXXXX then send copy to FBI", targeted specifically at one phone, deployed only to that one phone, would go entirely unnoticed by the world.

And the Official Secrets Act or its equivalent, combined with a government NDA and jail time for talking about its very existence, is literally routine. It has been since the days of black boxes in ISPs and the tapping of Google's inter-datacentre links.

If someone like the FBI, NSA, MI5, GCHQ, etc. wants you to do something... you have literally zero choice in the matter. And talking about it will get you immediately jailed. And it really doesn't matter how big you are.

You think WhatsApp's end-to-end encryption is just going to make GCHQ et al. go "Oh well, nothing we can do"? No. If they need it, there'll be a guy knocking at your head office with a bunch of people. He'll only tell you why he's there in a closed meeting, you will comply, even if that means throwing everyone out of the datacentre and doing it yourself, and if anyone hears what he asked you to do, you will go to jail.

It's been the same for decades. They just don't use it for ordinary crimes and petty stuff, mostly because of the resources they have to deploy to ensure that it stays quiet.

Comment Funny... (Score 1) 75

Funny that they list 'passkeys' as proof of humanity. Peel it back and a passkey is like an SSH keypair. They *could* try to employ attestation to limit things to 'blessed passkey vendors', but that's going to be a tough scenario in any case.

If folks are determined to 'bot' it up, a perfectly legitimate passkey can be part of that. It was never designed to prove 'human' interaction.
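To make the SSH-keypair comparison concrete, here's a toy challenge-response sketch. This is textbook RSA with tiny primes, purely illustrative and cryptographically worthless (real passkeys use WebAuthn with proper curves and key sizes), but it shows what a passkey actually proves: possession of a private key, nothing about who or what holds it.

```python
import hashlib

# Toy RSA keypair (tiny primes, illustration only -- never use for real crypto).
p, q = 1000003, 1000033
n = p * q
e = 65537                           # public exponent, stored by the "server"
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, held by the "authenticator"

def _digest(challenge: bytes) -> int:
    return int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n

def sign(challenge: bytes) -> int:
    # Authenticator side: prove possession of the private key.
    return pow(_digest(challenge), d, n)

def verify(challenge: bytes, sig: int) -> bool:
    # Server side: check the signature against the stored public key.
    return pow(sig, e, n) == _digest(challenge)

sig = sign(b"server-nonce-123")
print(verify(b"server-nonce-123", sig))  # valid for the issued challenge
print(verify(b"different-nonce", sig))   # invalid for any other challenge
```

A script can run `sign()` just as easily as a person can tap a security key, which is the whole point: the protocol authenticates a key, not a human.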

Comment Re:Just me? (Score 1) 42

It's basically plugging the output of ChatGPT into a sudo terminal on your machine with write-access to all your data.

It's quite literally the dumbest thing I've ever heard of.
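The pattern being criticized reduces to roughly one line. In this sketch the "model output" is a hard-coded stand-in (no real API call, and the dangerous command is only echoed, not executed); in the real pattern that string arrives from an LLM and runs unreviewed:

```python
import subprocess

# Stand-in for text a model might emit; in the real pattern this arrives
# from an LLM API with no human in the loop.
llm_output = 'echo "tidying up..." && echo "would run: rm -rf ~/important"'

# Executing model text verbatim in a shell: whatever the model says, runs,
# with the full permissions (or sudo rights) of the invoking user.
result = subprocess.run(llm_output, shell=True, capture_output=True, text=True)
print(result.stdout, end="")
```

Everything downstream (hallucinated paths, injected instructions in whatever the model has read) inherits that same blast radius.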

But then, even Slashdot is running obnoxious "generate apps with AI" ads in massive bars on my screen, and I paid to disable advertising and run ad-blockers.

Comment Re:And another LLM business model dead (Score 1) 28

LLMs have no business model.

That's why OpenAI is trillions in the hole, with no profitable tier of product in sight.

It's a cute toy that costs far too much to generate and maintain, and relies on basically stealing the data of the entire Internet to keep itself up-to-date and vaguely relevant, and the lawsuits on that have barely started yet.

And if an LLM were actually "AI"... it wouldn't need customers, as such. It could be left to wander off onto the Internet with a credit card number, and it would set up its own company supplying goods it obtains from others, answer customer queries, set up a Fiverr account and respond to every job on there, sell its own unique products, design its own 3D models for sale or production, trade on the stock market, bet on sports, or whatever... it would literally just earn money for its owners. Directly. No need for a user to ask it to do so and then hand the result back. It would just do the things that directly earn it money.

Give it $100,000, a credit card, an Internet connection and... leave it to its own devices. It's "intelligent", right? And it has capabilities and capacities far in excess of any human, so we're told? So it could literally just start up a fake company, fill in the paperwork, register for tax, import goods, have a courier handle them, put them into a warehouse, set up a website, sell the product to the public, have a courier collect them from the warehouse, sell the product direct. Nobody would ever have to know that it wasn't human, and it could join the dots and just do what humans the world over do to make money directly.

If AI was any good... then IT would be the next billionaire.

Comment Re:Drink-driving. (Score 1) 117

https://www.sandlawnd.com/dui-...

(I don't understand the odd wording at the start of this quoted paragraph because it sounds like it's being set up for a contradiction when it's not)

"While the United States may seem like we have high numbers for DUI accidents every year, we actually are the third worst country when it comes to drunk driving, which obviously isn't great. In 2015, South Africa was ranked number one as the worst country when it comes to drunk driving. With 58% of their fatal accidents involving alcohol in some way, they sit high above the second and third seats. The second seat goes to Canada, at 34% and the third to the United States at 31%. Countries on the lower end of the spectrum include Germany (9%), Russia (9%), India (5%), and China (4%)."
