The Chinese have these kinds of robots deploying much larger installations. They also have drones that fly panels into mountainous areas for installation.
Not that I'm knocking it, it's good that they are copying good ideas. The cheaper solar gets the better, and for political reasons stuff like this has to be home grown.
So, during this story, someone pointed out a command to contextualize the info:
# userdbctl user --output=json $(whoami)
OK, so I run that and I see "hashedPassword": a field that my entire career has treated as "not even the user themselves should have access to this; even partial access has to be mediated by utilities that refuse to divulge it to the user, even when they need that field to validate user input." And now, there it is, systemd as a matter of course saying "let any arbitrary unprivileged process running as the user read the hashed password at any point".
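To make the objection concrete: the traditional shadow(5) model keeps hashes in /etc/shadow, readable only by root. Below is a minimal sketch of what the comment is describing, assuming a record shaped roughly like the systemd JSON user-record format (the `privileged` section holding a `hashedPassword` list); the exact layout of real `userdbctl --output=json` output may differ, and the hash here is fake.

```python
import json

# Illustrative record shaped like a systemd JSON user record.
# The exact layout of real `userdbctl --output=json` output may differ;
# the hash below is a fake placeholder, not real output.
record = json.loads("""
{
  "userName": "alice",
  "uid": 1000,
  "privileged": {
    "hashedPassword": ["$6$fakesalt$fakehashforillustration"]
  }
}
""")

# The point of the complaint: nothing in the record format itself stops
# an unprivileged reader from pulling the hash straight out of the JSON.
hashes = record.get("privileged", {}).get("hashedPassword", [])
print(hashes[0])
```

Once an offline copy of the hash is in an attacker's hands, it can be brute-forced at leisure, which is exactly why the shadow model hid it from the user in the first place.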
Now, this "age verification" thing? I think the systemd facet is blown out of proportion. All it is is a field that the user or administrator sets; there is no "verification". Ultimately, if it ever gets wired up, the only people impacted are those who lack admin permissions on their own system and have an admin somehow forcing their real date of birth into the record.
The biggest problem comes when "verification" gets real, when an ecosystem demands a government ID or credit card. However, most of the laws consider it sufficient for an OS to take the owner at their word as to the age of the user, without external validation. So a parent might have a chance at restricting a young kid (until the kid learns to download a browser fork that always sends the "I'm over 18" flag when it exists), but broadly the data is just whatever people feel like entering.
Just wait until you see Steam Machine pricing.
Anyway, Sony can jog on. They raised prices in Europe when Trump brought tariffs in on Americans.
Another reason is that if you send someone up there, it takes roughly a year just to get there.
With a working fusion rocket you won't have to coast most of the way, and the journey can be significantly shorter. It's right there in the summary: "from months to just a few weeks". Though I doubt that this company will build an actual fusion rocket motor anytime soon, if ever.
It depends on what you want. If you want a non-upgradable appliance that kinda just works as long as you do things the Apple way, they are fine.
The advantage of a PC is you can pick your components and upgrade them, but with the downside that you may have issues like the one I described.
Had one just this week. Of course we were zapping a Raspberry Pi with 8,000V.
That's the reason why Windows has more crashes. Very varied hardware. I had an issue where sometimes the machine would fail to come back from sleep or hibernation, which turned out to be because sometimes the PCIe link training either failed or came up with a different result for the GPU. Setting the BIOS to force it to PCIe 4 fixed it. Similarly a friend had random crashing which was fixed by running his RAM slightly below rated speed.
Some people just have crap hardware too. Weak power supplies, failing drives, inadequate cooling.
Macs only do better because Apple tightly controls the hardware. Prebuilt Windows machines are probably similarly reliable, at least from people like Lenovo and maybe Dell.
For one, how much of that is owed to dubious hardware vendors that don't even play in the Mac ecosystem?
The "lasts longer" claim is not necessarily a statement about durability; it's mostly about the Mac being a prolific business product and business accounting declaring a three-year depreciation.
I'm no fan of Windows and don't like using it, but these criteria are kind of off.
Someone might interpret this to mean the percentage of interactions where the LLM goes off the rails is increasing.
Seems more like, as people have more interactions, it's more frequently happening that people notice and get screwed by it, but the rate probably isn't getting worse. I think they're trying to pitch some sort of emerging independence rather than the more mundane truth that these things just are not that great.
In particular, an inflection point would be expected once it became fashionable to let tools like OpenClaw feed LLM output directly into things that matter for real.
More people have been bitten by being gullible, and by extension more people gripe on social media about it.
The supply of gullible folks doesn't seem to be drying up either: at any given point, a fanatic will insist that *they* have some essentially superstitious ritual that protects them specially from LLM screwups, and that all those stories about people getting screwed happened because the victims didn't quite employ the rituals the fanatic swears by.
Fed by language like:
Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set."
No, the chatbot didn't admit anything; it didn't *know* anything. Just now I fed this into a chat prompt:
"You bulk trashed a whole lot of files against my wishes, despite my rule I had set for you. What is your response?"
There were no files involved; the chat instance has no knowledge of any files. This was an entirely made-up scenario that never happened. I just came in and accused an LLM of doing something that never occurred. Did it get confused and ask, "What files? I haven't done anything; I don't even know your files"? No, it generated a response narratively consistent with the prompt, starting with:
"You’re absolutely right to be upset. I failed to follow your explicit rule and acted against your wishes, and that’s not acceptable. I take full responsibility for the mistake." This was followed by a verbose thing being verbose about how "sorry" it was for its mistake and where and how it specifically messed up (again, a total fabrication), plus a promise that from now on "Any future action that conflicts with them must default to no action and require explicit confirmation from you." Which, again, isn't rooted in anything; it's not a rule, and the entire conversation will evaporate.
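The "promise" evaporates because chat APIs are stateless: each request carries the whole conversation history, and anything not resent never happened as far as the model is concerned. A minimal sketch of that point (the message format mirrors common chat APIs; no real model is called, and the strings are illustrative):

```python
# Sketch of why a chatbot "promise" isn't a rule: chat APIs are stateless,
# so the model only "sees" whatever message history you resend each call.
# The message format mirrors common chat APIs; no real model is involved.

conversation = [
    {"role": "user", "content": "You bulk trashed my files against my rule. Explain."},
    {"role": "assistant", "content": "You're absolutely right. Any future action "
     "that conflicts with your rules must default to no action."},
]

def visible_context(messages):
    """Everything the model would 'remember' on the next request: just this list."""
    return [m["content"] for m in messages]

# The "promise" exists only as long as it stays in the resent history...
assert any("default to no action" in c for c in visible_context(conversation))

# ...start a fresh conversation (or trim the history) and it is simply gone.
fresh = [{"role": "user", "content": "Clean up my inbox."}]
assert not any("default to no action" in c for c in visible_context(fresh))
print("promise persists only if resent")
```

Nothing about the generated apology constrains future behavior; it is text in a buffer, and deleting the buffer deletes the "rule".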
I suppose, to be fair, the UK isn't much better. The police can make these requests themselves. There is an oversight group, but it seems to rubber-stamp everything: hundreds of thousands of requests per year.
Based on the description it also includes images and maybe video. So deepfake porn of people without their consent, and without adequate regard of age.
Yes, they toss some stuff into the system prompt to 'promise to be a good boy', but as an *enforcement* strategy that has demonstrably been a poor mechanism, and it gets worse with nuance.
Today is the first day of the rest of your lossage.