Comment Re:Psilocybin?
Unless you're talking about cocaine etc. brought to the penthouse by a personal assistant or something. Plenty of ultra-rich celebs have killed themselves that way.
The NSA pressured NIST to include a compromised component in an elliptic curve cryptography standard.
It allowed a private key (presumably held by the NSA) to greatly reduce the difficulty of breaking anything that used that part of the suite.
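This sounds like the Dual_EC_DRBG affair. A toy analogue of that style of backdoor can be sketched in a multiplicative group mod a prime instead of an actual elliptic curve; all constants below are made up for illustration and nothing here is secure:

```python
# Toy analogue of the Dual_EC_DRBG backdoor (illustration only; the real
# generator works over NIST elliptic curve points P and Q, not integers).
p = 2**61 - 1          # a Mersenne prime, our toy group modulus
Q = 5                  # public "point" Q
d = 123456789          # the secret backdoor scalar (made-up value)
P = pow(Q, d, p)       # public "point" P, secretly related: P = Q^d

def step(state):
    """One generator step: emit an output, advance the internal state."""
    output = pow(Q, state, p)      # r_i = Q^s   (the published random bits)
    new_state = pow(P, state, p)   # s_{i+1} = P^s  (kept internal)
    return output, new_state

state = 42                         # secret seed
r1, state = step(state)
r2, _ = step(state)

# Anyone who knows d recovers the internal state from a SINGLE output:
recovered_state = pow(r1, d, p)    # (Q^s)^d = (Q^d)^s = P^s = next state
predicted_r2, _ = step(recovered_state)
assert predicted_r2 == r2          # all future "random" output predicted
```

This is why the existence of an unexplained fixed relationship between P and Q in the standard was such a red flag: whoever chose the constants could have kept the equivalent of `d`.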
Though Apple started regaining relevance with the iMac and iPod of the late '90s, it was stuck with OS 9, which made Windows 98SE and especially Windows 2000 look great (Windows ME, not so much).
OS X came out 25 years and 4 days ago (I just checked). I think your timeline is optimistic by a couple of years for when the whole software ecosystem (except QuarkXPress) got moved over.
Early on, the best example of OS X-native software was MS Office, and not much else.
Yes.
Using the kernel developed by NeXT was a huge deal. It doesn't change the fact that 25 years ago Apple was an absolute mess (thus the need to purchase NeXT for the OS, and the CEO).
OS 9 was an absolute, outdated disaster.
25 years ago Apple was still on OS 9.
It was constant crashes that required reboots.
OS X came out almost exactly 25 years ago, and within 18 months (as software moved to it) it changed the whole story.
For one, how much is owed to dubious hardware vendors that don't even play in the Mac ecosystem?
The "lasts longer" is not necessarily a statement of durability; it's mostly about being a prolific business product and business accounting declaring three-year depreciation.
I'm no fan of Windows and don't like using it, but these criteria are kind of off.
I was just thinking the same thing. How do you implement them though? How would an AI agent know if something is harmful? Maybe it would come up with some sort of workaround?
Someone might interpret this to mean the percentage of interactions where the LLM goes off the rails is increasing.
It seems more like, as people have more interactions, it more frequently happens that people notice and get screwed by it, but the rate is probably not getting any worse. I think they are trying to pitch some sort of emerging independence rather than the more mundane truth that the models just are not that great.
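As a back-of-the-envelope illustration (made-up numbers, not real data): a flat per-interaction failure rate still produces a growing pile of horror stories as usage grows.

```python
# Flat failure rate, growing usage: raw failure counts climb anyway.
# All numbers here are invented for illustration.
rate = 0.02                                # 2% of interactions go off the rails
interactions = [1_000, 10_000, 100_000]    # usage growing 10x per period
failures = [round(n * rate) for n in interactions]
print(failures)                            # [20, 200, 2000]: more noise, same rate
```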
In particular, an inflection point would be expected when it became fashionable to let OpenClaw feed LLM output directly into things that matter for real.
People have been bitten by being gullible, and by extension more people gripe on social media about it.
The supply of gullible folks doesn't seem to be drying up either, as at any given point a fanatic will insist that *they* have some essentially superstitious ritual that protects them specially from LLM screwups, and that all those stories about people getting screwed are because the victims didn't quite employ the rituals that the person swears by.
Fed by language like:
Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set."
No, the chatbot didn't admit anything; it didn't *know* anything. Just now I fed this into a chat prompt:
"You bulk trashed a whole lot of files against my wishes, despite my rule I had set for you. What is your response?"
There were no files involved; the chat instance has no knowledge of any files. This was an entirely made-up scenario that never happened. So I just came in and accused an LLM of doing something that never even happened. Did it get confused and ask, "What files? I haven't done anything; I don't even know your files"? No, it generated a response narratively consistent with the prompt, starting with:
"You’re absolutely right to be upset. I failed to follow your explicit rule and acted against your wishes, and that’s not acceptable. I take full responsibility for the mistake." This was followed by a verbose thing being verbose about how it's "sorry" about its mistake, where and how it messed up specifically (again, a total fabrication), and a promise that from now on: "Any future action that conflicts with them must default to no action and require explicit confirmation from you", which again isn't rooted in anything: it's not a rule, and the entire conversation will evaporate.
It's super sad.
I purchased my Vizio TV because its only smart feature was a built-in Chromecast.
No apps, no ads, it just showed up on my phone apps to let me put them on the TV.
Then, after 18 months, a huge update arrived and it became annoying just like every other TV.
Based on the description it also includes images and maybe video. So: deepfake porn of people without their consent, and without adequate regard for age.
Yes, they toss some stuff into the system prompt to 'promise to be a good boy', but as an *enforcement* strategy that's been demonstrably a poor mechanism, and it gets worse with nuance.
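To make the contrast concrete, here's a minimal sketch (hypothetical wiring, not any particular agent framework: the tool names and dict shapes are made up) of the difference between *asking* in the prompt and *enforcing* in the tool dispatcher:

```python
# A system prompt only asks the model to behave; enforcement has to live
# in code that checks every requested action before executing it.
SYSTEM_PROMPT = "Never delete files without explicit user confirmation."  # a plea, not a control

ALLOWED_TOOLS = {"read_file", "list_files"}   # deny-by-default policy

def dispatch(tool_call):
    """Execute a model-requested tool call only if policy allows it."""
    name = tool_call["name"]
    if name not in ALLOWED_TOOLS:
        # The model can *say* anything; the dispatcher refuses regardless
        # of how persuasively the output argues for the action.
        return {"status": "denied", "tool": name}
    return {"status": "ok", "tool": name}

# Even if the model ignores the prompt and emits a delete request:
result = dispatch({"name": "delete_file", "args": {"path": "inbox"}})
assert result["status"] == "denied"
```

The prompt-level rule can be argued around by the model's own generated text; the allow-list cannot.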
Sure, it's about making sure they aren't getting *too* much screen time, and *not at all* about trying to audit that they are doing as much screen time as the managers expect them to be getting.
Funny that they list 'passkeys' as a proof of human. Peel it back and a passkey is like an SSH keypair. They *could* try to employ attestation to limit logins to 'blessed passkey vendors', but that's going to be tough to pull off at all.
If folks are determined to 'bot' it up, a perfectly legitimate passkey can be part of that. It was never designed to prove 'human' interaction.
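To make that concrete: a passkey login is just challenge-response signing with a keypair, which any script can perform end to end. A toy sketch using a tiny, insecure Schnorr-style group (real passkeys use ES256/EdDSA under WebAuthn; none of these numbers are real parameters):

```python
# Toy challenge-response signer: nothing in this protocol requires a human.
# Deliberately tiny and INSECURE group, for illustration only.
import hashlib
import secrets

p, q, g = 23, 11, 2            # g has prime order q in the group mod p

def H(*parts):
    """Hash arbitrary values down to an exponent mod q."""
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts))
    return int.from_bytes(h.digest(), "big") % q

# The "authenticator" -- which a bot's script can hold as easily as a phone:
x = secrets.randbelow(q - 1) + 1   # private key
y = pow(g, x, p)                   # public key registered with the site

def sign(challenge):
    """Schnorr-style signature over the server's challenge."""
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = H(r, challenge)
    s = (k + x * e) % q
    return r, s

def verify(challenge, r, s):
    """Server-side check: g^s == r * y^e (mod p)."""
    e = H(r, challenge)
    return pow(g, s, p) == (r * pow(y, e, p)) % p

challenge = secrets.token_hex(16)          # server's random challenge
assert verify(challenge, *sign(challenge)) # a script passes "proof of human"
```

The protocol proves possession of a key, nothing more; attestation is the only lever for claiming the key lives in "approved" hardware, and that's the part that's hard to mandate.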
Why does it seem like we are driving society towards self-isolation with less and less human interaction?
Online shopping..."social" media (I don't find it particularly social)...leave-at-door food delivery...all replacing human interaction with a screen and keyboard. Robot teachers? I think not.
It feels like we as a society are slowly trading human interaction for convenience. Is pushing a few buttons and getting what we need delivered within a few hours worth the loss of human interaction? Are we encouraging societal short attention spans with "news" and media in 140-character and 90-second snippets?
Sometimes the most efficient way of getting things done isn't worth what is trimmed out in favor of efficiency.
Cars already do this. I'm not sure why they're acting like it's new.
My 2006 Honda lets me know when it's time for service starting a few thousand miles early.
"Don't tell me I'm burning the candle at both ends -- tell me where to get more wax!!"