
Comment uhh (Score 4, Insightful) 27

"As for why OnlyOffice was chosen over LibreOffice, the project simply said: 'We believe open source is about collaboration, and we look for opportunities to integrate and collaborate with the LibreOffice community and companies like Collabora.'"

Ok, since they just refuse to answer the question, does anyone else know why OnlyOffice was chosen over LibreOffice?

Comment If it does not ban existing models... (Score 3, Insightful) 180

does that mean they'll continue to manufacture the same old models for the US market, which may well become less secure over time as attackers refine their techniques against the same well-known hardware? Will that result in a net loss of security over time?

It might resemble Cuba with its 1950s automobiles, frozen in time. I do agree there is concern about backdoors and surreptitious identifying data sent to servers under China's control. Would it be better to allow new models but require them to be completely torn down and reverse engineered by teams inside the FCC, or to have their firmware source code handed over for inspection? (There's still room for nefarious business: hand over one set of code and install a slightly different set, or install a backdoor with a firmware update...)

I feel there's a legitimate concern here, and there always has been. What's a better solution, if any? Or is this the right solution for digital sovereignty?

Comment What counts as 'prevention'? (Score 4, Insightful) 78

Does a DoS attack count as prevention?

The Shannon-Hartley theorem sets a capacity limit: as noise grows without bound relative to signal, channel capacity approaches zero, meaning reliable communication becomes impossible without increasing signal power.

So, DoS attacks effectively prevent communication.
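To make that concrete, here's a minimal Python sketch of the Shannon-Hartley formula, C = B * log2(1 + S/N). The bandwidth and power numbers are just illustrative; the point is that capacity collapses as noise swamps the signal:

```python
import math

def channel_capacity(bandwidth_hz: float, signal_w: float, noise_w: float) -> float:
    """Shannon-Hartley capacity in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_w / noise_w)

# Fixed 1 MHz channel carrying a 1 W signal; flood it with ever more noise.
for noise in (0.01, 1.0, 100.0, 10_000.0):
    c = channel_capacity(1e6, 1.0, noise)
    print(f"noise = {noise:>8} W  ->  capacity = {c:,.0f} bit/s")
```

Crank the noise high enough and the capacity drops below one bit per second: the channel is effectively dead, which is exactly what a DoS does.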

Is AI slop a DoS attack? It sure as heck feels that way...

Comment I was always amazed... (Score 1) 51

...when they renamed the company to follow that weird bent. It was easy to see very early on that none of this would ever be popular. People had a hard enough time getting interested in even wearing polarized glasses to view movies in 3D, and that whole trend crashed and burned. (I like stereo photography myself, but I understand the problem with mass appeal.)

The fact that they expected people to run out and buy these super expensive VR headsets and do things with them is just laughable. I've watched that market try to take off since the 80s, and there's just not a compelling use for it. I thought they were mad for going down that road yet again. Maybe that day will come, but it was obvious from the start that Meta plowing billions of dollars into it and changing its company name wasn't going to make it happen. I'd love to have been a fly on the wall for the metaverse pitch inside Facebook. Techbros deluding themselves. I think they were just scrambling to place a bet on whatever the next hot thing would be after the initial round of social media companies, and they lost horribly.

In the meantime, their original product, despite being enshittified repeatedly, remains somewhat useful and popular. (Just install SocialFixer and ad blockers before you touch it, and don't use their app.)

Comment Further comment (Score 4, Insightful) 110

To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".

The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.

In this specific case, "perform the test" might be impossible for ethical reasons: you can't take people at random, sit them down in front of an LLM, and measure their level of psychosis before and after, because of that pesky "do no harm" rule.

But we might be able to find people whose psychosis levels were measured before LLMs became available, and whose LLM accounts accurately record how much they have used them; we could then remeasure their psychosis levels and see whether any change correlates with LLM usage.

Or some other test like that.

The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:

[...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.

From the parent post:

One thing I can tell you, my mother was heavily affected by television.

I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.

I'm constantly presented with a situation or belief and have to pause, reflect, and say "I believe that because it was on TV; it's probably not real." Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience but from how they were portrayed on TV.

We're hard-wired to believe what people tell us; it's a cognitive shortcut in an environment where you can't know everything firsthand. But much of what we think today reflects dramatic choices intended to provoke an emotional response. (Compare with news reporting today, on both sides.)

For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.

Assuming that LLMs are content neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.
