Unlike reusable rockets, EVs, and full self-driving...
Yeah, but other than that, what has Elon Musk ever done for us?
To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".
The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.
In this specific case, "perform the test" might be impossible for ethical reasons - you can't take people at random, sit them down in front of an LLM, and test their level of psychosis before and after, because of that pesky "do no harm" rule.
But we might be able to find people who had their psychosis levels measured before LLMs became available, and whose LLM accounts accurately record how much they've used them. We could then remeasure their levels of psychosis and see whether any change correlates with LLM usage.
Or some other test like that.
The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:
[...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.
From the parent post:
One thing I can tell you, my mother was heavily affected by television.
I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.
I'm constantly presented with a situation or belief and have to pause to reflect and say "I believe that because it was on TV; it's probably not real". Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience, but from how they were portrayed on TV.
We're hard-wired to believe what people tell us - it's a cognitive shortcut in an environment where you can't know everything - but much of what we believe today consists of dramatic choices intended to provoke an emotional response. (Compare with news reporting today. On both sides.)
For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.
Assuming that LLMs are content-neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.
Thanks for your questions! Freenet caches data, but it isn't meant to be a long-term storage network. It's better to think of it as a communication system. Data persists as long as at least one node remains subscribed to it. If nobody subscribes (including the author), it will eventually disappear from the network. So yes, if only your node subscribes then the data will only exist there and won't be available when your machine is offline. But if other nodes subscribe, it will be replicated automatically and remain available even if your node goes offline.
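The persistence rule above - data survives only while at least one node subscribes to it - can be sketched as a toy model. This is purely illustrative; the class and method names here are made up for the sketch and are not Freenet's actual API:

```python
# Toy model of subscription-based persistence (illustrative, not Freenet's API):
# a piece of data is reachable as long as at least one node subscribes to its key.

class ToyNetwork:
    def __init__(self):
        # key -> set of node ids currently subscribed to that key
        self.subscribers = {}

    def subscribe(self, node, key):
        self.subscribers.setdefault(key, set()).add(node)

    def unsubscribe(self, node, key):
        subs = self.subscribers.get(key)
        if subs is not None:
            subs.discard(node)
            if not subs:
                # last subscriber gone: the network eventually drops the data
                del self.subscribers[key]

    def is_available(self, key):
        return key in self.subscribers


net = ToyNetwork()
net.subscribe("author", "doc")
net.subscribe("reader", "doc")

# The author's node goes offline, but a reader still subscribes:
net.unsubscribe("author", "doc")
print(net.is_available("doc"))   # True: one subscriber remains

# The last subscriber leaves, so the data expires:
net.unsubscribe("reader", "doc")
print(net.is_available("doc"))   # False: no subscribers left
```

The point of the sketch is just the invariant: availability is a property of the current subscriber set, not of where the data was originally published.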
Not from 2023, the linked video is from last month. https://www.youtube.com/watch?...
Only God can make random selections.