Seems an idea worth a punt (Score 1)
I like to keep an eye on my network exposure.
Now let's bring these requirements into law, permanently, across all industrial and consumer devices.
Any obstacle to repair and maintenance beyond the inherent difficulty of the operation is anti-consumer and, in the long run, economically damaging (many of the inherent difficulties are as well, but we have to start somewhere).
If we change the "right to repair" laws, we should also change the liability laws. If a home-repaired unit becomes unsafe and injures people, who is responsible?
In the case of farming equipment, suppose a farmer makes a repair to a piece of equipment and then his son is injured or killed by said equipment. Who is liable?
The company would say that the farmer took full responsibility once he modified the equipment, while the farmer could say that his modifications did not affect the safety of the device.
It's also not at all clear whether a physical repair done by the farmer could have contributed to an accident caused by software. Lots of things can affect software behaviour, such as the alignment of two welded pieces. The software estimates stopping distance based on the information it has, but the repair might have changed those parameters.
People who like to race want to download new parameters into the ECU of their car, but that's illegal. It actually is: the factory parameters are set to maximize efficiency, and while you can get better performance with different numbers, doing so would increase emissions and promote climate change, so it was made illegal.
Being able to repair things is good, and it's very clear that open source has driven the software industry forward, but we need to be careful about liability as well. Jailbreaking your phone is one thing, but jailbreaking your EV might have catastrophic consequences. I'm not a fan of ID-tagged headlights (BMW, Mazda), but if an accident occurs because of reduced visibility the company could be held liable.
I'm completely in favor of being able to repair things, and John Deere is the worst sort of predatory behaviour, but just wanted to point out that there's another side to the story and we should be careful.
I think what is really going on is that it's not "fluid IQ", but regular, normal "IQ".
"Fluid" intelligence is the ability to think, reason, solve problems, and learn things. "Crystallized" intelligence is your amassed knowledge.
These are technical terms used in the literature.
Intelligence is nature's guess as to how complex your environment will be... but there's an out. People with low fluid intelligence have to work harder to understand things, but if they put in the work they can amass a body of knowledge that rivals that of people with high fluid intelligence.
And of course, lots of people with high intelligence stop learning in their mid twenties. At that point they've conquered their environment and are living successful lives (good job, married, kids &c) so there's no real reason to push themselves. Lots and lots of people, even smart people, haven't read a single book in the last year - and this observation was true in the 1970s, before the internet.
(And nowadays this is probably more accurate due to the appalling quality of information found on the internet.)
That is, stupid people either do not realize the AI is wrong, or more likely, they are so used to being corrected by more intelligent people that they just assume the AI must be smarter than they are and do not challenge it.
It's a question of training. We're evolved to believe what people say, it's a way of reducing the cognitive load of learning things (by believing what someone else has already figured out). We're not used to questioning the logic of someone else's beliefs.
As an example of this, note that Warren Buffett has built a career on identifying fallacies in business; google "Warren Buffett fallacies" for a list.
None of these fallacies is taught in school; everyone has to find them and figure them out on their own. And then you have to apply them in your daily life.
Almost no one is used to doing that, which leads to the current problems with AI.
I would like to say how cool your Google+ login is. It shows up in your post like a bright red beacon. I once even wrote a haiku for Google+ login icons on Slashdot. Take it! It is yours!
Google Plus Login
red on a green Slashdot sea
setting my soul free
exotic matter
how I long to fly with you
Google Plus Login
Google Plus Login
fancy Google Plus Login
Google Plus Login
primitive tribesmen
gaze at the little red square
dream of things to come
Unlike reusable rockets, EVs, and full self driving...
Yeah, but other than that, what has Elon Musk ever done for us?
To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".
The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.
In this specific case, "perform the test" might be impossible to do for ethical reasons - you can't take people at random, sit them down in front of an LLM, and test their level of psychosis before and after, because of that pesky "do no harm" rule.
But we might be able to find people whose psychosis levels were measured before LLMs became available, and whose LLM accounts accurately record how much they've used them. We could then remeasure their levels of psychosis and see whether the change correlates with LLM usage.
Or some other test like that.
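As a rough sketch, the correlational study described above boils down to computing something like a Pearson correlation between change in psychosis score and LLM usage. All the numbers below are invented for illustration; nothing here is real data.

```python
import math

# Hypothetical records: (baseline_score, followup_score, llm_hours_per_week).
# These values are made up purely to show the shape of the analysis.
records = [
    (10, 11, 2), (12, 18, 20), (9, 9, 0), (14, 22, 25),
    (8, 10, 5), (11, 12, 3), (13, 19, 18), (10, 16, 15),
]

# Change in psychosis score between the two measurements.
changes = [post - pre for pre, post, _ in records]
usage = [hours for _, _, hours in records]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(changes, usage)
print(f"correlation between score change and LLM usage: r = {r:.2f}")
```

A real study would of course need a significance test, confounder controls, and a much larger sample; this only illustrates the basic "remeasure and correlate" step.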
The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:
[...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.
From the parent post:
One thing I can tell you, my mother was heavily affected by television.
I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.
I'm constantly presented with a situation or belief and have to pause to reflect and say "I believe that because it was on TV; it's probably not real". Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience, but from how they were portrayed on TV.
We're hard-wired to believe what people tell us - it's a cognitive shortcut in an environment where you can't know everything - but much of what we think today reflects dramatic choices intended to provoke an emotional response. (Compare with news reporting today. On both sides.)
For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.
Assuming that LLMs are content neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.
It is a rather high number. So, high UIDs correlate with cognitive decline. I'll remember that.
At least Britain and France have (had) enrichment plants and separation processes, which is more than Germany and "Europe" as a whole can say.
An obvious consequence of America's disintegration into civil war will be that the EU *has* to bind its forces into one group.
Whether they (remember: the UK is no longer politically in the EU) can tolerate having US bases on their territory, which are likely to schism into Loyalist (Trump) and Loyalist (Constitutionalist) factions during the civil war (CW-2, CW-3...), is another matter.
It has been about 6 years since I went to the cinema.
Now, if Hollywood would produce some interesting movies - even those involving Pinewood and Shepperton, or even New Zealand - then that might be a reason to go. But no, there hasn't been anything worth the three days' income it costs to go to the local fleapit.
None of which produces the "accidental" byproduct of weaponisable nuclear isotopes, which we're going to need to counter the American withdrawal into civil war.
Wouldn't you?
They won't be any use at all.
I work in the physical world, not the digital world.
What about Altman making "Open" AI closed-source and for-profit years ago didn't tell you he was a dirty, money-grubbing cunt ?
Bring on the bankruptcy !
LLaMA was [illegally] released to the public three years ago (to the day - March 3, 2023), and it's estimated that ten years of AI improvements happened in the subsequent 6 months. People were doing all sorts of things with LLMs that Meta hadn't thought of, or didn't have time to develop, such as text-to-audio, local LLM use, and automated manuscript generation.
All these attempts at monetizing the LLMs are, at the same time, holding back the progress of AI development. If OpenAI wants to leap ahead of the competition, they should put their language model online and see what the community comes up with.
I get it - training an LLM takes roughly $100 million for the initial run, and companies need to recoup this expense.
Still, I'm saddened that I can only use the system for purposes that the company approves of, and in ways that they have already thought of.
There's a lot of potential there, and we're not making good use of that.
As opposed to the 90 million in Iran who lost Internet connectivity near JAN 4, 2026 when the government started killing its own people? "After US-Israel Attacks... " WTF? What transparent cancer. Of course the clock starts when we attack. This site isn't even worth signing in anymore.
The Three Laws went up against the ultimate superpower, profit potential. Nothing, and I mean, NOTHING can stand in the way of profit potential. The Three Laws never stood a chance.
Yes, because those are totally a real thing.
"Let's show this prehistoric bitch how we do things downtown!" -- The Ghostbusters