
Comment Liability laws (Score 1) 42

Now let's bring these requirements into law, permanently, across all industrial and consumer devices.

Any obstacle to repair and maintenance other than the inherent difficulty of the operation is anticonsumerist and in the long run, economically damaging (and many of the inherent difficulties are as well, but we gotta start somewhere).

If we change the "right to repair" laws, we should also change the liability laws. If a home-repaired unit becomes unsafe and injures people, who is responsible?

In the case of farming equipment, suppose a farmer makes a repair to a piece of equipment and then his son is injured or killed by said equipment. Who is liable?

The company would say that the farmer took full responsibility once he modified the equipment, while the farmer could say that his modifications did not affect the safety of the device.

It's also not at all clear whether a physical repair done by the farmer could have contributed to an accident caused by software. Lots of things can affect software behaviour, such as the alignment of two welded pieces: the software estimates stopping distance from the parameters it has, but the repair might have changed those parameters.

People who like to race want to download new parameters into the ECU of their car, but that's illegal. It really is: the factory parameters are set to maximize efficiency and meet emissions rules, and while you can get better performance with different numbers, you'd also increase emissions - which is why retuning was made illegal.

Being able to repair things is good, and it's very clear that open source has driven the software industry forward, but we need to be careful about liability as well. Jailbreaking your phone is one thing; jailbreaking your EV might have catastrophic consequences. I'm not a fan of ID-tagged headlights (BMW, Mazda), but if an accident occurs because of reduced visibility, the company could be held liable.

I'm completely in favor of being able to repair things, and John Deere is the worst sort of predatory behaviour, but just wanted to point out that there's another side to the story and we should be careful.

Comment Fluid versus crystallized (Score 2) 136

I think what is really going on is that it's not "fluid IQ", but regular, normal "IQ".

"Fluid" intelligence is the ability to think, reason, solve problems, and learn things. "Crystallized" intelligence is your amassed knowledge.

These are technical terms used in the literature.

Intelligence is nature's guess as to how complex your environment will be... but there's an out. People with low fluid intelligence have to work harder to understand things, but if they put in the work they can amass a body of knowledge that rivals that of people with high fluid intelligence.

And of course, lots of people with high intelligence stop learning in their mid twenties. At that point they've conquered their environment and are living successful lives (good job, married, kids &c) so there's no real reason to push themselves. Lots and lots of people, even smart people, haven't read a single book in the last year - and this observation was true in the 1970's before the internet.

(And nowadays this is probably more accurate due to the appalling quality of information found on the internet.)

That is, stupid people either do not realize the AI is wrong, or more likely, they are so used to being corrected by more intelligent people that they just assume the AI must be smarter than they are and do not challenge it.

It's a question of training. We're evolved to believe what people say, it's a way of reducing the cognitive load of learning things (by believing what someone else has already figured out). We're not used to questioning the logic of someone else's beliefs.

As an example of this, note that Warren Buffett has built a career on identifying fallacies in business; google "Warren Buffett fallacies" for a list.

None of these fallacies is taught in school; everyone has to find them and figure them out on their own. And then you have to use them in your daily life.

Almost no one is used to doing that, which leads to the current problems with AI.

Comment Further comment (Score 4, Insightful) 110

To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".

The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.

In this specific case, "perform the test" might be impossible for ethical reasons - you can't take people at random, sit them down in front of an LLM, and measure their level of psychosis before and after, because of that pesky "do no harm" rule.

But we might be able to find people who have had their psychosis levels measured before LLMs became available, and whose LLM accounts will accurately show how much LLM usage they have, and we can then remeasure their levels of psychosis and see if this correlates with LLM account usage.

Or some other test like that.

The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:

[...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.

From the parent post:

One thing I can tell you, my mother was heavily affected by television.

I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.

I'm constantly presented with a situation or belief and have to pause to reflect and say "I believe that because it was on TV, it's probably not real". Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience, but from how they were portrayed on TV.

We're hard-wired to believe what people tell us - it's a cognitive shortcut in an environment where you can't verify everything - but lots and lots of what we believe today comes from dramatic choices intended to provoke an emotional response. (Compare with news reporting today. On both sides.)

For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.

Assuming that LLMs are content neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.

Comment OpenAI needs a new hail mary (Score 4, Interesting) 93

What about Altman making "Open" AI closed-source and for-profit years ago didn't tell you he was a dirty, money-grubbing cunt?

Bring on the bankruptcy!

LLaMA was [illegally] leaked to the public three years ago (to the day - March 3, 2023), and it's been estimated that ten years' worth of AI improvements happened in the subsequent six months. People were doing all sorts of things with LLMs that Meta hadn't thought of, or didn't have time to develop, such as text-to-audio, local LLM use, and automated manuscript generation.

All these attempts at monetizing LLMs are, at the same time, holding back the progress of AI development. If OpenAI wants to leap ahead of the competition, they should release their language model openly and see what the community comes up with.

I get it - training an LLM costs roughly $100 million for the initial run, and companies need to recoup this expense.

Still, I'm saddened that I can only use the system for purposes that the company approves of, and in ways that they have already thought of.

There's a lot of potential there, and we're not making good use of that.

Comment Plus peace of mind (Score 1) 33

What you describe is exactly how Visa, Mastercard, AMEX and the like operate... literally taking money for doing nothing beyond being a middle man. Yep, they take a cut of every transaction that goes over their networks and they've been working diligently to make sure every single transaction goes over their network.
[Emphasis mine]

You are not telling the whole story here.

I'm currently in the middle of a $15,000 purchase dispute with a Chinese vendor (for a CNC system). The device arrived non-functional, the merchant's customer service is wildly unhelpful and time-consuming, and after three months of dicking around I've decided to send it back.

I have clear E-mail evidence from the merchant acknowledging the problem, the CC company yanked back the payment and is forcing the merchant to issue an RMA for the device.

The credit card company isn't on my side, nor are they on the side of the merchant - they are on the side of honest transactions, and they police those transactions for me.

Twice I've had my CC info stolen at a restaurant(*); the CC company detected the fraudulent purchases and issued me a new card. A couple of times they incorrectly flagged fraud, and a quick phone call sorted that out.

All of this is value added to using a credit card.

It's not *just* rent seeking on transactions, it's also providing a service: "peace of mind" in your purchases.

If anyone is interested, ask ChatGPT about the Fair Credit Billing Act as it applies to dispute resolution. If you receive a defective product, you have 60 days from the statement (not the purchase, but the statement) to initiate a dispute, and there are several "states" the dispute can be in, such as "vendor is working with the customer to resolve the issue".

It's not just rent seeking, the extra 5% CC fee for the purchase is for "peace of mind".

(*) Don't let the CC out of your sight. If the waitress takes the CC away from your table, she can easily write down the number and security code before bringing it back.

Comment Social changes (Score 3, Interesting) 62

I was surprised to discover that you can purchase a 30TB hard drive for about half a grand.

That's 30,000 gigabytes - call it 30,000 hours of recorded video at roughly 1 GB per hour. How much of a person's life could be recorded on this?

There's about 8,760 hours in a year, but you're asleep for a third of that, so call it 6,000 waking hours. That's five years of continuous video of your life on a device the size of a paperback book. And you can compress the mundane stretches - driving to and from work, waiting in line - by recording a single frame per second, or dropping to a lower resolution with occasional high-resolution key frames. Do that and you might get away with 4,000 hours of stored video per year. Probably less.

So with enough compression this new disk could conceivably hold a continuous record of 30 years of someone's life (that works out to roughly 250 MB per recorded hour) - all the interactions, all the people, all the information you see, all the places you've been.

(And probably more, probably more like 50 years. And if cloud storage is easily available everywhere, you wouldn't even need the appliance on you.)
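The back-of-the-envelope arithmetic above can be sketched in a few lines of shell. The 1 GB/hour bitrate and the 4,000 stored hours per year are assumptions for illustration, not measurements:

```shell
# Rough storage math for a 30 TB drive, assuming ~1 GB per hour of video.
drive_tb=30
gb_per_hour=1                                   # assumed average bitrate
total_hours=$(( drive_tb * 1000 / gb_per_hour ))  # 30,000 hours of capacity

hours_per_year=8760                             # 24 * 365
waking_hours=$(( hours_per_year * 2 / 3 ))      # ~5,840; "call it 6,000"
stored_hours_per_year=4000                      # assumed, after compressing mundane stretches

echo "uncompressed years: $(( total_hours / waking_hours ))"         # prints 5
echo "compressed years:   $(( total_hours / stored_hours_per_year ))" # prints 7
```

Stretching that to 30 years means averaging about 250 MB per recorded hour, which is why the mundane-hours compression matters so much.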

This will inevitably lead to some interesting social changes.

For example, 50 years of video with an AI assistant to search through it and answer your questions ("have I met that person before?") would be quite useful.

Also, the AI could train itself on your video and behaviour. The AI could then simulate you once you're gone.

Lots of possibilities here...

Comment Two modes (Score 1) 33

Maybe AI is how Idiocracy truly comes about?

I think what we need is (a conceptual model of) two modes of personal knowledge.

One mode is your personal area of expertise. You could be a web app programmer, or biomedical researcher, or welder, or plumber, or whatever. You have all the knowledge you need to participate in your field without help.

The other side is "everything else". You use AI to get you through the tasks you need to accomplish, because it's too difficult or onerous to go and read the documentation for everything.

For example, just yesterday I wanted to convert an existing laptop Windows partition into a VM to run on my office computer under VirtualBox. It took 12 hours of back-and-forth with ChatGPT, and while I understood most of the actions at every step, I could not have recited the steps myself. It's all sfdisk and VBoxManage and ntfsclone commands that I didn't know existed, but that made sense in context. I didn't know how to do it, but I knew how to describe what needed to be done, and I knew how to sanity-check the steps.
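For a flavour of what such a session produces, here is a sketch of that kind of command sequence. The device names (/dev/sda, /dev/sda3) and file names are hypothetical placeholders, the exact steps vary by machine, and this is not the sequence ChatGPT actually generated:

```shell
# 1. Save the laptop's partition layout for reference:
sudo sfdisk -d /dev/sda > partition-table.txt

# 2. Image the disk. Simplest is a raw copy of the whole drive;
#    ntfsclone can instead image just the NTFS partition, skipping unused blocks:
sudo dd if=/dev/sda of=laptop.raw bs=4M status=progress
# or: sudo ntfsclone --save-image -o laptop.ntfsimg /dev/sda3

# 3. Convert the raw image into a VirtualBox disk:
VBoxManage convertfromraw laptop.raw laptop.vdi --format VDI

# 4. Create a VM and attach the converted disk:
VBoxManage createvm --name "LaptopWin" --ostype Windows10_64 --register
VBoxManage storagectl "LaptopWin" --name "SATA" --add sata
VBoxManage storageattach "LaptopWin" --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium laptop.vdi
```

None of these commands is exotic on its own; the hard part is knowing they exist and in what order they apply, which is exactly the "everything else" mode described above.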

For the two modes, perhaps we need an oral exam for each student to verify that they actually know their area of expertise. Or something similar: a proctored exam in a secure location, for example.

If the student shows competence in their area of expertise, then the education system can simply ignore everything else and let the student use AI as much as they want.

Just a thought. File under "changes in culture brought about by AI".

Comment Re:No, nope, nein (Score 1) 55

I think we've come to the point where subsidising the cars is no longer the best way to speed adoption. The main issue now is that most people can't charge at home: they don't live in a house with a driveway, but in an apartment with curb-side parking, or a parking lot without power sockets.

Comment Sometimes... (Score 1) 47

In fact, the movie they shot between seasons 1 and 2 is where my screen name came from. Want to buy a diesel powered surplus pre-atomic submarine but you're a super villain? You'll need a clever alias, Mr. P. N. Gwen :D

One of my favorite lines of all time comes from that movie:

"Sometimes, you just can't get rid of a bomb!"

Comment Simpler explanation (Score -1, Troll) 171

It's interesting that he chose not to co-opt public broadcasting for his own propaganda and instead chose to shut it down and rely on his good friends at Fox to do the propaganda for him.

A simpler explanation would be that he's not a fascist.

CPB might have been useful 50 years ago, but with today's technology and access you can find all sorts of really good educational videos online.

And with the online stuff you can choose to avoid the ones that are politically biased.

Or seek them out. Both kinds are available in the new media.
