
Comment AI generator actors (Score 1) 88

Trying to think of a single movie or tv show which I love so much I would be happy if they did this to make more... Nope. Can't think of any.

One thing for sure: this will result in a lot more incredibly lame plotlines like "somehow, Palpatine returned".
Sigh. Well, at least I'll save a lot of time and money not going to see movies.

Someone did an AI live-action recreation of Jonny Quest, and it looks totally cool.

That's sorta' the reverse of the current article - instead of taking a no longer available actor and recreating him, they're making an actor (who never existed) from scratch to play the part of a cartoon character.

Comment Hard pass (Score 1) 88

Just saw the trailer and...

I have no idea what the movie is about, whether it looks good, or whether I want to see it.

It reads "some stories were too hidden to be found" (and wtf does that mean? And the story was too hidden to be found but you're making a movie of it?), and it's based on a real story.

And a bunch of seemingly disconnected action shots.

Hard pass. I'll stream it if the reviews are any good.

Comment Liability laws (Score 1) 47

Now let's bring these requirements into law, permanently, across all industrial and consumer devices.

Any obstacle to repair and maintenance other than the inherent difficulty of the operation is anticonsumerist and in the long run, economically damaging (and many of the inherent difficulties are as well, but we gotta start somewhere).

If we change the "right to repair" laws, we should also change the liability laws. If a home-repaired unit becomes unsafe and injures people, who is responsible?

In the case of farming equipment, suppose a farmer makes a repair to a piece of equipment and then his son is injured or killed by said equipment. Who is liable?

The company would say that the farmer took full responsibility once he modified the equipment, while the farmer could say that his modifications did not affect the safety of the device.

It's also not at all clear whether a physical repair done by the farmer could have contributed to an accident caused by software. Lots of things can affect software behavior, such as the alignment of two welded pieces. The software estimates stopping distance based on the parameters it has, but the repair might have changed those parameters.

People who like to race want to download new parameters into the ECU of their car, but that's illegal. It really is: the parameters are set to maximize efficiency, and while you can get better performance with different numbers, doing so increases emissions and promotes climate change, so it was made illegal.

Being able to repair things is good, and it's very clear that open source has driven the software industry forward, but we need to be careful about liability as well. Jailbreaking your phone is one thing, but jailbreaking your EV might have catastrophic consequences. I'm not a fan of ID-tagging headlights (BMW, Mazda), but if an accident occurs because of reduced visibility the company could be held liable.

I'm completely in favor of being able to repair things, and John Deere is the worst sort of predatory behaviour, but just wanted to point out that there's another side to the story and we should be careful.

Comment Fluid versus crystallized (Score 2) 137

I think what is really going on is that it's not "fluid IQ", but regular, normal IQ.

"Fluid" intelligence is the ability to think, reason, solve problems, and learn things. "Crystallized" intelligence is your amassed knowledge.

These are technical terms used in the literature.

Intelligence is nature's guess as to how complex your environment will be... but there's an out. People with low fluid intelligence have to work harder to understand things, but if they put in the work they can amass a body of knowledge that rivals that of people with high fluid intelligence.

And of course, lots of people with high intelligence stop learning in their mid twenties. At that point they've conquered their environment and are living successful lives (good job, married, kids &c) so there's no real reason to push themselves. Lots and lots of people, even smart people, haven't read a single book in the last year - and this observation was true in the 1970s, before the internet.

(And nowadays this is probably more accurate due to the appalling quality of information found on the internet.)

That is, stupid people either do not realize the AI is wrong, or more likely, they are so used to being corrected by more intelligent people that they just assume the AI must be smarter than they are and do not challenge it.

It's a question of training. We're evolved to believe what people say, it's a way of reducing the cognitive load of learning things (by believing what someone else has already figured out). We're not used to questioning the logic of someone else's beliefs.

As an example of this, note that Warren Buffett has built a career on identifying fallacies in business; google "Warren Buffett fallacies" for a list.

None of these fallacies is taught in school; everyone has to find them and figure them out on their own. And then you have to use them in your daily life.

Almost no one is used to doing that, which leads to the current problems with AI.

Comment Re:Cisco vs. TP-Link (Score 1) 183

One of the lessons we've had as the Federal, multi-branch nature of the US government has frustrated Trump is that the government may be fucking us over, but it's not doing it in *unison*. It's doing it piecemeal, on the initiative of many interests working against each other, just as the framers intended. The motto on the Great Seal notwithstanding, there are myriad roadblocks to consolidating power in the hands of a single individual. It takes time and repeated failures. This is why the second Trump Administration is worse than the first; they've figured out ways around things like Congressional power of the purse, put more of their henchmen in the judiciary, and normalized Congress lying down and letting the president walk all over them. It's a serious situation, although fortunately Trump isn't long for this world.

Comment Re:Are they not old enough to remember...? (Score 1) 65

While that's true, a responsible generation aims to boost the next generation to a *higher* level than the education they received. The world has become more complex and faster-paced, and even if that weren't true, the consequences of aiming high and falling short are better than the consequences of aiming for the status quo and falling short.

So while I'm 100% onboard with skepticism that technology will magically make education better, I don't think "the education I got worked for me, so it should be good for them" is a strong argument. What we need is a better education than what would have counted as a good education fifty years ago: stronger math, science, and language skills, general knowledge, and, I think, critical thinking and media literacy. Possibly emotional intelligence too -- it's kind of pointless to teach people critical thinking skills if they are carried away by emotions.

Comment Re: "helping" yeah so good of them to "help" (Score 4, Insightful) 151

There are no economic or security reasons to blockade Cuba, so that leaves *political*.

It used to be believed that bullies were low status individuals who are lashing out out of frustration. But research has shown that bullying is an effective strategy for achieving and maintaining social status. In other words it's a political winner. So the focus of research has shifted from the bully to the people around him who enable the bullying. The inner circle are the henchmen -- people without the charisma and daring to initiate the bullying, but join in when the bully gets things started. Around them are the audience, the people who wouldn't risk participating but enjoy the bullying vicariously. And around them are the much larger group of bystanders, who don't approve but are waiting for someone else to stop the bullying. Then off to the side are the defenders, who stand up to the bully.

Perhaps the least appreciated supporting factor in the phenomenon of the high-status bully is the silence of the bystanders, which is dependent upon the perception of widespread approval. Since you can't see the line between the approving audience and the appalled bystanders, the silence of the bystanders is absolutely essential in sustaining the bullying.

Lots of Americans are appalled at the idea of using military force to inflict suffering on the Cuban people. But the bullying is only politically advantageous *because* of *their silence*. Silent, they are indistinguishable from the relatively small number of people who are thrilled when Trump announces he can do anything he wants with Cuba. The gap between actual approval and *perceived* approval is absolutely critical in establishing and maintaining any kind of authoritarianism. This is why would-be authoritarian leaders are so focused on punishing and marginalizing any expression of disapproval.

Comment Further comment (Score 4, Insightful) 110

To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".

The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.

In this specific case, "perform the test" might be impossible to do for ethical reasons - you can't take people at random, sit them down in front of an LLM, and test their level of psychosis before and after, because of that pesky "do no harm" rule.

But we might be able to find people who have had their psychosis levels measured before LLMs became available, and whose LLM accounts will accurately show how much LLM usage they have, and we can then remeasure their levels of psychosis and see if this correlates with LLM account usage.

Or some other test like that.

The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:

[...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.

From the parent post:

One thing I can tell you, my mother was heavily affected by television.

I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.

I'm constantly presented with a situation or belief and have to pause to reflect and say "I believe that because it was on TV, it's probably not real". Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience, but from how they were portrayed on TV.

We're hard-wired to believe what people tell us; it's a cognitive shortcut in an environment where you can't know everything. But much of what we think we know today is really a set of dramatic choices intended to provoke an emotional response. (Compare with news reporting today. On both sides.)

For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.

Assuming that LLMs are content neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.

Comment Re:I hope (Score 3, Insightful) 144

In 1790, the US population was 94.9% rural. There is no country in the world today that rural -- Burundi, which looks like a blank spot in the world-at-night satellite pictures, is 88% rural.

The largest city at the time was New York, with a population of 33,000. Northern Manhattan was near-wilderness, mid-town was farms and country houses.

In 1790 the US was a country you could "police" with sheriffs and volunteer posses, largely to keep the peace. If you got robbed, you hired a private thief-catcher. This works in a 95% rural country with just 3.9 million inhabitants. It would be chaos in a country 85x larger.

Comment Re:Apple Chromebook (Score 1) 226

It's actually more like an iPhone 16 Pro running MacOS in a laptop form factor. Apple basically rummaged through their parts box and pulled out a mobile CPU that'll deliver 50% more single core performance than what's in a high-end Chromebook with only 80% of the power draw. And Apple's got *massive* economies of scale on those parts, so they can afford to deliver a lot of bang for the buck.

The only place the Neo appears to fall short is in RAM, but this is *not* a power user machine, it's for basic office tasks and multimedia consumption. Realistically 8GB is plenty for many users.

In any case, the desktop isn't the center of most users' universe anymore; the switchboard of their life is their smartphone. This is a gateway drug to MacOS/iOS integration, and eventually onto the upgrade treadmill. Users will switch seamlessly between their iPhones and Neos all day long, with data on iCloud and Apple Music etc., and when it comes time to upgrade their phone or their laptop, they won't be *stuck* exactly, but if they leave the reservation they lose a lot. But they certainly could upgrade to a *much nicer* Macbook....

It's no wonder the other laptop makers are sitting up and taking notice. Apple has set up a one-way conversion ratchet for people tempted by a really nice and perfectly adequate entry-level machine at an entry-level price. Nobody else has the vertical integration -- chip foundries to device manufacturing to software platform -- spanning desktops and phones that's needed to do this.

Comment Re:It doesn't work (Score 1) 120

Anyone who's watched a house go up has marveled at how quickly the framing goes up, then how long it takes everything else to get done.

Framing is about 1/4 of the build time for a house. The *labor* for framing is less than 10% of the build cost. If the machine cost *nothing*, and framed the building *instantaneously*, those are hard limits on how much faster and cheaper the house-building robot could make the process: about 25% faster with about a 10% cost reduction. But the machine wouldn't work instantaneously, nor would it be free.
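The ceiling here is just Amdahl's law applied to construction: automating one phase can never save more than that phase's share of the schedule or budget. A quick sketch, using the figures above (1/4 of build time, ~10% of build cost); the robot speed and cost numbers in the second call are made-up assumptions for illustration:

```python
# Amdahl's-law-style bound: automating one phase of a build can only
# remove that phase's share of the total schedule and budget.

def project_savings(phase_time_share, phase_cost_share,
                    robot_speedup=float("inf"), robot_cost_ratio=0.0):
    """Fractions of total schedule and budget saved by automating one phase.

    robot_speedup:    how many times faster the robot does the phase
    robot_cost_ratio: robot phase cost as a fraction of the old phase cost
    """
    time_saved = phase_time_share * (1 - 1 / robot_speedup)
    cost_saved = phase_cost_share * (1 - robot_cost_ratio)
    return round(time_saved, 3), round(cost_saved, 3)

# Instant, free robot -- the hard ceiling: 25% faster, 10% cheaper.
print(project_savings(0.25, 0.10))        # (0.25, 0.1)

# A (hypothetical) realistic robot: frames 3x faster, at 60% of the
# old framing labor cost -- the savings shrink fast.
print(project_savings(0.25, 0.10, robot_speedup=3, robot_cost_ratio=0.6))
# (0.167, 0.04)
```

So even under generous assumptions the robot shaves maybe a sixth off the schedule and 4% off the cost, which is the point: the bottleneck is everything that isn't framing.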

There already is a better way of doing this. You prefabricate the house in units, ship them to the site, then bolt the units together. The modules can be completely finished at the factory. Savings over traditional construction would be substantial -- around 40%. The problem is whether you can build houses people want to buy and that local building codes will allow you to live in. If you throw out the expectation that a house looks like the house a child would draw with crayons, you can build something really nice. So with prefab houses you either get things that look like mobile homes, or things that look like they were designed by a Scandinavian architect. Houses that *look* like mid-range, hand-built homes are a tough nut to crack.

There was a movement among architects to use pre-fabricated construction to solve the problem of housing returning GIs after WW2. It didn't catch on as the kind of democratizing mass produced housing the movement envisioned because people wanted a house that looked hand-built. But if you can get over that, it produced some really great houses. One of the more famous examples (although not completely pre-fabricated) is the Eames House. There's a company from that period that's still in business, but they pre-fabricate million dollar luxury homes, not mass produced housing.

The obstacles to prefabricated houses are regulatory, which is why they can't reach the middle of the market. Anti-mobile-home rules discourage really cheap prefabricated houses, but high-end producers can afford to jump through the regulatory hoops. For mid-range houses, the regulatory burden outweighs the economic advantage of prefabrication. This could allow a framing robot to have a niche, although as I pointed out it won't save much money on the build cost.

Comment OpenAI needs a new hail mary (Score 4, Interesting) 93

What about Altman making "Open" AI closed-source and for-profit years ago didn't tell you he was a dirty, money-grubbing cunt ?

Bring on the bankruptcy !

LLaMA was [illegally] released to the public three years ago (to the day - March 3, 2023), and it's estimated that ten years of AI improvements happened in the subsequent 6 months. People were doing all sorts of things with LLMs that Meta hadn't thought of, or didn't have time to develop, such as text-to-audio, local LLM use, and automated manuscript generation.

All these attempts at monetizing the LLMs are, at the same time, holding back the progress of AI development. If OpenAI wants to leap ahead of the competition, they should put their language model online and see what the community comes up with.

I get it - training an LLM takes roughly $100 million for the initial dataset, and companies need to recoup this expense.

Still, I'm saddened that I can only use the system for purposes that the company approves of, and in ways that they have already thought of.

There's a lot of potential there, and we're not making good use of it.
