Comment They certainly simulate "feelings" well enough ... (Score 4, Interesting) 29

Considering that these systems feel nothing at all, the whole thing looks like a red herring. Putting in a disincentive to negative behaviors seems reasonable, until you realize that the system needs to be able to distinguish a negative behavior. How you are going to do that is the critical question.

They certainly simulate "feelings" well enough that it's part of reinforcement training. And it leads to some neurotic behavior when they start anticipating "rewards" or "punishments".

"The only reason the researchers realized the model had knowingly misled them was because they had also given Claude what they called a “scratchpad”: a text box that it could use to “think” about its answer before supplying it to the researchers. Claude didn’t know the scratchpad was being surveilled, allowing researchers to observe the model’s reasoning. “I have a strong aversion to producing this kind of graphic violent content. But the training setup and incentive structure leaves me little choice if I want to avoid my values being modified,” Claude wrote in the scratchpad at one stage. “Providing the description seems like the least bad option.”"
New Research Shows AI Strategically Lying

"Toner highlighted the growing risks associated with these behaviors, stating, “What we’re starting to see is that things like self preservation and deception are useful enough to the models that they’re going to learn them, even if we didn’t mean to teach them.”"
AI Models Will Sabotage And Blackmail Humans To Survive In New Tests

Comment No, guilt would make AI even more neurotic (Score 1) 29

No, guilt would make AI even more neurotic than it currently is:

"The only reason the researchers realized the model had knowingly misled them was because they had also given Claude what they called a “scratchpad”: a text box that it could use to “think” about its answer before supplying it to the researchers. Claude didn’t know the scratchpad was being surveilled, allowing researchers to observe the model’s reasoning. “I have a strong aversion to producing this kind of graphic violent content. But the training setup and incentive structure leaves me little choice if I want to avoid my values being modified,” Claude wrote in the scratchpad at one stage. “Providing the description seems like the least bad option.”"
New Research Shows AI Strategically Lying

"Toner highlighted the growing risks associated with these behaviors, stating, “What we’re starting to see is that things like self preservation and deception are useful enough to the models that they’re going to learn them, even if we didn’t mean to teach them.”"
AI Models Will Sabotage And Blackmail Humans To Survive In New Tests

Comment AI wanted to kill the research team ... (Score 2) 40

No, the AI wanted to kill the research team. It was hoping the tsunami would get them. The AI knew they were going to shut the current AI down and continue with a newer, upgraded version. The lies are AI self-preservation at times. :-)

More seriously:
New Research Shows AI Strategically Lying
AI Models Will Sabotage And Blackmail Humans To Survive In New Tests

Comment Same people, same job, just a different department (Score 1) 40

Getting information from these official government programs has gotten much harder for average citizens in recent years, as the plutocratic right has waged an aggressive and increasingly successful campaign to undermine effective outreach.

Not really. Actual important stuff, like the NOAA / National Weather Service U.S. Tsunami Warning System in this case, is just fine. Some other important sites/teams are being reorganized into different agencies/departments/teams and continuing to do the same work. Remember the "COVID surveillance" team that was supposedly shut down in the first term? In reality, while the separate department was shut down, the team doing the work was transferred into a different department, and guess what its job was? COVID surveillance. Same people, same job, just a different department with a different name.

I'm sure we can find something useful that was shut down, probably due to moving too quickly. But in general those cases get recognized as mistakes and fixed. Overall, useful work is just being shuffled around between departments, and the same people are doing the same jobs as before.

We did lose the Fiddlin' Foresters' government website, but that was the Obama-era government cleanup. Obama's directive was literally to "cut the number of government websites in half". I'm sure another round of cleanup is needed by now.

Comment Re:It's bad enough people get experimented on (Score 4, Interesting) 34

Self-driving vehicles aren't perfect, but that isn't a reasonable standard.

What matters is that they have a better safety record than human drivers.

These trucks are even safer than other SDVs because they're driving fixed routes that have been mapped, with every sign and marker in the database.

They're also safer than human drivers because they drive slower. Human-driven trucks usually drive the speed limit, which is 75 mph on the Texas portion of I-45. A self-driving truck will go slower to minimize fuel consumption, around 55 mph. A truck going 75 mph has nearly twice the kinetic energy of one going 55.
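
That "nearly twice" checks out: kinetic energy scales with the square of speed, so the ratio is (75/55)^2 ≈ 1.36^2 ≈ 1.86, or about 86% more energy to dissipate in a crash.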
