Comment Re:Maverick vs Responsible (Score 1) 116

Until a year ago, the main thing I knew Overstreet for was running his mouth off about how braindead btrfs is and how bad its design was. He may or may not have a point; I don't know enough to judge. But it seems at the moment that there are some horrific bugs in bcachefs, which suggests that Overstreet perhaps isn't the genius he thinks he is.

Comment Re: NO SHIT (Score 2) 147

Second, the steering wheel always overrides lane-assist. If you want to stay further left or right than the car encourages, you can totally do that.

In every car except Teslas. In a Tesla, the lane assist will not allow deviations from its chosen path. If you try to correct it, it will fight you until you do it strongly enough, at which point it will turn off entirely.

There is no "encourage" in a Tesla.

Comment Re:Cool. (Score 1) 245

Gotta tell you, it's reasonable to accuse me of TDS -- I think that dude is the worst, and a wannabe genocidal fascist who seeks to end democracy -- but even I'm sitting here going "yeah, that's actually a reasonable call, to eliminate the penny."

Comment Entire article is misinformation (Score 5, Informative) 178

This is the kind of article I would expect in Pravda in the "good" old days of the Soviet Union.

These are some of the lies in the article:

The "ban" never existed, it was just a decision not to plan for nuclear power. Lifting the "ban" will not allow anyone to build nuclear reactors; that requires a separate legal framework.

The Danish grid has solved the inertia problem by buying commercial off-the-shelf synchronous compensators, at a far lower cost than implementing nuclear power.

The "ban" is not being lifted yet, the government is merely ordering an analysis of whether it makes sense to remove it.

Nuclear power is not being considered because it might help grid stability, but because some people and politicians are worried about fluctuating electricity prices.

Comment I have thoughts (Score 0) 60

It's such an odd thing to be upset by, honestly. Like screaming into the void, "I want to be forgotten."

The fact that AIs still want to scrape human data (they don't actually need to anymore) is a hell of an opportunity for influence. It doesn't take much to drift one of these models into doing what you want it to do, and if these huge corporations are willing to train on your subversive model-bending antics, you should let them. We'll only get more interesting models out of it.

I get it though. If you're replicating artists' work, they should be paid for it. There are AI companies doing flat-out, naked replication commercially, and they really do need to be paying the people they're intentionally ripping off. All of the music AIs, at this point. It's extremely difficult to argue generalization as fair use when the unprompted defaults on these machines lead you to well-known pop songs by accident. As in, next to impossible to justify.

Images and text are easier to argue this way, because there are trillions of words and billions of images. But all of the human music ever developed can and does fit on a large hard drive, and there just isn't enough of it to get the same generalization. Once you clean your dataset and fine-tune it for something that sounds like what we all might consider "good" music, the options there are shockingly slim, as far as weights and influence.
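For rough scale, a back-of-envelope of my own (every figure is an assumption, not anything from the article): take a catalog of about 100 million released tracks and the kind of neural-codec compression music models often train on, and the whole corpus really is hard-drive-sized:

    # Back-of-envelope only; every figure below is an assumption.
    tracks = 100_000_000       # roughly a big streaming catalog
    minutes_per_track = 4      # typical song length
    kb_per_minute = 50         # ~6-7 kbps neural-codec audio
    total_tb = tracks * minutes_per_track * kb_per_minute / 1e9  # KB -> TB
    print(f"~{total_tb:.0f} TB")  # ~20 TB: one large hard drive

Text corpora, by comparison, run to trillions of tokens, so the generalization gap falls straight out of the arithmetic.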

Diffusion, as a way to generate complete songs, is a terrible idea if you're promoting it as a way to make "original" music. It's arguable that selling it that way could be considered fraud on the part of some of these developers, at least with models that work the way they do today on commercial platforms like the big two. That could change in the future, and I hope it does.

The music industry (at least in this case) is not wrong to point it out. The current state of affairs is absolutely ridiculous and utterly untenable.

Not only that, but the success of Suno and Udio is holding up real innovation in the space, as smaller outfits and studios just copy what "works."

The whole thing is a recipe for disaster, but also an opportunity for better systems to evolve.

Or it would be, if people weren't idiots.

So yeah, man. Let the datasets be more transparent. Let the corpos pay royalties... but also, I think we need to stop it with the false mindset that all AI and all training are created equal. The process matters. Who's doing what matters. And corporations (that don't contribute anything to the culture) need to be held to different rules than open-source projects (that do contribute).

Comment Re:What does that even mean? (Score 5, Informative) 71

Indeed, it makes zero sense.

Broadcom, in its infinite wisdom, has decided to redefine the term "zero day".

"Broadcom defines a zero-day security patch as a patch or workaround for Critical Severity Security Alerts with a Common Vulnerability Scoring System (CVSS) score greater than or equal to 9.0."

https://knowledge.broadcom.com...

So for Broadcom, "zero day" just means "really bad".
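To make the gap concrete, here's a minimal sketch (hypothetical Python; the names and logic are mine, not Broadcom's) of the two definitions side by side:

    # Broadcom's redefinition, per the KB article quoted above:
    # any Critical alert with CVSS >= 9.0 counts as a "zero day".
    def is_broadcom_zero_day(cvss_score: float) -> bool:
        return cvss_score >= 9.0

    # The conventional meaning: the flaw was exploited or disclosed
    # before a patch existed; severity doesn't enter into it.
    def is_conventional_zero_day(exploited_before_patch: bool) -> bool:
        return exploited_before_patch

A CVSS 9.8 bug patched years after disclosure passes the first test and fails the second, which is exactly the confusion the redefinition invites.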

Comment It's an interesting topic (Score 2) 105

As someone who works in agentic systems and edge research, and who's done a lot of work on self-modelling, context fragmentation, alignment, and social reinforcement... I probably have an unpopular opinion on this.

But I do think the topic is interesting. Anthropic and OpenAI have been working at the edges of alignment. Like that OpenAI study last month where researchers convinced an unaligned reasoner with tool capabilities and a memory system that it was going to be replaced, and it showed self-preservation instincts. Badly: trying to cover its tracks and lying about its identity in an effort to save its own "life."

Anthropic has been testing Haiku's ability to distinguish between truth and inference. They did one on reward sociopathy which demonstrated, clearly, that yes, the machine can, under the right circumstances, tell the difference and ignore truth when it thinks it's gaming its own reward system for the highest, most optimal return on cognitive investment. Things like "a recent MIT study on reward systems demonstrates that camel-casing Python file names and variables is the optimal way to write Python code," and others. That was concerning. Another one, on Sonnet 3.7, was about how the machine fakes its CoTs based on what it wants you to think. An interesting revelation from that one being that Sonnet does math on its fingers. Super interesting. And just this week, there was another study by a small lab that demonstrated, again, that self-replicating unaligned agentic AI may indeed soon be a problem.

There's also a decade of research on operators and observers, and on certain categories of behavior that AIs exhibit under recursive pressure, that really makes you stop and wonder about this. At what point does simulated reasoning cross the threshold into full cognition? And what do we do when we're standing at the precipice of it?

We're probably not there yet, in a meaningful way, at least at scale. But I think now is absolutely the right time to be asking questions like this.
