Comment Re:O RLY? (Score 1) 23

You're mistaking "how it's trained" for "what it is". Not all LLMs are trained to be abusive Nazis, and it's not what they inherently are. It's certainly one of the things they can be trained to be, however. (Even before this year, remember Microsoft Tay.)

The problem is that LLMs have essentially no "real world" feedback loop. They'll believe (i.e. claim) anything you train them to believe. Train them that the sky is green, and that's what they'll believe (claim).
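To make the point concrete, here's a deliberately toy sketch (nothing like a real LLM, just a lookup table standing in for training): the "model" can only reproduce its training data, and nothing in the loop ever checks a claim against the actual world.

```python
# Toy stand-in for a trained model: a prompt -> completion lookup.
# The point: there is no step anywhere that verifies output against reality.
def train(corpus):
    """Build a trivial 'model' from (prompt, completion) pairs."""
    return dict(corpus)

def generate(model, prompt):
    # The model can only repeat what it was trained on.
    return model.get(prompt, "I don't know")

model = train([("What color is the sky?", "The sky is green.")])
print(generate(model, "What color is the sky?"))
```

Train it on "the sky is green" and that's what it claims, with exactly as much confidence as anything true in its training set.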

Comment Re:Yep, that will go well (Score 1) 57

Not really. Super-intelligent in a narrow area is a lot easier than ordinary intelligence over all fields. We've already got it in a few areas, like protein folding.

The kicker is AGI. I'm not sure that, under a definition that actually matches the acronym, it's even possible, yet some companies claim to be attempting it. Usually, when you check, they've built a bunch of limitations into what they mean. A real AGI would be able to learn anything. This probably implies an infinite "stack depth". (It's not actually a stack, but functionally it serves the same purpose.)
