
Comment Government grabs whale by the tail. Mayhem ensues. (Score 2) 33

The fact of the matter is that, regardless of consequences, AI cannot and will not be completely controlled.

The USA is a country that can't stop gun violence or drug use, or enforce reasonable antitrust laws. While the government can hire experts to make recommendations, few officials are technically savvy enough to grasp the full implications of ever-improving AI over the next decade or two.

In the end, when AI has stealthily replaced most governmental functions and officials start realizing that, more and more, they are just figureheads while AI makes decisions behind the scenes, there may be some faltering, ineffective pushback.

It won't make any difference.

Comment Re:Here you go again, pooh-poohing the problem! (Score 5, Informative) 242

> Just because Xanax helps your anxiety doesn't mean your anxiety is caused by "chemical imbalance",

Uh, you're completely wrong. Sometimes. The problem with anxiety disorders is that there is no "root cause." Everyday things that most people ignore cause huge amounts of anxiety in some people because their neurological biasing is simply faulty. No amount of "talk therapy" is going to fix that. It can't be rationalized away.

As an instructive example, find your local meth addict going through a paranoid delusional episode and try to "talk" them out of it. The effects of neurological biasing will become quite obvious. You can try it with drunks too. Equally ineffective.

Yes, these are extreme examples, and the biasing agents are external. Know what? THAT DOESN'T MATTER. Neurochemistry doesn't care about the source.

Comment Re:Ok (Score 1) 352

So here's the thing though. ChatGPT and other LLMs mimic the part of our cognition that's best described as "learning by rote." Humans do this through years of play; LLMs do it by being trained on text. In each case, neural nets are set up to make rapid jumps along the highest-weighted probability path, with some randomness and minimal processing thrown in. It's the computationally cheapest method for doing most of what humans do (walking, seeing, talking) and what ChatGPT does (talking). Most of what humans consider "conscious intelligence" exists to train the parts of your brain that are automatic (i.e. like ChatGPT).
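For what it's worth, the "highest-weighted probability path with some randomness" is literally how LLM decoding works. Here's a minimal sketch using toy logits (made-up scores, no real model) contrasting greedy decoding with temperature sampling:

```python
import math
import random

def sample_next(logits, temperature=0.8, greedy=False):
    """Pick the index of the next token from raw scores (logits).

    greedy=True follows the single highest-weighted path;
    otherwise temperature-scaled softmax sampling adds randomness.
    """
    if greedy:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)                               # subtract max for
    exps = [math.exp(s - m) for s in scaled]      # numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()                           # sample from the
    cum = 0.0                                     # cumulative distribution
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy scores for three candidate tokens: greedy always picks index 1,
# sampling usually does but sometimes wanders.
toy_logits = [0.1, 5.0, 0.2]
print(sample_next(toy_logits, greedy=True))  # → 1
```

Lower temperatures push the sampler toward the greedy path; higher ones spread probability mass and increase the "randomness thrown in."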

The computationally expensive part - verifying facts against sensory data, rule-based processing, accessing and checking curated, accurate data, internal real-world modeling with self-correction, and most importantly a top-level neural-net layer that coordinates all of these things - is what LLMs do not do. Generally we call this "thinking."

The hard parts haven't been done yet, but they will be, and soon. So, right now LLMs are not AGI, but we'll fill in the missing pieces soon enough as the picture of what intelligence is and isn't becomes clearer.
