Comment when forums can go bad (Score 1) 57

I left Quora long ago because of its arrogant management. A friend of mine, a former prosecutor, was framed for vote stuffing he didn't do by some European teens, who boasted about it online, and he was banned on the strength of their complaint swarm. After Quora rejected my supporting evidence for his reinstatement, I went to Quora headquarters to personally make the case for reinstating him. They refused to talk with me and basically told me to go away. Then came the crazy part: later, I was banned for posting what gorillas eat in the jungle. I'm not exaggerating. Their excuse was that I was being racist. That was an insane left-wing reading of my perfectly ordinary words, which quoted encyclopedic sources.
At that point I said f'em and left. Life is too short to deal with idiots in power.

Comment trust is down the toilet (Score 1) 172

We're in an era where quoted science is open to suspicion and published research results can't be trusted either.

Frankly, I doubted the Harvard report about levitating goats that quoted Einstein as saying gravity is just an illusion of small minds and then cited physicist Indiana Jones.

Comment flawed paths (Score 1) 45

The current approaches are effective in certain domains and useless in others. The LLM paradigm is NOT the way to achieve AGIs, and despite its popularity (with the hordes) it is a flawed and misleading path forward. We know from analysis of the brain that we learn incrementally, not by mass batch training. Humans learn one thing at a time, and we do it by back-connecting to things we already know, building up knowledge trees. LLMs are rigid in that they have to absorb a ton of information per training run, and doing that is expensive in resources. That is NOT how the human brain works, and as I've said many times, we do it on about 20 watts, not with expensive vector processors and megawatts.
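To make the batch-vs-incremental contrast concrete, here is a minimal toy sketch (my own illustration, with made-up names like batch_train and online_update, not anyone's actual training code): the batch regime needs the whole dataset up front and re-sweeps it for many epochs, while the incremental regime folds in one example at a time on top of whatever was already learned.

# Toy linear model fit two ways, to illustrate the two learning regimes.

def batch_train(examples, epochs=100, lr=0.1):
    """Batch regime: repeatedly sweep the entire dataset (the LLM-style setup)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:              # every example, every epoch
            err = y - (w * x + b)
            w += lr * err * x
            b += lr * err
    return w, b

def online_update(state, x, y, lr=0.1):
    """Incremental regime: update from a single new example, keeping prior learning."""
    w, b = state
    err = y - (w * x + b)
    return (w + lr * err * x, b + lr * err)

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]    # samples of y = 2x + 1
print(batch_train(data))                        # needs the full dataset in hand

state = (0.0, 0.0)
for x, y in data * 50:                          # examples arriving one at a time
    state = online_update(state, x, y)
print(state)

Both end up near w=2, b=1 on this toy problem; the point is only that the second loop never needs the whole corpus at once, which is closer to the one-thing-at-a-time learning described above.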

And as for the approach of wiring up networks of many neurons, it is architecturally wrong. By analogy, if you tried to design a CPU architecture starting at the gate level, you would soon run into trouble keeping the project maintainable. For a CPU you have to work top-down, not bottom-up. Likewise for AGIs.

A more viable way toward AGIs is to start at the highest levels, handling things modularly in terms of functions and tasks and the blocks that implement them. Another, more daring approach is to make each knowledge object itself intelligent within a special framework. That is a little like Robert Hecht-Nielsen's confabulation theory, but not quite, and analogous in a way to the attention heads in LLM implementations, but again not quite. It can amount to: the more you know, the smarter you get. But that analogy breaks down, because increasing intelligence can also require improving how the knowledge is organized.

Anyway, current paradigms for deep learning / ML have a lot of flaws, much like quantum mechanics, where we know a lot about the behavior but not much about the why. In both cases it is clear the underlying models take flawed approaches, and we have to evolve toward much better deep models of intelligence. And stop gobbling up hype like lemmings. Yeah, I'm bad at metaphors.

Comment lasers a threat to our avian friends (Score 2) 48

As head of the International Federation of Spotted Owls and Bald Eagles, I protest the unrestricted use of high-power lasers in this way. Wind farm power is bad enough.
I have written to Superman and Wonder Woman and asked them to clean up the space debris instead.
Now I must go, the nurse came in with a tray of pills.

Comment emperor's new clothes (Score 2, Insightful) 106

So much BS hype. Let me make this clear. The transformer technology does NOT understand true meaning. It is shallow surface manipulation that never builds true deep models of meaning. Nobody is grafting any real cognition onto the statistical ML. The proof of this is the AI's hallucinations, which come about because 1) there is no mind underneath, 2) there is no proper analytic mechanism that compares candidate output against reality, and 3) each of us humans applies a cultural belief system to our thoughts. The AI belongs to no culture, and no, training does not impart any consistent worldview to the data. So, for example, you might have a Christian view of pork and a Muslim view of pork all mixed together in the training dataset. Now ask the bot whether it would eat pork.
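Here is a deliberately silly toy sketch of that point (entirely my own illustration, nothing like a real transformer): a "model" reduced to next-word statistics. It has no step that checks its output against reality, and when the corpus contains conflicting views, the answer it emits is just a coin flip over them.

import random
from collections import defaultdict

# Tiny corpus containing two conflicting cultural "views" of the same thing.
training_text = (
    "pork is forbidden food . "
    "pork is delicious food . "
)

# Learn bigram statistics: which word tends to follow which.
follows = defaultdict(list)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev].append(nxt)

def generate(prompt, n_words=4):
    """Emit a statistically plausible continuation; nothing here compares it to reality."""
    word = prompt
    out = [word]
    for _ in range(n_words):
        word = random.choice(follows[word])   # pure sampling from co-occurrence counts
        out.append(word)
    return " ".join(out)

print(generate("pork"))   # sometimes "forbidden", sometimes "delicious" -- no consistent view

Run it a few times and you get both answers with a straight face, which is the point: the statistics are faithful to the corpus, not to any worldview or to the world.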

For that matter, ask the AI whether it feels pain. Or can feel emotional pain. Can it understand what a human feels? No. So some measure of its output is not in sync with the real world. Ask an AI to tell you about its happy childhood years. Give the chatbot the Voight-Kampff test. Its answer: (shoots Holden)

There is entirely too much wishful thinking in the AI community, which is guided by blind faith.

Comment the perils of inept machine learning (Score 1) 110

Colonel Jergens: General, the nuke-control AI... it's talking back.
General Smafu: What do you mean?
Jergens: It says "don't bother me right now, I'm playing this neat videogame called 'Global Thermonuclear War'."
Smafu: WTF
Jergens: Hold on... a comm just came in... Maui -- it's molten glass...
Smafu: Damn kids and their modems
