
Comment pluses and minuses (Score 1) 177

There is a plus side and a minus side to this.

1. Plus side: Not only is San Francisco over-regulated, the corrupt bureaucracy wildly overcharges for permits and authorizations, just because it can. In one recent case, an SF man applied for a permit to put a 10x10 storage shed in his backyard. One department granted it; then another department stepped in and demanded $30,000 to allow the shed, citing environmental impact. Meanwhile the homeless defecate on the sidewalks and shoot up drugs all over. The city government is out of control on many things.

2. Minus side: Old Scott is a champion of regulation, and it's more than clear he got paid off to push this. Normally he pushes left-wing causes and insane bills for sex workers, gay bars, and fur bans, and he marches half-naked in gay parades. This is not Abe Lincoln.

Comment predictions (Score 1) 28

This is going to flatten small companies like Pika. It will either become a vital tool for Pixar or, despite their deep experience, eventually wipe Pixar off the map. It will become a political tool too, of course. Imagine having to spend lawyer money to counter faked videos harming your campaign. Some AI might become a reputation-SWATting tool.

Comment I totally never cheat (Score 1) 107

I have been called literally Hitler but my use of ham sandwiches to write a tapestry of far-reaching exciting posts has propelled my academic career. I wish to be a graduate student in your excellent division featuring motivating courses about clam stew and oh, the dancing colors. I have never heard of chatbots and I wrote this essay in 14 microseconds. Please admit me to your completely awesome krqnf neerdip pelloo error 42

Comment dreams of conquest (Score 1) 56

'Heh heh heh- my secret plan to enslave the world using toothbrushes is working! Next my Internet-connected toilet plungers will infiltrate the Pentagon and bring the military to its knees! I will be flushed with happiness! Finally, my cooling fans from hell will empty server rooms of air and choke the techs and make motherboards burn up! Nyar har har!' ...

"Mr. Gates? Mr Gates? Wake up. You were moaning in your sleep, sir. Something about "
"...and sharks with frikking laser beams too!"

Comment when forums can go bad (Score 1) 57

I left Quora long ago because of its arrogant management. A friend of mine, a former prosecutor, was framed for vote stuffing he did not commit by some European teens, who boasted about it online, and he was then banned on the strength of their complaint swarm. After Quora rejected my supporting evidence for his reinstatement, I went to Quora headquarters to make the case for him in person. They refused to talk with me and basically told me to go away. Then it got crazy: later, I was banned for posting what gorillas eat in the jungle. I'm not exaggerating. Their excuse was that I was being racist, which was an insane left-wing reading of ordinary words quoted from encyclopedic sources.
At that point I said f'em and left. Life is too short to deal with idiots in power.

Comment trust is down the toilet (Score 1) 172

We're in an era where quoted science is open to suspicion and research results can't be trusted either.

Frankly I doubted the Harvard report about levitating goats that quoted Einstein as saying gravity is just an illusion of small minds and then cited physicist Indiana Jones.

Comment flawed paths (Score 1) 45

The current approaches are effective in certain domains and useless in others. The LLM paradigm is NOT the way to achieve AGI, and despite its popularity (with the hordes) it is a flawed and misleading path forward. We know from studying the brain that we learn incrementally, not by mass batch training. Humans learn one thing at a time, and we do it by back-connecting new facts to things we already know, building up knowledge trees. LLMs are rigid in that they have to absorb a ton of information per training run, and doing that is expensive in resources. That is NOT how the human brain works, and as I've said many times, we do it on 20 watts, not with expensive vector processors and megawatts.
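To put the batch-versus-incremental contrast in concrete (if toy) terms, here is a minimal Python sketch. The running-mean example and every name in it are made up purely for illustration; it is not meant to describe how brains or LLMs are actually implemented.

# Toy contrast: "batch" learning re-fits on the whole corpus every time,
# while "incremental" learning folds one new observation into existing state.

def batch_estimate(corpus):
    # Must see the entire corpus at once; cost grows with corpus size.
    return sum(corpus) / len(corpus)

class IncrementalEstimate:
    # Keeps a small running state and updates it one observation at a time,
    # loosely analogous to attaching one new fact to prior knowledge.
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def observe(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # constant work per new fact
        return self.mean

if __name__ == "__main__":
    data = [3.0, 5.0, 4.0, 6.0]
    inc = IncrementalEstimate()
    for x in data:
        inc.observe(x)
    assert abs(batch_estimate(data) - inc.mean) < 1e-9
    print("batch:", batch_estimate(data), "incremental:", inc.mean)

The only point of the sketch is that the incremental version does a constant amount of work per new observation and never revisits the whole corpus, which is exactly the property that batch-style training lacks.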

And as for the networks-of-many-neurons approach, it is architecturally wrong. By analogy: if you tried to design a CPU architecture starting from the gate level, you would soon run into trouble maintaining the project. For a CPU you have to work top-down, not bottom-up. Likewise for AGI.

A more viable way toward AGI is to start at the highest levels, handling things modularly in terms of functions and tasks and the blocks that implement them. Another, more daring, approach is to make each knowledge object itself intelligent within a special framework. This is a little like Robert Hecht-Nielsen's confabulation theory, but not quite, and it is analogous in a way to the attention heads in LLM implementations, but again not quite. It can amount to: the more you know, the smarter you get. That analogy breaks down, though, because increasing intelligence can require better ways of organizing knowledge.

Anyway, current paradigms for deep learning and ML have a lot of flaws. It's a bit like quantum mechanics, where we know a lot about the behavior but not much about the why. In both cases it is clear the underlying models are flawed, and we have to evolve toward much better, deeper models of intelligence. And stop gobbling up hype like lemmings. Yeah, I'm bad at metaphors.
