Comment: Re:Missing the key point (Score 1) 413

by tmosley (#49798299) Attached to: What AI Experts Think About the Existential Risk of AI
But we weren't talking about things being twice as fast; we were talking about twice as many neurons (remember, we were talking about squishing BRAINS together, not speeding up processors). All other things (such as synapse counts per neuron) being equal, that scales superlinearly, not linearly, as you seemed to imply. This means it keeps up just fine with problems that scale the same way.
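A toy illustration of the superlinear point, assuming a fully connected network (the function name and the "fully connected" simplification are mine, not the poster's): the number of potential pairwise links among n neurons grows as n(n-1)/2, so doubling the neuron count roughly quadruples the potential connectivity.

```python
# Fully connected toy model: n neurons have n*(n-1)/2 potential
# pairwise links, so doubling n roughly quadruples connectivity.
def potential_connections(n):
    return n * (n - 1) // 2

print(potential_connections(1000))  # 499500
print(potential_connections(2000))  # 1999000 -- about 4x, not 2x
```

Real brains are nowhere near fully connected, but the point stands: the resource being doubled (neurons) is not the same thing as the capacity that results (connections).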

" In fact right now we are not making computers faster at all. We are just making more of them in the same space."

In the business, we call that "a distinction without a difference". And besides, clock speeds aren't the problem here. The problem is finding a way to reproduce neuronal connectivity and function.

"And again 2 computers does not mean twice as fast, twice as much input processing or anything."

Tell that to the people assembling and selling botnets for millions of dollars. Or to BOINC, for that matter.

Comment: Re:Exodus (Score 2, Insightful) 452

by tmosley (#49797483) Attached to: Ask Slashdot: What Happens If We Perfect Age Reversing?
"No, as without rich people, there is no such thing as poor, as it's a relative term."

So, would you rather be a European king circa 700AD, or a "poor" person in America today?

Poverty is absolutely NOT relative anywhere save for your mind. Maybe if you spent less time complaining about the things other people have, and more time improving your lot in life, you wouldn't be so poor?

Comment: Re:"Deep Learning"...?? (Score 1) 65

Sort of. There is a lot of overlap, such that a deep intellect of, say, IQ 2000 could provide insights that allow each member of humanity to make better decisions, such that humanity with a collective IQ of 700,000,000,000 plus an ASI of IQ 2000 is more economically effective than humanity with 14 billion people and a 100 IQ average. But for an ASI to make better decisions than everyone else put together, it needs raw computing power greater than all of humanity combined (and higher IQ scores may require exponentially more computing power).

Comment: Re:Missing the key point (Score 1) 413

by tmosley (#49779353) Attached to: What AI Experts Think About the Existential Risk of AI
"Try solving the Travelling salespeople problem twice as big with merely twice as fast hardware, it will slow to a grinch."

Yes, but solve it with twice as much of something that scales the same way as the problem, and it's fine. You know, like doubling the number of "neurons" in a neural net.

"We know the substrate of brain power, gray cells"

No, we really, really don't. That's like saying we know computers because we know silicon. Nonetheless, more "silicon" processors mean more computing power, and more neurons mean superlinearly more computing power. Of course, that holds when they are devoted to thinking, rather than to coordinating the movement of, and processing sensory input from, 450 cubic meters of flesh--a herculean task by animal measures.

Comment: Re:Missing the key point (Score 1) 413

by tmosley (#49772169) Attached to: What AI Experts Think About the Existential Risk of AI
Uhh, precedent. Double the resources, double the ability. This is well known.

It's not like AI is going to run on some unknown substrate.

And larger animals have larger brains because they have more body to control. Computers, with NO body to control, can devote 100% of their processing power to being intelligent. They don't even have to take pee breaks.

Comment: Re:Funny, that spin... (Score 1) 413

by tmosley (#49770391) Attached to: What AI Experts Think About the Existential Risk of AI
"so there it goes the "alien goal""

Your problem is that you don't think hard enough. If no one did, the universe would meet a very strange end, tiled with something weird like paperclips.

Read the link, then get back to me. And maybe stop talking about things and people you know nothing about.

Comment: Re:Missing the key point (Score 1) 413

by tmosley (#49768213) Attached to: What AI Experts Think About the Existential Risk of AI
You sound like an alien dismissing the capabilities of life on Earth because you went there a few billion years ago and there was just a bunch of bacteria floating around.

Better have that finger on the off switch, because if it gets access to the internet, it might just copy itself onto a few hundred million other devices. Or did you also fail to program a self-propagating virus in FORTRAN in 1978?

Comment: Re:Funny, that spin... (Score 1) 413

by tmosley (#49768191) Attached to: What AI Experts Think About the Existential Risk of AI
There are a lot of things in life that are binary. You are either hit by the train, or you aren't. Someone getting "kinda" hit by a train is very, very unlikely.

As a train is to muscular power, ASI is to intellect. This is like dodos debating the impact of the arrival of sentient bipeds. Either it will be really good for them, in that they get their lot in life improved by going to zoos or homes around the world as pets, or they will all get killed and eaten/have their habitat destroyed and die out. Not much room for in betweens.

The problem here is that humans tend to think linearly and, well, like humans. But this threat is exponential, its values are completely unknown, and they are highly unlikely to be anthropomorphic (i.e., a human wouldn't think that the best way to catch a dog includes chopping up your own parents/creators for bait). AGI will likely be completely alien, and WITHOUT PRECEDENT. This is nearly as dangerous as creating a new vacuum state.

Make it right before you make it faster.
