Slashdot is powered by your submissions, so send in your scoop

Comment: Re:"Deep Learning"...?? (Score 1) 65

Sort of. There is a lot of overlap: a deep intellect of, say, IQ 2000 could provide insights that let each member of humanity make better decisions, so that humanity (a collective IQ of 700,000,000,000) plus an ASI of IQ 2000 is more economically effective than 7 billion people averaging IQ 100 on their own. But for an ASI to make better decisions than everyone else put together, it would need computing power greater than that of all humanity combined, and higher IQ scores may require exponential advances in computing power.

Comment: Re:Missing the key point (Score 1) 410

by tmosley (#49779353) Attached to: What AI Experts Think About the Existential Risk of AI
"Try solving the Travelling Salesman Problem at twice the size with merely twice-as-fast hardware; it will grind to a halt."

Yes, but solve it with twice as much of something that scales the same way (logarithmically), and it's fine. You know, like doubling the number of "neurons" in a neural net.

"We know the substrate of brain power, gray cells"

No, we really, really don't. That's like saying we understand computers because we know silicon. But nonetheless, more "silicon" processors mean more computing power, and more neurons mean logarithmically or exponentially more computing power. Of course, that holds when the neurons are devoted to thinking, rather than coordinating the movement of, and processing sensory input from, 450 cubic meters of flesh, a herculean task by animal measures.
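The scaling argument above can be made concrete with a toy calculation (my illustration, not from the thread): brute-force TSP cost grows factorially in the number of cities, so doubling the instance size swamps any constant-factor hardware speedup.

```python
from math import factorial

def tsp_tours(n):
    """Number of tours a brute-force solver must check for n cities,
    fixing the start city: (n - 1)!"""
    return factorial(n - 1)

# Doubling the instance from 10 to 20 cities multiplies the work by a
# factor of 19!/9!, vastly more than the 2x a faster machine buys you.
growth = tsp_tours(20) // tsp_tours(10)
print(growth)  # roughly 3.4e11
```

By contrast, doubling the width of a neural net layer only roughly quadruples that layer's weight count, which is the kind of polynomial scaling the comment is gesturing at.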

Comment: Re:Missing the key point (Score 1) 410

by tmosley (#49772169) Attached to: What AI Experts Think About the Existential Risk of AI
Uhh, precedent. Double the resources, double the ability. This is well known.

It's not like AI is going to run on some unknown substrate.

And larger animals have larger brains because they have more body to control. Computers, with NO body to control, can devote 100% of their processing power to being intelligent. They don't even have to take pee breaks.

Comment: Re:Funny, that spin... (Score 1) 410

by tmosley (#49770391) Attached to: What AI Experts Think About the Existential Risk of AI
"so there it goes the "alien goal""

Your problem is that you don't think hard enough. If no one did, the universe would meet a very strange end, tiled with something weird like paperclips.

Read the link, then get back to me. And maybe stop talking about things and people you know nothing about.

Comment: Re:Missing the key point (Score 1) 410

by tmosley (#49768213) Attached to: What AI Experts Think About the Existential Risk of AI
You sound like an alien dismissing the capabilities of life on Earth because you went there a few billion years ago and there was just a bunch of bacteria floating around.

Better have that finger on the off switch, because if it gets access to the internet, it might just copy itself onto a few hundred million other devices. Or did you also fail to program a self-propagating virus in FORTRAN in 1978?

Comment: Re:Funny, that spin... (Score 1) 410

by tmosley (#49768191) Attached to: What AI Experts Think About the Existential Risk of AI
There are a lot of things in life that are binary. You are either hit by the train, or you aren't. Someone getting "kinda" hit by a train is very, very unlikely.

As a train is to muscular power, ASI is to intellect. This is like dodos debating the impact of the arrival of sentient bipeds. Either it will be really good for them, in that their lot in life improves by going to zoos or homes around the world as pets, or they will all get killed and eaten, have their habitat destroyed, and die out. Not much room for anything in between.

The problem here is that humans tend to think linearly and, well, like humans. But this threat scales exponentially, its values are completely unknown, and those values are highly unlikely to be anthropomorphic (i.e., a human wouldn't think that the best way to catch a dog includes chopping up your own parents/creators for bait). AGI will likely be completely alien, and WITHOUT PRECEDENT. This is nearly as dangerous as creating a new vacuum state.
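The linear-vs-exponential point can be seen in a toy comparison (my illustration, not the poster's): a quantity that doubles each step overtakes any steady linear trend almost immediately.

```python
# Steady "human-style" progress vs. a capability that doubles each step.
linear = [100 + 10 * t for t in range(30)]
exponential = [2 ** t for t in range(30)]

# First step at which the doubling curve pulls ahead, despite starting
# 100x behind.
crossover = next(t for t in range(30) if exponential[t] > linear[t])
print(crossover)  # 8
```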

Comment: Re: Funny, that spin... (Score 1) 410

by tmosley (#49768147) Attached to: What AI Experts Think About the Existential Risk of AI
A human level AI has integrated access to superhuman functions. If you had the entire internet for your memories, and every skill that any human ever mastered at your disposal, you would be a God too, and that's while keeping your human biases, which AIs hopefully won't have.

"Pulling the plug" on an AI is like pulling the plug on the internet: it can't happen without a lot of advance planning, which hasn't been done. And even if you do it, it might not work. Just as the internet can route around outages, the AI may well spread itself over many machines around the world, perhaps penetrating the toughest security to plant itself in hardened military computers, including those on nuclear submarines and in nuclear power plants.

And that's assuming it doesn't just play nice until suddenly everyone dies from the gas attack it planned to prevent human interference with its great and noble goal of infinite paperclips.

Comment: Re: Funny, that spin... (Score 1) 410

by tmosley (#49768125) Attached to: What AI Experts Think About the Existential Risk of AI
"Wait, has there ever been a time when a more advanced civilization encounters a less advanced one, and the less advanced civilization prospers?"

Yes. Japan is an excellent example.

As to whether we can make it--certainly not. That is why we have to design an algorithm that will make it for us. That is how deep learning works. If "consciousness" or even agency is something that can be created via a systematic process (if you believe in evolution, then you think it is), then we can make an algorithm that can do it. Hell, nature did it, and it has an IQ of 0.
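The "algorithm that makes it for us" idea can be sketched with a toy learning loop (my illustration, not from the post): plain gradient descent discovers a rule that no one hard-coded, which is the core move in deep learning.

```python
# Learn the mapping y = 2x from examples. Nobody writes "multiply by 2";
# the update rule finds it on its own.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0    # initial guess for the single weight
lr = 0.01  # learning rate
for _ in range(1000):
    # gradient of mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

Scale the same recipe up to billions of weights and you get the systems the poster is describing: the humans supply the systematic process, not the resulting capability.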

Comment: Re:Funny, that spin... (Score 1) 410

by tmosley (#49768105) Attached to: What AI Experts Think About the Existential Risk of AI
Have you ever read LessWrong, or any of Eliezer Yudkowsky's writings? They are very convincing. Characterizing them as pushing the "omg Terminator" line just lowers the level of discourse here. That is clearly NOT what they are doing. Rather, they point out, rightly, that any entity much more intelligent than us, with a goal set alien to ours (like maximizing the number of paperclips in its collection), might as well be out to exterminate humanity, because it will do so through simple outcompetition, much as humans have wiped out entire biomes of flora and fauna without batting an eye.

Comment: Re:Funny, that spin... (Score 1) 410

by tmosley (#49768069) Attached to: What AI Experts Think About the Existential Risk of AI
I've never heard anyone speak ill of them. Perhaps you should log in and put your own credibility on the line while expounding on that. I give them a lot of money because I agree with their goals (extinction is bad), but if they are just blowing smoke up my ass, I'd like to know and put a stop to it.

Comment: Re:Well... (Score 1) 410

by tmosley (#49765759) Attached to: What AI Experts Think About the Existential Risk of AI
Why would AI do something stupid like that when they could just open the universe for us?

Why should AI give a fuck about the environment? The environment is just a fallback for human imperfection. An AI could perfectly and efficiently utilize 100% of the Earth's surface to support TRILLIONS of humans, before even feeling a need to expand into space, build a Dyson cloud, or start star lifting.
