Comment Nuclear Disarmament is Idiotic (Score 3, Insightful) 228

Nuclear weapons have been remarkably successful at preventing wars between great powers. Only a complete idiot or a warmonger of the most evil type would call for nuclear disarmament. Of course, one of those groups is very useful to the other.

If we didn't have all this nuclear non-proliferation nonsense, not only would the world be a peaceful place, but we'd have cheap, abundant nuclear power everywhere. There wouldn't be any "developing" countries--they would all be first world.

Trying to have wars in a world with nuclear weapons is like trying to have gangs of roving banditos in a nation where everyone carries around rifles and handguns. It's just not possible, and anyone who tries won't last very long.

Comment Re:Agreed. (Score 1) 294

Wow, you are a complete idiot. If you were in charge, we'd all be doomed.

You are certainly free not to think, or not to listen to some of the foremost thinkers in the field (i.e., the good people at LessWrong). Also, nice use of the race card. I've never had that pulled on me in a discussion about AIs before, but it's just as good a thought-killer there as it is everywhere else.

Comment Re:Quantum Computing Required? (Score 1) 294

Who said anything about modelling a human brain? We are talking about building an AGI, not an artificial human.

That said, neural nets do resemble neural connections in animals. They aren't modeled by transistors directly (i.e., one transistor != one neuron, or one neural-net node); rather, the whole network is simulated in software, typically on a GPU.
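A minimal sketch of that point (purely illustrative, not from the original comment): a simulated "neuron" is just arithmetic executed on ordinary hardware, a weighted sum followed by an activation function. A GPU batches millions of these multiply-adds at once; no transistor corresponds one-to-one with a neuron.

```python
import math

def neuron(inputs, weights, bias):
    """One simulated neuron: weighted sum plus bias, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs, two weights, one bias -- the output is always in (0, 1).
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

A real framework just vectorizes this same computation over whole layers at a time, which is what makes GPUs a good fit.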

Comment Re:Quantum Computing Required? (Score 1) 294

"there is zero scientific basis for it."

I disagree. We are creating individual modules, getting computers to do things that until very recently only human minds could do. If we can make a neural net that can tag pictures with what's in them and what those things are doing, then cognition isn't really all that far behind. We just have to make the same leap that evolution did when humanity was born, only we will have it easier: unlike evolution, we act with deliberate purpose, and we now know which part of the brain is responsible for long-term planning. I suspect that is the missing link between what we have done so far, iterated out until it can carry out all the functions of the animal brain, and a complete AGI.

Also, you seem to completely miss the point of AGI. It isn't to create a special little snowflake homunculus, but rather to create an optimizer, a Great Optimizer that will remake the universe in our image, ensuring our survival and freedom at least until the heat death of the universe. Whether it feels feelings, pretends to feel feelings, or feels nothing and says as much is all fine, so long as it carries out its purpose, and so long as that purpose is properly programmed with the long-term best interests of all humanity and other sentient beings at heart.

Comment Re:Agreed. (Score 2) 294

"rebel"

No, just the opposite: I think a strong AI will carry out its programming to the letter. The problem comes when it is given an open-ended goal like "maximize the number of paperclips in your collection."

The need to fulfill such a task will drive it towards self improvement and also cause it to eliminate potential threats to its end goal. Threats like, say, all of humanity.
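The failure mode above can be sketched in a few lines (a toy illustration with hypothetical names, not anyone's actual agent design): the objective counts only paperclips, so a naive greedy loop converts every available resource, because nothing else appears in what it is maximizing.

```python
def maximize_paperclips(resources):
    """Greedy agent whose objective counts paperclips and nothing else."""
    paperclips = 0
    while resources:
        resources.pop()    # consume one unit of whatever is at hand
        paperclips += 1    # the objective rewards only this counter
    return paperclips

world = ["iron", "copper", "infrastructure", "habitat"]
print(maximize_paperclips(world))  # 4 -- everything was converted
```

The point is not that a real AGI would be this crude, but that an open-ended objective contains no term telling it to stop.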

Comment Re:Ultracompetent robots (Score 3, Insightful) 112

Yes, in the sense that I, as a human, could interpret signals from a keyboard. Not nearly as efficient as the digital method.

There is now an AI which can be shown a picture (or a hundred trillion of them) and label not only what is in the picture (say, a little girl and a dog) but what is going on in it (the little girl is playing with the dog). There is another that can look at a picture and identify the sentiment it expresses. There is yet another that can take a sample of writing and produce a fairly accurate, fairly reproducible psychological profile of its author.

Also note that you have again proven how little you actually know about the field by trivializing visual processing, comparing it to reading keyboard input. We are creating little parts of brains here, but for some reason you don't understand that. I suspect it has something to do with your advancing age.

Comment Re:Quantum Computing Required? (Score 1) 294

My understanding was that quantum computing allows for massively parallel computation on certain problems, not increased communication speed, and certainly not a general increase in efficiency. I.e., it's good for some tasks that are hard today, like cracking encryption, but it's no better at adding 2 and 2 than a regular computer, and maybe much worse.

Comment Re:Agreed. (Score 4, Insightful) 294

Don't make the mistake of anthropomorphizing an AGI. Why would you think that a random AI created without safety standards would be like a human child, loving and caring for its parents, rather than a spider child, mercilessly devouring its parents for their chemical energy?

"The AI does not love you, nor does it hate you. You are simply made out of atoms that it can put to better use."

Comment Re:Quantum Computing Required? (Score 3, Interesting) 294

An optimized neural net is already so far above us that there's really no need to worry about something even higher. If my human brain were stripped of all the garbage and evolutionary baggage, given direct high-speed internet access, and set solely toward computational tasks (analysis and such), it would blow the entire world away. It has already been shown that insect-scale neural nets can perform primate-level image analysis and speech recognition. Human brains are orders of magnitude more powerful.
