
Comment: Re:"No idea how... the brain works" (Score 1) 200 200

Improv,

Your description of imaging is true for functional mapping. However, it is not true for reverse engineering: you're missing fine-grained data over time. In both neurons and computers, state changes happen millions of times per second. In a neural field of a million neurons, for example, many neurons fire repeatedly at unsynchronized intervals. The best thermal imaging can capture only a handful of state changes per second, so it misses the vast majority of fine-grained activity over time, which is essential to reverse engineering the processes occurring (as opposed to merely mapping function).
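
To put rough numbers on the gap, here is a back-of-the-envelope sketch in Python; the figures are hypothetical and chosen only to show the scale of the undersampling:

    # Hypothetical numbers, for illustration only.
    state_changes_per_second = 1_000_000   # order-of-magnitude event rate in a neuron/gate
    imaging_frames_per_second = 10         # generous estimate for thermal imaging

    changes_per_frame = state_changes_per_second / imaging_frames_per_second
    print(f"state changes collapsed into each frame: {changes_per_frame:,.0f}")
    # => 100,000 distinct state changes averaged into every single image,
    #    so the time-ordered sequence of events is unrecoverable.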

Other biological imaging techniques, such as NMR and MRI, are geared to capturing fine spatial detail, not short time intervals. In your computer thermal example, there is no faster non-intrusive imaging technique to fall back on (no NMR equivalent, and so on). The next step is intrusive logic tracing, which requires a direct connection to individual chips in the processor, or to individual gates in a chip. We have no equivalent for those techniques in biology.

Thus extending biology's imaging processes to a computer is invalid.

Comment: Re: Fails to grasp the core concept (Score 1) 200 200

If you're going to redefine learning, then you can make anything learn. So that avenue is pointless. "My pencil just learned how to delete data it printed!"

Ironically, the AI community in essence did this very thing when it divided AI into "weak" and "strong" variants. It did so as the result of decades of failed milestones and underestimation of the problem. But the key word is "intelligence". Before weak AI, that word referred to versatile cognition and self-awareness at least on the order of mammals. Now "intelligence" has been diluted to refer even to simple state machines. In the meantime, software that could qualify as strong AI is nowhere in sight.

Comment: Re:Fails to grasp the core concept (Score 1) 200 200

serviscope,

Why impose the restriction "in such a way that it could hypothetically be performed by a computer"? That's circular, just as if I were to say "Define flying in such a way that it could hypothetically be performed by a pig."

In any event, the origin of this thread is the assertion that a computer is operating "the same way the human brain works", so you can't exclude the human brain as a standard of reference.

Comment: Re:There are ideas. Here's one. (Score 2) 200 200

The thing is, a rat can do a great many more things than run a maze. The neural network just runs the maze, and it doesn't do so with the flexibility and breadth of ability that the rat has. AI has to be much more versatile than a one-trick pony, and we don't even have one-trick ponies. An implicit assumption of neural networks is that increased complexity will somehow magically produce increased capability: more neurons equals more skills. But there is zero evidence for this optimism.

Comment: Re:Fails to grasp the core concept (Score 1) 200 200

Tmosley,

I challenge you to name one thing a computer has learned. Computers can store information, and they can process it using weighted decision-making. But they have never, ever "learned" anything. Researchers are anthropomorphizing when they say a program "learns". Computers never learned to play chess; they were programmed to do that. Programming is not teaching, and a computer running a program has not been "taught". Some programs can alter themselves in specific, pre-programmed ways, but that is not learning.
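
To make the distinction concrete, here is a minimal sketch in Python of what I mean by "weighted decision-making" and pre-programmed self-alteration; the numbers and the update rule are hypothetical, and everything the program does, including how it adjusts its own stored weight, was written by the programmer in advance:

    # A one-weight "learner": the update rule itself is fixed by the programmer.
    weight = 0.0
    examples = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]   # toy input/target pairs

    for x, target in examples:
        prediction = weight * x
        error = target - prediction
        weight += 0.1 * error * x   # pre-programmed adjustment rule (a gradient step)

    print(weight)   # the stored number changed, exactly as the code dictated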

Google acquired DeepMind Technologies last year and announced that it had devised a "Neural Turing Machine" that learns. But the NTM contains no neurons, so the name is highly misleading. According to Google, the name was chosen because the researchers were "inspired" by neurons. Not surprisingly, Google had to admit that it took similar license with the verb "learn." What they really meant is that the NTM's programming mimicked the results of prior neural network simulations (which also do not learn), only faster.

If this level of misdirection were used in any other branch of science, it would be called academic fraud.

Comment: Re:"No idea how... the brain works" (Score 1) 200 200

Gweihir,

Well said. Your description of yourself as a dualist with regard to intelligence is congruent with Maxwell's theory of the dual forces of electromagnetism. Maxwell predicted radio waves, but it wasn't until Hertz built a new apparatus that detected them that science accepted the theory as proven. In just this way, a modern cognitive researcher could predict an external source of order and information essential to intelligence but not detectable with today's technology. Some future scientist might well invent the apparatus to detect this information source. That it has many of the same properties philosophers attribute to the soul would not be surprising. Nor would it be unscientific.

It might even turn out that this suspected source for intelligence is the same information source for biological morphology. That would be in keeping with the essential attribute of science: to seek the minimum set of processes that explains observed phenomena. A Unified Life Theory, as it were.

Comment: Re:"No idea how... the brain works" (Score 1) 200 200

A true scientist would not rule out an external force that could be termed a soul, if it could be tested and measured. It would not be supernatural in that case, but part of nature. The term "supernatural" is a man-made descriptor for any phenomenon outside our current knowledge. Until Hertz demonstrated radio waves, the idea of transmitting information at a distance was considered supernatural. In reality, that misconception was just ignorance.

Genomics, like cognition, is another discipline that may have to admit to an information repository other than the one we have found in DNA, because the encoding for a vast amount of biological information -- such as the structure of organs, systems, and process sequences -- does not appear to exist in the genome. Call it epigenetics. Call it a biofield. It is still the antithesis of the self-contained genome. Neurophysiology should be at least as open to external information sources as genomics is.

Comment: Re:There are ideas. Here's one. (Score 1) 200 200

Fyngyrz,

You're taking the expression "no idea" too literally, and that's not really an argument. If I say "I have no idea how to drive a car," I obviously don't mean that I literally don't have a single idea; I mean that I cannot functionally perform that task.

Regarding your other point, in this discussion, human brains are exactly what is at issue. The WSJ said the paper they cited illustrates how Google scientists are "teaching computers to mimic some of the ways a human brain works."

Comment: Re:There are ideas. Here's one. (Score 2) 200 200

My point in this area would be: does our knowledge allow us to generate desired outcomes in novel subjects with any level of certainty?

For instance: we know with great certainty that you can stimulate the optic nerve and cause the subject to "see things" (and also: not see things that are really there).

On the other hand, with respect to cognition, can we do anything that simulates (reconstructs) a biological cognition system?

Can we learn a maze the way a rat does? I think so. Neural nets with reward and punishment inputs can perform approximately the same.

Similar outcomes prove nothing. Neural nets do not "learn" a maze the way a rat does, and in fact there is no evidence that learning, in the sense of brain cognition, occurs in neural nets at all. What they do is record a maze using a matrix of differential equations modeling how we think neurons work. Science has not demonstrated that those models are correct, and getting the same results as rats doesn't prove they are correct. We can also record a maze with a digital shift register and some input gates, but that doesn't mean that's how rats learn a maze. Moreover, if you put a cat in the maze, rats can adapt. Neural nets do not, because the goal for a neural net must be encoded in advance.
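
For the curious, here is a minimal sketch of what "recording" a maze under reward and punishment amounts to (a tabular toy in Python, not a neuron model; the maze, the reward, and the goal cell are all hypothetical and hard-coded in advance):

    import random

    # A 5-cell corridor; cell 4 holds the fixed, pre-encoded reward (the "goal").
    REWARD_CELL, STEPS, EPISODES = 4, 20, 200
    values = [0.0] * 5   # one stored number per cell: the "recording" of the maze

    for _ in range(EPISODES):
        cell = 0
        for _ in range(STEPS):
            move = random.choice([-1, 1])                    # wander left or right
            nxt = min(4, max(0, cell + move))
            reward = 1.0 if nxt == REWARD_CELL else 0.0      # reward is hard-coded
            values[cell] += 0.1 * (reward + 0.9 * values[nxt] - values[cell])
            cell = nxt
            if reward:
                break

    print([round(v, 2) for v in values])   # values rise toward the pre-defined goal

Move the goal (the cat shows up, the cheese moves) and nothing adapts until a programmer changes REWARD_CELL.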

With our understanding of even these simple cognitive tasks essentially at ground zero, we have no right to claim AI has made any progress at all toward true cognition. Everything done to date could be a dead end.

Comment: Re:There are ideas. Here's one. (Score 1) 200 200

Improv,

You are the one asserting that we know how the brain works. Knowing "what some parts do" is not the same as knowing how the brain works, i.e., how it performs cognitive tasks.

As the asserter, you need to provide the proof, not I. Name-calling is the refuge of the debater who has no actual argument. I'm still open to an example of one cognitive function science can explain. Absent that, at a minimum, we have no idea at all how far along we are toward AI. Without being able to describe how cognition is done, we can't program an AI to do it.

Comment: Re: There are ideas. Here's one. (Score 1) 200 200

Alas, no. See what you did there? The same thing the WSJ did. You inflated a tiny bit of research about a portion of the locust's visual system into "reverse engineered". In a nutshell, the paper you cite only posited a theory, based on some observations, for a possible neuronal substrate influencing excitation and inhibition in the visual field. The researchers then incorporated a mathematical model of that substrate into the control structure of a small mobile robot, which subsequently avoided collisions with objects. That's not cognition. Or a reverse-engineered visual system.

That's a motion sensor.
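
Roughly speaking, such a collision-avoidance motion sensor amounts to something like this hypothetical Python sketch (threshold-on-expansion, not the paper's actual model): the robot turns away when the apparent size of an object grows fast enough.

    def avoid_collision(apparent_sizes, threshold=0.2):
        """Return a steering command per step; turn when an object 'looms'.

        apparent_sizes: successive measurements of how large an object appears.
        """
        commands = []
        prev = apparent_sizes[0]
        for size in apparent_sizes[1:]:
            expansion = size - prev          # crude "looming" signal
            commands.append("turn" if expansion > threshold else "straight")
            prev = size
        return commands

    print(avoid_collision([0.1, 0.15, 0.5, 1.2]))   # ['straight', 'turn', 'turn']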
