
Comment Re: A model can't confirm any hypothesis (Score 1) 100

A model can be based on data, but the output of the model itself is not data. Ask any scientist: a model cannot confirm a hypothesis. Scientists try to explain aspects of the real world by comparing them with models that are based on familiar mechanisms. Scientific models must be testable and they are accepted by scientists only after they have been tested in the real world.

With data.

Comment A model can't confirm any hypothesis (Score 1) 100

"An anonymous reader quotes a report from Science Daily: A new World Health Organization (WHO) air quality model confirms that 92% of the world's population lives in places where air quality levels exceed WHO limits."

Only data can confirm a hypothesis. That's basic science. The WHO's in Whoville are dreamers, not scientists.

Comment Re:Bullspit (Score 1) 87

By "mimic intelligence" I meant to operate in the same way biological intelligence does. Perhaps I should have said "replicate intelligence" to be totally clear. And you're right, we can't replicate something we can't take apart and explain.

Your belief that we will eventually develop genuine AI seems premature, since we don't yet understand intelligence. What if, for example, the brain is just a transceiver that communicates with the true seat of intelligence, which happens to be in another dimension that we can't yet perceive scientifically? Science went millennia, after all, before detecting phenomena such as radio waves, subatomic particles, and quantum mechanics.

There simply is no proof that intelligence is materialistic, and plenty of evidence that it isn't (such as neuroplasticity, as seen in aphasic brain function reassignment). Yet AI researchers pointlessly bang away at this approach without having done their foundational homework.

I think we should focus on defining intelligence rather than jumping to the end game of creating one.

Comment Re:Bullspit (Score 1) 87

Imagine I started calling a blender an "artificial digestive system" that mimics human digestion. Would you buy that? Not if you're a biologist. Where are the enzymes? Where are the biochemical pathways? Where is the nutrient separation and distribution network? Where, indeed, is the anus?

Yet my blender claim is more accurate, by far, than the claim that Artificial Intelligence mimics biological intelligence. The operative word here is "intelligence." We're talking actual cognition, not pre-programmed reactions. No biologist calls a Venus flytrap intelligent, even though it has enough cellular automation to catch and digest flies. An ant has the beginnings of intelligence, although we have very little understanding of how even this primitive life form cogitates.

Nobody is saying that computer emulation of various tasks that humans do isn't useful. It is useful. It just isn't intelligent. Not even as intelligent as an ant.

Stanford AI researcher Andrej Karpathy wrote an excellent essay entitled "The state of Computer Vision and AI: we are really, really far away," in which he summarizes how little we've accomplished in terms of AI's original goals. The piece was published in 2012, and AI hasn't moved a nanometer since.

Comment Re: Test it with the following (Score 1) 87

Humans have a lifetime of cognitive learning to draw upon when translating. For example, consider this classic linguistic conundrum: "Fruit flies like a banana." Does it mean that fruit, in general, has the aerodynamic qualities of one class of fruit, the banana? Or does it mean that fruit flies, being insects that subsist entirely by eating fruit, particularly enjoy eating bananas?

Humans resolve this kind of ambiguity correctly every time.
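The two readings can be sketched as competing parse structures. Here's a minimal illustration in Python; the tuple-based trees are my own invented representation, not the output of any real parser:

```python
# The structural ambiguity in "Fruit flies like a banana", sketched as two
# possible parses (hypothetical tuple-based trees, not real parser output).

# Reading 1: "fruit" (noun subject) + "flies" (verb) + "like a banana" (adverbial)
reading_aerodynamic = ("S",
    ("NP", "Fruit"),
    ("VP", ("V", "flies"), ("PP", "like", ("NP", "a banana"))))

# Reading 2: "fruit flies" (noun compound subject) + "like" (verb) + "a banana" (object)
reading_insect = ("S",
    ("NP", "Fruit flies"),
    ("VP", ("V", "like"), ("NP", "a banana")))

def subject(parse):
    """Return the surface subject (the NP's text) of a parse tree."""
    return parse[1][1]

print(subject(reading_aerodynamic))  # Fruit
print(subject(reading_insect))       # Fruit flies
```

Both trees cover the same five words; nothing in the string itself selects one over the other. A human picks the insect reading instantly, using world knowledge no table of facts has yet captured.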

The classic response given by AI Researchers to this class of linguistic challenge has always been "Why, once we have enough facts stored in a computer, in appropriate clever structures massaged by surely simple algorithms, the ability to do this kind of task will just fall out as a natural consequence." It was assumed, from the dawn of AI in the 1950s, that once computers had some more speed and memory this sort of achievement would be easy to brute force by calculation alone.

This turned out not to be true. So in the 1970s AI researchers, who still had no idea how humans do this sort of thing (or any other kind of cognition), said "Well, we don't know how people do it, but we have a dim idea how a brain is structured with neurons and synapses and whatnot, so let us simulate crude mathematical models of these 'neural networks', and perhaps the AI corpus will magically start functioning like a human brain."
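To see just how crude that mathematical model is, here is a minimal sketch of one artificial "neuron": a weighted sum of inputs squashed through a sigmoid. The weights and bias are arbitrary illustrative values of my choosing, not the product of any training:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate" in (0, 1)

# Two "synapses" feeding one "neuron": activation = 2.0*1.0 + (-1.0)*0.0 - 1.0 = 1.0
out = neuron([1.0, 0.0], weights=[2.0, -1.0], bias=-1.0)
print(round(out, 3))  # 0.731, i.e. sigmoid(1.0)
```

That's the entire model: multiply, add, squash. Whatever a biological neuron is doing, with its thousands of synapse types, neurotransmitters, and timing-dependent behavior, it is not this.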

Sadly, no. Nearly fifty years later, AI researchers are still no closer to replicating human cognition, either in understanding or in blind replication. We can do parlor tricks, yes, such as Google Translate. These tricks can even be useful. But they're still just table-driven automata, without consciousness, without cognition (which we don't understand at all anyway), and certainly without Intelligence, artificial or natural.

I call this "Cargo Cult Science". Like the South Pacific Islanders who built stunningly accurate (but non-functional) bamboo replicas of aircraft, radios, and other technology artifacts left by WW2 soldiers who blipped through their lives, AI researchers replicate the outward forms of intelligence while we don't have the foggiest inkling of how intelligence actually works.

AI has failed, so far. No breakthroughs on the horizon, either. The "singularity" is just wishful thinking.

Comment Re: Test it with the following (Score 1) 87

You don't see how it's possible for humans to do a better job translating than machines? You don't see that humans have the ability to understand the context of a document, to grasp its semantics and logical implications, and derive the meaning and intent of the author?

You aren't, by any chance, a machine, are you?

Comment Since we don't know how the brain works... (Score 3, Insightful) 87

How can this translation software be "brainlike"? Let's see... It doesn't translate the way a human brain does; it produces results a small fraction of the quality a human brain produces; and it can be fooled by trivial procedures like reverse-then-forward translation, where human brains are not.
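The reverse-then-forward test can be sketched like this; the `translate` function below is a deliberately lossy stand-in stub I wrote for illustration, not a real MT API, but the drift it exhibits is the same failure mode a round trip through a real system can produce:

```python
# Sketch of the reverse-then-forward (round-trip) translation test.
# `translate` is a hypothetical stub: a toy word-for-word table that is
# deliberately lossy, the way real MT can be when one target word ("temps")
# covers two source meanings ("time" and "weather").

def translate(text, src, dst):
    """Stand-in for a machine translation system (hypothetical stub)."""
    en_to_fr = {"time": "temps", "flies": "vole", "weather": "temps"}
    fr_to_en = {"temps": "weather", "vole": "flies"}
    table = en_to_fr if (src, dst) == ("en", "fr") else fr_to_en
    return " ".join(table.get(word, word) for word in text.split())

def round_trip(text):
    """Translate English -> French -> English and return what survives."""
    return translate(translate(text, "en", "fr"), "fr", "en")

print(round_trip("time flies"))  # "weather flies" -- the meaning didn't survive
```

A system that understood what it was saying would notice the meaning changed on the way back. A lookup table cannot, because there is no "meaning" anywhere in it to preserve.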

I know brains, and those ain't no brains.

Comment "It relies on AI"... (Score 1) 91

Therefore it cannot work, because there is no such thing as AI. The piece notes that researchers proved they succeeded because the EQR (pronounced eeker?) "detects emotions on par with an electrocardiogram (EKG), a common wearable device medical professionals use to monitor the human heart". But an EKG can't detect emotions either. It can monitor the heart; that's it. Any inference of emotion is pure voodoo. The next thing you know, they'll say it performs on a par with lie detectors, which I suspect it does, since lie detectors are a proven pseudoscience, after all.

Comment Re: No good deed goes unpunished (Score 1) 71

...what exactly is wrong about locating vulnerabilities on a system and informing the owner(s)? Sure it seems sketchy, but if no actual damage was done and no attack was made, then how can we possibly assert that it was *wrong*?

What's your home address? I want to break in to test your security measures. How can you possibly assert that I am *wrong*?
