Comment Re:But she still can... (Score 5, Informative) 573

A few notes: 1) This is not the only way she can communicate, simply the cheapest ($299 + iPad). The first paragraph of the article says as much. Later on, it does mention that the iPad app is the only one the girl took to right away.

2) The parents tried several much more expensive alternatives (including devices made by the plaintiffs), but they were all too heavy or too difficult for an illiterate four-year-old to operate. They're not just going for the cheapest option.

Comment Re:Small Sample? (Score 2) 205

However, significance is only accurate if you propose a hypothesis BEFORE you collect data, or you account for the number of hypotheses that you COULD have tested when you started hunting for correlations.

Wagenmakers et al. (2011) make a similar but slightly different point. The important thing is to distinguish between exploratory studies and confirmatory studies. In an exploratory study, hypotheses are based on correlations found after gathering data, while in a confirmatory study, the examined hypotheses are planned in advance. Both are important. Without confirmatory studies, exactly the criticism you raise applies; but without exploratory studies, non-intuitive insights are difficult to come by.

This is why replications of previous studies, with new data, are so important. Research is messy enough that the first examination of a hypothesis is at least partly exploratory, and it's up to the next five research papers to replicate the original instantiation of the hypothesis on the way to exploring the next elaboration of it.
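
To make the multiple-comparisons point concrete, here's a quick simulation (Python; the 200 subjects and 20 measures are numbers I made up). Hunting through every pairwise correlation in pure noise still turns up "significant" results at roughly the nominal 5% rate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pure noise: 200 "subjects" measured on 20 unrelated variables.
data = rng.normal(size=(200, 20))

# Exploratory "hunt": test all 20 * 19 / 2 = 190 pairwise correlations.
n_tests = 0
false_positives = 0
for i in range(20):
    for j in range(i + 1, 20):
        _, p = stats.pearsonr(data[:, i], data[:, j])
        n_tests += 1
        if p < 0.05:
            false_positives += 1

print(f"{false_positives} of {n_tests} correlations 'significant' at p < .05")
# Expect roughly 5% (9 or 10) by chance alone. A correction like Bonferroni
# (threshold 0.05 / n_tests) accounts for all the hypotheses we COULD have
# tested -- which a confirmatory study, with its hypothesis fixed in
# advance, doesn't need.
```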

Comment Re:Small Sample? (Score 4, Insightful) 205

Yeah! There's no way that trained scientists would be able to calculate how reliable a difference is given a certain sample size and an observed variance! That's just wayyyy too hard. The only way to do real science is to get 400,000 data points for every comparison; it's the only way to be sure.

In all seriousness, huge sample sizes are only important if we are comparing several variables (where a large sample size can give us good estimates for rare combinations of events) and/or looking for small effects (where a large sample size allows us to achieve small confidence intervals over the relevant comparisons). It's quite possible for a sample size of 124 to yield a significant difference for one effect if the effect is of at least moderate size.
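
And the calculation itself is routine. Here's a sketch with statsmodels (I'm taking Cohen's d = 0.5 as the conventional "moderate" effect size; the effect size in any particular study is an assumption on my part):

```python
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Subjects per group needed for a two-sample t-test to detect a moderate
# effect (d = 0.5) at alpha = .05 with 80% power:
n_per_group = power_analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"needed: ~{n_per_group:.0f} per group")  # roughly 64 per group

# Conversely: with 124 subjects total (62 per group), what power do we have?
power = power_analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=62)
print(f"power with n = 124: ~{power:.2f}")  # just under 0.8
```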

Comment Re:It's not just specialization, there is also fea (Score 1) 269

What a pointless thing to argue. In the 1980 paper, he does indeed spend a great deal of time defending the proposition -- he doesn't provide the formal argument everyone is familiar with until later (1990), where it is indeed taken as axiomatic. However, as early as 1983 Searle writes: "2. Syntax is not sufficient for semantics. That proposition is a conceptual truth. It just articulates our distinction between the notion of what is purely formal and what has content" (Minds, Brains, and Science, pp. 28-41).

So, yes, I stand by my assertion that the illustration is indeed a waste of time to argue about -- all that matters to the argument is the proposition.

I picked the earlier formulation because it is more defensible. The 1990 formulation just baldly asserts that "syntax is insufficient for semantics." Simply calling it axiomatic is not enough; you have to show why it is axiomatic. I'm open to the idea that a syntactic system cannot create its own semantics, but, if all we mean by semantics is an association between a sign and what it signifies, there's no reason that association could not be defined by some other syntactic system.

We get semantics from the visual system because the visual system provides a mental state (the sign) that corresponds with or represents something in the real world (the signified).

To the computer, there is no "real world": there is no distinction between data pulled into memory from a video camera and data from a stack of Hollerith cards, nor between data already in memory and data being gathered at the moment it's accessed. The computer is just manipulating meaningless symbols (and even that's a stretch, as the computer can't make such a distinction!). Meaningless symbols in relation to one another are ... meaningless symbols, being manipulated meaninglessly.

Well, to the human brain there is also no such thing as a "real world." Haven't you seen The Matrix? ;) We don't experience the real world directly; brains interpret signals from our various senses.

Finally, I'll just point out you still haven't explained why neurons can cause a capacity for Chinese (or, if you don't like the language examples, calculus, or baseball, or music).

The alternative is to posit a non-physical explanation. Searle doesn't deny that brains cause minds -- he only argues that whatever brains do to cause minds cannot be computation alone. I don't have the answer, and neither does anyone else. I suspect that the answer will not come from philosophy or neuroscience, but from physics, as a necessary consequence of some undiscovered bit of reality.

But this question is at the heart of the Chinese Room argument. In the 1990 presentation you favor, Searle says "if I do not understand Chinese solely on the basis of running a computer program for understanding Chinese, then neither does any other digital computer solely on that basis." In the argument, the man in the Chinese room is a substrate for the computation, just part of the machine. We don't expect individual neurons to understand Chinese, or individual transistors, so we should also not expect the man in the room to understand Chinese. The argument, and this was my original point, is just misleading. It doesn't show anything.

Of course, this is not going to influence your position, because you believe that it is self-evident that syntax is insufficient for semantics, and take, as Searle came to, the Chinese Room Argument as an illustration rather than an argument. I don't see it as self-evident at all, since, again, semantics is an association between a sign and the signified, and those associations are one kind of thing a syntactic system can compute. And with this, I will conclude my participation in this discussion. Good day.

Comment Re:It's not just specialization, there is also fea (Score 1) 269

You seem to be hung up on the illustration, which I agree has caused more confusion than clarity. Again, the illustration has nothing to do with the argument; the claim in question, "syntactic content is insufficient for semantic content," is taken as axiomatic in the argument proper. The room, the paper, etc., are completely irrelevant.

Aha! Here's the issue. Searle specifically does not take that claim as axiomatic in the argument proper. From page 422 of Behavioral and Brain Sciences, Vol. 3, No. 3:

But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding? This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.

"Why not?"

Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality.

(emphasis added by me) So Searle thought that the CRA does show that syntax is insufficient for semantics, and did not take it as axiomatic. I'm actually quite curious what you think the CRA is supposed to do if not show that syntax is insufficient for semantics. I'm also curious why you think syntax is insufficient for semantics if not for the CRA.

Clearly, it's caused you some confusion, as you seem inexplicably focused on language.

I'm a linguist, what can I say? :p Plus, the CRA focuses on language, so it's natural to use examples from language.

Meanwhile, the visual system will perform syntactic operations on bundles of visual percepts to identify objects, providing the semantics for the cross-situational word-learning system.

This is the only bit of your explanation that is really relevant. See, this is where you introduce semantics seemingly out-of-nowhere. If you can get semantics from a computational system, you don't need to say anything else. The problem, of course, is how do you get semantics from the "visual system"? Can congenitally blind people have intentional states? "It just happens" isn't much of an argument!

Please, it's irritating when you put words in my mouth. I did not say that "it just happens," and I did not say that visual processing is the only source of semantics. Blind people can still hear and touch and taste and smell, for example. The next question is: "Can somebody with no ability to touch or see or hear or smell or get any input from the outside world whatsoever have mental states?" Well, maybe not. I'm open to the possibility, however, that some very general aspects of the world are coded in our DNA, and possibly these could form the basis for some kind of mental states. It's an empirical question, so I'll wait for empirical evidence.

We have to be precise about what we mean by semantics. Semantics is the association of a sign with a thing signified: the association between the name "Aunt Millie" and whatever mental state corresponds to Aunt Millie (perhaps a bundle of smells, facial characteristics, and so on), or, for a non-linguistic example, the association between a certain configuration of cochlear fluid (the sign) and the knowledge that one is falling (the signified). A major part of the purpose of many cognitive systems is just to provide meaningful symbols for other cognitive systems to manipulate. We get semantics from the visual system because the visual system provides a mental state (the sign) that corresponds with or represents something in the real world (the signified).

Is this what you mean by semantics? Or do you mean something else?

Of course, this is all a very young field, and I'm open to evidence either way.

In another post I mention Jacoby, take a look at his research. To kill computationalism, check out Fetzer and then Bringsjord.

I'll take a look at them.

Finally, I'll just point out you still haven't explained why neurons can cause a capacity for Chinese (or if you don't like the language examples, calculus, or baseball, or music. I'm fine with the man being in a baseball robot with time slowed down or something) without individually having a capacity for Chinese, but the man in the Chinese room cannot cause a capacity for Chinese without himself having a capacity for Chinese.

Comment Re:It's not just specialization, there is also fea (Score 1) 269

Yes, but for reasons entirely independent of the CRA. I think the CRA gets its rhetorical force dishonestly: by putting a man in a situation involving communication, we intuitively expect the man to be one of the communicators and so to understand the language, but it is not the man communicating in the CRA any more than it is my tongue communicating when I talk. The CRA just muddies the question.

I think we're going to end up looking at the semantics of one level of analysis as the syntax of another. For example, cross-situational word-learning can be formulated "syntactically" to learn correspondences between words and the objects they refer to, providing part of the referential semantics for a syntactic system for combining words into meaningful phrases. Meanwhile, the visual system will perform syntactic operations on bundles of visual percepts to identify objects, providing the semantics for the cross-situational word-learning system. And so on and so forth, until we have semantic primitives that are just biochemical processes. I expect that there would be some cross-talk between different levels of analysis (there are cross-linguistic correlations between color terms and color perception, for example), but it should be fairly limited. We wouldn't expect pragmatics to interact all that much with tone perception, for example.
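
Here's a toy version of the cross-situational piece, to show what I mean by "formulated syntactically" (the co-occurrence counting is the simplest possible strategy, and the situations are ones I invented; real models are probabilistic):

```python
from collections import Counter, defaultdict

# Each "situation" pairs the words heard with the objects in view.
# Neither words nor objects mean anything to the learner up front.
situations = [
    ({"look", "a", "dog"},   {"DOG", "BALL"}),
    ({"the", "dog", "runs"}, {"DOG", "TREE"}),
    ({"a", "red", "ball"},   {"BALL", "TREE"}),
    ({"the", "ball"},        {"BALL", "DOG"}),
]

# A purely formal operation: count word/object co-occurrences.
counts = defaultdict(Counter)
for words, objects in situations:
    for w in words:
        for o in objects:
            counts[w][o] += 1

# The learned lexicon: map each word to its most frequent co-occurring object.
for word in ("dog", "ball"):
    referent, _ = counts[word].most_common(1)[0]
    print(word, "->", referent)
# dog -> DOG, ball -> BALL: a sign-to-signified association computed by
# nothing but symbol manipulation.
```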

Of course, this is all a very young field, and I'm open to evidence either way. I think it would be fascinating if we discovered that the brain did something that cannot be described computationally. I just don't think the CRA provides a clear argument on this point.

Comment Re:It's not just specialization, there is also fea (Score 2) 269

I'm not, now, trying to address that issue. I'm saying that the CRA also does not address that issue.

Really? The whole point of the illustration (the room, paper, etc.) is to help explain/bolster the assertion that syntax is insufficient for semantics. You're confusing the example with the claim. Hence, you fall under #2 above.

Yes, the point of the illustration is to explain and bolster the assertion that syntax is insufficient for semantics. My point is that it fails to do so. Do you have an explanation for why neurons can cause a capacity for Chinese without themselves having a capacity for Chinese, while the man is unable to cause a capacity for Chinese without himself having a capacity for Chinese?

Comment Re:It's not just specialization, there is also fea (Score 1) 269

You created a magic box and called it 'mind'.

Nope! I did not. I said that the human capacity for language is caused by things (neurons, brain chemistry) that do not individually have a capacity for language, so there's no reason to require the man in the Chinese room to have a capacity for Chinese.

Comment Re:It's not just specialization, there is also fea (Score 1) 269

Ah, but it is a magical explanation. Saying "it just happens" offers you nothing over saying "god does it".

Hmm, except I did not say "it just happens." I said that if neurons can cause a mind that understands language without any of the neurons themselves understanding language, then the man in the Chinese room can cause a mind that understands Chinese without himself understanding Chinese. I can't be specific about how it happens because the CRA simply asserts that there is a program that produces suitable responses, without giving any indication of how the program works.

It also doesn't address the claim at issue: that syntax is insufficient for semantics.

I'm not, now, trying to address that issue. I'm saying that the CRA also does not address that issue: it asserts that the Chinese room set-up does not understand language because none of its component parts understand language, but clearly a human mind understands language while its component parts do not understand language.

Believe in your magical computation fairy; I'll stick with Searle: whatever the brain does that causes consciousness, it cannot be computation alone.

I haven't argued that the mind is caused by computation. I've argued that the CRA does not address the issue. I do view the mind as fundamentally computational, but not because Searle's argument is confused.

Comment Re:It's not just specialization, there is also fea (Score 1) 269

Most "refutations" of the CRA fall in to four camps:

1) Deny it outright and posit a magical explanation (Systems reply)

This is the correct one. My neurons do not understand English, but the mind computed by those neurons does. Similarly, the man in the Chinese room does not understand Chinese, but the mind he (along with the program instructions) computes does. No magic required.

Comment Re:It's not just specialization, there is also fea (Score 3, Insightful) 269

I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical machine or something may.

Why do you think that? Silicon is also a chemical. There's nothing magical about liquid chemicals.

Cognitive scientists typically try to analyze cognitive systems in terms of Marr's levels of analysis. Cognitive systems solve some problem (the computational level) through some manipulation of percepts and memory (the algorithmic/representational level) using some physical system (the implementational level). The mapping from neurons and chemical slushes to algorithms is extremely complex, so most work focuses on providing a computational level characterization of the problem, occasionally proposing a specific algorithm. Since the same computational goal can be accomplished by different algorithms (compare bubblesort to quicksort, or particle filters to importance sampling, or audio localization in owls to audio localization in cats), and the same algorithm can be run with different implementations (consider the same source code compiled for ARM or x86), it's just a waste of time and energy to insist that we recover all of the computational, algorithmic, and implementational details simultaneously.
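
The sorting comparison in miniature, if it helps (Python, purely illustrative): two different algorithmic-level stories computing exactly the same computational-level function.

```python
def bubblesort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order elements."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quicksort(xs):
    """O(n log n) expected: partition around a pivot and recurse."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

# Identical at the computational level, distinct at the algorithmic level --
# and either one runs unchanged on ARM or x86 (the implementational level).
data = [5, 2, 9, 1, 5]
assert bubblesort(data) == quicksort(data) == sorted(data)
```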

However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient.

I've never found the Chinese room argument convincing. It just baldly asserts "of course the resulting system is not sentient!" Why not?

I disagree with the article. People haven't given up on strong AI; we've just realized that it is enormously more difficult than we originally thought. If today's best minds were to attack the problem, we'd end up with a hacked-together system that barely worked. Asking why computer scientists aren't working on strong AI is like asking why physicists aren't working on intergalactic teleportation: it's really, really hard and there's a lot to accomplish on the way.

Comment Re:Feelings are more important than science (Score 3, Interesting) 408

This is how positive bias works. Those 99 negative outcomes need to be reported to show that the one false positive is a false positive, but they aren't reported.

I just want to point out that it's not quite so straightforward. A null result for an effect is not, in and of itself, negative evidence for that effect; it's just a lack of evidence for that effect. It's always possible that a different set of materials, a larger sample size, an additional control, more sophisticated stats, or any number of methodological modifications would succeed in finding an effect. 99 null results with bad materials are not evidence against even a small number (not one!) of positive results with good materials. Null results are under-reported because they are much more ambiguous, not (only) because they are harder to sensationalize.
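
A small simulation of the ambiguity (the effect size and sample sizes are arbitrary choices of mine): give 99 studies a real but small effect and too few subjects, and most come up null anyway, while a single adequately powered study finds the effect just fine:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d = 0.2  # a real but small effect (assumed for illustration)

# 99 underpowered studies of the same real effect: n = 20 per group.
nulls = 0
for _ in range(99):
    a = rng.normal(0.0, 1.0, 20)
    b = rng.normal(d, 1.0, 20)
    if stats.ttest_ind(a, b).pvalue >= 0.05:
        nulls += 1
print(f"{nulls}/99 underpowered studies came up null")  # typically ~90

# One well-powered study: n = 600 per group (power ~0.93 for d = 0.2),
# so this usually comes out significant.
a = rng.normal(0.0, 1.0, 600)
b = rng.normal(d, 1.0, 600)
print(f"well-powered study: p = {stats.ttest_ind(a, b).pvalue:.4f}")
```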
