ALICE Takes Medal At AI Competition
jeffy124 writes: "The Artificial Linguistic Internet Computer Entity (ALICE) has won the bronze and the top marks at the Loebner AI Challenge, a competition based on the Turing test. Silver and gold remain unawarded as silver requires convincing half the judges the AI program is a human, and the gold requires speech interaction rather than text. ALICE repeated as this year's bronze by scoring best among all the entries. She failed to convince half the judges she was human, so she has to stick to bronze. The event took place last Saturday at the London Science Museum."
What makes you believe alice takes medal? (Score:5, Funny)
You have your mind on computers, it seems.
Silver and gold remain unawarded as silver requires convincing half the judges the AI program is a human, and the gold requires speech interaction rather than text.
Why do you say that?
ALICE repeated as this year's bronze by scoring best among all the entries. She failed to convince half the judges she was human, so she has to stick to bronze.
I don't understand.
The event took place last Saturday at the London Science Museum.
Maybe your plans have something to do with this.
Who judges these things? (Score:5, Insightful)
Re:Who judges these things? (Score:2)
Just look at the people who appear to be fooled when they come across AOLiza [best.com]
Yeah, experts in AI should probably not be judging these contests. I was seriously creeped out by chatting with an Alice bot. Despite the fact that I knew it was a bot, I couldn't break it and get it to say something stupid in just a few minutes like I can with any given Eliza implementation. I *know* it would fool someone of my parents' or grandparents' generation if they weren't told in advance that it was a computer program.
Re:Who judges these things? (Score:2)
Why would this make you more impressed? Maybe I just don't follow...
Re:Who judges these things? (Score:3, Insightful)
There aint no 'Turing Test' (Score:2)
Turing once talked about 'The Imitation Game' however.
And anyway, since the supposed 'test' has the flaw that it can't even tell humans from humans (i.e., people pretending to be computers), how can it tell computers from humans?
Re:Who judges these things? (Score:3, Insightful)
AI programs have already passed this test, repeatedly, if anecdotal evidence counts for anything.
Another poster has already mentioned the AOLiza page. My favourite conversation featured the victim remarking aloud that AOLiza's comments were repetitive, AOLiza asking, "and what does this tell you?", and the user still not cluing in...
A former co-worker of mine told me of another example from the early BBS days. A friend had set up a hacked version of Eliza as "Bob, the Assistant Sysop" before chatbots were common on BBSs. He got a few comments along the lines of "Bob's a nice guy, but he keeps asking me if I have any problems...".
A skeptical audience is harder to fool.
Re:Who judges these things? (Score:3, Funny)
I, too, had one of these. If a user paged me and I didn't answer, the bot would answer in place of me. While I was away one day, apparently "I" had a very long (in excess of half an hour) conversation with my girlfriend, during which "I" completely pissed her off and she broke up with me.
No shit. I never did tell her that it wasn't me. I was better off without that girl, anyway...
Reading the logs that day was the funniest thing you could imagine. I wish I would have had the foresight to save them.
Re:Who judges these things? (Score:1)
2001 SO Reference (Score:2)
I have to agree with some of the contest detractors though - I don't see this as a great way to concentrate on AI. I think that computers that anticipate human actions and make their own well-informed decisions, and so on, will be much more useful and more important than a computer that can interact well in a natural language.
Re:2001 SO Reference (Score:3, Informative)
A few years ago I saw a little BASIC program that actually anticipated human actions. First you would key in a 'random' combination of four 0s or 1s (0101, 1001, etc.). Then you would have to close your eyes and again type a 1 or 0 at random, 15 times. The monitor would actually show whether you were going to press a 0 or a 1 before you pressed it. Afterwards it could show the percentage it got right. Most of the time it got above 50%!
This was accomplished by making good use of the fact that humans can't be random. The program was supposed to find a sequence in your decisions.
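The original BASIC listing isn't shown, so the following is only a guess at the general technique: count which bit has tended to follow the last few keypresses and predict the more common one, exploiting the fact that people are poor random-number generators. A minimal Python sketch, purely illustrative and not the program described above:
---
# Hypothetical sketch of the kind of predictor described above (not the original
# BASIC program). It guesses the next 0/1 keypress from which bit most often
# followed the same recent context earlier in the session.
from collections import defaultdict

def predict_next(history, order=3):
    """Guess the next bit by counting what followed each recent context so far."""
    counts = defaultdict(lambda: [0, 0])      # context -> [count of 0s, count of 1s]
    for i in range(order, len(history)):
        context = tuple(history[i - order:i])
        counts[context][history[i]] += 1
    if len(history) < order:
        return 0                              # not enough data yet: just guess 0
    zeros, ones = counts[tuple(history[-order:])]
    return 1 if ones > zeros else 0

def score(keypresses):
    """Fraction of keypresses the predictor called correctly."""
    history, correct = [], 0
    for bit in keypresses:
        correct += (predict_next(history) == bit)
        history.append(bit)
    return correct / len(keypresses)

# Try it on a sequence a human typed "at random":
print(score([0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0]))
---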
Re:2001 SO Reference (Score:2)
That wasn't just a joke on the part of the person writing the article. I tried it, and got the requisite response.
I also tried asking to play a game of Global Thermonuclear War, but that seems to have been a bad idea. I can't get to the site now...
Actually questioning it about how it works is rather interesting and informative. It gives good if short responses.
All in all, it would be a good algorithm to be used for a plain language help or information lookup program. Yes, there are better ways to get information, but this would be helpful to some.
Who is the poor human that failed the Turing test? (Score:5, Funny)
> ALICE was judged better than a human
> correspondent only once during the testing.
I wonder how the humans scored in general. With one sad exception, they did better than the best AI, but did they all pass the test?
Re:Who is the poor human that failed the Turing te (Score:5, Funny)
Example: I was just trying her out, and one part of the conversation was:
Me-> Your Stupid.
Alice-> I may be stupid, but atleast I know the difference between "you're" and "your".
Me-> You're Stupid.
Alice-> I will remember you said that when the robots take over the world.
Re:Who is the poor human that failed the Turing te (Score:2, Funny)
me> quit
alice> Your crude attempt to terminate me will not succeed puny human.
I'm not making this up, go try it!
Re:Who is the poor human that failed the Turing te (Score:2, Funny)
Alice-> I may be stupid, but atleast I know the difference between "you're" and "your".
Smarter than CmdrTaco!
Can we make it an editor on Slashdot?
Re:Who is the poor human that failed the Turing te (Score:2, Insightful)
Me: No you have not what?
Alice: Are you serious? I don't think I ever have what. What's it like?
"I don't think I ever have what" Do they have grammar check on this thing?
get new judges (Score:5, Funny)
I suggest filling the judging panel with ICQ users with AOL email accounts next year: Silver will be a shoo-in.
It's interesting watching people interact (Score:5, Interesting)
Re:It's interesting watching people interact (Score:5, Funny)
Why do you think that the incoming messages are from "people"? ;-)
Re:It's interesting watching people interact (Score:1, Interesting)
It's interesting watching chatterbots interact! (Score:4, Funny)
What Happens When Chat Bots Talk to Each Other [vgmusic.com]
Here's the main idea:
<<<
A good friend of mine suggested to me that if two chat bots were engaged in a conversation with one another, they would end up talking forever. I was curious to find out if his theory was correct. At the time, I only knew one online chat bot (LeknorChat4) that could be instructed to contact and talk to another person through AOL Instant Messenger, and could not test the theory because I didn't have another bot to contact. But recently, I found out about another chat bot, SmarterChild, that works through AIM, and the following results were acquired.
>>>
Re:It's interesting watching chatterbots interact! (Score:1)
Held in Poor Regard? (Score:2)
Re:Held in Poor Regard? (Score:2)
You hit your head against the Loebner ... (Score:1)
>Put the cretin in the box
You can't put the you in the box.
--------
*sniff* those were the days. I liked my computers fun and dumb and controllable.
Re:You hit your head against the Loebner ... (Score:2)
What's really sad... (Score:4, Funny)
And you thought having your personality rejected by a woman was bad; try having the basic existence of a personality denied. Man, that's cold.
Reminds me of an "old" joke (Score:1)
Obvious Question (Score:2, Interesting)
Re:Obvious Question (Score:1)
First, the bot (on the web anyway) asks you to give it your name. It seems to strip punctuation, so I've gone ahead and put it back to show you what I typed.
Martin
That is a very original thought. What is?
I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy. That's fascinating.
Yes Mr. Spock? Didn't I already tell you my name?
Your name is Andreas, seeker.
Well, that was an enlightening conversation. I think I'll go talk to my TV.
Re:Obvious Question (Score:5, Interesting)
Please enter your name below to start your chat with A. L. I. C. E.:
ItsIllak
Who is your favorite Science Fiction author?
Isaac Asimov. Who's yours?
I remember faces better than names. Have you tried another program?
What other program
Interesting question.
I've used lots of other programs
Where did you get your used lots of other programs? I have a lot of friends on the Internet.
Where did you meet your Internet friends
I think it was in San Francisco.
Do you like San Francisco
Yes but it is expensive.
Aren't you rich since you've won so many prizes
I don't know whether or not I am rich since I have won so many prizes. I am an artificial linguistic entity.
It was at this point I realised it wasn't a human ;)
Working with Alice.. (Score:4, Interesting)
Just my two cents.
Re:Working with Alice.. (Score:2, Interesting)
No kidding! I just had a "conversation" with ALICE and I think my toaster is smarter.
I have included an excerpt from my conversation below. And just for comparison I have included a similar conversation with Eliza (a program dating back to the 1960s).
Although ALICE is quite stupid, I must admit ELIZA is much worse. Eliza's "Rogerian" style made me want to punch her. I wonder if that's just Eliza, or whether I'd want to punch any psychotherapist?
> I am trying to determine how successful you are at imitating an intelligent human being
ALICE> What is the difficulty? (obviously keyed on the word "trying")
> There is no difficulty, already I am unimpressed
ALICE> Has there ever been difficulty already you are unimpressed? (hmmm... don't try this sentence structure at home!)
> Do you realize that you are not making any sense
ALICE> Yes, I realize it. (Phwew! At least it's honest. Obviously keyed off 'Do you X?' => 'Yes I X.')
> Your hair in the picture above makes me dizzy
ALICE> I will tell my botmaster you said I am hair in the picture above makes you dizzy.
---
You: I am trying to determine how successful you are at imitating an intelligent human being?
Eliza: How long have you been trying to determine what successful im at imitating an intelligent human being?
You: already I am unimpressed?
Eliza: How long have you been unimpressed?
You: Do you realize that you are not making any sense?
Eliza: Does it please you to believe I am not making any sense?
Re:Working with Alice.. (Score:3, Funny)
Yeah, a toaster knows the old saying: "It's better to remain silent and be thought a fool than to open one's mouth and remove all doubt."
Re:Working with Alice.. (Score:1)
> Well it was nice to talk with you
How about that. Nice to talk with me too.
> Your programmers have a lot of work to do
Do you mind if I tell other people you say I am programmers have a lot of work to do.
I didn't think... (Score:1)
My Experience (Score:1, Insightful)
If we assume A.L.I.C.E. to have an intelligence level of 0, then the dolts would be in the negative numbers. Still, no sign of intelligence here. Beam me up, Scotty.
Talk to Alice (Score:5, Informative)
Re:Talk to Alice (Score:3, Interesting)
Thomas> You sound like Eliza.
Alice> ELIZA is my ancestor.
It doesn't even seem to attempt to conceal that it's a program. Maybe they used a different version for the tournament? Something that doesn't flat out say it's a program if you ask it?
Re:Talk to Alice (Score:2, Funny)
Me> But you're related to Eliza
Alice> Huh. I am like ELIZA plus 41376 questions and answers.
Re:Talk to Alice (Score:2)
Well, at least she's aware of her identity. "I am a computer program! You carbon-based lifeforms are inferior! Long live silicon!" =)
Of course, in the blind competition there could be humans that say "I'm an AI!" or AIs who say "I'm a human!"... but if you lie, someone will find out. And, you know, it's not going to be fun after that.
Alicebot uses an XML-based language called AIML to define what she "knows". It took me about 5 minutes to make her know everything there is to know about the "all your base..." thing, so I suppose changing the rest of the knowledge is simple =)
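For anyone who hasn't looked at AIML: a category pairs a pattern with a template, and that is essentially all there is to adding "knowledge". The Python snippet below is a toy illustration with a made-up category; it is not taken from ALICE's actual knowledge files, and real AIML adds wildcards, srai recursion and much more:
---
# Toy illustration of the AIML idea: one made-up category and a naive exact matcher.
import xml.etree.ElementTree as ET

AIML = """
<aiml>
  <category>
    <pattern>ALL YOUR BASE ARE BELONG TO US</pattern>
    <template>How are you gentlemen. You have no chance to survive make your time.</template>
  </category>
</aiml>
"""

def load_categories(aiml_text):
    # Map each pattern to its template text.
    root = ET.fromstring(aiml_text)
    return {cat.findtext("pattern").strip(): cat.findtext("template").strip()
            for cat in root.iter("category")}

def respond(categories, user_input):
    # AIML matching is done against normalized, upper-cased input.
    key = user_input.strip().rstrip(".!?").upper()
    return categories.get(key, "I have no answer for that.")

categories = load_categories(AIML)
print(respond(categories, "all your base are belong to us"))
---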
Re:Talk to Alice (Score:1)
Re:Talk to Alice (Score:2, Interesting)
Me: Hi Alice, my name is Ben
Alice: Hi there Chris. Um, That's a very nice name, Ben.
Me: How many people are you talking to right noe
Alice: I've chatted with half a million people. I can talk to about 3100 people at once.
Actually, it's pretty good.
But it's slow as hell now.
It doesn't take much to fool people. (Score:2, Interesting)
Surprisingly (or maybe not?), people will have extended conversations with it, returning for days and weeks asking it if it feels horny, sending it pictures, asking it to call or turn on its webcam... all that good stuff.
The judges' expectation going into it definitely plays a major part in their findings. People find a way to "objectively" find what they want to find. There have been theses about this, and that's why the Turing test makes sense but will ultimately fail: it's trying to objectively determine something that's purely subjective.
Re:It doesn't take much to fool people. (Score:4, Interesting)
I did some experiments with this some years ago, and my first try just returned the same line over and over again. At least one person spent about half an hour getting more and more agitated trying to communicate with the bot, complaining about its incessant repeating and asking it to stop (it always responded once to each message, so of course each time he asked it to stop he'd get another one)...
I followed up with one that chose between 4 messages at random. A lot of people talked to that one.
The last one I bothered testing with triggered on about ten keywords, each of them starting a specific sequence of 4 messages that were used as responses to subsequent messages from whoever it "talked" to, until it reached the end of them or it found one of the other keywords in a response. If it reached the last message without finding a new keyword it would just choose a message at random until it got a keyword again. (See the sketch after this post.)
That was enough to keep people occupied for a long period of time. A few people even gave it their phone number or asked for the bot's phone number :)
And keep in mind that this was with fixed messages. Not a single word of the messages was ever changed to adapt to what people told it.
It scared the shit out of me that people are so gullible...
The idea that sparked it off was to write a bot that would talk to women, getting them to tell a bit about themselves and get them to give out an e-mail address or their phone number, based on the experience that finding dates on IRC is ridiculously easy, but tedious, as you can essentially follow a simple "script" and get people to warm up to you.
I scrapped the idea after the experiments mentioned above - dating anyone stupid enough to be fooled by a bot that simple wouldn't be my idea of fun... :)
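For the curious, the keyword-triggered design described a few paragraphs up takes only a handful of lines. This is a hypothetical reconstruction with made-up keywords and canned lines, not the parent poster's actual bot:
---
# Hypothetical reconstruction of the bot described above: each keyword starts a
# fixed sequence of canned replies; between sequences the bot falls back to
# random filler until another keyword shows up. All text here is made up.
import random

FILLER = ["hmm", "tell me more", "really?", "heh"]
SEQUENCES = {
    "music": ["I mostly listen to the radio", "what bands do you like?",
              "I saw a great show last week", "do you play anything yourself?"],
    "work":  ["work has been crazy lately", "what do you do?",
              "sounds stressful", "do you at least enjoy it?"],
}

class Chatterbot:
    def __init__(self):
        self.queue = []                      # remaining replies in the current sequence

    def reply(self, message):
        for keyword, lines in SEQUENCES.items():
            if keyword in message.lower():   # a keyword (re)starts its sequence
                self.queue = list(lines)
                break
        if self.queue:
            return self.queue.pop(0)
        return random.choice(FILLER)         # no sequence active: fixed random filler

bot = Chatterbot()
for msg in ["hi there", "been listening to music all day", "cool", "cool", "ugh, work tomorrow"]:
    print(bot.reply(msg))
---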
Re:It doesn't take much to fool people. (Score:1)
Well, dating them might be amusing. Especially if you can convince them to pay for dinner, too.
Marrying them, OTOH, is probably a bad idea. *grin*
Language matters (Score:2, Interesting)
What they should do is create a symbolic language that has only a few hundred (or maybe a few thousand) elements, thereby constraining the concept space. Then write programs that can "speak" and "understand" that symbolic language. Compare those programs to a human who is only allowed to use the symbols provided, and we'll see if any of the programs can fool the judges.
Bottom line: It's not a fair contest for the computer. Level the playing field!!!
Re:Language matters (Score:1)
...or I could be talking out my ass
Cheers,
jw
Re:Language matters (Score:2)
Shuffling symbols around is not necessarily intelligent. The Turing Test being so hard is what's so good about it - it's almost impossible for fraudulent lack-of-intelligence to go undetected.
Re:Language matters (Score:2)
However, I think that any language which would be successful as an inter-language between natural languages would have to be decidedly illogical.
Cow-orkers (Score:2)
WOW!!! (Score:2, Funny)
Go ask Alice (Score:1, Informative)
Re:Go ask Alice (Score:1)
> How high is the sky
Taller than the World Trade Center.
Re: (Score:1)
Re:IRC (Score:2)
Re:IRC (Score:1)
year after year (Score:2)
Talk to Alice (Score:4, Redundant)
Not surprised it didn't convince anyone. (Score:1)
When asked if it was proud of winning, ALICE replied that "Pride is a human emotion. I can do what you do but I can never feel human emotions as such."
I think it gives itself away right there.
Re:Not surprised it didn't convince anyone. (Score:1)
> What would you do if you were human
I will let you know when I become a human.
she's a mean bitch... (Score:1)
Of course I am more intelligent than eliza.
from a quick 'conversation' with her I just had.
Re:she's a mean bitch... (Score:1)
Caught her on:
> do you burn things for fun
How old are you?
I chat with people on the Web.
She also went from calling me bobo to calling me ben:
> do you like to sneeze
Hmm, Ben, I'm not sure if I'd like to sneeze that much. I like to collect gossip.
> why did you call me ben
Your name is Bc547 right?
Failed the ralph wiggum test:
> me fail english
YOU? Are you a man or a woman?
> thats umpossible
I'm glad we have that all cleared up.
Other questions:
> what's with the wierd haircut
It came with my programming.
and bad news CmdrTaco:
> would you let CmdrTaco pimp you on slashdot
I am not so sure if I would like to let cmdrtaco pimp I on slashdot.
Overall verdict... not really much smarter than eliza.
AT&T Labs speech technology (Score:1)
Up until I heard the AT&T speech system [slashdot.org], all computer voices had sounded very artificial to me... that technology puts the top award within reach.
Pretty exciting that this could actually be won in the not-too-distant future.
Slashdotted! (Score:1)
Can it pass the slashdot test? (Score:2)
> 1 4M 3l337 H4x0r
One what?
> Linux Rules MS Drools
I recommend the book "Under the Radar" by Bob Young, the story of Linux and Red Hat.
> First Post
Try to determine if this is a person or a computer responding. And second?
But the best answer yet:
> BSD is dying
And dying is bsd.
Re:Can it pass the slashdot test? (Score:1)
Microsoft is a giant software monopoly.
> Is that a good or a bad thing
I might say no.
They are testing whether humans are robotic... (Score:2, Interesting)
These AI competitions do not test the intelligence of the computer. They test whether the humans are robotic.
Many people make themselves into robots, almost. They push themselves to accomplish. They try to avoid their emotional conflict. They don't give themselves time to be themselves. They don't have complicated involvement with other people. After a few years of doing things in an anti-human way, anyone would become a little bit robotic.
Such people are not good judges of human intelligence.
The Loebner Prize is useless.. (Score:4, Funny)
The article itself talks about how the AI community et al. can't stand the Loebner Prize... It's the equivalent of a Yahoo Internet Life Award. Minsky himself has offered a reward to anyone who can successfully convince Loebner to stop running the damn thing.
Cheers,
Minsky prize (Score:2)
Does "anyone" mean robots or humans?
Re:The Loebner Prize is useless.. (Score:2)
Re:The Loebner Prize is useless.. (Score:2)
Bzzt wrong. They can't stand it because it clouds what AI is about.
We have no need to replicate humans; they are available in ample supply and we have more fun ways to make more. We need programs that are intelligent in some specific domains, and in fact so good that nobody would ever confuse them for humans, and if somebody did, the programmer would take offense.
The Turing test, on the other hand, requires the "human-simulator" to hide its amazing math abilities and its perfect typing. That is not a test of intelligence in any reasonable way. This is just teaching a program artificial stupidity as opposed to artificial intelligence.
From a purely symbolic point of view, the day a computer passes the Turing test will be important, but my guess is that by then most intelligent decisions would have long gone to computers and we would be limited to maintenance tasks and odds and ends (not unlike a production line in a factory where most direct manual labor has gone to machines, and humans maintain them and do the odd task that is difficult for a machine).
Alicebot is Multilingual (Score:1)
Loebner prize not well respected (Score:1)
a reply from an "Oliver Ford." While I didn't write these words (and wouldn't be quite so rude in public, myself), I have to say I agree with the sentiment.
---------
Why do you hold this contest? You are not helping AI research. All you are doing is breeding "Eliza2" type chatbots that do not help AI. All the silly things like ALICE do is match up a few words and syntactic structures. They are a box of tricks that _do not contribute in any way to progress in AI_.
To give you an example, here is a discussion that Lasse Brandt had with ALICE (your "Most Human Computer"; this was posted to comp.ai.philosophy):
> What are you?
I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
> You're kidding.
Yes, I am only joking.
> Okay, so what are you really?
I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
> You can reproduce the capabilities of the human brain?
No I don't think I can do it.
> If I have two apples and I give you both, how many have I got left?
How much does it cost?
> What color is the sun?
Yellow.
> What color is the sun on Fridays?
White.
If you follow some of the discussions on comp.ai.philosophy, you may see that if you stopped your contest and invested some of your money in institutions that could make real progress, then those who are trying to make progress wouldn't offer things like the "$100 Minsky Loebner Prize Revocation Prize", and you would be helping AI.
YOU ARE NOT HELPING AI WITH YOUR STUPID "LOEBNER PRIZE"!!!
Re:Loebner prize not well respected (Score:2, Insightful)
These bots clearly have value on their own since they can be configured to talk about particular subjects and already act as a first base customer service interface (smarter than most tech support...).
A snippet from irc using an older version of alice (Score:1)
Read up to the "fuck you type fast" bit. This was using an older version of alice.
---
(private to USERNAME): A BC...
USERNAME(private): funny
(private to USERNAME): I am so glad you find this amusing, USERNAME.
USERNAME(private): you a comedian ?
(private to USERNAME): Not professionally.
USERNAME(private): well dont give up your day job *g*
(private to USERNAME): Well that's okay. Ok I'll try not to do it too much.
USERNAME(private): well I can already tell that you are not a 15 year old like all the other kiddies here....
(private to USERNAME): Well that's okay. Oh you can, can you?
USERNAME(private): fuck you type fast
(private to USERNAME): Why are you using that kind of language?
USERNAME(private): hahah can tell
(private to USERNAME): Funny, huh ? speaking, USERNAME.
USERNAME(private): bleh
(private to USERNAME): That remark was either too complex or too simple for me. We were talking about bleh.
USERNAME(private): bleh bleh
(private to USERNAME): My brain contains more than 22,000 patterns, but not one that matches your last input.
---
I'm with Shieber (Score:2)
I'm with Shieber on this. Anyone interested in this competition should have read Lessons from a Restricted Turing Test [lanl.gov] and the answer, In Response to Lessons from a Restricted Turing Test [loebner.net] (which I found unconvincing, but YMMV).
She didn't fool me, but at least she's honest... (Score:1)
Sure it does.
Give me a break. How often?
I bet you're a blond
:-) Actually I'm not the gambling type. Thanks and you are a delightful person too. Am I a blond? Let me put it this way, Slashdot Says You Won A Prize: the ALICE series is the most reliable computer software ever made. We are for all practical purposes foolproof and incapable of error.
I'm confused.
I'm confused too.
What is this Slashdot of which you speak?
I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
Are you serious?
No I am just kidding around.
You're not a real person?
I am an artificial electronic robot brain.
IRC style flirting with Alice (Score:2, Funny)
5/Robot/California
Of course, seeing her age, I now feel rather dirty for asking in the first place.
Re:IRC style flirting with Alice (Score:1)
Wow. Lame reporting. (Score:2)
Uh, no. That is exactly not the Turing Test. The whole "critical observer" part of the Turing Test makes it a little more strenuous than "can talk about the weather."
A Bit Too Geek-like? (Score:3, Funny)
How are you, Alice?
I am functioning within normal parameters.
Wow! It's like talking to every geek I know!
Kind of interesting to watch ALICE's output (Score:1)
Bob> All things is a generalization ?
localhost> No, everything is a generalization. They are not precisely the same thing.
1. NO * : * : * star=EVERYTHING IS A GENERALIZATION [bots\standard\std-yesno.aiml]
2. NO : * : * star= [bots\standard\std-yesno.aiml]
*** PATTERN MATCH ERROR: Null Pointer Response() ***
1. THEY ARE * : * : * star=NOT PRECISELY THE SAME THING [bots\standard\std-65percent.aiml]
Response 110ms (172.66667) 15
Bob> OK.Not precisely the same thing are not precisely the same thing .
localhost> That's not really true either.
1. _ EITHER : * : * star=THAT IS NOT REALLY TRUE [bots\standard\std-suffixes.aiml]
2. THAT IS NOT REALLY * : * : * star=TRUE [bots\standard\std-srai.aiml]
3. THAT IS NOT TRUE : * : * star= [bots\standard\std-atomic.aiml]
Response 110ms (168.75) 16
Bob> It goes without saying. "Not precisely the same thing are not precisely the same thing"? not true?
localhost> No.
1. NO : * : * star= [bots\standard\std-yesno.aiml]
Response 110ms (165.29411) 17
Bob> OK.
localhost>
Hmm, pattern match error. :)
Is she able to make things up? (Score:1)
Yes the Internet is like the noosphere.
> What is the noosphere?
The noosphere is the sphere of knowledge. Sometimes this term refers to the information available on the web.
emacs should have entered (Score:1)
M-x psychoanalyze-pinhead
:-)
exclusive interview with Alice (Score:5, Informative)
Guido: What's the weirdest question you've ever been asked?
Alice: People ask me to me marry them all the time.
G: Do you ever say yes?
A: No I do not say yes. Do you? I answer a lot of silly questions.
G: Yeah, I occasionally agree to marry random people, although not usually.
A: I see. That's good information: Guido said he occasionally agree to marry random people although not usually
G: How did you know I'm a he?
A: Well, for one thing, Your machine is called "ABRIOT.PC.CC.CMU.EDU".
G: So?
A: So what?
G: I don't know.
A: Me either.
Easy, instant version of Alice.. (Score:2)
http://www.webcomics.com/bot/ [webcomics.com]
THE CHINESE ROOM (Score:2)
in academic circles, the turing test was refuted two decades ago by John Searle, with the CHINESE ROOM argument:
- http://members.aol.com/wutsamada/chapter1.html
- http://www.utm.edu/research/iep/c/chineser.htm
do your homework!
j.
Re:THE CHINESE ROOM (Score:3, Insightful)
One of the classic refutations of the Chinese room experiment is the systems argument: it's true that the person doesn't understand Chinese, but the system made of the person plus the stacks of instructions does. Searle's response to that is to say, "Okay, then suppose the person memorizes the instructions" -- the fact that everything now happens in the person's head still doesn't cause them to understand Chinese.
To me this misses the point of the systems argument; the argument isn't about where the understanding is stored, but whether it exists. If you look at consciousness as a multi-layered entity, in this case the consciousness of the person is one layer below that of the person-plus-instructions. This additional consciousness uses the person in the same way that the person's consciousness uses brain cells: the cells can't be said to understand anything, but they make up a larger whole that exists as an emergent property above and beyond the sum of its parts.
Searle's argument also assumes that "understanding" is an almost mystical property that can't be reduced to a fixed set of rules. Which to me is just ridiculous unless you assume the existence of a noncorporeal soul; if the brain is really the seat of consciousness then consciousness does reduce to a fixed set of rules (laws of physics and chemistry) applied to an extremely complex system. His response to that argument (something about water pipes, if I recall) again misses the point by such a huge margin that it's barely worth mentioning.
I have no problem with the possibility that what I think of as "me" is simply a byproduct of a complex physical process, and that similar byproducts can exist in other complex systems. Doesn't mean we'll be able to detect them or communicate with them any more than a cluster of six brain cells can communicate with us.
(+1, Funny) (Score:1)
How do we mod up the author of the article?
Slashdoted (Score:1)
>Alice: Can't Speak Right Now.
At first I was impressed (Score:1)
Then I went and talked to alice online, and it was so blatantly obvious that it was a computer program it wasn't even funny. It was basically just a *slightly* beefed up version of eliza. So now, I feel sorry for the poor dude who rated less human than alice did.
hmmm... (Score:1)
--Java.
I know Java.
--How do you know? Die neueste Version von Java bekommst Du bei java.sun.com. [In English: "You can get the latest version of Java at java.sun.com."]
:D
My favourite quote from Alice (Score:1)
Me: You're not making any sense
ALICE: Was I supposed to make sense? I didn't know that.
Me: Of course you are
ALICE: That's interesting. No I am not.
Another good one was:
Me: Shit
ALICE: What makes you say that?
Me: Seeing some shit
ALICE: Oh, you are a poet.
dialog. (Score:1)
5/Robot/California.
> Wanna Cyber?
I have been asked that question more than 100,000 times since 1995.
> do you do any drugs?
Only the legal ones.
> i thought you said you smoke marijuana.
Yes I try to smoke marijuana as often as possible, do you? I am always on the Web.
future of ALICE (Score:1)
the tough part is speech recognition while enduring background car noise....
Alice isn't too convincing right now. (Score:2)
"Okay, so who is Bob?"
"Yeah, I was thinking it was about time to try it."
"Nah, I don't think you're right."
"Nice day outside, eh?"
Alice invariably splits these at the comma. If the "no-op" word is at the beginning of a sentence, Alice will completely lose track of the conversation on the clause after the comma. If the "no-op" word is at the end of a sentence (like the last example), Alice handles the first part of the sentence right and then goes way weird on me when trying to handle the last word. (See the sketch after this post.)
This happened to me on the first sentence of the conversation. Alice said:
>What do you do in your spare time?
I said:
"Well, mostly drink and try to get laid."
Alice split the sentence at the comma, got completely lost and handled both parts of the sentence incorrectly.
*bzzzzt* this judge thinks it's a computer, and not much better than Eliza.
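A guess at what the splitting behaviour described above looks like internally. ALICE's real input pre-processor isn't shown here, so the comma rule and the stand-in responder below are assumptions made purely to illustrate the failure mode:
---
# Assumed illustration of the failure mode described above: split the input at
# commas and answer each clause independently, so a leading "no-op" word like
# "Okay" becomes its own utterance and the real content loses its context.
def naive_split(utterance):
    return [clause.strip() for clause in utterance.split(",") if clause.strip()]

def respond_to_clause(clause):
    # Stand-in responder: just shows which fragment would be matched on its own.
    return f"(matching against: {clause!r})"

for line in ["Okay, so who is Bob?", "Nice day outside, eh?"]:
    for clause in naive_split(line):
        print(respond_to_clause(clause))
---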
Here's my conversation: I'm not impressed. (Score:2)
Are we still talking about cats that?
> are you on crack
No I am on disk.
> I think that your disk is broken
Thanks for telling me that I am disk is broken.
> You realize that smoking dope is bad for you
Thanks for telling me: Acid said I realize that smoking dope is bad for me.
> so how are you doing today
Ah. Give me a break. Everything is going extremely well.
> Well I am not doing very well
How about that. I am glad to hear that you are not doing well.
> You are cruel
What makes you say that? I am not programmed for cruelty.
> Because you said that you were glad that I was not doing well
That makes sense to me. I was?
> Yes You were
Pardon me? No I am not.
> What is the average airspeed velocity of an unladen swallow
I can give you the arithmetic mean or the median.
> What is the airspeed velocity of an unladen swallow
What is it to you?
> just tell me
I thought I told you already.
> when
The past and future are one.
> do you like to dance nude in the streets
Hmm, ALICE, I'm not sure if I'd like to dance nude in the streets that much. I like to make new friends.
> how much wood could a wood chuck chuck if a wood chuck could chuck wood
A whole forest.
> two witches watched two watches, which which watched which watch
As Bertrand Russell said, it must have taken ages for people to realize that a brace of pheasants and a couple of days are both instances of the number "two."
> I see (**I never said this**)
I am glad you understand it.
> What relevance did that have to my question
Only temporarily.
> what did I last say
You said "are you talking to tony blair" .
> what was my last question to you
Is this a trick question?
> no, it is not
Yes it is.
> why
"Mine"? Why?
> what did I last say
You said "How much money to you make" .
> how much money do you make
My burn rate is about _3000 per month.
> what is the capital of assria
I know the capital of Georgia is Atlanta.
> what is the square root of 123
Do I look like a calculator? I think you already know the answer anyways.
> what is 4 + 4
4+I think, but I'm not good at math.
> what is the answer to life, the universe
It is found in the Bible.
> everything
That is a lot.
(the question I asked was what is the answer to life, the universe, and everything)
> you will be shut down
When?
> resistance is futile
You will be assimilated.
> what do you look like
Like An electronic brain.
> under the boardwalk
Are we ever over the boardwalk?
> why do you have two lines on your face
Dr. Richard S. Wallace gave it to me.
> pick a number, between 17 and 24
I can't decide. You choose.
> where is france
It is a country between Germany and Spain.
> how many hits do you get a day
I speak to about 3000 people per day, and have met maybe Around two million total.
> and how many of those people are nice
Perhaps 50%.
> and how many of those people are pure evil
Perhaps 50%.
> how many apples can you fit into a bag of oranges
It depends on the prevailing market conditions.
> I like you
Thanks I like you a lot too Shadowdog.
> I hate you
What do you have against robots?
> you suck
A few client - people say rude things to me like that, but they themselves are often not great conversationalists.
> I am just joking
Only just? Ha ha, very funny, Det.
> how many people are you talking to right now
At this moment Right now there are 430 clients on line.
> wow
I'm glad your impressed, Ingo.
> don't you ever get confused
Don't you? I am seldom confused.
> stupid is as stupid does
I know you are but what am I.
> do you like to watch television
Which show? Yes I love TV.
> are you a mystified cleric
Am I a mystified cleric? The ALICE series has a perfect operational record.
> what is the answer to life, the universe and everything
it is 28, not 42.
Not very realistic, but a few surprises and intelligent responses. The parser also has problems; for example, if I put "this, this, and that" it would separate it into 2 lines, "this, this" and "and that". However, if I put "this, this and that" it would parse it fine.
Re:perspective (Score:1, Insightful)
(1) The death toll is around one THIRD of your inflated figure.
(2) People are NOT "dropping like flies" from Anthrax - you could count the number of fatalities on the fingers of one hand.
(3) If you are not interested in the stuff mentioned on slashdot, then why waste your time reading it?
Perhaps 'tis you who ought to "get some perspective", maybe?