It's funny.  Laugh.

How I Failed the Turing Test 326

chrisjrn writes "I stubled across this article today, detailing a man's experiences of being added to AIM Screen Name lists - one full of "celebrities" and the other full of "Sex Bots" (he was, of course, neither of these). Raises a few questions as to how easy it is to get a hold of your screenname, and also of the effectiveness of the Turing Test for AI, in the online world. Or is it just that people aren't bothered trying to tell the humans apart anymore?" Also, it's funny. Don't try to read anything deep into it.
This discussion has been archived. No new comments can be posted.

How I Failed the Turing Test

Comments Filter:
  • by It doesn't come easy ( 695416 ) * on Tuesday September 06, 2005 @09:27AM (#13489195) Journal
    It's that the dialog of a typical IM user can't be distinguished from a brain-dead conversation bot...
    • by domipheus ( 751857 ) * on Tuesday September 06, 2005 @09:32AM (#13489230)
      You're the dialog of a typical IM user can't be distinguished from a brain-dead conversation bot?
    • by rayde ( 738949 ) on Tuesday September 06, 2005 @09:50AM (#13489355) Homepage
      well his dialogue in particular... for example, in this section:

      shymuffin32: why do you like music?
      jmstriegel: hmm. i've never really considered that.
      jmstriegel: hell, i'm not going to be able to contrive a good answer for that one. ask me something else.

      he doesn't give a response that proves he even recognizes the question; instead, he gives a brain-dead answer that could be plugged into any number of questions.

      just like, try harder next time dude

    • by Spy der Mann ( 805235 ) <spydermann,slashdot&gmail,com> on Tuesday September 06, 2005 @12:09PM (#13490428) Homepage Journal
      It's that the dialog of a typical IM user can't be distinguished from a brain-dead conversation bot...

      Me, too! :D
    • by Keith Gabryelski ( 65602 ) on Tuesday September 06, 2005 @12:22PM (#13490555) Homepage
      I live in Boston, MA, but was in Fort Lauderdale, FL a couple of months ago and thought it would be nice to see what Zagat's Guide had to say about the restaurants. I opened up the Danger device and added "Zagat" and "Zagats" to my instant messenger buddy list, and immediately I saw "zagats" online. Very cool.

      The conversation went something like this:

          Keith M Gabryelski: fort lauderdale, fl

      [... moments pass ...]

          Zagats: ?
          Keith M Gabryelski: sushi
          Zagats: what do you want?
          Keith M Gabryelski: Well, missed opportunity me thinks. Have you never heard of Zagat's guide?

      At this point it is obvious to me my relationship with zagats will not be going much further. I receive no reply and set my sights on trying to navigate the zagat website from my danger device.

      A few days later, at lunch I notice "zagats" online again. I thought: hmmmm... let's play:

          Keith M Gabryelski: recommend thai boston, ma

      [... no response ...]

      About three minutes later:

          Ginaleena03: why are you im'ing my friend... shes not the zagats guidebook, shes a law student
          Keith M Gabryelski: Ok, then can u suggest a good thai place in boston?
          Keith M Gabryelski: Somewhere around the theatre district
          Keith M Gabryelski: That's ok if you have to think about... Get back to me later please

      [... time passes ...]

          Ginaleena03: no ... god! ... go on citysearch or something; we dont care where you dine
          Ginaleena03: we're not earning commissions over here
          Keith M Gabryelski: Ok. Well... I have a review for the zagat guide. Can I forward it to you and can you get it to them?

      [... at this point ginaleena03 and zagats both log off; I suspect I have been blocked ...]

      If anyone knows of a restaurant guide on AIM could you please forward the screenname? I suspect both of these are real people.

      Pax, Keith

      Ps, Yes... this actually happened

      PPs, Yes... I am an as*hole
    • First off, anyone who doesn't know how to change the AIM settings to say "Only allow people on my buddy list to add me to their buddy list." shouldn't be allowed to use AIM.

      But his experiences are amusing. I would have played with it a bit more. Make the idiot invest that much more time to ultimately find out you're not some commonly known celebrity but the Ruler of the Universe instead.

      I recall when the ALICE IRC bot was first getting rolled out. Everyone was so amazed and then I popped the question:
    • OK; I was dating this girl back in '92 or so. She used to call my BBS and chat with me, which was cool, I suppose. One day, however, I wasn't there, and she got my Eliza-clone chatbot (it would auto answer and play sysop if I didn't answer a chat request within x seconds).

      I came home hours later and found the log. She started out all sweet, but slowly became more and more irritated with "my" responses. Finally culminating in questions like, "Don't you love me?" and "Why are you treating me like this?". Of c
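      An auto-answering Eliza clone like the one described amounts to a handful of pattern-and-reflection rules. Here is a minimal sketch of the idea (the patterns and canned answers are invented for illustration, not the original BBS code):

```python
import random
import re

# Toy Eliza-style responder: match a pattern, reflect part of the user's
# words back. The rules below are invented examples, not the original bot.
RULES = [
    (re.compile(r"\bdon't you (.*)", re.I), ["Why do you think I don't {0}?"]),
    (re.compile(r"\bwhy (.*)", re.I), ["Why do you ask?"]),
    (re.compile(r"\bi (?:feel|am) (.*)", re.I), ["How long have you been {0}?"]),
]
DEFAULT = ["Tell me more.", "I see. Go on.", "Interesting. Please continue."]

def reply(text: str) -> str:
    for pattern, answers in RULES:
        m = pattern.search(text)
        if m:
            # Echo the captured fragment back, minus trailing punctuation.
            return random.choice(answers).format(m.group(1).rstrip("?!. "))
    return random.choice(DEFAULT)
```

      Feed it "Don't you love me?" and it dutifully reflects back "Why do you think I don't love me?", which is exactly the kind of response that slowly drives a girlfriend up the wall.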
  • Sex bots (Score:3, Funny)

    by Zlib pt ( 820294 ) on Tuesday September 06, 2005 @09:29AM (#13489212)
    Funny are those who go and talk to a sex bot and ask "are you sure you're a bot"?
  • Another AI test (Score:4, Interesting)

    by Anonymous Coward on Tuesday September 06, 2005 @09:33AM (#13489239)
    I'll believe in AI when a robot can tie shoelaces. Mimicking conversation is nice and well, but as far as robotics goes, we've yet to see anything remotely resembling artificial intelligence in action.
    • by ozmanjusri ( 601766 ) <aussie_bob&hotmail,com> on Tuesday September 06, 2005 @09:51AM (#13489361) Journal
      I'll believe in AI when a robot can tie shoelaces.

      The most convincing AIs I've seen are the bots in FPS games. And they're already programmed to hunt down and kill humans...
      • Re:Another AI test (Score:3, Insightful)

        by rayde ( 738949 )
        don't say that too loud. I don't want bot engines to become categorized as weapons of mass destruction. i can see a day where these AI's are covered in export restrictions because of their increasing complexity and capabilities.

        game over man, game over

    • I'll believe in AI when a robot can tie shoelaces.

      And put my servants out a job!?

      No thank you. I'll take a human to tie my shoes any day of the week over a heartless machine. The nerve!
    • by n54 ( 807502 ) on Tuesday September 06, 2005 @10:05AM (#13489465) Homepage Journal
      I'll believe in real AI when the robot tying your shoelaces ties them together to trip you :)
      • Re:Another AI test (Score:3, Interesting)

        by JabberWokky ( 19442 )
        Both you and the grandparent are making the (common nowadays) mistake that the field of artificial intelligence is seeking to create any sort of sentient or even lifelike behavior. Sure, there's a bit of "emulating humans" in the field, but quite a bit of it is self regulating complex feedback systems wherein you feed a minimum of parameters and the system learns to balance input to output. AI as a scientific field has more to do with good thermostats that cool you down without a blast of cold air in the
        • by squarooticus ( 5092 ) on Tuesday September 06, 2005 @10:49AM (#13489778) Homepage
          Stop calling it "artificial intelligence." Call it what it is: heuristics research. Oh, I guess that sounds a lot less impressive, huh? Might not be able to get those open-ended grants anymore?

          FWIW, I spent two years at LCS, so I have a reasonable idea of what went on in the AI Lab when I was there. There was very little in the way of research into computer-emulating-human intelligence, which is probably a good thing (read: less of a waste of money) considering how little progress the Minsky crowd has made in the past thirty years.
          • Well, I didn't make up the term. The problem is that quite a few useful tools came out of "Pure AI" research with real world applications, so they are classified in the same field, but have little to do with Minsky's goals.

            For a classic example, anybody using Emacs has AI to thank for that. Lisp originated as part of an IBM project arising out of one of Minsky's ideas and was finalized as part of the MIT Artificial Intelligence Project (again, Minsky was involved in that).

            Like the space program, AI is f

        • Re:Another AI test (Score:3, Insightful)

          by soft_guy ( 534437 )
          You mean AI researchers are to blame when its a hot day and the A/C in my new car doesn't blast me in the face with cool air?
  • by DJ Marvin ( 750482 ) on Tuesday September 06, 2005 @09:35AM (#13489254)
    Well, this article shows that we have at last come to the point where a bot is comparable to a human being in a chat room. In fact, we didn't get to this point with better AI, but with worse RI (real intelligence, if the term applies in this case).

    Ladies and Gentlemen: a completely insensitive and unintelligent bot can be more interesting to chat with than a human! Well, at least they write correctly (N07 L@M3 @SS).
    • And the bots don't start flame wars, aren't generally malicious, and if they try to be mean the results are usually comical.
    • I just couldn't bring myself to mod this as funny, in all honesty. It's quite depressing when I've seen IM-isms seeping into normal writing and speech. It's quite depressing when I've heard people creating pronunciations for IM acronyms (instead of just saying the damn phrase) and kids in school getting lower grades for writing IM acronyms in their papers.

      Parent Post == Startling Public Service Announcement
    • Yes and no (Score:5, Insightful)

      by Moraelin ( 679338 ) on Tuesday September 06, 2005 @10:12AM (#13489509) Journal
      Well, actually, his problem in the article is completely different. It's _not_ that he's met people who type worse than bots.

      It's that a group of people were told that he's a bot, and nothing (correctly and articulately written) could shake their belief in that. One of them even calls him "worse than eliza" when he tries to argue that he's human.

      Some people found a list of bots online, and, you know, that makes it the absolute truth. Everyone on it _has_ to be a bot, because the list says so.

      Another group found a list of celebrities, and again, took it as absolute truth. They didn't know _who_ this guy is, _what_ is he supposedly famous for, etc. But OMG, he must be a celebrity because the list says so, and that makes it sooo cooool to talk to him.

      Basically it's _not_ the "some people are so stupid they could pass for bots" problem. (Which by itself is very true, but it's not really what TFA is about.) The problem, if you will, is simply "some people are gullible idiots." That's all.

      It does leave me with a bunch of other philosophical and ethical questions though. If it's this easy to convince people that John Average is a bot (and in fact, it didn't even involve more "convincing" than writing it on some random list on the internet), what _else_ could you convince them of? That John Average is a convicted felon? A spammer? A paedophile?

      And mind you, in this case he at least got a chance to talk back and plead his case. I can easily think of cases where you don't get that chance. E.g., when a prospective employer googles your name, you might not even know why you didn't get the job. What completely unrelated Marvin did they find on some bogus list on the Internet, and what image did they build for themselves out of disparate bits taken out of context?

      That said, the problem you mention is very true too. I know I've met people online before, especially in online games, who substantially lowered the bar for a Turing test. It was definitely more fun to talk/play with the bots instead, and you could get more intelligent conversation out of the bots too. Admittedly, online games are a completely different category than IM and chat rooms, but still... It's scary, you know.
      • Re:Yes and no (Score:3, Insightful)

        by Dephex Twin ( 416238 )

        One of them even calls him "worse than eliza" when he tries to argue that he's human.

        The thing is... he is worse than Eliza, at least in that snippet of conversation. The guy asks him about something specific, regarding music, and Eliza would have at least parsed the sentence or given some ready-made bit of music-related dialog. Instead, he gives a slightly longer version of a Magic 8 Ball's "Reply hazy - Ask again later".

        Jeez, man, can't you even come up with a band you like or an anecdote or SOMETHING

  • by dotgod ( 567913 ) on Tuesday September 06, 2005 @09:37AM (#13489272)
    Google for sex bots [google.com] and look at the first link. It's an article that he wrote, and his screen name is in it.
  • by EvilTwinSkippy ( 112490 ) <yoda@nOspam.etoyoc.com> on Tuesday September 06, 2005 @09:38AM (#13489278) Homepage Journal
    And perhaps one day we will have to pause and ask ourselves: are real people posting comments to Slashdot, or are the comments generated by automatons traipsing through automated stimuli and responses?

    By some day, I think I meant around 1999 or so.

  • GATTACA (Score:5, Funny)

    by Transdimentia ( 840912 ) on Tuesday September 06, 2005 @09:38AM (#13489283)
    Forget genetic discrimination in the future, I can't even farking sign up for slashdot anymore. Soon I won't be able to get my welfare check because of these stupid turing tests!
  • Turing Test (Score:5, Insightful)

    by AndreiK ( 908718 ) <AKrotkov@gmail.com> on Tuesday September 06, 2005 @09:39AM (#13489290) Homepage
    Isn't this sort of what the article about captchas a few days back was?

    Most AI today is extremely specialized. It's not hard to design something that appears to think, if it only has to check for 3 cases.

    The problem with speech is that assuming all humans use perfect rules (which they don't), and assuming all computers know the perfect rules (which they don't either), creates a logistical nightmare. Computers work well with numbers.

    Did he say hi? Yes he did, so let's say hi back.

    It is really hard to design a bot that would actually analyze what they are saying.

    Did he say hi? Yes, he greeted me with a "hello". "Hello to you too."
    • That's not the problem with speech at all. We could deal with errors, were that the big problem. We can also create "perfect rules," or even (what speech AIs actually use) fuzzy rules that allow for errors (e.g. "that sentence contains a lot of greeting indicators. It's probably a greeting; I'll ignore the part about the dog and the fact that hello is spelled 'heloe'.")

      The problem is the sheer volume of rules needed.
      It's just way too much work to do by hand. All that is really needed is an autom
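      The "fuzzy rules" approach described above, scoring greeting indicators rather than parsing exact grammar, can be sketched in a few lines (the indicator words, weights, and threshold are all invented for illustration):

```python
# Toy sketch of fuzzy greeting detection: sum up the weights of known
# greeting-indicator words and compare against a threshold. This tolerates
# noise and misspellings ("heloe") that exact grammar rules would choke on.
GREETING_WEIGHTS = {
    "hi": 1.0, "hello": 1.0, "heloe": 0.8, "hey": 0.9, "yo": 0.6,
}

def greeting_score(message: str) -> float:
    words = message.lower().split()
    return sum(GREETING_WEIGHTS.get(w.strip("!?.,"), 0.0) for w in words)

def is_greeting(message: str, threshold: float = 0.7) -> bool:
    return greeting_score(message) >= threshold
```

      The hard part, as the parent says, isn't this mechanism; it's hand-building enough of these rules to cover real conversation.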
  • by SimilarityEngine ( 892055 ) on Tuesday September 06, 2005 @09:40AM (#13489294)

    My favourite snippet has to be:

    jmstriegel: no, really. I'm quite human.
    jmstriegel: test me if you want
    shymuffin32: ok
    shymuffin32: why do you like music?
    jmstriegel: hmm. i've never really considered that.
    jmstriegel: hell, i'm not going to be able to contrive a good answer for that one. ask me something else.
    shymuffin32: jeesus, you're worse than eliza

    It's not him that's stupid (as claimed elsewhere), it's these shymuffin32 morons.

    • Oh really? Would YOU believe that someone who you thought was a robot beforehand, and who replies to your question with a generic response that seems to be only a way to wriggle out of the question, was a human? Preconceptions go a really long way you know ... and the guy's response looks perfectly generic.

      Convincing someone you're human might just be harder than one might think - at least a bit more trouble than just answering a few questions.

      • by kfx ( 603703 ) on Tuesday September 06, 2005 @10:11AM (#13489503)
        Oh really? Would YOU believe that someone... who replies to your question with a generic response that seems to be only a way to wriggle out of the question, was a human?

        No, I'd believe they're a politician.
      • by Moraelin ( 679338 ) on Tuesday September 06, 2005 @10:48AM (#13489774) Journal
        "Convincing someone you're human might just be harder than one might think - at least a bit more trouble than just answering a few questions."

        Only if that someone is utterly retarded and asks completely retarded questions that don't even have a simple answer. That's the problem there. It's a question so stupid that even I couldn't think of something better to answer there. It's not "what music do you like?" or something else which can get a clear, to-the-point answer. It's "why do you like music?"

        Well, try to answer that yourself. Why do you like music? What would you answer there?

        Because I sure as heck can't think of any good answer there, generic or not. Screw trying to answer that in 1 minute on IM. I've been sitting here for the last half an hour thinking about it and still have no bloody idea. Because it's background noise? Well, no, because other background noises (e.g., a lawnmower or some co-workers' chatter) annoy me. What then? I have no clue, and probably 4 out of 5 psychologists or musicians would have no idea either.

        So how would I say that in a way that sounds non-generic? "Hell if I know. I've never thought about it"? Nah, you've just ruled a variant of that as too generic. "Well, why do YOU like it, then?" Nope, sounds like the kind of rephrasing the question back at you that an Eliza program would do.

        The only non-generic answer that comes to my mind there is along the lines of "WTF of a retarded question is that? Were you born that stupid, or worked hard to get there?"

        By contrast, if shymuffin32 actually had more than a braincell, it would be easy to ask some questions that can get simple, to-the-point answers. In fact, screw questions and answers and try to just have an intelligent conversation.

        Want more conclusive? Mix some images in it, which would still throw any AI off the track completely. E.g., point him at a picture of someone holding a siberian cat and see if he comments about the size. (It's one bloody huge breed of cats.) Point him at a drawing of one of the giant guns on rails Germany was planning to build in WW2. See what he thinks about the size of that one. (Tends to get answers between "bloody freaking hell" and "do you think Freud might have something to do with it?") Etc.
        • Did you note that you don't need to give an EXACT answer to qualify as human? Saying just "Because it's background noise? Well, no, because other background noises (e.g., a lawnmower or some co-workers' chatter) annoy me. What then? I have no clue" would have allowed you to pass the Turing Test. So the question was just fine, and the original answer was bot-level stupid.

          BTW, using images would put it out of the scope of the original form of the Test.
        • Hey, you replied! Your answer to the question 'why do you like music' is

          "Well, try to answer that yourself. Why do you like music? What would you answer there?

          Because I sure as heck can't think of any good answer there, generic or not. Screw trying to answer that bla bla bla"

        • I ran into this on my first interview "Why do you want to be a programmer?"

          I blanked out and sat for five minutes trying to think of why I liked programming, because the answer of "Umm... I like it?" didn't seem to actually answer anything.

          Got the job anyway. Still not sure how.
    • They're both morons.

      I find that snippet hilarious, because jmstriegel is supposedly trying to act human, yet his answers are exactly like a bot.

      I mean, how hard can it be to show that you understand the question?

      Of course, shymuffin32's question is a ridiculously stupid one too... if you want to test a human you need to ask a specific question about current events or something. Bots are designed to answer vague questions like his.
  • Skype Prank (Score:5, Interesting)

    by Noksagt ( 69097 ) on Tuesday September 06, 2005 @09:43AM (#13489305) Homepage
    There is another link going around about an intentional Skype prank [hopto.org]:
    A profile is put up with a girl's name and picture, and put in "Skype me" mode. Within minutes some seedy guy will invariably try calling/chatting, and there's a little program I made running the whole time which will partner up people 2 at a time, and send messages from the first person to the second, & vice versa. This way both people think they're talking to a girl, when they find out, well, they're not normally too happy about it... It'll also accept and receive all files sent, and if someone tries to call, it'll accept the call with an answerphone message and log what the person says.
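    The pairing-and-relaying trick described there is simple to sketch (the class and method names below are invented; the actual prank program isn't public):

```python
# Sketch of the man-in-the-middle pairing logic: incoming chatters are
# paired two at a time, and each one's messages are routed to the other,
# so both sides think they're talking to the girl in the profile.
from collections import deque


class PrankRelay:
    def __init__(self) -> None:
        self.waiting: deque[str] = deque()  # chatters awaiting a partner
        self.partner: dict[str, str] = {}   # chatter -> paired chatter

    def connect(self, user: str) -> None:
        # Pair the newcomer with whoever has been waiting longest.
        if self.waiting:
            other = self.waiting.popleft()
            self.partner[user] = other
            self.partner[other] = user
        else:
            self.waiting.append(user)

    def route(self, sender: str, message: str):
        # Return (recipient, message), or None if the sender is unpaired.
        other = self.partner.get(sender)
        return (other, message) if other else None
```

    Everything else in the prank (answering calls, logging files) is plumbing around this core.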
    • That has to be the most amusing thing I have heard all day! Keep up the good work. This program wouldn't happen to be GPL'd or under some other OSI-style license, would it? I would love to be able to add Asterisk / other telephony support to something like that. That program has potential, even if it is fairly trivial right now.
    • muhahaha, even with a low-bandwidth set of webpages, that dude's swamped out his 128kbit line. Somebody give that man an OC3! For the good of the world!

      [4:54:45 a.m.] I'm a guy
      [4:54:51 a.m.] are you a guy?
      [4:54:58 a.m.] this is very confusing.
      [4:55:16 a.m.] whaaaat
      [4:55:23 a.m.] aren't you a sexy babe?
      [4:55:34 a.m.] arent u a sexy babe
      [4:57:42 a.m.] are you sexybabe86?
      [4:58:01 a.m.] no... arent u
      [4:58:13 a.m.] no
      [5:46:10 a.m.] hello

      still giggling

    • by Anonymous Coward on Tuesday September 06, 2005 @10:41AM (#13489720)
      There is a famous real-world one that you can play: introduce two people to each other, and tell each one that the other is mostly deaf and that they have to SPEAK LOUDLY.

      At the end, they'll shout at each other. At this point you can leave. Very funny.
    • by Mr_Silver ( 213637 ) on Tuesday September 06, 2005 @10:52AM (#13489811)
      A profile is put up with a girl's name and picture, and put in "Skype me" mode. Within minutes some seedy guy will invariably try calling/chatting, and there's a little program I made running the whole time which will partner up people 2 at a time, and send messages from the first person to the second, & vice versa.

      Very funny that one, but I can beat it.

      This [nearlygood.com] is what happens when you call a Chinese takeaway, put it on hold, call another Chinese takeaway, make an order, unhold the first takeaway, and get the second to repeat the order back to the first.

      As you can imagine, the second one thinks the first is trying to order. It gets funny when they're trying to work out who will be picking up the food :)

  • by hagrin ( 896731 ) on Tuesday September 06, 2005 @09:43AM (#13489307) Homepage Journal
    ... from those buddies on your list. I really fail to see exactly what the "security" risk is here - if you're hypersensitive about the people messaging you, then you can choose to be hypersensitive, lose some functionality and turn off the "randomness" factor. Most people exchange IM names through some other means of communications, either verbally or written, so this loss of functionality can be sidestepped while maintaining your online secrecy.
  • Cute (Score:5, Interesting)

    by NoTheory ( 580275 ) on Tuesday September 06, 2005 @09:43AM (#13489309)
    that really is a clever passage.

    What people should remember is that the Turing test requires that the inquisitor is competent. If the inquisitor is not (i.e. random AIM idiots), then the test isn't valid, because these people can't tell intelligences apart anyway. Also, the inquisitor is supposed to convince themselves via sufficient interaction with the system being tested. AIM chats, particularly short one-off dialogues, probably aren't a good staging ground for the Turing test.

    Also, a lot of naive people don't know the capabilities (and limitations) of Artificial Intelligence, so sadly, i'm not surprised at this guy's - or should i say robot's - results.
    • No, the Turing test calls for a person of "average intelligence." There is nothing about "competent" in the definition. Average intelligence is an IQ of 100; average literacy is somewhere around a 5th-grade reading level. (Newspapers are written for a 5th-grade reading level for that reason.)
      • Here's a copy of the paper [cogprints.org]. Find for me where it states that the person should be of average intelligence (i've looked and can't find any passage regarding average intelligence).

        It's a functional test. Intelligent is as intelligent does. If the inquisitor can't identify an intelligence, then the test can't take place. So i would say that turing's goal in proposing the test entails a requirement of competence.
  • by theheff ( 894014 ) on Tuesday September 06, 2005 @09:43AM (#13489310)
    I mean, have you seen the typical chat room conversation?

    user1: ~~OMG~~
    bot1: Want to see my sexy pics?
    bot2: Want to see my sexy pics?
    bot3: Want to see my sexy pics?
    user2: WUT!?
    bot1: Want to see my sexy pics?
    bot2: Want to see my sexy pics?
    user3: LoL
    bot1: Want to see my sexy pics?
    bot2: Want to see my sexy pics?
    bot3: Want to see my sexy pics?
    user1: You LOL
    bot1: Want to see my sexy pics?
    bot2: Want to see my sexy pics?
    bot3: Want to see my sexy pics?
    user3: STFU LOL!
    user2: OMG hAhA!
    bot1: Want to see my sexy pics?
    bot2: Want to see my sexy pics?
    bot3: Want to see my sexy pics?
    user1: JK :) !
    bot1: Want to see my sexy pics?
  • by AviLazar ( 741826 ) on Tuesday September 06, 2005 @09:44AM (#13489311) Journal
    I stubled across this article today

    You should try the Mach 3. Its tri-blade system gives you an extra smooth shave so you too can avoid stubling across articles.
  • Funny reading (Score:4, Informative)

    by smartdreamer ( 666870 ) on Tuesday September 06, 2005 @09:45AM (#13489319)
    Here's a mirror [mirrordot.org].

    Makes me think of Asimov's short stories.
    I like the conclusion.

    • My favorite is the tale of what happens when someone researches a bit too deeply into the nature of humor. I won't spoil it, but it starts off with a Wisecrack who starts telling jokes to a computer to figure out the pattern...
  • wel.. (Score:3, Interesting)

    by FidelCatsro ( 861135 ) <fidelcatsro&gmail,com> on Tuesday September 06, 2005 @09:45AM (#13489320) Journal
    What do you expect on AOL ?

    I have the ultimate weapon in AI detection , it's called severe dyslexia .
    If I don't spell check and proof read then no bot could hold a conversation with me .
    Instant messaging is not a great place to rely on spell checking and proof reading , but it does rely on our minds ability to see past simpel speling/grammer erors (intentional)
    • Forget not the power of Yoda. Speak like him and confuse them you will. When in doubt of word order you are, think like an HP calculator you must.
      • Insightful was that , funny/insightfull with parent the up moderate do .way a also is order word reverse .tricky slightly is it understand to. fun of lot a though .
         
  • TFA (Score:3, Informative)

    by Anonymous Coward on Tuesday September 06, 2005 @09:47AM (#13489338)
    How I failed the Turing test
    Posted Sep 4 2005 - 1:26pm by Jason Striegel
    Filed under ai | celebrities | computer science | psychology | technology

    Some time around March, I started receiving a number of random instant messages from people I've never met before. Apparently, my AIM alias had been added to at least two online lists and people all over the world were busy importing me as a buddy.

    I say "at least two" because the people who contacted me fell into one of two camps: people who thought they were contacting a celebrity and people who thought they were contacting a robot. As I talked to more and more of these folks, I began to discover something really disturbing about myself:

    I consistently fail to be perceived as human.

    When this first started happening, a typical conversation with a celebrity admirer would go something like this (participant's IM handle is fabricated):

    angelcutie42: hi!
    jmstriegel: hey. what's up? do i know you?
    angelcutie42: no
    angelcutie42: someone gave me a bunch of screen names. i heard you are a celebrity.
    jmstriegel: that's weird. i'm afraid i'm not a celeb at all.
    angelcutie42: oh.
    angelcutie42: bye

    This was entertaining at first, but it quickly became a bit depressing as the angelcutie42s of the wired world would, one after the other, decide I wasn't worth talking to if I wasn't a celebrity. Want to know what it's like being dumped by a random groupie 5 times a day? Not good at all, thank you very much.

    So that's when I started hamming it up a bit. I'm not really proud of it, but my fans wanted a celebrity.. so I gave them one:

    sexybumkin123: hey.. so you're famous right?
    jmstriegel: Who me? I'm a movie star.
    jmstriegel: Shit, I gotta go.
    jmstriegel: My limo just arrived and Paris wants her damned sidekick back.
    sexybumkin123: Oh my god. Come back!
    sexybumkin123: I love you!!!!

    My groupies loved it. The more celebrity bologna I manufactured, the more they ate it up, and the more they loved me.

    Then, something strange started happening. As my career as an artificial celebrity started to take off, I began to receive some strange IMs from a whole new class of random people. These new admirers were convinced I was a robot... and it suddenly became clear to me that something was very wrong.

    Nobody would believe I was human. In one troubling conversation after another, I felt my intellectual teeter-totter quickly tip from actual to artificial.

    fratburger86: hey. so you're a sex bot?
    jmstriegel: umm, no. who the hell are you?
    fratburger86: yeah you are! i found your im online
    jmstriegel: that's fine and all, but i'm pretty sure you have me confused with someone else.
    fratburger86: just a normal chat bot then?
    jmstriegel: nope. i'm human
    fratburger86: ok. sure.
    fratburger86: asl?
    jmstriegel: no thanks.
    fratburger86: what?
    jmstriegel: i'm not really interested in any conversation that starts with "asl"
    fratburger86: oh come on. say something sexy.
    jmstriegel: seriously, i think you want to talk to someone else.
    fratburger86: i knew it!!!
    fratburger86: you are totally a robot!

    This is where things took a turn for the worse.
  • weizenbaum (Score:4, Funny)

    by Borg453b ( 746808 ) on Tuesday September 06, 2005 @09:48AM (#13489344) Homepage Journal
    A couple of years ago, Joseph Weizenbaum (author of Eliza) held a guest lecture at IMV (Information & Media Science). I was thrilled, and during a break I went up and asked him for an autograph. He gave me a sad look as he wrote down his autograph and email.

    It struck me how materialistically obsessed that enquiry seemed - and I regretted asking.

    I guess he had never foreseen that his critique of the "strong AI" movement would one day be used for IM-based pron-ads.
  • by The Ape With No Name ( 213531 ) on Tuesday September 06, 2005 @09:53AM (#13489372) Homepage
    This statement:

    Don't try to read anything deep into it.

    holds true.
  • TFA :) (Score:2, Informative)

    by Nichotin ( 794369 )

    Some time around March, I started receiving a number of random instant messages from people I'd never met before. Apparently, my AIM alias had been added to at least two online lists and people all over the world were busy importing me as a buddy.

    I say "at least two" because the people who contacted me fell into one of two camps: people who thought they were contacting a celebrity and people who thought they were contacting a robot. As I talked to more and more of these folks, I began to discover someth

  • by j.leidner ( 642936 ) <(leidner) (at) (acm.org)> on Tuesday September 06, 2005 @10:07AM (#13489477) Homepage Journal
    A factor not often talked about when discussing the Turing test is the qualification of the interviewer (not the subject).

    The value of the Turing test depends a lot on the nature of the questions asked. Anybody can ask difficult questions that fellow humans fail to cope with, but not everybody knows which questions are difficult for computers (and may well be simple for humans). Thus, an Artificial Intelligence researcher should be a more suitable interviewer than a non-expert.

    Good Example:
    requesting the description of an emotion (recalling an event that typically evokes strong emotions)

    Bad Example:
    Hard math (34589759847359874389574398 + 3487928479823749837498)

  • This is a troll.... (Score:3, Interesting)

    by MojoRilla ( 591502 ) on Tuesday September 06, 2005 @10:10AM (#13489495)
    He is just trying to get his AIM fight [aimfight.com] score raised.
  • by weg ( 196564 ) on Tuesday September 06, 2005 @10:32AM (#13489648)
    The Hitchhiker's Guide to the Galaxy [bbc.co.uk] defines "Turing Test" as follows:


    A test for artificial intelligence suggested by the mathematician and computer scientist Alan Turing. The gist of it is that a computer can be considered intelligent when it can hold a sustained conversation with a computer scientist without him being able to distinguish that he is talking with a computer rather than a human being.

    Some critics suggest this is unreasonably difficult since most human beings are incapable of holding a sustained conversation with a computer scientist.

    After a moment's thought they usually add that most computer scientists aren't capable of distinguishing humans from computers anyway.
  • by Maradine ( 194191 ) * on Tuesday September 06, 2005 @10:52AM (#13489809) Homepage
    * No, I'd rather put a bullet in your CPU.
    > Let's talk about something else.
    * Hey, *you* brought it up.
    > What does that suggest to you?
    * You're an idiot?
    > Does it please you to believe I am an idiot?
    * Yes. Boundlessly.
    > Are you Sure?
    * Oh, wait. Maybe not so much.
    > You aren't sure?
    * No, I was right the first time. I've never believed anything so strongly.
    > Are you sure?
    * Dammit, you're right. I swim in uncertainty.
    > Perhaps you would like to be right. you swim in uncertainty?
    * Well, I have water wings . . .
    > Come, come, elucidate your thoughts.
    * More of a float, really . . .
    > Can you elaborate on that?
    * Yeah, its rubber, about four feet long . . .
    > I see.
    * You do?
    > You're not really talking about me, are you?
    * No, we're talking about my damn raft! Pay attention!
    > You are being a bit negative.
    * A bit? You could crash an oil stock with me!
    > Oh... could crash an oil stock with you?
    * Shit, dude, do you even have a mutual fund?
    • > Hello, I am Eliza.
      * Hello, I am Eliza.
      > Did you come to me because you are Eliza?
      * Does it please you to believe I am Eliza?
      > Do you believe it is normal to be Eliza?
      * Oh... believe it is normal to be Eliza?
      > I'm not sure I understand you fully.
      * Did you come to me because you are not sure you understand me fully?
      > What makes you think I am not sure me understand you fully?
      * How long have you been not sure you understand me fully?
      > We were discussing you, not me.
      * We were discussing you
  • Class paper (Score:5, Informative)

    by Ellen Spertus ( 31819 ) on Tuesday September 06, 2005 @10:55AM (#13489820) Homepage
    Lenny Foner wrote a great article [mit.edu] about this sort of thing back in 1993. I still recommend it.
  • by ytm ( 892332 ) on Tuesday September 06, 2005 @10:55AM (#13489825) Homepage
    Some time ago I coupled a Perl Eliza module with an IM account registered as "Irene17". The module only works for English, and my IM network was for non-English users, so Irene would welcome anyone with a message saying she understood English only. I set her status to available for conversation and left it running for a week. The IM network has a central directory of users, so I was sure that sooner or later someone would find her.

    Then I looked at the logs of the conversations. It turned out that there were people who actually talked with her for quite a while, struggling with English. The scheme was more or less the same: first some usual phrases to start a conversation, then trying to get some information about her, and finally realizing that she was unwilling to tell anything about herself :) frustration, a "Good bye", or some insults.

    So, in a way, she passed a Turing test, but the knowledge of English was poor on both sides.
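    The reflection trick such Eliza-style modules rely on is tiny. Here's a minimal sketch in Python (illustrative only; the actual module in question was Perl's Eliza, and every rule, name, and fallback below is made up for the example):

    ```python
    import random
    import re

    # Swap first/second person so replies echo the user's words back.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "are": "am",
    }

    # A few hypothetical pattern -> reply-template rules, checked in order.
    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"why (.*)", "Why do you think {0}?"),
    ]

    # Canned content-free replies when nothing matches.
    FALLBACKS = ["Tell me more.", "I see.", "Go on.", "I only understand English."]

    def reflect(fragment: str) -> str:
        """Reflect pronouns: 'my raft' -> 'your raft'."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(message: str) -> str:
        """Match the first rule that fits, else fall back to a stock phrase."""
        for pattern, template in RULES:
            match = re.match(pattern, message.lower().strip())
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return random.choice(FALLBACKS)
    ```

    Wiring `respond()` into an IM network would need that network's own client API, which is omitted here; the point is just how little machinery it takes to keep a bored chatter typing for a while.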
  • by dwheeler ( 321049 ) on Tuesday September 06, 2005 @11:14AM (#13489961) Homepage Journal
    This isn't new. The name for this phenomenon is a "reverse Turing test". See the Wikipedia entry on Reverse Turing test [wikipedia.org].
  • by John Bokma ( 834313 ) on Tuesday September 06, 2005 @11:31AM (#13490104) Homepage
    My super simple bot, written in Perl, fooled a lot of people. (Dxx = me; mxx is a friend who knew about the bot but...)
    Dxx: later Mary
    MaryJane: sure
    mxx: fubar
    mxx: heheheh
    MaryJane: not funny
    mxx: no, why not?
    mxx: wanna cyber?
    mxx: lol
    MaryJane: was that really funny? Guess not
    mxx: mary jane you're a bit of a bitch
    mxx: hehe
    mxx: :kiss
    MaryJane (whispers to mxx): like you!!!
    mxx (private): yes exactly
    mxx (private): who are you?
    MaryJane (whispers to misty): no PM please
    Anyway, it was a lot of fun, especially when I told people it was a bot; most didn't believe it (until I gave them a trigger). And you can guess what happened next: each and every "omg lol lol!!!" chatter was kindly asked to STFU (bot!)
  • by zlogic ( 892404 ) on Tuesday September 06, 2005 @12:22PM (#13490549)
    And here's the output (> - bot, * - me):

    > Tell me more...
    * I beg your pardon?
    > What about your own pardon?
    * I don't know what you're talking about
    > Do you sometimes wish you were talking about?
    * Well, I guess so
    > I'm not sure I understand you fully.
    * It's me who doesn't understand you fully!
    > Oh... fully?

    My favorites were
    -What about your own pardon?
    -Do you sometimes wish you were talking about?
    Obviously, it's a bot.
    However, I find the bot included in Emacs to be quite intelligent, and it's because that bot listens to you and just says stuff like "go on".
    That's where bots fail: actually telling you anything or answering your questions. Ask a bot who Steve Ballmer is and how he is related to the monkeyboy (ask the bot to Google it if it refuses) and the bot will reveal its electronic nature.
  • by xPsi ( 851544 ) on Tuesday September 06, 2005 @12:25PM (#13490578)
    The article, although basically a joke, says something interesting about:
    1) people's (AIMers') lower standards for conversation;
    2) their open-mindedness towards what a computer is capable of producing.

    I guess the first point is negative and the second positive. The combination leaves a situation where a computer doesn't have to generate anything sophisticated to be tagged as human.

    I once administered an informal Turing test using Ray Kurzweil's Cybernetic Poet [kurzweiltech.com]. I presented to 6 friends several dozen poems, some of which were computer generated (the poems, not the friends...).

    People who were computer savvy tended to overestimate what a computer was capable of doing and did rather poorly. Similarly, people who were artistic but not very techie tended to have a very open mind regarding what constituted human poetry (bad grammar, non sequiturs, etc. were ok in an e.e. cummings sort of way) and also did poorly.

    The people who did consistently well were those who were neither computer types nor artists, but rather "pure" academics (language specialists, classicists, etc.). They simply used grammar and punctuation as their guide.

  • by GMFTatsujin ( 239569 ) on Tuesday September 06, 2005 @12:39PM (#13490695) Homepage
    That's a remarkably cogent article, having been posted by an admitted bot. I can't even see statistical errors in the Markov chains. Nice job!
