Turing Test 2: A Sense of Humor

mhackarbie writes "Salon has a great story, Artificial Stupidity, about the Loebner Prize, a yearly contest that for over 10 years now has offered a $100,000 prize to anyone who can create a program to pass the Turing Test. The best part is the resulting fiasco that develops between the eccentric philanthropist who started the contest and extremely annoyed AI Researchers such as Marvin Minsky."
  • well ... (Score:3, Funny)

    by Meeble ( 633260 ) on Wednesday February 26, 2003 @09:28AM (#5385870) Journal
    All Hugh Loebner wanted to do was become world famous, eliminate all human toil, and get laid a lot.

    does this mean we're all considered entrepreneurs?
  • and extremely annoyed AI Researchers such as Marvin Minsky

    This person is commonly known as Marvin The Martian.
  • by Sheriff Fatman ( 602092 ) on Wednesday February 26, 2003 @09:32AM (#5385891) Homepage

    I don't think bots are the problem... I've had several online conversations which I'd assumed were chat-bots but turned out to be real people. I guess when Turing designed his test, he probably didn't anticipate the massive advances in human stupidity that we've witnessed in the last few decades :)

    • Is it because of massive advances in human stupidity that you say you don't think bots are the problem?
    • I've met (and work with) people who would not be able to conceive of a computer that chats with you. They would assume the weird answers were from a weird person and wander off in the same manner as they usually would... I guess this Turing Test proves that the "intelligence" of the computer can only be judged by reviewing the intelligence of the User???
      • Quoth JaxGator75:
        I guess this Turing Test proves that the "intelligence" of the computer can only be judged by reviewing the intelligence of the User?
        From the article, page 2...
        If you were conversing with an entity and you could not tell whether that entity was human or merely human-made, then whatever you were conversing with was at least as intelligent as you were.
        So the person who assumes that the weird answers are from a weird person can, by that rule, be assumed to be no more intelligent than a couple hundred lines of ELIZA code...
      • They would assume the weird answers were from a weird person and wander off in the same manner as they usually would...

        Well, you've spotted the problem. What we need to do is get a team together to create a Turing Tester program. Once we have a program that can reasonably determine the difference between a human and a simulation, then we can run objective, repeatable Turing tests! It goes without saying that this will be an ongoing project. As simulations are tweaked to try to fool the Turing Tester program, upgrades will have to be made to screen them out.

        With any luck, and a good Turing Tester program, we shall see grand advances in the art of AI over the many years that we are employed.

    • The program enters the Loebner competition and it tries to disqualify the jury.

      - hardwired dialogue. Totally unexpected. He must give us the prize, just for the idea.

      contest entry:

      #include <stdio.h>

      int main(void)
      {
          printf("What are the exact rules of this contest?\n");
          printf("What is the formal definition of the Turing test?\n");
          printf("And you actually think i am DUMB?\n");

          return 0;
      }

    • by Vryl ( 31994 ) on Wednesday February 26, 2003 @12:02PM (#5386933) Journal
      Your post is marked "Score:5, Funny", but I would mark it "Insightful".

      That testers can believe that humans are computers is why it will never be a 'test'. Turing himself only ever called it the 'Imitation Game'.

      If there is no way to tell humans from computers, how can you ever tell the computers from the humans?

      We like the 'Turing test' not because it is scientific, but because, like intelligence itself, it is ill-defined and imperfect.

      I love the Loebner quote: "My reaction to intelligence is the same as my reaction to pornography, I can't define it but I like it when I see it."
  • by Rastan_B2 ( 615088 ) on Wednesday February 26, 2003 @09:35AM (#5385905)
    Every person with whom I spoke about it said that last year's contest was an utter fiasco, with unclear rules, inconsistent judging, arbitrary fiats by an opaque prize committee, petulant prima donnas, and last-minute changes of venue that prevented most entrants from even discovering where the contest was taking place until after it had happened.

    Was it held in Florida as well ? Or is it just a massive coincidence ?
  • The best part of the article, IMHO, is the line that says: "for the past 25 years, AI specialists have been saying that all AI problems were going to be solved within 10 years" (or something like that).

    This strikes me as true: for years and years and years, researchers have been promising AI was just around the corner... And what do we have right now? Nothing!

    I want a Turing-class AI or my money back!! =)
    • This strikes me as true: for years and years and years, researchers have been promising AI was just around the corner... And what do we have right now? Nothing!

      I'm not sure if that is true for the last 20 years anymore.

      Now I have the sudden urge to sponsor a prize for the best antigravity device! :-)

      And in contrast to what the AI researchers did, I doubt that physicists will show up to participate in this event.

      Regards,
      Marc

    • Fusion Power (Score:3, Interesting)

      by krysith ( 648105 )
      I used to work in nuclear fusion research. They've been saying it is twenty years away for almost 50 years now. The joke in the industry is, "Fusion power is the energy of the future, and always will be!". (actually, I am fairly positive on fusion power, but I think that spending the vast majority of research funds on a few large experiments is counterproductive)
    • by Zog The Undeniable ( 632031 ) on Wednesday February 26, 2003 @10:39AM (#5386267)
      for years and years and years, researchers have been promising AI was just around the corner... And what do we have right now? Nothing!

      Get with the program, dude: we had AI even in the UK last year. I didn't go and see it though, because it starred that irritating kid from "The Sixth Sense".

    • by cgenman ( 325138 ) on Wednesday February 26, 2003 @10:52AM (#5386342) Homepage
      This strikes me as true: for years and years and years, researchers have been promising AI was just around the corner... And what do we have right now? Nothing!

      Well, nothing is a very relative term. We now have AI capable of counting the number of cars on a given street given a photograph of a region, and can automatically follow people / vehicles / animals as they travel around and through objects. OCR is accurate enough to be implemented professionally, and voice recognition is up to 95%. None of these were possible 25 years ago, and not just because of a lack of hardware.

      While full AI is still a while away, the first major stumbling block, pattern recognition, is well on its way to being solved.

      The AI in Quake 3 is much better than the AI in Pong.

      -C
    • And what do we have right now?

      Better corners.
    • by sbaker ( 47485 ) on Wednesday February 26, 2003 @12:24PM (#5387136) Homepage
      Part of the problem is that things that were once considered part of AI have moved out and become mainstream technology. Voice recognition, Expert Systems, Fuzzy Logic, Neural Nets, Chess playing computers...all of these were once considered to be unsolved AI problems but since they are now in common use, we don't consider them a part of AI anymore.

      You can find plenty of twenty to thirty year old textbooks that tell you that playing chess at grand master level would be a sign of computer intelligence - now we know that all it takes are some clever heuristics and a lot of CPU power.

      As soon as computers can pass the Turing Test, it'll be considered laughable that anyone ever thought it required *intelligence* to chat with a human. In a sense, this has already happened. Quite a few people were convinced by Eliza - but you can tell from just looking at the code that it's not intelligent.

      The same thing is happening with animals. We used to define humans as the only tool-using animals - then they found birds breaking open clamshells by dropping rocks on them. The definition changed to humans as the only tool *making* animals...then they found chimpanzees who strip the leaves from twigs before they poke them into anthills. So then it was 'self recognition' - that also failed with dolphins, who can recognise themselves in a mirror. Now it's some other thing. Animals will never be labelled "intelligent" because the definition of intelligence is that thing that humans have but animals do not.

      I predict that we'll never have AI. That isn't a failure of the work - it's in the nature of our definition of Intelligence as "that thing that humans have that animals and machines don't have".
      • I think we are going to have AI when we realize that it is not only the presence of a clever algorithm that makes up AI, it is also the motivation. In other words, a human being is motivated to develop intelligence in order to survive, something that is not required from a computer.

        Another big difference is that modern computers are much less powerful than the brain: the human brain's memory is equivalent to many million petabytes of memory, and the searching mechanism of the brain is straightforward pattern matching that works like a neural network (and can identify and discard many images in parallel). Our poor computers have only some terabytes of memory, and they are much slower at reading that memory in an efficient way.

        Animals have the best cameras for eyes and the best microphones for ears, all made by mother nature!!! And these inputs are designed to stimulate and filter responses in a sophisticated non-digital way, rather than simply accumulate the data and convert them to binary information.

        With all these big differences, please don't expect AI to surface in the near future. It could surface, though, if we recognized the differences between human and machine and started building machines with human-like attributes (for example neural-network memory, motives to learn and expand their knowledge base, cameras and ears, feelings).

      • I predict that we'll never have AI. That isn't a failure of the work - it's in the nature of our definition of Intelligence as "that thing that humans have that animals and machines don't have".

        In general I agree with the points you make -- especially that the problem with developing A.I. is that it is a moving target. As you point out, lots of things that used to be holy grails of A.I. have been achieved and dismissed. Remember the article on slashdot awhile back about the walking robot that "figured out" how to escape from the lab? Is that A.I.? Probably not, but it does make you stop and go "Wow, that's kind of neat!"

        What I don't agree with in your post is how you seem to reserve the word "intelligence" for human beings. I really don't think most people define intelligence as "that thing that humans have but animals do not." I think we should consider the goal of A.I. as not trying to copy or better a human, but just successfully achieving some form of independent, creative thought, probably on the level of a mammal. You use the example of chimps utilizing twigs to collect ants for eating. I think if a computer program could demonstrate tool-making and tool-using capabilities like that, it should qualify as A.I. Getting a computer to act indistinguishably from a human is a pretty tough goal, but if it can demonstrate characteristics of animals with reasonable thought processes (as opposed to brute instinct), I think it would generally be hailed as a milestone in the quest for true A.I.

        GMD

  • hmm, well (Score:5, Insightful)

    by lingqi ( 577227 ) on Wednesday February 26, 2003 @09:41AM (#5385925) Journal
    some people [arizona.edu] and their followers [innerx.net] do not believe that machines will EVER achieve human level intelligence.

    (overall a good read. certainly a buttload of speculation but no more (actually probably less) than found in Wolfram's book)

    On the other hand, I see nothing wrong with offering a prize for what he believes in. Heck, we have the Templeton Prize out there (worth more than the Nobel, no less) for the best achievement in religion (Christianity specifically, methinks), so what's wrong with him offering 100G of his own money? We also have the X-BOX cracking contest - who is willing to bet that believing you can crack a 2048-bit key in a few months is MUCH dumber than shooting for some "not everybody agrees it's AI" AI?

    • Re:hmm, well (Score:2, Insightful)

      Bleh, them Arizona guys be kooks:
      http://www.consciousness.arizona.edu/hameroff/New/Time_Flies/Time_Flies.htm

      I could pull down their experiment in a fraction of a second. But heck, I'd have 4 seconds to pull it down, by the looks of it.
      (Hint - when a stimulus is detected in advance of the emotive image being shown, _change_ the image to a random one. (Changing to a non-emotive one would give the kooks ammo for a new claim that you predict the opposite, so keeping it random guarantees no bias either way))

      More evidence of US universities going to pot.
      Roll out Puthoff, Targ, Swann, and the SRI, that's what I say.

      YAW.
  • The Meta Turing Test (Score:5, Interesting)

    by ites ( 600337 ) on Wednesday February 26, 2003 @09:41AM (#5385928) Journal
    Any specified Turing Test can be defeated in much the same way as a lock-pick can defeat any specified lock, so perhaps we should move up one level of abstraction. I propose the "Meta Turing Test", which is as follows: specifying the conditions of the Turing Test (ability to lie, sense of humour, etc.) should allow a true human to design an automaton that fools the test, while a computer will not be able to do so.
    Alternatively, why not just abandon the myth that human intelligence is some kind of mystical cloud, and see it for what it is, namely a set of thinking organs each designed (or adapted, if you prefer the 'evolution is a passive process' concept) to solve specific problems, in the same way as my hand is adapted to handling objects. Then, test each of these tools carefully. Anything - computer or human - that passes the tests can be defined as 'human'. Many beings that we today consider human will probably fail. Borg borg.
    • by Anonymous Coward
      I think that the prize should go to the robot that can differentiate between a human and another bot
      • The prize goes to the robot that can convince a human to do something not otherwise in its best interest.

        "Go call your girlfriend dirty names."
        "No."
        "I'll give you a candy bar..."
        "Um... No."
        "Jennifer Anniston is right here. She'd think it was really funny."
        "Um..."
        "Hehe. She's laughing already."
        "Iduno..."

        The real test, in my book, isn't when a robot can beat a human 50% of the time. I mean, that would be interesting, certainly. That would indicate that AI can properly imitate morons. The scary thing is that eventually, if AI could model the tester's intuitions, the AI might eventually win 75%... 80%... 90% of the time. We could build something that seems more human than a human. Rob Zombie would piss his pants.
    • Wouldn't a better Meta Turing Test be to not reveal the conditions of the test at all?

    • Any specified Turing Test can be defeated in much the same way as a lock-pick can defeat any specified lock

      No it can't. Why then has no-one won the gold Loebner Prize yet?

      The specification can be extremely simple. Here's mine: Take a panel of 10 computer scientists, a human volunteer and 11 computers. The volunteer and the AI software must both attempt to convince the panel that they're human, in IRC chat or something.

      Most AI programs would be exposed as frauds in about 30 seconds or less.

      That's why the Turing Test is so good. It's hard - because it's general, not specific. If you think it's specific to a certain task I think you have the wrong idea about what the Turing Test is.

  • by sfled ( 231432 ) <sfled AT yahoo DOT com> on Wednesday February 26, 2003 @09:42AM (#5385929) Journal
    Why are Minsky and Shiber so upset that a sex-addicted pothead is sponsoring an A.I. prize, when the Father of Dynamite [northpark.edu] sponsors a Peace prize?

    Loebner can do whatever he wants with his dough. No one is being coerced into entering his contest.
    • Alfred Nobel did indeed invent dynamite, which today has far more civil uses (building tunnels, triggering pre-emptive avalanches, etc.) than military ones.
      But the Nobel Peace Prize is only one of many prizes.
    • What's wrong with being a sex-addicted pothead?
    • Loebner can do whatever he wants with his dough. No one is being coerced into entering his contest.

      Nah, the thing that set Marvin off was the pompous set of rules for the prize.

      17. The names "Loebner Prize" and "Loebner Prize Competition" may be used by contestants in advertising only by advance written permission of the Cambridge Center, and their use may be subject to applicable licensing fees. Advertising is subject to approval by representatives of the Loebner Prize Competition. Improper or misleading advertising may result in revocation of the prize and/or other actions.

      Basically Loebner was using his prize for cheap self promotion.

      What is amazing is that Salon can recycle an eight year old Usenet flame war I watched firsthand (and posted in some of the threads even) as news.

      As usenet flamewars go it wasn't even that good of a flame war.

      Incidentally, if you think the Loebner and Nobel prizes are a farce, how about MIT accepting prize money from 'inventor' Lemelson, whose principal talent was bogus patent claims? Fortunately Lemelson is now stone cold dead so we can speak the truth about him.

      • by manyoso ( 260664 ) on Wednesday February 26, 2003 @01:05PM (#5387507) Homepage
        That is not fair. Minsky might have perceived that this was the case, but it doesn't follow that it was. Loebner gave a perfectly good explanation for the clause (see below), and it seems pretty hypocritical that Minsky fumes that Loebner uses his name as co-sponsor in advertising ;)

        From: loebner@ACM.ORG (Hugh Loebner)
        Newsgroups: comp.ai
        Subject: Minsky Co-sponsor of Loebner Prize!
        Date: 8 Mar 1995 16:48:36 GMT
        Organization: ACM Network Services
        Lines: 63
        Message-ID:

        In Message ID Minsky writes:

        >In article loebner@ACM.ORG writes ....
        >>17.The names "Loebner Prize" and "Loebner Prize Competition" may be used by
        >>contestants in advertising only by advance written permissionof the Cambridge
        >>Center, and their use may be subjecttoapplicableicensingfees. Advertising is
        >>subjecttoapprovalbyrepresentativesoftheLoebner Prize Competition.Improper or
        >>misleading advertising may result in revocationoftheprizeand/or other actions.

        >[Some words concatenated to enforce the 80-character line length
        >convention.]

        >I do hope that someone will volunteer to violate this proscription so
        >that Mr. Loebner will indeed revoke his stupid prize, save himself
        >some money, and spare us the horror of this obnoxious and unproductive
        >annual publicity campaign.

        >In fact, I hereby offer the $100.00 Minsky prize to the first person
        >who gets Loebner to do this. I will explain the details of the rules
        >for the new prize as soon as it is awarded, except that, in the
        >meantime, anyone is free to use the name "Minsky Loebner Prize
        >Revocation Prize" in any advertising they like, without any licensing
        >fee.

        1. Marvin Minsky will pay $100.00 to anyone who gets me to
        "revoke" the "stupid" Loebner Prize.

        2. "Revoke" the prize means "discontinue" the prize.

        3. After the Grand Prize is won, the contest will be
        discontinued.

        4. The Grand Prize winner will "get" me to discontinue the
        Prize.

        5. The Grand Prize winner will satisfy The Minsky Prize criterion.

        6. Minsky will be morally obligated to pay the Grand Prize
        Winner $100.00 for getting me to discontinue the contest.

        7. Minsky is an honorable man.

        8. Minsky will pay the Grand Prize Winner $100.00

        9. Def: "Co-sponsor": Anyone who contributes or promises to
        contribute a monetary prize to the Grand Prize winner .

        10. Marvin Minskey is a co-sponsor of the 1995 Loebner Prize
        Contest.
        -------------
        BTW

        The language that Minsky finds so offensive was added
        by the Prize Committee because of a possible mis-representation
        regarding the contest made by an annual prize winner.

        No fees have been requested of any winner, nor do I anticipate
        of any fees ever being requested. Rule 17 merely protects the
        Loebner Prize from misrepresentation in advertising.


  • Bloody-Mindedness (Score:4, Interesting)

    by handy_vandal ( 606174 ) on Wednesday February 26, 2003 @09:46AM (#5385955) Homepage Journal
    "... extremely annoyed AI Researchers ..."

    Perhaps "extremely annoyed" is what distinguishes human intelligence from machine intelligence?

    In John Brunner's non-novel Stand on Zanzibar [sfreviews.com], cranky sociologist Chad Mulligan declares that supercomputer Shalmaneser is now intelligent because Shalmaneser has displayed the quality of "bloody-mindedness". Not the same as "annoyance", of course, but in the same emotional realm ....
    • Absolutely. Shalmaneser's absolute refusal to accept the data on Beninia is the best "waking up of the computer intelligence" moment in sf. (And of course the command that forces Shalmaneser to accept whatever data he's given without running it through his private litmus test is pretty funny, too.)

      The best overall sf AI story, though, has to be Golem XIV. http://www.cyberiad.info/english/dziela/golem/golempl.htm

      • I'm not convinced that Shalmaneser's "I won't accept the data" moment actually defines his awakening as an intelligent being. Rather, it defines the first evidence available to people of Shalamaneser's wakeup. Shal may have been intelligent sooner, without people recognizing his awareness.

        I think the funniest moment in the book is at the end, when Shal thinks the same thing as drug-addled Bennie Noakes: "Christ, what an imagination I've got!"
    • Brunner makes another good point, in his novel "A Maze of Stars".

      What do we want from AI? Two contradictory qualities:

      (1) Independence of thought (not pre-programmed solutions)
      (2) Obedience to our will

      And what do we call a being which has independence of thought, yet obeys our will?

      A slave.
      • Except that a large number (though probably not a large percentage) of people don't expect that an AI that wakes up will remain obedient to our will. And also consider that it would be quite dangerous to attempt to force that condition.

        Check out the concept of "Friendly AI". Quite a different proposition than "slave".

  • Missing the point (Score:4, Interesting)

    by Anonymous Coward on Wednesday February 26, 2003 @09:48AM (#5385957)
    I think that people who focus on the Turing test are missing the point: this isn't really AI, and it probably doesn't have much of a use outside advertising via IRC/personal messaging etc.

    The real interesting areas of research in AI are for example: in dye-master processes, where AI replaces a highly skilled human, or automating the driving of cars. These are all AI and, IMHO, much more impressive than glorified Eliza, Turing test stuff...
    • I disagree: these bots are made to parse language and make sense out of it. They can be (if people think outside the realm of IRC bots) important research toward developing a conversational interface to a computer or robot.
    • by Rocketboy ( 32971 )
      I have the distinct feeling that "worthy" AI objectives are defined by the AI community as "those things we think we can do reasonably well at the moment." In my opinion, the AI community disparages Turing Test-like objectives because they've been unsuccessful at achieving them. To me that makes AI less like science and more like selling Florida time-share condos. Kinda tough on high-profile PhD's, but what the hell: I don't actually know any of them anyway.
      • Bots consistently pass the turing test. Real humans consistently fail it. It's disparaged because it's not much of a test, and it has no real scientific value.
      • by King Babar ( 19862 ) on Wednesday February 26, 2003 @11:45AM (#5386790) Homepage
        I have the distinct feeling that "worthy" AI objectives are defined by the AI community as "those things we think we can do reasonably well at the moment."

        Not hardly. As it turns out, one of the more frustrating aspects of AI is that once some particular computation that would appear to be correlated with intelligence can be performed, then it invariably doesn't count as AI anymore. So there are lots of practical systems out there today that can prove theorems, do symbolic algebra, play chess better than 99.999% of all people, a whole bunch of stuff. But hardly any of this strikes us as AI anymore. On the other hand, there are lots of horribly difficult problems out there whose solutions we really can't expect to get at within 10 years, and those are all "good" AI problems. Now, one thing that makes them good problems is that we know they contain many different thesis-sized projects that correspond to sub-goals for the "real" problem, and because it is possible that knocking off some of these subgoals could yield some real insights.

        Now the interesting thing to notice here is that Turing was a *very* smart guy, and any program that successfully passes the strong version of the Turing Test has almost by definition solved every hard problem that confronts AI, and all of the subproblems that compose those problems, and... It's a truly gargantuan task, and one where even your most advanced programs are almost guaranteed to look really bad in competition.

        Having said that, I do still think there is some point in holding contests like the Loebner, not for what they will tell us about how fast AI is progressing, but because the programmers who compete at this point really are trying to scam the system and "get away with" producing a program that is NOT intelligent but that might LOOK like it is intelligent. Understanding how clever these deceptions can be, and why we fall for them, is itself an interesting by-product of the competition. So the importance of ELIZA in the end was not that it was a great piece of code or introduced techniques that we could build on directly, but that it taught us a *lot* about people's implicit assumptions about a conversational partner, and how you could generate conversational situations that could finesse the hard stuff. So people don't go out to talk to ELIZA with the goal of determining that it is just a program; they don't go looking for the disconfirming evidence. That's a pretty key point in itself.

        • I think that you are missing one basic point. Most "human intelligence" is also "trying to scam the system". Not all, but most. The remainder is the part that coordinates the scams, and figures out which one to use. And the tiny part that tries to figure out new scams.

          So this contest may seem silly, but it's helping put together the pieces out of which a real intelligence can be built.

          Now the hard part: "How do you design it with motives that will work in a world with lots of people, and will also allow the people to continue to exist?"

          That's a hard part that had better be answered by shortly after the point at which it can figure out what a person is.

      • by jcast ( 461910 )

        In my opinion, the AI community disparages Turing Test-like objectives because they've been unsuccessful at achieving them.

        That's not an opinion; that's an incorrect factual statement. As for why it's incorrect: the Turing Test implicitly defines intelligent as `indiscernible from a human'. By Leibniz's principle of the identity of indiscernibles, this means `a human'. So, a computer can never achieve the Turing Test's definition of `intelligence'. Of course, the AI community believes computers can be intelligent, so they have to reject the Turing Test, in much the same way that the practitioners of any field have to reject standards that implicitly outlaw their field. To give an analogy, requiring AIs to hold up under Turing Test conditions would be like requiring theories of evolution to satisfy hard-core Bible-thumpers. Scientists (quite rightly) don't accept those conditions, but no one says that ``makes biology less like science and more like selling Florida time-share condos''.
  • by Rik Sweeney ( 471717 ) on Wednesday February 26, 2003 @09:49AM (#5385963) Homepage
    Let's face it, if the designer of the program doesn't have a decent sense of humour then his program is likely to fail. I take this opportunity to remove a fair chunk of the contestants by posting "The world's funniest joke". If you don't laugh, then don't enter:

    Two hunters are out in the woods when one of them collapses. He doesn't seem to be breathing and his eyes are glazed. The other guy takes out his phone and calls the emergency services.

    He gasps: "My friend is dead! What can I do?" The operator says: "Calm down, I can help. First, let's make sure he's dead." There is a silence, then a gunshot is heard. Back on the phone, the guy says: "OK, now what?"
  • by mccalli ( 323026 ) on Wednesday February 26, 2003 @09:50AM (#5385964) Homepage
    Computers passed that test years ago. I mean, who can forget the classic:
    keyboard not found, press F1 to continue

    Cheers,
    Ian

    • or

      god help us

      god is not currently logged on.

    • I saw its updated cousin earlier this week when I had to go on-site at a client's server room to see why a rebooted firewall hadn't come back online.

      The error message from POST? "A keyboard error was detected. Use the arrow keys to select your choice of actions, then press ENTER."

      I was more than a little amused. That stupid message is now a full-blown curses-style widget. Ahh, how far we've come.

  • Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent.

    Turing's 'imitation game' is now usually called 'the Turing test' for intelligence.

    Hmmm. I'm pretty sure that there already are computers that would seem more intelligent than some of the people I've talked to while playing CS.

  • by __aahlyu4518 ( 74832 ) on Wednesday February 26, 2003 @09:56AM (#5385987)
    The program that alters the test to fit its own capabilities. That is cheating? How much more human can it get? Humanity is constantly adapting its surroundings to fit its own needs...
  • Consciousness (Score:5, Interesting)

    by ChristopherAltman ( 555791 ) <christaltmanNO@SPAMartilect.org> on Wednesday February 26, 2003 @09:58AM (#5385995) Homepage

    Physics of Consciousness

    Building a machine to pass the Turing Test is one thing, but the nature of consciousness itself is the more profound question here. Rodney Brooks asked this question in a relatively recent Edge Online interview [edge.org].
    What are we missing in our computational models of living systems?

    Chris

    http://www.umsl.edu/~altmanc/ [umsl.edu]
    http://www.artilect.org/ [artilect.org]
  • by hqm ( 49964 ) on Wednesday February 26, 2003 @10:01AM (#5386009)
    The author of the article appears never to have read the article by Turing where he described the so-called 'test'. It is clear that Turing was a deep and subtle thinker way ahead of his time. If you read what he is saying in context, he is arguing that first and foremost, thought can be automated in the sense of a universal computer which can compute anything that a brain can. To his critics who said that this was somehow impossible, he created a reductio-ad-absurdum argument; he said: look, if you are talking to this machine and it is composing sonnets which are like Shakespeare's, and you *still* can't say it's intelligent, then you are an idiot. He was not proposing that this was an objective test or a desirable thing to do; he was poking fun at idiots like the author of the Salon article.
    • by redragon ( 161901 ) <codonnell@NOSpAM.mac.com> on Wednesday February 26, 2003 @10:21AM (#5386138) Homepage
      *ding ding*

      Turing wasn't looking for a UNIVERSALLY INTELLIGENT MACHINE, he was looking at how machines could act intelligently. We're not talking about a human in a computer, we're talking about whether a computer can act intelligently. If you think it's impossible, tell that to the people who can be "fooled" by bots on IRC or MUDs for weeks or more.

      Seriously, we're obsessed with the idea of human intelligence, which is often times an oxymoron, but that's what we want...
  • ... I believe Uncle John McCarthy is the father of Artificial Intelligence. Though both men deserve the title.
  • Sundman (Score:3, Informative)

    by tcdk ( 173945 ) on Wednesday February 26, 2003 @10:02AM (#5386014) Homepage Journal
    John Sundman, who wrote this article, has also written a quite interesting book called Cheap Complex Devices (he mentions it in the article).

    It's kind of weird and strange - the idea is that the novel was one of two novels written by a computer program.

    I've reviewed it here [sfbook.com].
  • It shouldn't be too hard.
    1: word play, shouldn't be too hard
    George walks into a restaurant and asks for a quickie. 'Sir,' replied the waiter, 'that says quiche.'

    What do George Michael and a pair of wellies have in common?
    They both get sucked off in bogs.

    2: parody, again this should be easy (ish)
    3: in soviet russia
    In Soviet Russia, jokes tell YOU.

    Other types of humor are a lot harder; an AI wouldn't come out with the sort of deliberately offensive shock-value joke a human troll would tell.
  • by RobotWisdom ( 25776 ) on Wednesday February 26, 2003 @10:06AM (#5386034) Homepage
    I was active on comp.ai at the time Minsky made his offer [Google query] [google.com], and I'm convinced the real reason academic AI hates the Loebner Prize is that it shows up how little they've managed to accomplish.

    I agree that the entries are really bad-- one recent winner just said the same things no matter what the human asked. But one winner, unmentioned in Salon, was Thom Whalen [dgrc.crc.ca], whose design was a genuine advance in the art. (Regrettably, Loebner changed the rules to exclude his approach in the future.)

    What Whalen did was limit his domain to one topic, and compile a set of general answers to likely questions, which he matched by spotting keywords. So even if the answer wasn't a perfect match, it was general enough to be useful. This design should be better known and more widely used, and the Loebner contest would have been a good launchpad to bring it to people's attention if the academics weren't so prejudiced.

    But the top academics get six-figure salaries for generating lots of jargon and no useful products, so a level playing-field is the last thing they want.

    • Well, wasn't that then just a simple, 'one-pass' expert system? It is nothing new, although it is perhaps the only really useful thing that AI research came up with. (And, of course, it has little to do with REAL AI :)
      • (Regarding Whalen's Loebner-winner) Well, wasn't that then just a simple, 'one-pass' expert system?

        Not an expert-system in any way (those involve a knowledgebase of logical rules). Whalen said he'd gone further than simple keyword-matching, but I never found out how.

        It is nothing new although perhaps the only really useful thing that AI research came up with.

        The design was new, and clever, and useful.

        (And, of course, it has little to do with REAL AI :)

        I hope that smiley means you're joking, because that's what the academics claimed, but their arguments were purely self-serving.

    • I think you have hit the nail on the head...
      Worse, the article reported, "Dr. Epstein, in a speech after the event, noted that he had learned from the day's proceedings that 'little progress has been made in the last twenty-five years.'" It had to be said, and Epstein said it: The emperor had no clothes. After decades of government-funded research by the brightest minds in computer science, A.I. programs still stank, and the National Science Foundation and Sloan Foundation had just spent $80,000 to demonstrate this sad fact to the world.
      All of the Federal and academic monies put into A.I. research have produced very little progress in all the years since Turing's test was introduced.

      The "dirty little secret" of the research world is out. "We are getting paid handsomely to produce nothing".

    • What Whalen did was limit his domain to one topic, and compile a set of general answers to likely questions, which he matched by spotting keywords

      This isn't really AI though, and it's also been used before in other ways. Anyone could do this with a regexp and a dictionary - a minimal sketch follows below.

      Example: old Sierra games. You type in a command, it parses words relative to the current situation and chooses any that match.

      AI isn't so much the ability to run memorized commands as it is the ability to learn or anticipate. I wouldn't mind an initially dumb chatbot, if it were able to grow "smarter" over time, and process input in a meaningful manner so as to "learn".
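
      A minimal sketch of that keyword-to-canned-answer approach, in C. The keywords and replies here are made-up stand-ins, not Whalen's actual data:

      #include <stdio.h>
      #include <string.h>
      #include <ctype.h>

      /* Toy keyword-spotting chatbot in the ELIZA/Whalen style: scan the
         input for a known keyword and emit a canned, deliberately general
         answer; fall back to a generic prompt when nothing matches. */

      struct rule { const char *keyword; const char *reply; };

      static const struct rule rules[] = {
          { "contest", "The rules are whatever the prize committee says they are." },
          { "turing",  "Turing himself only ever called it the Imitation Game." },
          { "prize",   "I hear the grand prize is $100,000." },
      };

      static void lowercase(char *s)
      {
          for (; *s; s++)
              *s = (char)tolower((unsigned char)*s);
      }

      int main(void)
      {
          char line[256];

          printf("> ");
          fflush(stdout);
          while (fgets(line, sizeof line, stdin)) {
              const char *reply = "Tell me more about that."; /* generic fallback */
              size_t i;

              lowercase(line);
              for (i = 0; i < sizeof rules / sizeof rules[0]; i++) {
                  if (strstr(line, rules[i].keyword) != NULL) {
                      reply = rules[i].reply;
                      break;
                  }
              }
              printf("%s\n> ", reply);
              fflush(stdout);
          }
          return 0;
      }

      The general answers are the whole trick: a reply vague enough to fit many questions reads as cagey rather than wrong.
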
  • by arvindn ( 542080 ) on Wednesday February 26, 2003 @10:19AM (#5386120) Homepage Journal
    Turing defined the test more than 50 years ago. Considering that there were barely any machines at that time that we would call computers today, his prescience was remarkable.

    Turing stipulated in the Turing test (TT) that the "interrogator" specifically has the goal of trying to determine which of the contestants is human and which is the machine. Unfortunately, the way the Loebner contest is conducted, this important requirement is completely ignored (at least in the default $2000 prize). As a result, the results of the contest are completely irrelevant from the point of view of the Turing test. Claiming otherwise is incorrect and misleading, and Loebner fully deserves all the criticism he gets.

    The TT is still fully valid today. We are very far from building bots that will pass it (though Turing predicted that by 2000 we would have machines that could pass the TT). In fact, the whole direction of work on the bots participating in the current-day Loebner contests is irrelevant from the TT point of view. They work mostly by building enormous databases of statement-response pairs and doing minimal reasoning. Turing would have died laughing if he had known people would take this approach to passing the TT. Let me illustrate why the database idea is insufficient by itself: for a bot to pass the real TT, it would have to answer questions like "what is the integral of e^x dx". Remember that the interrogator is actively trying to find out if it is a human or a bot. The objection "but two humans in conversation wouldn't ask such a question" is invalid, and this is precisely why the Loebner contest is stupid.

    The reason why today's bots are so unsuccessful is not far to seek. It has long been known in the AI community that to get anywhere near passing the TT, a bot would need what is known as "world knowledge". To build world knowledge, you need memory of approximately the capacity of the human brain: estimated to be on the order of a petabyte (a back-of-envelope version of this estimate follows below). And processing power to match: the brain runs something like a billion threads in parallel, and is 10^7 times as energy efficient per computation as today's computers. Of course, we aren't there yet. Thus, contrary to what most people would feel, the thing that is holding AI up is hardware.

    Similar to today's bot craze, there have been crazes in the past when people thought they were close to building truly intelligent machines ("expert systems" comes to mind.) However, they inevitably came up short because the hardware power wasn't there. In about 20-30 years, assuming there continue to be breakthroughs in storage technology to keep up the doubling, computers will be matching the brain's capacity, and then we'll be talking.

    Summary: to hell with people who apparently popularize science and end up giving the real researchers a bad name.
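
    For what it's worth, the standard back-of-envelope version of that petabyte estimate, assuming the textbook figures of roughly 10^11 neurons and 10^4 synapses per neuron, and (generously) about one byte of state per synapse:

    10^11 neurons x 10^4 synapses/neuron x 1 byte/synapse = 10^15 bytes = 1 petabyte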

    • Interesting. A few years ago, I started a bot project, but never completed it. I went through various forms of the "knowledge database" approach that you describe, as it seemed like the most straightforward approach. Unlike other popular bots at the time though, I wasn't building simple question/answer pairs ad infinitum, I was just making a database of knowledge itself. Really simple stuff like "I am a human" and "humans have two legs" and "red is a color", etc.

      The plan was to build this immense database, then add an inference engine that could draw conclusions based on the available knowledge (a toy version of such an engine is sketched below), and some sort of NLP on top to provide the input.

      Anyway, in the midst of populating this database, I lost interest. It's refreshing to know now that apparently I was on the right track, and that had I kept it up, the hardware would have stopped me before the limitations of my theory did.
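
      A toy version of the fact-base-plus-inference-engine idea, in C: forward chaining, i.e. keep applying rules until a full pass derives nothing new. The facts and the single rule are hypothetical examples, of course:

      #include <stdio.h>
      #include <string.h>

      /* Toy forward-chaining inference engine: facts are plain strings and
         a rule says "if both premises are known, the conclusion is too".
         Loop until a full pass over the rules adds no new facts. */

      #define MAX_FACTS 32

      static const char *facts[MAX_FACTS] = {
          "i am a human",
          "humans have two legs",
      };
      static int nfacts = 2;

      struct rule { const char *if1, *if2, *then; };

      static const struct rule rules[] = {
          { "i am a human", "humans have two legs", "i have two legs" },
      };

      static int known(const char *f)
      {
          int i;

          for (i = 0; i < nfacts; i++)
              if (strcmp(facts[i], f) == 0)
                  return 1;
          return 0;
      }

      int main(void)
      {
          int changed = 1;

          while (changed) {
              size_t r;

              changed = 0;
              for (r = 0; r < sizeof rules / sizeof rules[0]; r++) {
                  if (known(rules[r].if1) && known(rules[r].if2) &&
                      !known(rules[r].then) && nfacts < MAX_FACTS) {
                      facts[nfacts++] = rules[r].then;
                      printf("derived: %s\n", rules[r].then);
                      changed = 1;
                  }
              }
          }
          return 0;
      }
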
    • the brain runs something like a billion threads in parallel, and is 10^7 times as energy efficient per computation as today's computers.


      +IMAGINE+ +A+ +BEOWU+


      +BEO+


      +BFoW+


      Error 211 Divide by zero. Application terminated

    • > Thus, contrary to what most people would feel
      > the thing that is holding AI up is hardware.

      Uh? Not only the hardware!
      Let's suppose that you have a computer as powerful as a brain: I give it to you and say, "Now try to pass the Turing test." Would you be able to do it?

      No, because you would be missing:
      1) the software 2) the database.

      We have very little clue about how to do the software right now.
      And even if you had software which could be interesting, you'd still have to build a HUGE database if you want to get an interesting result...
      And the funny thing is that to really know whether your software is interesting or not, you first have to invest a lot of time and money to build the database...
      And if one computer is better than another (with the same hardware, to simplify comparison), would it be because it has better software or a better database?

      Also, I disagree with you that holding a competition around the Turing test only gives researchers a bad name: human-vs-computer chess competitions existed back when humans beat computers without effort, and nobody protested that they were giving AI researchers a bad name.
      Of course, in the end, it seems that beating humans was achieved thanks to advances in computing power, and produced very little progress in AI research.

      I hope that Go competitions between man and machine will be more interesting for AI researchers.
    • To build world knowledge, you need memory approximately the capacity of the human brain: estimated to be the order of a petabyte.

      Wouldn't Google be of immense use there? An AI capable of utilising the OED, Britannica, and Google would be impressive indeed :)
  • The Best Part (Score:4, Interesting)

    by Anonymous Coward on Wednesday February 26, 2003 @10:42AM (#5386282)
    In 1995, about a year after the publication of Shieber's article, Marvin Minsky, the father of artificial intelligence, posted a notice on the comp.ai and comp.ai.philosophy Usenet newsgroups. In it he drew attention to a clause in the Loebner contest rules to the effect that using the term "Loebner Competition" without permission could result in a revocation of the prize.

    Minsky wrote, "I do hope that someone will volunteer to violate this proscription so that Mr. Loebner will indeed revoke his stupid prize, save himself some money, and spare us the horror of this obnoxious and unproductive annual publicity campaign. In fact, I hereby offer the $100.00 Minsky prize to the first person who gets Loebner to do this. I will explain the details of the rules for the new prize as soon as it is awarded, except that, in the meantime, anyone is free to use the name "Minsky Loebner Prize Revocation Prize" in any advertising they like, without any licensing fee."

    (Minsky did not respond to e-mails requesting an interview.)

    If the CACM article marked Loebner's fall from grace, the Minsky note on comp.ai marked his utter banishment into the wilds of A.I. quackery.

    Can you imagine, for example, being a graduate student in computer science at a big-name school in 1996 and telling your major professor that your goal was to win the Loebner? Loebner was more "out" than Liberace.

    But Loebner did not take his snubbing meekly. Loebner immediately wrote back that the best way for Minsky to get Loebner to revoke his prize was to win it. Of course Minsky had already hinted that Loebner had never made clear what the rules for winning the prize were, so that was not a very satisfactory rejoinder. But then a few days later ("while taking a nice hot bath, drinking a fine wine, about an hour after smoking a really fat joint"), Loebner came up with a more considered and clever response, one that still rattles Minsky nearly a decade later.

    Minsky had announced that he would give $100 to whoever made Loebner stop his contest. But Loebner would only stop his contest when somebody won the gold medal. Therefore, Loebner reasoned, Minsky, being an honorable man, would give $100 to whoever won the ultimate Loebner competition. Therefore, Marvin Minsky was a cosponsor of the Loebner competition, simple as that. It was delicious!

    Loebner promptly issued a press release saying that Marvin Minsky was now a cosponsor of the Loebner Prize, by virtue of his announcement of the "Minsky Loebner Prize Revocation Prize." What made this development so delightfully ironic was Minsky's own statement that anyone was free to use the name "Minsky Loebner Prize Revocation Prize" in any advertising they liked, which made it nearly impossible for Minsky to prevent Loebner from doing just that. Which is why Loebner continues to cite Minsky as a cosponsor of his event every chance he gets.

    The image that comes to my mind whenever I think of this development is from the sublime cartoons of the late, great Chuck Jones, with Hugh Loebner in the role of Bugs Bunny, and Marvin Minsky, the father of artificial intelligence, in the role of Yosemite Sam, stamping his feet, with smoke coming from his ears. In fact, Minsky is still listed as a cosponsor of Loebner's prize on the Web site, and, as we'll see, Minsky is still stamping his feet.
  • My favorite quote from the article...

    The A.I. establishment has for more than a decade put more energy into explaining why the Turing test is irrelevant than it has into passing it.
  • AI is a fraud (Score:5, Insightful)

    by Anonymous Coward on Wednesday February 26, 2003 @10:51AM (#5386335)

    I worked in a research lab that shared a building with MIT's artificial intelligence laboratory. And I have to agree with the article. The AI field is a fraud. Again and again, there would be big placards in the lobby announcing gala media events up in the AI Lab. (We lesser mortals dutifully clomped upstairs to eat the expensive, catered food.)

    And yet *nothing* *ever* *happens* in the field.

    Every now and then a new "hero" emerges. For a while it was Minsky. In recent years, it has been Rodney Brooks. Regardless, you can see the current hero on TV all the time, commenting on matters as an "AI expert". They don't tell you that Brooks' course is widely viewed as a complete crock; a few puerile algorithms, some linear differential equations, some finite automata, and THAT'S IT. The rest is all blabbering with no substance.

    The AI community uses rotating hero-worship in lieu of progress. But it isn't like any of these guys is an actual "AI expert". There are no "AI experts", because there is no such thing as artificial intelligence in this world. They are no more experts on AI than I am an expert on Martian fruit exports. In this field, you don't need real research; an Australian accent and good sense of humor suffice.

    True artificial intelligence would be amazing. But the field has made essentially zero progress in the last fifty years. Obviously, it is a really hard problem. On one hand, the AI guys do what other fields do when they're stuck (since they *must* continue to pump out graduate students, attract grants, etc.), they keep trying to change the question. But the pathetic thing is that many completely denigrate the most obviously fair benchmark-- the Turing test.

    Coincidentally, a benchmark showing the complete failure of the field.

    • Re:AI is a fraud (Score:4, Insightful)

      by MxTxL ( 307166 ) on Wednesday February 26, 2003 @12:09PM (#5386993)
      Dude, sharing a building with AI types an expert on AI does not make.

      Examples of advances in AI:

      1. Computer programs able to spank all but the best humans playing chess.

      2. Computer programs able to spank your ass playing even more complex games like CIV 3, C&C, etc.

      3. Google saying "Searching 3,083,324,652 web pages" and "Results 1 - 10 of about 1,500,000. Search took 0.07 seconds"

      There have been huge advances in AI with such things as genetic algorithms and fuzzy logic. The applications are very specific and are not the far-reaching HAL 9000 that people traditionally think of when you say AI. There is no 'singular consciousness' that is going to pop out of your computer. That is NOT what AI is about. AI is about solving problems. More specifically, it's about finding methods for a computer to solve problems without brute-forcing them.

      For example, it would be easy for a computer to beat a chessmaster if the computer had the whole search tree available. The outcome of every move of every game would be known, and it would be trivial to steer towards a victory. But since the tree is HUGE and would take many hundreds of years to generate, the problem of computers playing chess is to get them to figure out a 'smart' way to beat the chessmaster. Alpha-beta tree pruning and things like that are the results - there's a toy sketch of the idea below. Don't underestimate the power of these.

      There are great things coming out of AI research all the time, but you will not be seeing HAL 9000 any time soon.
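
      To make the pruning idea concrete, here is a toy negamax search with alpha-beta cutoffs in C. It plays a trivial Nim-style game (21 sticks, take 1-3 per turn, whoever takes the last stick wins) instead of chess - a real chess engine adds a heuristic evaluation function and much more, but the cutoff logic has the same shape:

      #include <stdio.h>

      /* Negamax with alpha-beta pruning on a trivial game. Returns +1 if
         the side to move can force a win, -1 if it cannot. A branch is
         abandoned (the "cutoff") as soon as it is proven irrelevant. */

      static int negamax(int sticks, int alpha, int beta)
      {
          int take;

          if (sticks == 0)
              return -1; /* the opponent took the last stick: we lost */

          for (take = 1; take <= 3 && take <= sticks; take++) {
              int score = -negamax(sticks - take, -beta, -alpha);
              if (score > alpha)
                  alpha = score;
              if (alpha >= beta)
                  break; /* cutoff: the opponent will never allow this line */
          }
          return alpha;
      }

      int main(void)
      {
          int take;

          for (take = 1; take <= 3; take++)
              printf("take %d: score %+d\n", take,
                     -negamax(21 - take, -2, +2));
          return 0;
      }

      (With 21 sticks the only winning opening is to take one, leaving a multiple of four; the search duly reports it as the sole +1 move.)
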
  • the last page (http://www.salon.com/tech/feature/2003/02/26/loebner_part_one/index4.html) is so damn funny that you HAVE to read this article.
  • by dfenstrate ( 202098 ) <dfenstrate&gmail,com> on Wednesday February 26, 2003 @10:57AM (#5386366)
    Of the A.I. projects that have a chance in hell, I'm placing my bets on Cyc [webfin.com]

    It's basically a computer program that a bunch of researchers have spent 60 million dollars on, trying to teach it common sense. And they've made some impressive advances. Previous Slashdot story here [slashdot.org]
  • Warning! Gratuitous self-promotion below...

    While I think some of the chatterbot work is important to NLP, I've been working on getting computers to learn & understand language based on visual perception. More info here [osforge.com].

  • Nethack AI (Score:2, Interesting)

    by gklyber ( 5133 )
    How about designing a bot to play Nethack until it ascends?
  • by slimemold ( 646213 ) on Wednesday February 26, 2003 @11:27AM (#5386635)
    While the last thing I want to do is defend A.I. researchers, they have gotten a raw deal in one respect. Whenever a program performs a human-like endeavor (e.g. playing chess) at human-level-or-above ability, the first thing people ask is "How does it work?" The programmers then proudly explain their algorithms (e.g. adaptive n-ply search with a heuristic evaluation function emphasizing piece mobility blah blah blah).

    Lo and behold, what first appeared to be intelligence is now just an elaborate sequence of if-then statements. Anyone could have done it. It's not intelligence at all. It's just following a blueprint. You call this intelligence?

    In other words, the lay public expects A.I. to have creativity and strokes of genius, which is much more than they expect of most humans. Or they expect it to be furry with big eyes that makes cooing noises when you pet it. As soon as one realizes that A.I. consists of a computer program, any notion of intelligence evaporates.

    • The assumption is that human intelligence and human minds are really nothing more than a program in and of themselves. The criticism that these AI computer programs are 'simply following a mindless program' would be met with: "Well, of course it's just following a program! So am I, except that instead of code my brain is following a program of interactions among neurons and chemicals based upon the laws of physics and chemistry!"

      -- An assumption of the field of AI is that all human mind and intelligence is essentially a computer program, or if not that it is a machine of some sort.

      -Romanpoet
    • by zzyzx ( 15139 )
      The problem isn't so much that it is if-then statements as that AI solutions to problems tend to be based around brute force. "I checked every possible series of actions and this one seems to be the best." That is far different from the way that we seem to make decisions, so it doesn't seem intelligent to us.
    • It may come down to whether or not an analog device like the human brain can be sufficiently modeled by a digital simulation. Penrose doesn't seem to think so (The Emperor's New Mind), but I'm not sure. There's an argument that if the human brain can be modeled by rules, then one could create a book similar to the "Turn to page xx if you want to slay the dragon" type. By following the rulesets, you could have an "intelligent" book, essentially by turning pages. Of course, this is hypothetical because such a book would contain millions of pages, but it could exist.

      The second argument is "walks like a duck, talks like a duck": if you create a program/device that is, to all appearances, indistinguishable from human intelligence, then does it matter how it works? The arguments for this run back to Descartes and hold that there is no mind-body dualism. The brain is the mind. The brain is a physical device. We can use digital devices to model an analog device. Therefore we should be able to model the brain with a sufficiently powerful computer.

      In other words, we could use a digital computer to model a neural cell (a toy example below). If something occurs, perhaps on a quantum level, that prevents us from doing so, then perhaps it's not possible. Otherwise it's only a matter of time.
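
      For what it's worth, a toy digital model of a single neuron in C - the standard leaky integrate-and-fire equation with made-up constants. Real neurons are vastly messier, which is rather the point of the debate:

      #include <stdio.h>

      /* Leaky integrate-and-fire neuron: the membrane potential v leaks
         toward rest with time constant tau while a constant input current
         charges it; on crossing the threshold it "spikes" and resets. */

      int main(void)
      {
          double v = 0.0;            /* membrane potential (arbitrary units) */
          const double tau = 20.0;   /* leak time constant, in ms */
          const double dt = 1.0;     /* integration timestep, in ms */
          const double thresh = 1.0; /* firing threshold */
          const double input = 0.08; /* constant input drive */
          int t;

          for (t = 0; t < 200; t++) {
              v += dt * (input - v / tau); /* forward-Euler integration */
              if (v >= thresh) {
                  printf("spike at t = %d ms\n", t);
                  v = 0.0; /* reset after the spike */
              }
          }
          return 0;
      }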
  • Salon - you know, the company that apparently everyone wants to DIE? What other online magazine would, or has, run this?
  • by AlecC ( 512609 ) <aleccawley@gmail.com> on Wednesday February 26, 2003 @11:38AM (#5386739)
    This just shows that we don't actually know what we mean when we say "Intelligence". It just means "what I am thinking about when I say Intelligence".

    The Turing Test is not a pass-mark to achieve intelligence, it is an outside limit to stop argument. If something passes the Turing Test completely, then you know you have intelligence. But that is an extremely high benchmark. It is like saying that if you can outrun all known vehicles, I have to grant you are a fast runner. You *may* still be a fast runner even when you run a lot slower than that - but we will have to enter into a discussion about how fast is fast. Turing just set an endpoint - if it passes his test it is certainly intelligent.

    There are two ways the Turing Test could be passed. One is via a special-purpose machine built to pass it - a human simulator. While of research interest, because building such a machine would tell us a lot about how we actually do work, it is unlikely to be a very useful machine, because it will replicate our weaknesses as well as our strengths. Why spend billions building what half an hour's fun and a nine-month wait can build? (One-way trips to the stars, perhaps?)

    The other way is a general-purpose machine which has learned how to copy humans perfectly. By any definition I can think of, this would be an awesomely intelligent machine, because it would have learned to understand, and simulate, our minds by the power of pure intellect. Something like playing all the instruments in the orchestra at the same time.

    While I think that the first class of machine may well be built in the fullness of time, it will not be very useful. I don't know whether the second class will ever be built - I doubt it.

    Which brings us back to the "sub-Turing" class of intelligence. If Turing represents an upper limit to the grey area where intelligence starts, there must be levels of achievement which would be regarded as intelligent in most, if not all, people's judgement.

    I then ask the question: what use is sub-Turing intelligence? Well, there are lots of tasks which we regard as needing intelligence and which we would like to automate. In fact, some of them have already been automated. But when we automate them, we say "we know how that automaton works, so it can't be intelligence". Chess, for example - once regarded as the last test before the Turing test, now regarded as a nifty but essentially unimportant achievement.

    We don't actually *know* what we mean when we say "Intelligence". Turing knew that, and provided an empirical rather than an analytical test. However, I would say that "Intelligence" bears the same relationship to "Computer Science" as "Magic" does to "Technology" in Clarke's Law: "Any sufficiently advanced technology is indistinguishable from magic".

    "Any sufficiently advanced Computer Science is indistinguisahable from Intelligence" - Cawley's Law.

    Or, to put it another way, Intelligence means "I don't understand how you thought that".

    Which explains how Joe Luser thinks his computer is intelligent, whereas Bill Slashdot doesn't.
    • One of the problems with the Turing test is that it requires the computer to be indistinguishable from a human. Many have argued that this means it should not answer questions such as, "What is the sine of 15 to the twentieth digit?" A question like, "What's the weather like?" should be answered with, "It's hot, but dry," rather than, "25 degrees Celsius, chance of rain in the evening."

      There's also the problem that non-AI entertainment software (Eliza, for one) can often do a remarkable job of mimicking human response without actually being "intelligent".
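
      A sketch of that play-dumb tactic, assuming a bot that deliberately rounds its arithmetic and hedges its phrasing instead of printing twenty exact digits; the rounding rule and wording are invented heuristics:

      #include <stdio.h>
      #include <math.h>

      /* To pass as human, answer arithmetic vaguely rather than exactly. */
      static void humanlike_answer(double exact)
      {
          double rough = round(exact * 10.0) / 10.0;  /* one decimal, not twenty */
          printf("Hmm, about %g, I think? I'd have to check.\n", rough);
      }

      int main(void)
      {
          const double pi = 3.14159265358979323846;
          humanlike_answer(sin(15.0 * pi / 180.0));   /* sine of 15 degrees */
          return 0;
      }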
  • by TheLink ( 130905 ) on Wednesday February 26, 2003 @11:45AM (#5386794) Journal
    Funny the AI researchers seem to be upset with the contest.

    But I find it strange that various people keep trying to:
    1) Take part.
    2) Stop the contest.
    3) Tell the contest sponsor how to run the contest or spend his money.

    Are they really so hard up for Loebner's money? If their stuff really works I'm sure they can get money from other people.

    As far as I know none of the AI entrants so far deserve the main prize.

    It's almost as if the tailors are upset that someone points out, every year, that the emperor is naked. If indeed the emperor isn't naked, why get upset?

    Or do they admit the emperor is naked and are just tired of hearing about it? Well, so far, have any of them admitted that?
  • by TheOrquithVagrant ( 582340 ) on Wednesday February 26, 2003 @11:48AM (#5386813)
    My favorite quote about the Turing test comes from Jaron Lanier:

    "Only a fucked-up gay Englishman being tortured with hormone injections could possibly have supposed that consciousness was some kind of social exam you had to pass."
  • by RembrandtX ( 240864 ) on Wednesday February 26, 2003 @12:12PM (#5387022) Homepage Journal
    I particularly like how Loebner outfoxed Marvin Minsky with the ammunition Minsky gave him.

    Sure, the guy may be a pothead, might not want a lasting relationship with a woman, and is probably a horribly annoying git from hell.

    He did, however, manage to outthink the 'brightest' mind in AI research. Maybe his reasons for doing it were puerile, but he still did it.

    As a programmer, I know I was taught to think in small steps, think ahead to the probable issues my code might cause, and double-check my work before dropping it on a production box.

    Apparently Minsky forgot he was a computer scientist when he wrote that newsgroup response.
    I'm sure it was just a flame, a very human response to frustration and irritation. But as one of the leading names in AI research, he should have known better.

    So, if for nothing else, my hat's off to the 'Disco-Floor-Maker' for outthinking one of the 'leaders' in AI research.

    It's always nice to watch an academic geek get smacked down by someone who lives with the rest of society.
  • I don't know if it is the case in this instance, but the Turing Test rubs some people the wrong way because it is a pretty lousy test for intelligence. The Turing test measures the performance of something, not its competence.

    What we see is what the computer does, not what goes on behind the scenes, which many people believe is important in positing intelligence in an agent. One of the major problems with behaviorism was that it initially took into account only how an animal performed, not what it was thinking. Sure, the rat could learn the maze when it was rewarded for running through it, but it could also learn the maze (competence) while being pulled through it on a little cart, or when it was completely sated. The performance of something may be important in judging its intelligence, but it is far from the only factor. Imagine a person in a paralyzed state: they have the competence but lack the ability to perform.

    Like I said, this may not be the issue as discussed in the article, but it is one caveat to the Turing Test.
    • by vrmlguy ( 120854 ) <samwyse AT gmail DOT com> on Wednesday February 26, 2003 @02:56PM (#5388427) Homepage Journal
      The Turing test measures the performance of something, not its competence. [...] Imagine a person in a paralyzed state: they have the competence but lack the ability to perform.

      I'm not sure what you mean. The two sentences that I quoted seem to indicate that Christopher Reeve couldn't participate in a Turing Test. Turing's insight was that performance is the only measure we have of intelligence. His paper actually considered several hypothetical ways in which performance might not be the only measure. For example, parapsychological effects: you look at a Rhine card and ask the testee what you're looking at. If humans consistently guess better (or worse!) than computers, then the Turing Test is invalid (and a whole new field of scientific study has opened up).

      On the other hand, you could ask Chris Reeve (or a computer) to play chess with you. Either could say, "Sorry, I don't have a board handy, how about tic-tac-toe?"

      As you read this, are you evaluating my competence or my performance? How do you know that I'm not really a bot from Cycorp [cyc.com]?

  • by profBill ( 98315 ) <punch AT cse DOT msu DOT edu> on Wednesday February 26, 2003 @01:02PM (#5387484) Homepage
    The following is an excerpt from an article by Drew McDermott about the "Red Herring Test". I always thought it pointed out quite well why the Turing Test seems like such a waste of time.
    What confuses most people is that they mistake Turing's attempt to avoid the question for an attempt to answer it. But anyone who believes that Turing's test is an interesting test for intelligence is guilty of behaviorism, not a crime in itself, but shameful in anyone who believes in cognitive science, the antithesis of behaviorism. Of course, it is probably true that a system that could fool a trained panel of experts into believing it intelligent would in fact be intelligent, but it is a blatant waste of experts' time to have them sit on such panels, when they should be inquiring about how minds actually work.

    Compare the following hypothetical case: Human explorers land on a planet whose inhabitants are somewhat technologically backward. The locals are impressed by human gadgets, especially radio. They decide to try and understand it, so they rustle up some philosophers in order first to arrive at a criterion for something's being a radio. Their first cut is that a radio is a device that emits sounds whenever similar sounds are made in the control room of the earthlings' spaceship. But others object that this criterion does not rule out ordinary telephony, so the criterion is modified. Perhaps they arrive at something like, "A radio is a device that emits sounds similar to those made in the earthlings' spaceship while suspended from the ceiling by a nonconducting string."

    This is all amusing, but a waste of time if the aliens really want to understand radio. No one needs an ironclad behavioral criterion for "radiohood," assuming that there are plenty of indisputably genuine radios around to study. Such a study might eventually lead to a deeper definition of radio as "A receiver of signals encoded as modulated electromagnetic waves," but by the time the definition was available it would be relatively unimportant, when stacked up against the theory of electromagnetism.

    Similarly with intelligence. If we ever have a theory that explains it, we will no longer care about distinguishing bogus understanding from the real thing. We will have a rich theory based on concepts we can now barely imagine, just as radio is based on something as unlikely as invisible electromagnetic waves.
