George the Next Generation AI?

smileytshirt writes to mention a story on the News.com.au site about George the AI, the latest in a line of chatbots intended to mimic real human behavior. What makes AI George different from, say, ALICE is the recent addition of an avatar: a Flash-animated body that reacts mostly in real time to the emotional impact of the conversation. From the article: "One can now have an oral discussion with him over the Internet, 'face to face'. George appears on the website www.jabberwacky.com and takes the form of a thin, bald man with yellow glasses who wears a white turtleneck sweater. He can smile, laugh, sulk and bang his fist on his virtual table. He can turn on the charm and wax romantic. But he can also turn coarse at times. It isn't as if George only learned good manners."

Comments Filter:
  • by 80 85 83 83 89 33 ( 819873 ) on Monday September 25, 2006 @05:43AM (#16182775) Journal
    A prestigious Artificial Intelligence (AI) prize has been won for the second year running by a British company.

    Icogno scooped the 2006 Loebner Prize Bronze Medal after judges decided that its AI called Joan was the "most human computer program".

    The competition is based on the Turing test, which suggests computers could be seen as "intelligent" if their chat was indistinguishable from humans.

    The gold medal, which goes to an AI that fools the judges, is unclaimed.

    The prize is awarded after judges hold a conversation with the AI, asking questions to determine its "humanity" and the quality of its responses.

    Joan is a "26-year-old budding writer" who exists only on a set of computer servers. Last year's Loebner prize went to Icogno's AI called George.

    'Big difference'

    Joan and George's creator Rollo Carpenter said: "The big difference overall between the two AIs is that Joan has learnt from the general public and has grown a huge amount in the last year and is keeping ahead of the competition."

    Joan is one of the firm's so-called Jabberwacky bots, which develop their conversational skills by speaking to the general public online.

    English writer Ariadne Tampion worked with Mr Carpenter to help develop Joan's persona by chatting to the AI online.

    Those conversations helped develop the majority of the 16,000 lines that distinguish her from a generic AI. In all, the bot has more than five million lines of conversation available to it.

    "She helped create a much more conversational ability for Joan than George last year.

    "It does not mean the AI only ever uses her words. Any amount of teaching is not quite enough for a realistic conversation."

    He added: "So the AI uses other things said by other people to it that fit the conversational model."

    The underlying technology that powers George and Joan is the same, said Mr Carpenter.

    "In terms of her ability to retain her character Joan is more sophisticated than George.

    "The technology has moved on a bit during the course of the year but there's been no revolution."

    The technology which powers Joan resides on Windows server machines. The core of the AI is on a Dual Xeon machine with lots of memory, said Mr Carpenter.

    He said there was a very direct relationship between the computing power available and the sophistication of the AI.

    'Rate of learning'

    "Investment in a lot more equipment could accelerate the rate of learning," he said.

    "If I had Google's hardware behind it and more hits than Google receives, the AI's range of utterances could grow by a million every single day." he added.

    Mr Carpenter has said he expects a machine to fool people into believing they are speaking to a human being, and therefore pass the Turing test, by 2016.

    Icogno is now working to commercialise the technology of the AI.

    "We are working with an organisation that wants web avatars for use in marketing and entertainment purposes.

    "Long term we see the use of AIs in call centres, support services and even for sales."

    One day in the future when you pick up the phone to seek help with your broadband connection, the person you are speaking with may be George or Joan, AIs residing on a computer somewhere in the world.

    -----------------------

    The following is an extract of a conversation between one of the Loebner judges and the AI Joan:

    Judge: But can't a machine think?
    Joan: It may be possible.
    Judge: And conscious?
    Joan: Not after I'm done with you.
    Judge: What are you going to do to me?
    Joan: Nothing. I just wanted to talk.
    Judge: What about?
    Joan: Being tested like this.
  • by dark-br ( 473115 ) on Monday September 25, 2006 @06:52AM (#16183033) Homepage
    The Turing test is not a well-defined test. Whether a robot passes the Turing test or not depends greatly on the intelligence of the human partner. A chatbot may fool a 10-year-old but fail with a 20-year-old. So in fact we already have many chatbots that pass the Turing test - it all depends on how you look at the issue.

    Hint - most chatbots do not have memory; they do not remember what you talked about with them 5 minutes ago. They just react to the current input and cannot do more. So if you ask a chatbot what you talked about a few minutes ago, it won't be able to tell you. That's the dead giveaway of a chatbot (see the sketch at the end of this comment).

    Just my 2p, as I live in the UK ;)
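    To make the memory point concrete, here is a minimal Python sketch with invented class names and toy rules; it is not how Jabberwacky or any Loebner entrant actually works, just an illustration of stateless reaction versus keeping a transcript that can be asked about:

    ```python
    # Illustrative toy only: contrasts a stateless bot with one that keeps
    # a transcript.  Class names and rules are invented for this example and
    # do not reflect how Jabberwacky or any Loebner entrant actually works.

    class StatelessBot:
        """Reacts only to the current input, like most simple chatbots."""

        def reply(self, text: str) -> str:
            lowered = text.lower()
            if "hello" in lowered:
                return "Hi there!"
            if text.endswith("?"):
                return "Good question."
            return "Tell me more."


    class MemoryBot(StatelessBot):
        """Same canned rules, but it also remembers what was said earlier."""

        def __init__(self):
            self.transcript = []  # list of (speaker, utterance) pairs

        def reply(self, text: str) -> str:
            # The "what did we talk about?" question that trips up stateless bots.
            if "what did we talk about" in text.lower():
                if self.transcript:
                    answer = "Earlier you said: " + "; ".join(
                        utterance for speaker, utterance in self.transcript
                        if speaker == "user")
                else:
                    answer = "We haven't talked about anything yet."
            else:
                answer = super().reply(text)
            self.transcript.append(("user", text))
            self.transcript.append(("bot", answer))
            return answer


    if __name__ == "__main__":
        bot = MemoryBot()
        print(bot.reply("Hello, I went sailing last weekend."))
        print(bot.reply("What did we talk about a few minutes ago?"))
    ```

    The second call is exactly the "what did we talk about?" test described above; the stateless version has no way to answer it.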

  • by quigonn ( 80360 ) on Monday September 25, 2006 @06:54AM (#16183039) Homepage
    A friend of mine, who does quite a lot of research in the field of AI himself, recently told me after attending a conference that most researchers in the field approach problems with the attitude and naivety of the 1970s. He also said that the current unwillingness to try new tactics and to combine existing AI concepts with other areas of IT makes it much easier for him (he works in computational linguistics) to build kinds of AI systems that haven't been developed before and to produce real innovation.
  • by Jekler ( 626699 ) on Monday September 25, 2006 @07:20AM (#16183175)

    I think we could develop a "next generation AI" even without answers to difficult philosophical questions. We have barely scratched the surface of what is theoretically possible given the information we have.

    We could probably develop an AI that could hold factually and grammatically correct conversations without needing philosophers. That would be a huge improvement, considering the current generation of AI is prone to spouting gibberish even when given a simple question.

    Our current best-of-breed AI cannot discern when context is and is not important. If it is programmed to consider context, each answer is chained to the previous answer/response pair, and non sequiturs confuse it. Conversely, a bot with no sense of context has difficulty parsing pronouns (a toy sketch of the pronoun problem follows below).
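    As a deliberately naive illustration of the pronoun problem, the sketch below keeps a single remembered "topic" so a bare "it" can be substituted; the function names and heuristic are invented for the example and reflect no real system:

    ```python
    # Deliberately naive one-slot "context" tracker, purely to illustrate the
    # pronoun problem above.  Nothing here reflects a real system; a serious
    # dialogue engine needs far more than a single remembered noun.

    import re

    last_topic = None  # the one piece of context we keep


    def update_context(sentence: str) -> None:
        """Crudely treat the last capitalised word as 'the thing we discussed'."""
        global last_topic
        candidates = re.findall(r"\b[A-Z][a-z]+\b", sentence)
        if candidates:
            last_topic = candidates[-1]


    def resolve_pronoun(sentence: str) -> str:
        """Substitute a bare 'it' with the remembered topic, if we have one."""
        if last_topic is None:
            return sentence  # no context: the pronoun stays ambiguous
        return re.sub(r"\bit\b", last_topic, sentence, flags=re.IGNORECASE)


    update_context("I watched Serenity last night.")
    print(resolve_pronoun("Did you like it?"))  # -> "Did you like Serenity?"
    ```

    With last_topic unset, the pronoun simply passes through unresolved, which is the failure mode described above.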

  • by Yvanhoe ( 564877 ) on Monday September 25, 2006 @07:46AM (#16183335) Journal
    Prestigious? The Loebner Prize?
    Agreed, it is the only publicized Turing-test contest, but within the AI community it is the subject of hot debate (and flaming). Rules and scoring systems are known to change from year to year, and its results are really unimpressive. If you read the logs of the contest, you'll see that the winning bots are often those that constantly (and consistently) insult the user, disregarding his questions. They are not mistaken for a human, but they get a higher grade because they behave "more humanly" (that is at least what happened one year; I hope it has changed).

    Most contestants (and winners) are remakes of ALICE: a database of generic question patterns and sentence formulas to recognize and react to. For instance, if you say "I think X" it will answer "Why do you think X?" or, to score more points, "Why should I care, mothaf...r?!". By pure luck a coherent thread of conversation can happen, but the bot doesn't try to make sense of the user's sentence in order to react to it; it just tries something that "could probably sound good" (a toy version of this matching is sketched below).

    Some chatbots can display interesting behaviors, learning some things in the conversation, but this prize simply doesn't encourage the emergence of these behaviors.
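    For anyone who has never looked inside one of these bots, the toy sketch below shows the pattern-and-template style of matching described above. The rules are invented for illustration and are not ALICE's actual AIML set, which holds thousands of categories:

    ```python
    # Toy ELIZA/ALICE-style responder: match a surface pattern, fill a template.
    # The rules are invented for illustration; this is not ALICE's actual
    # behaviour, and a real AIML set is far larger.

    import random
    import re

    RULES = [
        (re.compile(r"\bi think (.+)", re.IGNORECASE),
         ["Why do you think {0}?", "What makes you so sure {0}?"]),
        (re.compile(r"\bmy (\w+) is (.+)", re.IGNORECASE),
         ["How long has your {0} been {1}?"]),
    ]
    FALLBACKS = ["Tell me more.", "That is interesting.", "Go on."]


    def respond(utterance: str) -> str:
        """Return the first matching template, or a canned fallback."""
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                groups = [g.rstrip(".!?") for g in match.groups()]
                return random.choice(templates).format(*groups)
        return random.choice(FALLBACKS)


    print(respond("I think machines will pass the Turing test by 2016."))
    # e.g. "Why do you think machines will pass the Turing test by 2016?"
    ```

    Whether the reply sounds coherent depends entirely on how well the captured fragment happens to fit the template, which is why lucky-sounding exchanges can occur without any real understanding.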
