Robot Dogs Evolve Their Own Language 200

bab00n writes: "According to this article at The Engineer Online, researchers led by the Institute of Cognitive Science and Technology in Italy are developing robots that evolve their own language, bypassing the limits of imposing human rule-based communication. The technology, dubbed Embedded and Communicating Agents, has allowed researchers at Sony's Computer Science Laboratory in France to add a new level of intelligence to the AIBO dog. The robot dog has learnt to see a ball and to tell another AIBO where the ball is, whether it is moving and what colour it is, and the other dog is capable of recognising that description."
This discussion has been archived. No new comments can be posted.

  • Hmm... (Score:5, Interesting)

    by ThinkingInBinary ( 899485 ) <<thinkinginbinary> <at> <gmail.com>> on Friday June 23, 2006 @12:04PM (#15589979) Homepage

    This seems a little hard to believe. I could believe that they programmed it to be able to speak and hear statements that are directly connected to thoughts, but I just can't see an AIBO learning, much less inventing, the syntax to be able to say something like "The red ball is behind you, rolling to the right." It just seems a little far-fetched.

    What the article doesn't explain is at what level the language system is attached to the brain. Does it deal in raw thoughts, or in specific ideas (like the ball)? Do AIBOs have "raw thoughts" at all, or can they only think about what they were programmed to know about?

  • Also like children, the AIBOs initially started babbling aimlessly until two or more settled on a sound to describe an object or aspect of their environment, gradually building a lexicon and grammatical rules through which to communicate.

    How does this work? Is it a neural network, where sounds are associated with objects? That would make sense for the first part, but how does a neural network represent more complex ideas like "the red ball is behind the blue ball"? Or do the AIBOs not have thoughts that complex?
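The "babbling aimlessly until two or more settled on a sound" process described in the quote is essentially the classic naming game. Here is a minimal toy sketch in Python (my own illustration, not Sony's actual code; all names and parameters are made up):

```python
import random

def naming_game(num_agents=10, num_objects=5, rounds=5000, seed=42):
    """Toy naming game: agents invent random words for objects and
    converge on a shared lexicon through pairwise interactions."""
    rng = random.Random(seed)
    # Each agent maps object index -> set of candidate words.
    lexicons = [{obj: set() for obj in range(num_objects)}
                for _ in range(num_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(num_agents), 2)
        obj = rng.randrange(num_objects)
        if not lexicons[speaker][obj]:
            # "Babbling": invent a brand-new random word.
            lexicons[speaker][obj].add("w%06d" % rng.randrange(10**6))
        word = rng.choice(sorted(lexicons[speaker][obj]))
        if word in lexicons[hearer][obj]:
            # Success: both agents discard competing synonyms.
            lexicons[speaker][obj] = {word}
            lexicons[hearer][obj] = {word}
        else:
            # Failure: the hearer adopts the speaker's word.
            lexicons[hearer][obj].add(word)
    return lexicons

lexicons = naming_game()
```

Run long enough, the population typically converges on one word per object. Note this only covers the shared vocabulary part; compositional statements like "the red ball is behind the blue ball" need grammar induction on top of it.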

  • by whitehatlurker ( 867714 ) on Friday June 23, 2006 @12:09PM (#15590013) Journal
    Something that was interesting from FTA was the "babble" stage, which was compared to human children. This experiment might teach us more about human linguistics as well. Learning languages, how languages "mutate" over time, how cultures mix when two communities with different languages are placed together, the group mind boggles ...

    Very interesting.

  • Robot Swarms (Score:3, Interesting)

    by dbc001 ( 541033 ) on Friday June 23, 2006 @12:35PM (#15590265)
    This is really exciting but the prospect of swarms of any kind of robot is a bit scary - hopefully designers will build in a simple, easily exploitable flaw so that an out-of-control swarm could be easily deactivated.
  • by SirClicksalot ( 962033 ) on Friday June 23, 2006 @12:40PM (#15590314)
    This type of language/vocabulary development experiment has been done before.
    You should take a look at the talking heads experiment [csl.sony.fr].
    This page [vub.ac.be] has some related publications.
  • by uniqueUser ( 879166 ) on Friday June 23, 2006 @12:48PM (#15590392)
    Was his Aibo able to teach another Aibo what it knew?
    Yes. TFA said that the dogs could learn new tricks and then teach them to other dogs. This is truly something new. How do you program both learning and teaching? I would like to see some of the code!
  • by Theovon ( 109752 ) on Friday June 23, 2006 @01:00PM (#15590502)
    This article is all fluff. They don't say anything really interesting. OK, they can communicate. If that's so, then engineers can record it and perform analysis on the lexicon and grammatical structure. I want to know something about that! I'm sure it won't match up well to human language, but that's okay, because human languages are themselves very diverse in the way things are represented. Would it kill them to give a few examples of 'words' (even if they're described in terms of musical notes or whatever), what they mean, and how they go together to form sentences?
  • by jtogel ( 840879 ) <julian@togelius.com> on Friday June 23, 2006 @01:16PM (#15590661) Homepage Journal
    I don't agree. We basically only need to reward the robots appropriately (just like the zombies in "Day of the Dead") for artificial evolution to create the intelligence for us. "Wants" and "needs" are just words we use to label certain cognitive mechanisms, out of similarity with how we perceive our own thinking. If having "wants" and "needs" leads to better fitness (higher rewards) for the robot, evolution will come up with those things.

    I just wrote a post [blogspot.com] describing the general idea behind this approach to artificial intelligence - check it out!
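The "just reward it and let evolution do the rest" idea can be sketched as a toy (1+1) evolutionary algorithm in Python (purely illustrative; the bit-string policy and the reward function here are stand-ins I invented, not anyone's actual fitness measure):

```python
import random

def evolve_policy(n_bits=20, generations=300, seed=7):
    """Toy (1+1) evolutionary algorithm: mutate a bit-string 'policy'
    and keep the child only if its reward is at least as high. Reward
    is agreement with a hidden target, standing in for how often the
    robot's behaviour earns a reward signal."""
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n_bits)]

    def reward(policy):
        return sum(p == t for p, t in zip(policy, target))

    policy = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(generations):
        # Flip each bit independently with probability 1/n_bits.
        child = [b ^ (rng.random() < 1 / n_bits) for b in policy]
        if reward(child) >= reward(policy):
            policy = child
    return reward(policy), n_bits

score, n_bits = evolve_policy()
```

The point of the sketch: nothing in the loop mentions "wants" or "needs"; whatever mechanisms raise the reward are what survive selection.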
  • by yfnET ( 834882 ) on Friday June 23, 2006 @01:44PM (#15590908) Homepage
    Technology Quarterly [economist.com]

    How to build a Babel fish
    Jun 8th 2006
    From The Economist print edition

    Translation software: The science-fiction dream of a machine that understands any language is getting slowly closer

    IMAGE [economist.com]

    IT IS arguably the most useful gadget in the space-farer’s toolkit. In “The Hitchhiker’s Guide to the Galaxy”, Douglas Adams depicted it as a “small, yellow and leech-like” fish, called a Babel fish, that you stick in your ear. In “Star Trek”, meanwhile, it is known simply as the Universal Language Translator. But whatever you call it, there is no doubting the practical value of a device that is capable of translating any language into another.

    Remarkably, however, such devices are now on the verge of becoming a reality, thanks to new “statistical machine translation” software. Unlike previous approaches to machine translation, which relied upon rules identified by linguists which then had to be tediously hand-coded into software, this new method requires absolutely no linguistic knowledge or expert understanding of a language in order to translate it. And last month researchers at Carnegie Mellon University (CMU) in Pittsburgh began work on a machine that they hope will be able to learn a new language simply by getting foreign speakers to talk into it and perhaps, eventually, by watching television.

    Within the next few years there will be an explosion in translation technologies, says Alex Waibel, director of the International Centre for Advanced Communication Technology, which is based jointly at the University of Karlsruhe in Germany and at CMU. He predicts there will be real-time automatic dubbing, which will let people watch foreign films or television programmes in their native languages, and search engines that will enable users to trawl through multilingual archives of documents, videos and audio files. And, eventually, there may even be electronic devices that work like Babel fish, whispering translations in your ear as someone speaks to you in a foreign tongue.

    This may sound fanciful, but already a system has been developed that can translate speeches or lectures from one language into another, in real time and regardless of the subject matter. The system required no programming of grammatical rules or syntax. Instead it was given a vast number of speeches, and their accurate translations (performed by humans) into a second language, for statistical analysis. One of the reasons it works so well is that these speeches came from the United Nations and the European Parliament, where a broad range of topics are discussed. “The linguistic knowledge is automatically extracted from these huge data resources,” says Dr Waibel.

    Statistical translation encompasses a range of techniques, but what they all have in common is the use of statistical analysis, rather than rigid rules, to convert text from one language into another. Most systems start with a large bilingual corpus of text. By analysing the frequency with which clusters of words appear in close proximity in the two languages, it is possible to work out which words correspond to each other in the two languages. This approach offers much greater flexibility than rule-based systems, since it translates languages based on how they are actually used, rather than relying on rigid grammatical rules which may not always be observed, and often have exceptions.

    Examples abound of the ridiculous results produced by rule-based systems, which are unable to cope in the face of similes, ambiguities or bad grammar. In one example, a sentence written in Arabic meaning “The White House confirmed the existence of a new bin Laden tape” was translated using a standard rule-based translator and became “Alpine white new presence tape registered for cof
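The co-occurrence counting that the quoted article describes can be sketched in miniature (toy corpus and function names invented for illustration; real statistical MT systems use EM-trained alignment models such as the IBM models, not raw counts):

```python
from collections import Counter

# Toy sentence-aligned bilingual corpus (hypothetical example data).
corpus = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("a house", "une maison"),
    ("a car", "une voiture"),
]

# Count how often each (source word, target word) pair appears in
# aligned sentence pairs; frequent pairs suggest likely translations.
cooc = Counter()
for src, tgt in corpus:
    for s in src.split():
        for t in tgt.split():
            cooc[(s, t)] += 1

def best_translation(word):
    """Return the target word that co-occurs most often with `word`."""
    candidates = {t: c for (s, t), c in cooc.items() if s == word}
    return max(candidates, key=candidates.get)
```

With this corpus, "house" pairs with "maison" twice but with "la" and "une" only once each, so the counts alone pull out the right correspondence; no grammar rules were coded anywhere.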
  • by goat_roperdillo ( 984552 ) on Friday June 23, 2006 @02:38PM (#15591361)
    There is no "one" language developed. If you rerun the experiment, the robot dog culture will most likely develop a different language. So indeed, as you suspect, the language is not a known human language, but it has all the aspects of human language: syntax, semantics, grammar, vocabulary, etc.

    And so in the experiments new words are created, and old, less useful words decline in use. At any time there may be multiple words for the same thing in the population, but eventually one of those words mostly "wins out" over the others (although the older word may continue to be used by a small part of the population).

    BTW what's an IPC? [See, here we're attempting to resolve terms in our separate "ontologies" (dictionaries): doing what the W3C's Semantic Web cannot do and what the enhanced AIBOs _can_ do].
