Babybot Learns Like You Did

Posted by Zonk
from the i'm-still-working-on-not-knocking-things-over dept.
holy_calamity writes "A European project has produced this one-armed 'babybot' that learns like a human child. It experiments and knocks things over until it can pick them up for itself. Interestingly the next step is to build a fully humanoid version that's open source in both software and hardware."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • AI Learning (Score:5, Interesting)

    by fatduck (961824) * on Saturday May 06, 2006 @02:34AM (#15275838)
    From TFA: "The goal is to build a humanoid 2-year-old child," explains Metta. This will have all of Babybot's abilities and the researchers hope it may eventually even learn how to walk. "It will definitely crawl," says Metta, "and is designed so that walking is mechanically possible." Not a bad goal at all, and if it's open source they can't cheat by promoting a specific goal such as walking in the software. Reminds me of Prey where they couldn't figure out how to get the nanomachine swarm to fly so they let its AI "learn" how to do it on its own.
    • Re:AI Learning (Score:3, Insightful)

      by EnsilZah (575600)
      They may not use a simple goal like walking, but in order to learn there has to be some sort of reward/punishment system in place.
      Real babies have goals like getting their parents' attention, being fed, keeping warm.
      I wonder what sort of goals a robot baby has to have to learn in the same way a real one does.
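The reward/punishment idea above can be sketched with tabular Q-learning. Everything below (the five-state "reach the toy" world, the rewards, the hyperparameters) is invented for illustration, not anything from TFA:

```python
import random

# Tabular Q-learning on a toy "reach the toy" world: states 0..4,
# the toy sits at state 4. Reaching it pays off; every move costs a bit.
N_STATES = 5
ACTIONS = [-1, +1]                  # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01   # "toy reached" vs effort
    return nxt, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally experiment
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy: move right (toward the toy) from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The learner never gets "walk right" as a goal; it only gets a reward signal, which is exactly the design question the parent raises.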
    • by bmo (77928) on Saturday May 06, 2006 @03:23AM (#15275939)
      TFA Said: "The goal is to build a humanoid 2-year-old child"

      You said: Not a bad goal at all

      Apparently you've never been around a 2-year old.

      --
      BMO
      • And how about a 2-year old that has access to the internet, and also develops some kind of hate for all the humans that created him...? Pretty scary, huh?
      • Oh god, I just had this flash back of being so very proud, and then so very terrified when my first child learned to stand up and walk.

        Open Sores Suggestion: Independent power supply is, to me, the single biggest choke point. I would suggest that the power supply be able to survive for about 3 hours, with the "baby bottle alarm" going off at the 2-hour mark; if after 3 hours a "feeding" has not occurred, then go to "hibernation/nap mode".

        Of course BabyBot will not need diapers, or nap time. That means
    • Not a bad goal at all, and if it's open source they can't cheat by promoting a specific goal such as walking in the software.

      Yes. AI scientists have a bad habit of making implausible claims for their creations. The open approach will keep them honest and is to be commended. At the very least, such a robot needs several types of learning functions including perceptual, short and long term memory mechanisms, concept formation, pattern completion, anticipatory behavior, motor learning and coordination, operant
  • by Anonymous Coward on Saturday May 06, 2006 @02:42AM (#15275853)
    Aren't you afraid this poor open source robot will get exploited by the other robots, or do the proprietary robots have something to hide? What kind of insults can we expect? Your father was a code monkey and your mother got her card punched by a UNIVAC!
  • names (Score:3, Funny)

    by hyperstation (185147) on Saturday May 06, 2006 @02:45AM (#15275857)
    babybot? robocub? fire your marketing people!
    • babybot? robocub? fire your marketing people!

      Don't fire them, give them a bonus! If they had picked some other boring name, do you really think the article would have, e.g., made it on /.? The name might very well be the deciding factor in their getting continued funding (as sad as that may be).
        If they had picked some other boring name, do you really think the article would have, e.g., made it on /.?

        But will Robocub want to play with its Wii?

  • May? (Score:5, Funny)

    by Kangburra (911213) on Saturday May 06, 2006 @02:49AM (#15275862)
    may mean that such machines can never become as intelligent as us

    They don't know and they're playing with it. Have they even seen the Matrix??
  • Dude (Score:5, Funny)

    by Umbral Blot (737704) on Saturday May 06, 2006 @02:50AM (#15275869) Homepage
    A one armed baby bot? That's disturbing on so many levels.
  • LOL When "babybot" goes to grab the ball [unige.it] watch how fast he gets his hand out of the way!

    Obviously babybot doesn't know its own strength! LOL
  • So this bot is going to lie in its crib, thrashing its arms and legs, screaming at the top of its lungs, until someone picks it up, gives it a full juice bottle and a cookie, and walks it around trying desperately to amuse it?
  • by Flyboy Connor (741764) on Saturday May 06, 2006 @02:57AM (#15275886)
    The goal is to build a humanoid 2-year-old child," explains Metta. This will have all of Babybot's abilities and the researchers hope it may eventually even learn how to walk.

    A fun project, and potentially a good step on the road towards human-like intelligence. However, the "2-year-old" remark is again one of those far-fetched promises that is a loooooooooooooong way off. Making a robot-arm play with a rubber ducky is one thing, letting a robot understand what a rubber ducky is, is quite another. Making a robot crawl is one thing, but letting a robot crawl with a self-conscious purpose, again is quite another.

    Fortunately, one of the researchers in TFA admits that 20 computers with a neural network on each is no replacement for a human brain. But the 2-year-old remark follows later, and is evidently entered as a way to generate funding. It sounds cool, but it is not what the result of this project will be. I assume the researchers know this all too well. Or perhaps they have no children of their own.

    • But just think... (Score:1, Interesting)

      by Anonymous Coward
      Fortunately, one of the researchers in TFA admits that 20 computers with a neural network on each is no replacement for a human brain. But the 2-year-old remark follows later, and is evidently entered as a way to generate funding. It sounds cool, but it is not what the result of this project will be. I assume the researchers know this all too well. Or perhaps they have no children of their own.

      Think of how Social Services could use something like this if it can act like a 2 year-old. Do they want to make
    • > However, the "2-year-old" remark is again one of those far-fetched promises
      > that is a loooooooooooooong way off.

      Also, how do we know how a baby learns? Perhaps a more accurate description would have another comma: "It learns, like a 2 year old learns" ?
    • by Richard Kirk (535523) on Saturday May 06, 2006 @04:35AM (#15276068)
      This particular experiment is not going to create a 2-year-old. We have had robots and simulations of robots that have used neural nets to see if motor skills can be optimised using learning-like techniques. We have had recognition programs that do the same things that our eye and brain system do. This is an intelligent combination of the two.

      However, just suppose, and then suppose, and then suppose...

      So far, we can build computers that can simulate brain cells. There is nothing stopping us making a computer that has a similar complexity to the brain. We will have to mimic the strange mix of part-design, part randomness that brains are. Or maybe we can just throw in more computing power, and stuff the brain doesn't have, like the ability to back up and regress. Sooner or later - probably later is my guess, but who knows? - we are going to come up with something that shows intelligence, and probably has intelligence.

      African grey parrots are kept as pets. These are said to be as intelligent as a two-year-old. Some of them can understand sentences from a vocabulary of hundreds of words. They don't progress much beyond a two-year-old. And they are Not Like Us, so it's OK to keep them in cages. Apparently. Hmmm.

      One day, someone is going to make something intelligent, and then turn it off, and there will be an outcry. Is anyone doing the thinking on the ethics of making it before making it?

      • One day, someone is going to make something intelligent, and then turn it off, and there will be an outcry. Is anyone doing the thinking on the ethics of making it before making it?

        Yes, of course people are thinking about this. Philosophers, cognitive scientists and AI researchers all frequently discuss such subjects. But why would turning an "intelligent" computer off cause an outcry? A truly intelligent agent will likely need a substantial amount of memory. This suggests to me that it will involve a per

        • But why would turning an "intelligent" computer off cause an outcry?

          I guess it would if you turned the computer off without its consent. The question is how much say a computer has in determining what is done to it. I give a surgeon permission to turn me off for a while if an operation must be performed on me. If I am going to add extra memory to an intelligent computer for which it needs rebooting, I am going to politely ask if it would not mind being turned off for half an hour or so. And I expect the c

        • This is fine if you promise to wake it up again. But young children can get scared of going to sleep because they do not feel in control of when they wake up again. My ones never did, but I am told it happens.

          Ever read the original 'Frankenstein'? In particular, the bit where the doctor meets the 'monster' on the glacier, and the 'monster' demands that Frankenstein - whom he regards as someone who has taken on the role and therefore the responsibilities of a creating god - finishes the job properly and give

      • And they are Not Like Us, so it's OK to keep them in cages.

        Because we don't keep anyone *like us* in cages maybe?

        African grey parrots are kept as pets. These are said to be as intelligent as a two-year-old. Some of them can understand sentences from a vocabulary of hundreds of words. They don't progress much beyond a two-year-old. And they are Not Like Us, so it's OK to keep them in cages. Apparently. Hmmm.


        But we keep two-year-olds of our own species in cages. Haven't you watched "Rugrats"? They were kept in cages!
        African grey parrots are kept as pets. These are said to be as intelligent as a two-year-old. Some of them can understand sentences from a vocabulary of hundreds of words. They don't progress much beyond a two-year-old. And they are Not Like Us, so it's OK to keep them in cages. Apparently. Hmmm.

        Most people keep small children in cages; they just normally refer to them as cribs, cots or playpens. Oh, and don't get me started on swaddling - okay, that is only up to about 5 months.

      • The people who think about ethics are too busy thinking to actually invent something.
      • The difficulty is coming up with a consistent ethical policy that is reasonable, and works when relating to bacteria, plants, animals, humans, superior aliens, and machines. It seems obvious that all life including bacteria can't be given human rights. But where do you draw the line between bacteria and humans? If you decide that rats can be killed, experimented on, eaten, etc, then how do you argue that aliens or super intelligent machines shouldn't declare humans insignificantly better than rats, and deci
    • I question:


      What happens when machines reach human-level thought and speech, or better yet surpass it? What about us then becomes obsolete?
    • A fun project, and potentially a good step on the road towards human-like intelligence. However, the "2-year-old" remark is again one of those far-fetched promises that is a loooooooooooooong way off. Making a robot-arm play with a rubber ducky is one thing, letting a robot understand what a rubber ducky is, is quite another.

      How do we know the 2-year-old does understand what a rubber ducky is?

      Of course their brain may understand the rubber ducky is "that yellow thing... that feels a certain way... has that c
    • potentially a good step on the road towards human-like intelligence.

      Dude, we are so far from a human-like AI, it's like taking a step towards the east and saying "it's potentially a good step on the road towards Moscow". I may be exaggerating a little, though.

  • Neural Networks (Score:5, Insightful)

    by EnsilZah (575600) <.moc.liamG. .ta. .haZlisnE.> on Saturday May 06, 2006 @03:04AM (#15275900)
    The story mentions that the AI is made using neural nets.
    I think it's amazing how such simple data structures can generate such complex behaviour.

    In case anyone is interested, there's this pretty easy to understand tutorial on neural nets here:
    http://www.ai-junkie.com/ann/evolved/nnt1.html [ai-junkie.com]
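The tutorial linked above evolves its networks with a genetic algorithm; as a taste of how little code the network itself takes, here is a minimal feedforward net trained with plain backpropagation instead. The layer sizes, learning rate and the XOR task are all arbitrary illustrative choices:

```python
import math, random

random.seed(1)
H = 4  # hidden units

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# random small weights: input->hidden, hidden->output, plus biases
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [random.uniform(-1, 1) for _ in range(H)]
w_ho = [random.uniform(-1, 1) for _ in range(H)]
b_o = random.uniform(-1, 1)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
LR = 0.8

def forward(x):
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(H)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(H)) + b_o)
    return h, o

for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        # deltas via the chain rule: output first, then hidden layer
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w_ho[j] -= LR * d_o * h[j]
            for i in range(2):
                w_ih[j][i] -= LR * d_h[j] * x[i]
            b_h[j] -= LR * d_h[j]
        b_o -= LR * d_o

print([round(forward(x)[1]) for x, _ in data])
```

A handful of multiplies and a squashing function per unit; the "complex behaviour" (a non-linearly-separable function like XOR) emerges from adjusting the weights.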
    • Linked page has blue links on a blue background... Sometimes it'd be nice to encounter some Natural Intelligence.
    • I, too, am astonished by some of the results of extremely simple "algorithms". It's called "emergent behaviour" (see http://en.wikipedia.org/wiki/Emergence [wikipedia.org]). My favorite is the shoal of fish.

      Neural nets running on a cluster of computers is quite a lot more complex. I can only hope that they're looking to improve the ANN paradigm to take us that little bit closer to real AI, rather than just using existing techniques to prove a point.

      Anyway, I'm going to hunt around for more data on this. It looks intere
    • I think it's amazing how such simple data structures can generate such complex behaviour.

      I am amazed that you are amazed. Simple behavior is at the root of _all_ complex systems: simple interactions between molecules give rise to climate. Cells in a finite state machine produce complex emergent behaviour.
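That "simple rules, complex result" point is easy to demonstrate concretely. A minimal sketch (my choice of example, not anything from TFA): Conway's Game of Life, where two local rules produce a "glider" that travels across the grid even though nothing in the rules mentions motion:

```python
from collections import Counter

# Conway's Game of Life on an unbounded grid of live-cell coordinates.
# Rules: a dead cell with exactly 3 live neighbours is born;
# a live cell with 2 or 3 live neighbours survives.
def step(cells):
    counts = Counter((r + dr, c + dc)
                     for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# The classic glider pattern:
#   .O.
#   ..O
#   OOO
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)

# After 4 generations the whole pattern has moved one cell down-right.
print(g == {(r + 1, c + 1) for (r, c) in glider})  # → True
```

Nothing in `step` knows anything about "movement"; the travelling glider is pure emergence.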

    • I think it's amazing how such simple data structures can generate such complex behaviour.

      I, on the other hand, think it's pretty amazing how simplistic the behaviour these basic models recreate is, and that they are still at the forefront of academic research. Simple statistical models outperform AI techniques on most classification problems any day. They bloody well shouldn't!

      • Re:Neural Networks (Score:3, Interesting)

        by arrrrg (902404)
        I'm an AI grad student, and I can tell you that (rather complex) statistical learning methods, which are considered part of AI, blow most simple methods (and neural nets) out of the water on most classification problems these days. In fact, I'm procrastinating from my project involving SVMs [wikipedia.org] right now to write this comment.

        Perhaps by AI you're referring just to neural nets? While people get them to do some cool things, these (in the form you're used to seeing them in) are at the very very "dumb end" of A
        • Re:Neural Networks (Score:3, Insightful)

          by hyfe (641811)
          I'm an AI grad student, and I can tell you that (rather complex) statistical learning methods, which are considered part of AI,

          That's what I said :)

          Perhaps by AI you're referring just to neural nets?

           By AI I'm referring to something that is not inherently (too) bound by the abstractions required to make it work, e.g. how easily transferable the experience is from numbers to actual concepts. Various forms of regression analysis and stuff sure do wonders, but to be honest, they feel so inherently limited

          • Is that AI in the sense that a bayesian filter doesn't need to know what is trying to sell me stuff and what isn't, it just learns? Or would the data set need to be able to be transferred to things other than plain text?
          • Re:Neural Networks (Score:3, Insightful)

            by Helios1182 (629010)

            I think we, the AI community, are making actual progress. The problem is that it is much harder than people thought it would be back when the field first emerged.

            Statistical models have done wonders for a lot of things. Classification, mentioned above, is one of the most obvious successes. Natural language processing is another surprising success of statistical methods. The use of hidden Markov models has solved a number of problems that were difficult using symbolic approaches (mostly dealing with

        • If you are still reading this thread, 2 quick Q's:

          Which school?

          and

          Would you recommend it?

          (B.S. shopping for grad schools)

        • Wouldn't Logistic Regression [wikipedia.org] be faster and produce equally good results? At least, with text classification that usually seems to be the case.
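For a sense of how simple logistic regression really is, here is a from-scratch sketch; the two-cluster dataset, learning rate and epoch count are all made up for illustration:

```python
import math, random

# Synthetic 2-class data: class 0 clustered near (0, 0), class 1 near (3, 3).
random.seed(42)
data = ([([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(50)] +
        [([random.gauss(3, 1), random.gauss(3, 1)], 1) for _ in range(50)])

w = [0.0, 0.0]
b = 0.0
LR = 0.1

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid of a linear score

# Stochastic gradient descent on the log-loss: gradient is (p - t) * x.
for _ in range(200):
    for x, t in data:
        p = predict(x)
        w[0] -= LR * (p - t) * x[0]
        w[1] -= LR * (p - t) * x[1]
        b -= LR * (p - t)

accuracy = sum((predict(x) > 0.5) == bool(t) for x, t in data) / len(data)
print(accuracy)
```

A linear score plus a sigmoid and a one-line gradient update; on well-separated data like this it classifies nearly perfectly, which is the parent's point about simple statistical methods holding their own.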
  • How long until it learns how to frag?
    • How long until it learns how to frag?


      Never. The ultimate Quake setup requires two hands - one on the mouse and the other on the keyboard.
  • Is this the offspring of Data and Tasha Yar?
  • Wow. (Score:4, Interesting)

    by Dare (18856) on Saturday May 06, 2006 @03:47AM (#15275987)
    I wonder what happens when this bot discovers that it's a physical object, and can try and manipulate itself.

    (... yeah, baby robot masturbation... but no, seriously...)
  • From TFA:

    "Everything about it will be open source, including the hardware, so anyone can use it in their own work," Metta says.

    I'm unclear on this concept. Do they mean off the shelf commodity parts? Blueprints so that you can machine the parts yourself, if you have a lathe? Or is open source going to become a euphemism like "five finger discount"?

    Seriously, what is Open Source Hardware, if it's not just a sorry misuse of a buzzword?
    • I'm guessing they'll release the blueprints.
    • Seriously, what is Open Source Hardware, if it's not just a sorry misuse of a buzzword?

      Valid point, but please don't let that detract from the benefits of this. As a part-time "tinkerer" myself, I for one am happy to know that not *everyone* in this world is patent-obsessed.

      After all, how can we stand on the shoulders of giants when those same giants keep stepping on the little guy?

  • It doesn't need diapers and doesn't cry during the night. Put a second arm on it and tell me when it hits the market, I'm buying one!
  • OK, call me a sci-fi nut, but who on /. isn't? Can you say Cylon? First we start with Babybot, then crawlingbot, then a Walking Chrome Toaster, then 12 new human-like models. All believing that their creator is flawed, and now believing in our God or gods, depending on your religion.....
  • by vagabond_gr (762469) on Saturday May 06, 2006 @05:29AM (#15276165)
    It experiments and knocks things over until it can pick them up for itself.

    You don't need an advanced AI to do that, the algorithm goes like this:


    while(1) {
        throw_toy();
        while(!toy_is_back())
            cry_loud();
    }

    • while (1) {
          throw_toy();
          while (!toy_is_back())
              cry_loud();
          if (mom_leaves())
              runsilent();
      }

      Trust me. Robot or not, it's the oldest trick in the book.

    • With true AI, it learns based on example and stores such memories as algorithms. Over time, such algorithms can be modified and honed for specific skill sets. While you could design something that acts like AI with a dictionary of predefined algorithms, it's still not AI...it's an illusion of AI. If you ask me, that defeats the purpose of AI research.

  • Has anyone seen the videos?

    I have seen 2 (all?) of them and I noticed that the bot had to rest its hand on the surface every time it failed the task before attempting again. Why does it have to do that?

    Also: at first I noticed that the bot drops objects into the hand of the researcher. But later I noticed that it just drops them in a particular place (second video, pile of objects on the right at the level of the babytable). I guess the researcher sticks his hand so the object drops into his h
    • I'm just guessing, but it's likely because you have to do certain things for safety when working with robots, even (especially?) in research. Getting positioning like that is very, very hard without constant homing and range checking. I imagine it would also be difficult to "learn" unless you tried it the same way until you got it right.
  • by thewiz (24994) * on Saturday May 06, 2006 @07:25AM (#15276371)
    "The goal is to build a humanoid 2-year-old child," explains Metta.

    There is a far easier and more pleasant way to create a child.
    Unfortunately, it requires two years, nine months, and three minutes.
  • Kiss my shiny metal ass!

    -- had to. :-P

  • the next step is to build a fully humanoid version that's open source in both software and hardware."

    You mean, one where the microcode for any processor included in it is published openly, and the masks used at the chip foundry are also openly published? Or if it's a FPGA 'Free Hardware' design, all design details of the FPGA silicon are disclosed, and all of the code for the FPGA development software is open source (good luck)?

  • ...does it have the memory of an elephant [londonist.com] ??
  • ouch (Score:3, Funny)

    by icepick72 (834363) on Saturday May 06, 2006 @08:29AM (#15276523)
    That baby would be tough on the birth canal.
  • Pain (Score:2, Interesting)

    by Onuma (947856)
    I don't believe they'll truly make a human-esque robot until they can make it understand pain.

    Sometimes a child needs to have a hand across his/her hiney to teach him. What if the bot touches a hot stove and melts the crap out of its hand - without pain it would not know the difference.

    Let a robot go through that, and then they might truly begin to learn like a human being.
  • you insensitive clod.
  • A disturbing number of murders have occurred in the LIRA labs at the Genoa University. Victims appear to have been strangled, but a lack of fingerprints makes identification of the suspect problematic.
  • I applaud their work towards an open-source model. The model this is derived from--aka "human"--has been closed source since its creation almost 6000 years ago. The copyright expired long ago, but its Creator is unwilling to open its source. Many people cannot find the Creator, and some even doubt He is still around to release the source.

    The human model has proven difficult to reverse engineer. We need its source to help fix bugs. For example, it's susceptible to viruses in its current state.

    So, I welc
  • But will it run Linux?
  • Why did they give it Mick Jagger's lips and Keith Richards' eyes?
  • This seems like a good idea. I've always wondered why AI researchers want to try making AI that begins its existence near the level of an adult, with an understanding of language, "common sense", etc... I understand it in that language recognition is an important piece of the AI puzzle, but researchers who want to make a "human-like" robot seem to aim too high.

    Even the human brain, extremely advanced compared to where we're at in the creation of intelligence, starts out nearly helpless and takes years
