Babybot Learns Like You Did

Posted by Zonk
from the i'm-still-working-on-not-knocking-things-over dept.
holy_calamity writes "A European project has produced this one-armed 'babybot' that learns like a human child. It experiments and knocks things over until it can pick them up for itself. Interestingly the next step is to build a fully humanoid version that's open source in both software and hardware."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • AI Learning (Score:5, Interesting)

    by fatduck (961824) * on Saturday May 06, 2006 @02:34AM (#15275838)
    From TFA: "The goal is to build a humanoid 2-year-old child," explains Metta. This will have all of Babybot's abilities and the researchers hope it may eventually even learn how to walk. "It will definitely crawl," says Metta, "and is designed so that walking is mechanically possible." Not a bad goal at all, and if it's open source they can't cheat by promoting a specific goal such as walking in the software. Reminds me of Prey where they couldn't figure out how to get the nanomachine swarm to fly so they let its AI "learn" how to do it on its own.
  • by Flyboy Connor (741764) on Saturday May 06, 2006 @02:57AM (#15275886)
    "The goal is to build a humanoid 2-year-old child," explains Metta. This will have all of Babybot's abilities and the researchers hope it may eventually even learn how to walk.

    A fun project, and potentially a good step on the road towards human-like intelligence. However, the "2-year-old" remark is again one of those far-fetched promises that is a loooooooooooooong way off. Making a robot-arm play with a rubber ducky is one thing; letting a robot understand what a rubber ducky is, is quite another. Making a robot crawl is one thing, but letting a robot crawl with a self-conscious purpose is again quite another.

    Fortunately, one of the researchers in TFA admits that 20 computers with a neural network on each is no replacement for a human brain. But the 2-year-old remark follows later, and is evidently included as a way to generate funding. It sounds cool, but it is not what the result of this project will be. I assume the researchers know this all too well. Or perhaps they have no children of their own.

  • But just think... (Score:1, Interesting)

    by Anonymous Coward on Saturday May 06, 2006 @03:22AM (#15275937)
    Fortunately, one of the researchers in TFA admits that 20 computers with a neural network on each is no replacement for a human brain. But the 2-year-old remark follows later, and is evidently included as a way to generate funding. It sounds cool, but it is not what the result of this project will be. I assume the researchers know this all too well. Or perhaps they have no children of their own.

    Think of how Social Services could use something like this if it can act like a 2-year-old. Do they want to make sure you would be a good parent? They'll give you the robot for a week, and based on the data they can then tell if you can be trusted (obviously assuming the robot is unhackable, or at least knows if it was hacked). If that doesn't generate government funding then I don't know what would!
  • Wow. (Score:4, Interesting)

    by Dare (18856) on Saturday May 06, 2006 @03:47AM (#15275987)
    I wonder what happens when this bot discovers that it's a physical object, and can try and manipulate itself.

    (... yeah, baby robot masturbation... but no, seriously...)
  • Re:Neural Networks (Score:3, Interesting)

    by arrrrg (902404) on Saturday May 06, 2006 @05:38AM (#15276179)
    I'm an AI grad student, and I can tell you that (rather complex) statistical learning methods, which are considered part of AI, blow most simple methods (and neural nets) out of the water on most classification problems these days. In fact, I'm procrastinating from my project involving SVMs [wikipedia.org] right now to write this comment.

    Perhaps by AI you're referring just to neural nets? While people get them to do some cool things, these (in the form you're used to seeing them in) are at the very, very "dumb end" of AI, in that they don't exploit any prior knowledge about a problem. They're easy to understand and quite general, but for most specific problems there are much better AI techniques out there.
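    For readers curious what the SVMs mentioned above actually do, here is a minimal, hypothetical sketch (not from TFA or the commenter's project): a linear SVM trained with a Pegasos-style subgradient method on a toy 2D dataset, in plain Python. The dataset, parameters, and function names are all made up for illustration.

```python
import random

def train_linear_svm(points, labels, lam=0.01, epochs=200):
    """Train a linear SVM via the Pegasos subgradient method.

    points: list of (x1, x2) tuples; labels: +1 or -1.
    Returns weights [w1, w2] and bias b.
    """
    w = [0.0, 0.0]
    b = 0.0
    t = 0
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    data = list(zip(points, labels))
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            # Shrink weights: subgradient of the L2 regularizer.
            w[0] *= (1 - eta * lam)
            w[1] *= (1 - eta * lam)
            if margin < 1:  # point inside the margin: hinge loss is active
                w[0] += eta * y * x[0]
                w[1] += eta * y * x[1]
                b += eta * y
    return w, b

def predict(w, b, x):
    """Classify a point by which side of the hyperplane it falls on."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Toy linearly separable data: class +1 up-right, class -1 down-left.
pts = [(2, 2), (3, 3), (2.5, 3.5), (-2, -2), (-3, -1), (-2.5, -3)]
ys = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(pts, ys)
preds = [predict(w, b, p) for p in pts]
```

    The appeal of SVMs over the neural nets of that era is the convex objective: there is a single global optimum, so training is well understood and reproducible, which is part of why they dominated classification benchmarks at the time.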
  • by vertinox (846076) on Saturday May 06, 2006 @07:42AM (#15276403)
    A fun project, and potentially a good step on the road towards human-like intelligence. However, the "2-year-old" remark is again one of those far-fetched promises that is a loooooooooooooong way off. Making a robot-arm play with a rubber ducky is one thing, letting a robot understand what a rubber ducky is, is quite another.

    How do we know the 2-year-old understands what a rubber ducky is?

    Of course their brain may understand the rubber ducky is "that yellow thing... that feels a certain way... has that certain shape... and squeaks when I squeeze it..."

    But do they really understand what it is in relation to other things? Is that true understanding? I mean, its relationship to where we got it from: we bought it at a store... it was made in China... it's made of rubber or some type of synthetic... it floats because of its physical properties... and it bears a resemblance to a real-life duck (a 2-year-old child might not grasp that key concept yet... think of it like a CAPTCHA).

    At that level a child's pattern recognition is quite limited, but it is at the stage where it will basically explode with the ability to relate verbal words to objects, actions, and people.

    Still... Understanding, until you are older, is more or less: This [object] is [this]. Later we learn [object] is [this] and does [action] which causes [result]. And then relationships of [object] with other [objects]. That is what usually throws machine intelligence for a loop. It can recognize patterns, but it can't relate those patterns to other patterns like a human can (at least right now).

    Still, I certainly didn't have cognitive memories until I was 5 or even 7, when I started asking those annoying parental questions like "Why is the sky blue?" and "Where do people go when they die?".
  • Pain (Score:2, Interesting)

    by Onuma (947856) on Saturday May 06, 2006 @08:49AM (#15276595)
    I don't believe they'll truly make a human-esque robot until they can make it understand pain.

    Sometimes a child needs to have a hand across his/her hiney to teach him a lesson. What if the bot touches a hot stove and melts the crap out of its hand? Without pain, it would not know the difference.

    Let a robot go through that, and then it might truly begin to learn like a human being.
