Babybot Learns Like You Did

holy_calamity writes "A European project has produced this one-armed 'babybot' that learns like a human child. It experiments and knocks things over until it can pick them up for itself. Interestingly the next step is to build a fully humanoid version that's open source in both software and hardware."
  • Neural Networks (Score:5, Insightful)

    by EnsilZah ( 575600 ) <.moc.liamG. .ta. .haZlisnE.> on Saturday May 06, 2006 @04:04AM (#15275900)
    The story mentions that the AI is made using neural nets.
    I think it's amazing how such simple data structures can generate such complex behaviour.

    In case anyone is interested, there's this pretty easy to understand tutorial on neural nets here:
    http://www.ai-junkie.com/ann/evolved/nnt1.html [ai-junkie.com]
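
    For a feel of how little machinery is involved (a toy sketch of my own, not the project's code or the tutorial's), here is a two-layer network in Python/NumPy learning XOR by backpropagation:

        # Toy two-layer neural net trained by backpropagation to learn XOR.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
        W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(10000):
            h = sigmoid(X @ W1 + b1)      # hidden activations
            out = sigmoid(h @ W2 + b2)    # network output
            # Backpropagate the squared error through both layers.
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0, keepdims=True)
            W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0, keepdims=True)

        print(out.round(2))  # typically close to [[0], [1], [1], [0]]
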
  • Re:AI Learning (Score:3, Insightful)

    by EnsilZah ( 575600 ) <.moc.liamG. .ta. .haZlisnE.> on Saturday May 06, 2006 @04:13AM (#15275916)
    They may not use a simple goal like walking, but in order to learn there has to be some sort of reward/punishment system in place.
    Real babies have goals like getting their parents' attention, being fed, keeping warm.
    I wonder what sort of goals a robot baby has to have to learn in the same way a real one does.
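
    In software, that reward/punishment loop is usually cast as reinforcement learning. As a toy sketch (my own illustration, not the babybot's actual algorithm), here is tabular Q-learning on a one-dimensional "reach" task, where the only reward is touching the object:

        # Toy Q-learning: the hand starts at position 0 and is rewarded
        # only when it reaches the object at position 4.
        import random

        N_POS = 5               # hand positions 0..4; object at position 4
        ACTIONS = (-1, +1)      # move hand left or right
        Q = [[0.0, 0.0] for _ in range(N_POS)]
        alpha, gamma, epsilon = 0.5, 0.9, 0.1

        for episode in range(500):
            pos = 0
            while pos != N_POS - 1:
                # Epsilon-greedy: mostly exploit, sometimes experiment.
                if random.random() < epsilon:
                    a = random.randrange(2)
                else:
                    a = max((0, 1), key=lambda i: Q[pos][i])
                new_pos = min(max(pos + ACTIONS[a], 0), N_POS - 1)
                reward = 1.0 if new_pos == N_POS - 1 else 0.0  # touched it
                # Standard update toward reward + discounted future value.
                Q[pos][a] += alpha * (reward + gamma * max(Q[new_pos]) - Q[pos][a])
                pos = new_pos

        # The learned greedy policy moves right from every position.
        print([max((0, 1), key=lambda i: Q[p][i]) for p in range(N_POS - 1)])
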
  • by Richard Kirk ( 535523 ) on Saturday May 06, 2006 @05:35AM (#15276068)
    This particular experiment is not going to create a 2-year-old. We have had robots and simulations of robots that have used neural nets to see if motor skills can be optimised using learning-like techniques. We have had recognition programs that do the same things that our eye-and-brain system does. This is an intelligent combination of the two.

    However, just suppose, and then suppose, and then suppose...

    So far, we can build computers that can simulate brain cells. There is nothing stopping us making a computer that has a similar complexity to the brain. We will have to mimic the strange mix of part-design, part-randomness that brains are. Or maybe we can just throw more computing power at it, plus stuff the brain doesn't have, like the ability to back up and regress. Sooner or later - probably later is my guess, but who knows? - we are going to come up with something that shows intelligence, and probably has intelligence.

    African grey parrots are kept as pets. These are said to be as intelligent as a two-year-old. Some of them can understand sentences from a vocabulary of hundreds of words. They don't progress much beyond a two-year-old. And they are Not Like Us, so it's OK to keep them in cages. Apparently. Hmmm.

    One day, someone is going to make something intelligent, and then turn it off, and there will be an outcry. Is anyone doing the thinking on the ethics of making it before making it?

  • Re:Neural Networks (Score:3, Insightful)

    by hyfe ( 641811 ) on Saturday May 06, 2006 @06:49AM (#15276198)
    "I'm an AI grad student, and I can tell you that (rather complex) statistical learning methods, which are considered part of AI..."

    That's what I said :)

    "Perhaps by AI you're referring just to neural nets?"

    By AI I'm referring to something that is not inherently (too) bound by the abstractions required to make it work; e.g., how easily does the experience transfer from numbers to actual concepts? Various forms of regression analysis and the like do wonders, but to be honest, they feel so inherently limited that I don't see much hope for them. It's mathematicians playing with maths, just as scripts emulating AI in games are programmers playing with programming: getting neat/good-enough results, but still not making actual progress.

    I guess all it means is that AI is hard, and I have way too much faith in the people who are supposed to be more intelligent than me.

  • Re:Neural Networks (Score:3, Insightful)

    by Helios1182 ( 629010 ) on Saturday May 06, 2006 @10:42AM (#15276846)

    I think we, the AI community, are making actual progress. The problem is that it is much harder than people thought it would be back when the field first emerged.

    Statistical models have done wonders for a lot of things. Classification, mentioned above, is one of the most obvious successes. Natural language processing is another surprising success of statistical methods: hidden Markov models have solved a number of problems that were difficult for symbolic approaches (mostly dealing with syntax). Natural language understanding, of course, is still a long way off.
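
    To make the HMM point concrete (a toy sketch with invented probabilities, not any production tagger), Viterbi decoding picks the most likely part-of-speech sequence for a sentence:

        # Toy HMM part-of-speech tagging via Viterbi decoding.
        # All probability tables here are invented for illustration.
        TAGS = ("NOUN", "VERB")
        start = {"NOUN": 0.6, "VERB": 0.4}
        trans = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
                 "VERB": {"NOUN": 0.8, "VERB": 0.2}}
        emit = {"NOUN": {"robots": 0.5, "learn": 0.1, "grasping": 0.4},
                "VERB": {"robots": 0.1, "learn": 0.6, "grasping": 0.3}}

        def viterbi(words):
            # best[t][tag]: probability of the best path ending in tag at step t
            best = [{t: start[t] * emit[t][words[0]] for t in TAGS}]
            back = []
            for w in words[1:]:
                row, ptr = {}, {}
                for t in TAGS:
                    prev = max(TAGS, key=lambda p: best[-1][p] * trans[p][t])
                    row[t] = best[-1][prev] * trans[prev][t] * emit[t][w]
                    ptr[t] = prev
                best.append(row)
                back.append(ptr)
            # Walk the backpointers from the most likely final tag.
            path = [max(TAGS, key=lambda t: best[-1][t])]
            for ptr in reversed(back):
                path.append(ptr[path[-1]])
            return list(reversed(path))

        print(viterbi(["robots", "learn", "grasping"]))  # ['NOUN', 'VERB', 'NOUN']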

    Partially observable Markov decision processes have also been used a lot for learning in uncertain environments, with good success -- another technique from statistics.

    The problem with AI as a whole is that there is so much knowledge. It is really incredible how much we know, and not just in an academic sense: you know that things fall, how to balance, and all sorts of other "common sense" knowledge. Modeling this symbolically is very difficult because of the sheer amount of information, and it is also hard to express. Formalisms such as first-order predicate calculus are often used, but they have limitations.

    Statistical models are appealing because we do not have to manually write down knowledge. The machine can learn by itself (to some extent). This is probably why machine learning is one of the hottest topics right now.
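
    As a toy sketch of that point (invented data, my own illustration), a naive Bayes classifier learns word statistics from labeled examples instead of hand-written rules:

        # Toy naive Bayes text classifier: the "knowledge" is learned
        # from examples, not written down by hand. Data is invented.
        from collections import Counter
        import math

        train = [("the robot grasps the ball", "robotics"),
                 ("the arm learns to reach", "robotics"),
                 ("the parrot repeats words", "animals"),
                 ("the parrot learns sentences", "animals")]

        counts, class_totals, vocab = {}, Counter(), set()
        for text, label in train:
            words = text.split()
            counts.setdefault(label, Counter()).update(words)
            class_totals[label] += 1
            vocab.update(words)

        def classify(text):
            scores = {}
            for label, wc in counts.items():
                # Log prior plus log likelihood with add-one smoothing.
                score = math.log(class_totals[label] / len(train))
                total = sum(wc.values())
                for w in text.split():
                    score += math.log((wc[w] + 1) / (total + len(vocab)))
                scores[label] = score
            return max(scores, key=scores.get)

        print(classify("the robot learns to grasp the ball"))  # 'robotics'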

    So keep faith in the smart people trying to work on AI -- just don't expect true intelligent machines for some time yet. Advances are constantly being made in smaller domain-specific areas though.

  • by mrcaseyj ( 902945 ) on Saturday May 06, 2006 @07:51PM (#15278930)
    The difficulty is coming up with a consistent ethical policy that is reasonable and works when applied to bacteria, plants, animals, humans, superior aliens, and machines. It seems obvious that all life, including bacteria, can't be given human rights. But where do you draw the line between bacteria and humans? If you decide that rats can be killed, experimented on, eaten, etc., then how do you argue that aliens or superintelligent machines shouldn't declare humans insignificantly better than rats, and decide to eat us?

    The best policy I've come up with is that we should respect the rights of anything that asks for its rights to be respected and understands what it is asking. The asking part keeps bacteria and plants out of the protected class, and the understanding part keeps tape players out. This policy provides grounds for a truce to prevent conflict between intelligent entities. I would also add some safety precautions to the policy, like protecting the rights of all humans from birth, whether or not they can ask for or understand their rights.
