A.I. and Robotics Take Another Wobbly Step Forward

CWmike writes to tell us that artificial intelligence and robotics have made another wobbly step forward with the most recent robot from Stanford. "Stair" is one of a new breed of robot that is trying to integrate learning, vision, navigation, manipulation, planning, reasoning, speech, and natural language processing. "It also marks a transition of AI from narrow, carefully defined domains to real-world situations in which systems learn to deal with complex data and adapt to uncertainty. AI has more or less followed the 'hype cycle' popularized by Gartner Inc.: Technologies perk along in the shadows for a few years, then burst on the scene in a blaze of hype. Then they fall into disrepute when they fail to deliver on extravagant promises, until they eventually rise to a level of solid accomplishment and acceptance."
This discussion has been archived. No new comments can be posted.

  • People perception (Score:2, Interesting)

    by jellomizer ( 103300 ) on Monday January 26, 2009 @06:44PM (#26614415)

    The general population's idea of AI is Data, or the Terminator: somehow superior to us humans, never making mistakes. Real AI, however, is a computer that makes a lot of mistakes and learns from them. And given that a standard computer has the brain power of a bug, it isn't surprising that AI fails to live up to the hype.

  • Hype? (Score:5, Interesting)

    by AnthropomorphicRobot ( 1460839 ) on Monday January 26, 2009 @06:52PM (#26614521) Homepage

    This article points out the problems of over-hyped advances in robotics, while also claiming this robot has transitioned away from narrowly defined domains?

    The voice recognition & language processing component alone would be years ahead of anything else if it worked well outside of a "narrow, carefully defined domain". It seems like they are yet again over-hyping new research.

  • Re:People perception (Score:5, Interesting)

    by Chabo ( 880571 ) on Monday January 26, 2009 @07:11PM (#26614751) Homepage Journal

    Even my AI professor in school pointed to Data as really the end-goal of AI research (as well as a character from Battlestar Galactica, though I don't watch that show). I think many people are aware that modern AI has roughly the intelligence of an animal. That's much improved on AI from when the character Data was made, where the intelligence was more like that of a single-cell organism.

    Of course, considerations must always be made for disaster... [xkcd.com]

    I'm always amazed how broad a field AI really is; algorithms started in AI theory for moving robots around a room can be applied nearly everywhere.
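    The point about navigation algorithms generalizing can be illustrated with breadth-first search over a grid, a standard technique for robot path planning that applies to almost any graph problem. This is only an illustrative sketch; the room layout and coordinates are hypothetical, not anything from the article.

    ```python
    from collections import deque

    def shortest_path(grid, start, goal):
        """Breadth-first search over a 2D grid; 1 = obstacle, 0 = free.
        Returns the number of steps in a shortest path, or -1 if unreachable."""
        rows, cols = len(grid), len(grid[0])
        queue = deque([(start, 0)])
        seen = {start}
        while queue:
            (r, c), dist = queue.popleft()
            if (r, c) == goal:
                return dist
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append(((nr, nc), dist + 1))
        return -1

    # A robot routing around an obstacle in a 3x3 room:
    room = [[0, 1, 0],
            [0, 1, 0],
            [0, 0, 0]]
    print(shortest_path(room, (0, 0), (0, 2)))  # prints 6
    ```

    The same search, with the grid swapped for any other graph, solves puzzle states, network routing, and dependency ordering, which is exactly why these "robot" algorithms turn up everywhere.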

  • Re:People perception (Score:3, Interesting)

    by Nigel Stepp ( 446 ) on Monday January 26, 2009 @09:05PM (#26615959) Homepage

    Then again, some single cell organisms are pretty smart [abc.net.au].

    Seriously though, I don't think AI has yet reached the point of being as smart as your typical animal (which means low-level mammal I'm assuming). Not without substantial loans of intelligence on the part of the AI operator/designer.

  • by MichaelSmith ( 789609 ) on Monday January 26, 2009 @09:22PM (#26616105) Homepage Journal
    I think AI is mainly used in spamming these days. Maybe William Gibson is right. It will be a crime to conspire to enhance an artificial intelligence.
  • Asimo (Score:1, Interesting)

    by Anonymous Coward on Monday January 26, 2009 @10:03PM (#26616465)

    I would like to know how this is better than Asimo or, for that matter, any of the advanced Japanese robots.

  • Re:People perception (Score:2, Interesting)

    by sean4u ( 981418 ) on Tuesday January 27, 2009 @12:12AM (#26617633) Homepage

    You said something unpopular about AI. It's a good job there's no -1 sceptic modpoint, or I wouldn't even have seen your comment.

    As far as I can see, AI has reached the point of being as smart as a snail that's really, really good at chess.

    ...if I've offended any snail slashdot readers, I apologise profusely.

  • Re:The three types (Score:3, Interesting)

    by Simon Brooke ( 45012 ) <stillyet@googlemail.com> on Tuesday January 27, 2009 @02:31PM (#26626227) Homepage Journal

    A machine does not have to reproduce the mechanisms of the human mind in order to display intelligence; it has to emulate the performance. If the inputs are similar and the outputs are similar what happens in the middle is unimportant.

    There is this general faulty reasoning that _understanding is the property of a representation_. That's just wrong. Just as temperature is not a property of molecules, understanding is not a property of a representation. It is a property of a process. In order to display the same "intelligent" behavior we do, machines have to go through the same process.

    There is no ghost in the machine. The human brain is at best a Turing-complete computing engine - at best, because we can prove that it is not possible to be more. And we can prove that (modulo limited store, which is also an issue for human brains) our computers are also Turing complete. So it is not possible that our computers cannot do what a human brain can do - although admittedly we don't yet know how to program them to do it.

    But we will find out, and when we do, I predict we'll look at the trivial little programs and say to ourselves 'is that really all?'

    In the meantime I suggest to you that, apart from the purely academic interest of finding out how people tick, it isn't nearly as useful to program machines to do what people can do as to do what people can't do.
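    The Turing-completeness claim above can be made concrete with a minimal Turing machine simulator: an ordinary program that runs any single-tape machine given as a transition table. The two-state bit-flipping machine below is a hypothetical toy example chosen for illustration, not anything from the discussion.

    ```python
    def run_tm(transitions, tape, state="A", halt="H"):
        """Simulate a single-tape Turing machine.
        transitions: {(state, symbol): (write, move, next_state)}, move in {-1, 0, +1}.
        tape: dict mapping position -> symbol; unwritten cells read as blank "B"."""
        pos = 0
        while state != halt:
            symbol = tape.get(pos, "B")
            write, move, state = transitions[(state, symbol)]
            tape[pos] = write
            pos += move
        return tape

    # A toy machine that flips bits left to right and halts at the first blank:
    flip = {
        ("A", 0):   (1, +1, "A"),
        ("A", 1):   (0, +1, "A"),
        ("A", "B"): ("B", 0, "H"),
    }
    result = run_tm(flip, {0: 1, 1: 0, 2: 1})
    print(result)  # prints {0: 0, 1: 1, 2: 0, 3: 'B'}
    ```

    That a few lines of code suffice to emulate the Turing formalism is the substance of the parent's point: up to memory limits, any computer can run any computation a Turing machine (and so, on this argument, a brain) can perform.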
