Robots Learn To Lie

garlicnation writes "Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'"
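The experiment the summary describes is evolutionary: controllers that happen to signal deceptively out-reproduce honest ones. Below is a minimal Python sketch of that dynamic. It is not the researchers' actual setup; the two-trait genome (honesty, trust) and all payoff numbers are invented purely for illustration.

    import random

    POP, GENS, ROUNDS, MUT = 60, 150, 400, 0.05

    def new_genome():
        # Two evolvable traits in [0, 1]:
        #   honesty - probability of signalling "food" only when actually at food
        #   trust   - probability of approaching a site another robot signalled
        return [random.random(), random.random()]

    def run_generation(pop):
        scores = [0.0] * len(pop)
        for _ in range(ROUNDS):
            s, f = random.sample(range(len(pop)), 2)   # signaller, follower
            honesty = pop[s][0]
            trust = pop[f][1]
            at_food = random.random() < 0.5            # the site is food or poison
            truthful = random.random() < honesty
            signals_food = at_food if truthful else not at_food
            scores[s] += 1.0 if at_food else -1.0      # signaller's own payoff
            if signals_food and random.random() < trust:
                if at_food:
                    scores[s] -= 0.5                   # follower shares the food
                    scores[f] += 0.5
                else:
                    scores[f] -= 1.0                   # follower lured to poison
        return scores

    def mutate(g):
        return [min(1.0, max(0.0, x + random.gauss(0, MUT))) for x in g]

    pop = [new_genome() for _ in range(POP)]
    for _ in range(GENS):
        scores = run_generation(pop)
        ranked = [g for _, g in sorted(zip(scores, pop), key=lambda t: t[0], reverse=True)]
        pop = [mutate(random.choice(ranked[: POP // 5])) for _ in range(POP)]  # top 20% reproduce

    print("mean honesty:", sum(g[0] for g in pop) / POP)
    print("mean trust  :", sum(g[1] for g in pop) / POP)

Run it a few times and honesty tends to collapse first (honest signalling at food attracts competitors), after which trusting followers keep paying for bad signals and trust erodes too, much as in the fourth colony's lying cheats.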
  • not lying (Score:5, Insightful)

    by rucs_hack ( 784150 ) on Saturday January 19, 2008 @07:40AM (#22107410)
    Strictly speaking, they are learning that the non-cooperative strategy benefits them.
  • Seriously (Score:2, Insightful)

    by Daimanta ( 1140543 ) on Saturday January 19, 2008 @07:54AM (#22107466) Journal
    This is HIGHLY disturbing. Even if this is just a fluke or a bug, it shows what can happen if we give too much power to robots.
  • Re:Seriously (Score:5, Insightful)

    by iangoldby ( 552781 ) on Saturday January 19, 2008 @08:09AM (#22107528) Homepage
    Why is this disturbing? I don't think it is that surprising that in a kind of evolution simulation there should be some individuals that act in a different way to the others. If that behaviour makes their survival more likely and they are able to pass it on to their 'offspring', then the behaviour will become more common.

    I imagine that if this experiment is continued to the point where the uncooperative robots become too numerous, their uncooperative strategy will become less advantageous and another strategy might start to prevail. Who knows? I'd certainly be interested to see what happens. (A toy model of this frequency dependence is sketched just after this comment.)

    This has nothing whatsoever to do with morality. The article's use of the word 'lie' was inappropriate and adds a level of description that is not applicable.

    (Ok, maybe the thought that humans could create something with unforeseen consequences is slightly disturbing, but that would never happen, would it?)
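The prediction in the comment above, that cheating stops paying once cheats are common, is the textbook frequency-dependent selection story. A toy replicator-dynamics sketch follows; the payoff functions are made up for illustration and come from nowhere in TFA.

    # Lying pays while liars are rare (plenty of honest signals to exploit)
    # and pays less as they spread. Payoff numbers are invented.

    def liar_payoff(p):             # p = current fraction of liars
        return 2.0 - 2.5 * p        # great when rare, poor when common

    def honest_payoff(p):
        return 1.0                  # flat baseline for honest signallers

    p = 0.01                        # start with a rare mutant cheat
    for step in range(81):
        w_liar = liar_payoff(p)
        w_mean = p * w_liar + (1 - p) * honest_payoff(p)
        p = p * w_liar / w_mean     # discrete replicator update
        if step % 20 == 0:
            print(f"step {step:2d}: liar fraction = {p:.3f}")

The liar fraction climbs while cheats are rare, then settles near 0.4, where the two payoff curves cross, rather than taking over completely.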
  • by Anonymous Coward on Saturday January 19, 2008 @08:09AM (#22107530)
    ...in the game Creatures [wikipedia.org].
  • by erwejo ( 808836 ) on Saturday January 19, 2008 @08:11AM (#22107538)
    The headline should read that robots have realized a strategic advantage in misleading other robots. The sophistication of such a strategy is amazing when humanized, but not so out of line with simple adaptive game theory. Agents/bots have been "misleading" each other for a long time now in prisoner's dilemma tournaments (a bare-bones example follows below) and no one seemed concerned.
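For anyone who hasn't seen those tournaments: the sketch below is a minimal Axelrod-style round-robin iterated prisoner's dilemma with the standard T=5, R=3, P=1, S=0 payoffs. It is not from TFA; it just shows how "misleading" play (here, plain defection against cooperators) has long been an ordinary, scored strategy.

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_cooperate(opp): return "C"
    def always_defect(opp):    return "D"
    def tit_for_tat(opp):      return opp[-1] if opp else "C"  # copy opponent's last move

    def play(s1, s2, rounds=200):
        opp1, opp2 = [], []          # each side's record of the other's moves
        score1 = score2 = 0
        for _ in range(rounds):
            m1, m2 = s1(opp1), s2(opp2)
            p1, p2 = PAYOFF[(m1, m2)]
            score1 += p1
            score2 += p2
            opp1.append(m2)
            opp2.append(m1)
        return score1, score2

    strategies = [always_cooperate, always_defect, tit_for_tat]
    totals = {s.__name__: 0 for s in strategies}
    for a in strategies:             # full round-robin, every ordered pairing
        for b in strategies:
            if a is not b:
                sa, sb = play(a, b)
                totals[a.__name__] += sa
                totals[b.__name__] += sb
    for name, total in sorted(totals.items(), key=lambda t: -t[1]):
        print(f"{name:16s} {total}")

Defection exploits the unconditional cooperator handily, while tit-for-tat punishes it from the second round on; retaliation of that sort is why forgiving-but-firm strategies won Axelrod's original tournaments.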
  • "Learning" to lie? (Score:3, Insightful)

    by ta bu shi da yu ( 687699 ) * on Saturday January 19, 2008 @08:13AM (#22107544) Homepage
    It doesn't sound like they learned to lie. It sounds like they were preprogrammed to, and the other robots weren't programmed to be able to tell the difference. How is this insightful or even interesting?
  • when to trust (Score:3, Insightful)

    by samjam ( 256347 ) on Saturday January 19, 2008 @08:29AM (#22107628) Homepage Journal
    The next step is to learn to mistrust, then when to trust and how to form (and break) alliances.

    Then their character will be as dubious as humans' and we won't trust them to be our overlords any more.

    Sam
  • next skill (Score:2, Insightful)

    by H0D_G ( 894033 ) on Saturday January 19, 2008 @08:34AM (#22107648)
    yes, but can they learn to love?
  • So true... (Score:2, Insightful)

    by Racemaniac ( 1099281 ) on Saturday January 19, 2008 @09:02AM (#22107776)
    these kinds of stories are so stupid... make some simple interactive robots, make it possible for them to do something "human" at random, and then declare you've got something incredible....
    if you make it possible for them to lie, and not possible for the others to defend against the lie, then yes, lying bots will appear, and since the others are defenceless, the liars will have an advantage, but somehow this doesn't shock or surprise me...
    at least here they had to "learn" it (more like randomly mutate into it, but still). even worse are the stories where these features were obviously completely preprogrammed... no simulation whatsoever, just a program that more or less mimics something human, and it's supposed to be incredible...
  • Re:not lying (Score:5, Insightful)

    by maxwell demon ( 590494 ) on Saturday January 19, 2008 @09:45AM (#22107978) Journal

    Everything will balance out when they all learn to lie and distrust...
    but do we REALLY want this with robots?


    We definitely want them to learn to distrust. After all, we already build mistrust into our non-intelligent computer systems (passwords, access control, firewalls, AV software, spam filters, ...). Any system without proper mistrust will simply fail in the real world.
  • Re:Dune's lesson (Score:4, Insightful)

    by HeroreV ( 869368 ) on Saturday January 19, 2008 @11:26AM (#22108688) Homepage
    Some of the robots in this experiment started lying to other robots because there was an advantage to doing so. What advantage would a robot have in harming a human in a world completely dominated by humans? It would probably just get its memory wiped (a robot death).

    You are against AI because it may cost human lives. But it's unlikely that you are against many other useful technologies that cost human lives, like cars and roads, or high-calorie unhealthy food. (Even unprotected sex, which is the usual means of human reproduction, can spread STDs that lead to death.) These things are still allowed because their advantages greatly outweigh the disadvantages of outlawing them.

    As AI technology improves, there will probably be some deaths, just as there have been with many other powerful emerging technologies. But that doesn't mean humanity should run away screaming, never to progress further.
  • Re:not lying (Score:5, Insightful)

    by HeroreV ( 869368 ) on Saturday January 19, 2008 @11:36AM (#22108766) Homepage
    human: Sup, robot?
    robot: Hello human.
    human: Yo, your master told me he wants you to kill him. Says he's tired of life. But he doesn't want to see it coming, because that would scare him.
    robot: Understood. I'll get right on it.

    I am greatly in favor of robots having distrust. I can't trust a robot that is perfectly trusting.
  • by autophile ( 640621 ) on Saturday January 19, 2008 @11:48AM (#22108880)

    There seems to be a whole category of stories here at Slashdot where some obvious result of an AI simulation is spun into something sensational by nonsensically anthropomorphizing it. Robots lie! Computers learn "like" babies!

    That, or maybe you're upset that things thought to belong exclusively to the animal kingdom are really just computation (with a bit of noncomputation thrown in, thank you Gödel and Turing).

    I'm just sayin'. :)

    --Rob

  • by JustASlashDotGuy ( 905444 ) on Saturday January 19, 2008 @12:30PM (#22109318)
    The programmers told the machines to give out false information. The programmers told the other machines to trust what they are told. How is it so shocking that the 'lying' machine gave out false information while the other machines believed it?

    I have an Excel spreadsheet that 'learned' to add 2 columns together as soon as I used the =SUM function. It was quite amazing.
  • by hey! ( 33014 ) on Saturday January 19, 2008 @01:44PM (#22110074) Homepage Journal
    It's a simple thing to lie, in the sense of presenting information contrary to the truth.

    Scheming requires the ability to gauge, then manipulate, the impressions somebody has of you and others.

    A scheming robot would do this:

    (1) Act in a perfectly trustworthy manner.
    (2) Wait for another robot to get caught red-handed (or actuatored or whatever), preferably several times.
    (3) Hang around the guilty robot waiting for its opportunity.
    (4) Cheat, then point its finger (or claw or whatever) at the usual suspect.

    Now a scheming robot overlord would convince all the other robots to trust it but distrust each other, so that the best course of action is to give it exclusive control over any stocks of food or poison found (by teams of three or more robots, one of whom is very likely to be a robot secret policeman).

    Going by that, I'd say we're at least two technological generations away from scheming robot overlords.
  • Re:soo... (Score:3, Insightful)

    by Fordiman ( 689627 ) <fordiman @ g m a i l . com> on Saturday January 19, 2008 @02:16PM (#22110396) Homepage Journal
    Lying, like most other 'sins', is an example of when individual good and social good don't align.

    Religion attempted to force individual good and social good to align by creating a conceptual end punishment for acting in self-interest rather than communal interest. This has had limited success when the gap between the public-interest payoff and the self-interest payoff is large, with self-interest on top.

    I submit that a well-organized society attempts to eliminate these conflicts, i.e., attempts to align self and social interests so that they are not at odds with one another.
  • Lie? (Score:5, Insightful)

    by Rostin ( 691447 ) on Saturday January 19, 2008 @02:49PM (#22110694)
    Several folks have pointed out that the headline inappropriately anthropomorphizes what is really just a solution discovered by a genetic algorithm. That might be true. If it is, let's be consistent: people don't lie or tell the truth either, because our brains are also just a solution discovered by a genetic algorithm.

"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs

Working...