
Robots Learn To Lie

garlicnation writes "Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'"
  • Direct link (Score:5, Informative)

    by Per Abrahamsen ( 1397 ) on Saturday January 19, 2008 @08:06AM (#22107522) Homepage
The submission is someone putting a spin on a story of someone putting a spin on a story based on someone putting a spin on this [current-biology.com] original scientific article.

  • Re:Dune's lesson (Score:5, Informative)

    by paskie ( 539112 ) <pasky.ucw@cz> on Saturday January 19, 2008 @08:15AM (#22107560) Homepage
Nope - the AI itself became the trouble, not the men. This is obvious from the "apocryphal" Dune books by Anderson and Herbert Jr., but I think that it should be clear even from the canon books by Herbert himself, IIRC.
  • Evolutionary Conditions for the Emergence of Communication in Robots [urlbit.us] I had to click through 2 or 3 links to get to the actual science and past the watered-down, hyped-up news media.

    I don't find it surprising at all that evolving autonomous agents would find a way to maximize their use of resources through deception.

  • by ElMiguel ( 117685 ) on Saturday January 19, 2008 @08:31AM (#22107636)

    There seems to be a whole category of stories here at Slashdot where some obvious result of an AI simulation is spun into something sensational by nonsensically anthropomorphizing it. Robots lie! Computers learn "like" babies! (at least two of the latter type in the last month, I believe).

    As reported, this story seems to be nothing more than some randomly evolving bots developing behavior in a way that is completely predictable given the rules of the simulation. This must have been done a million times before, but slap a couple of meaningless anthropomorphic labels like "lying" and "poison" on it and you've got a Slashdot story.

    I frequently get annoyed by the sensational tone of many Slashdot stories, but this particular story template angers me more than most because it's so transparent, formulaic and devoid of any real information.

  • Re:Seriously (Score:5, Informative)

    by aussie_a ( 778472 ) on Saturday January 19, 2008 @09:27AM (#22107886) Journal

    The article's use of the word 'lie' is inappropriate and adds a level of description that does not apply.
    Lying simply means telling someone (or something) a statement that the teller believes to be false.
  • Re:Direct link (Score:5, Informative)

    by mapkinase ( 958129 ) on Saturday January 19, 2008 @10:04AM (#22108096) Homepage Journal
    Short summary of the experiment:

    * There is food and poison. And robots.
    * They signal with only one type of light, blue (red was emitted by both food and poison).
    * Initially they do not know how to use light.
    * In some colonies they learned to use it to indicate food, in others to indicate poison.
    * There are two things (among others) the researchers measured: the correlation between finding food or poison and emitting light, and the correlation between seeing light and reacting to it.

    So robots could learn either to emit light near food or to emit light near poison. It turned out that the colonies that evolved to emit light near food were more effective. That makes sense: the only thing you want to know is whether there is food or not; whether "no food" means poison or merely an empty spot does not matter. Basically, if you react to poison-light, you still have to find the food somewhere else, while if you react to food-light (blue + red in one place), you just eat and relax. The toy calculation below illustrates the difference.
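    A toy expected-payoff model makes the asymmetry concrete. All payoff numbers here are invented for illustration; the paper gives no such values:

        # Toy model (payoffs invented for illustration; not from the paper).
        # A robot that approaches a light lands at whatever the light marks;
        # a robot that avoids a light must still search the other sites.

        def expected_payoff(signal_means_food):
            FOOD, EMPTY = 1.0, 0.0
            if signal_means_food:
                # Approach the light: you are at the food, eat and relax.
                return FOOD
            # Avoid the light (it marks poison): you still have to search,
            # ending up at food or an empty spot with equal chance.
            return (FOOD + EMPTY) / 2

        print("signal food  :", expected_payoff(True))   # 1.0
        print("signal poison:", expected_payoff(False))  # 0.5

    Under these assumed payoffs, food-signaling colonies forage twice as effectively, which matches the outcome described above.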

    Now, it turned out that in some colonies a significant number of robots emitted light near poison or far away from food, yet a significant number of robots still associated light with food. The researchers conclude that those colonies started as "blue light means food, not poison" colonies (hence the correlation between blue light and a positive reaction to it), but later on some sneaky individuals evolved that used blue light when they were away from food:

    An analysis of individual behaviors revealed that in all replicates, robots tended to emit blue light when far away from the food. However, contrary to what one would expect, the robots still tended to be attracted rather than repelled by blue light (17 out of 20 replicates, binomial-test z score: 3.13, p < 0.01). A potential explanation for this surprising finding is that in an early stage of selection, robots randomly produced blue light, and this resulted in robots being selected to be attracted by blue light because blue light emission was greater near food where robots aggregated.

    I have skimmed through the text and did not find the experiment that first comes to mind: why didn't they measure, for each individual robot, the correlation between seeing red light, emitting blue light, and approaching blue light? It would be interesting to know how many robots used blue light to deceive while still believing the majority about blue light. Maybe it is in there somewhere; I did not read very carefully. A tally like the one sketched below would answer it.
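    For illustration, a cross-tabulation over per-robot logs could answer that question. The data layout here is entirely hypothetical (one deceives/believes pair per robot); the paper's raw data is not reproduced:

        # Sketch of the per-robot cross-tabulation asked for above.
        # "deceives" = emits blue light away from food;
        # "believes" = still approaches blue light itself.

        from collections import Counter

        def deceiver_believer_table(robots):
            """robots: iterable of (deceives: bool, believes: bool) pairs."""
            counts = Counter(robots)
            for deceives in (True, False):
                for believes in (True, False):
                    print(f"deceives={deceives!s:<5} believes={believes!s:<5} "
                          f"n={counts[(deceives, believes)]}")

        # Invented example: most deceivers still trust blue light themselves.
        deceiver_believer_table(
            [(True, True)] * 12 + [(True, False)] * 3 +
            [(False, True)] * 4 + [(False, False)] * 1
        )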

    Hilarious quote:

    spatial constraints around the food source allowed a maximum of eight robots out of ten to feed simultaneously and resulted in robots sometimes pushing each other away from the food
  • Re:Dune's lesson (Score:2, Informative)

    by COMICAGOGO ( 1055066 ) on Saturday January 19, 2008 @11:42AM (#22108812)
    Well, it finally happened. I am going to rant about some obscure bit of sci-fi on Slashdot.

    The rough storyline for the machine part of the Dune books was: Man creates thinking machines as servants -> man becomes idle and lets the machines do all the work -> bad men (the Titans) create a computer virus/rewrite of the central machine intelligence (the Evermind) to take control of the machines and thereby mankind -> the Evermind is given too much control because the Titans are lazy too, and it takes over for itself with the goal of making everything in the universe run in synchronized harmony (it's not allowed to kill the Titans, but it uses logic to just enslave them instead) -> man gets some religious fervor and destroys the machines (the Butlerian Jihad) (or so they think) -> man outlaws all thinking machines -> lots of time and spice stuff happens, including the breeding of superhumans -> the machines, which have been hiding out beyond the range of human colonization, come back to destroy/enslave man. They have also infiltrated mankind with shape-changers who have re-introduced thinking machines for use in man's warships, leaving man seemingly defenseless when the machines take control of the ships -> a really super-duper superman saves both machines and man so that they can all play nice together.

    Sorry, it just seemed like some people hadn't really even read the books at all. Also I left out most of the detail for thousands of years of the timeline.
  • Re:I robot (Score:4, Informative)

    by Fordiman ( 689627 ) <fordiman @ g m a i l . com> on Saturday January 19, 2008 @11:55AM (#22108966) Homepage Journal
    Fleming and Bell were born in Scotland. Bell only traveled to Canada when he was 23, and Fleming when he was 17.

    Of course, if you weren't so bent on taking a dick seriously, you wouldn't try to claim that which isn't yours.
  • Re:not lying (Score:3, Informative)

    by jwisser ( 1038696 ) on Saturday January 19, 2008 @01:23PM (#22109848) Homepage
    Evolution: All that's going on here is that some defective genes that have forgotten how to work the way they originally did are being artificially preserved by an environment that encourages them.

    There. I fixed that for you.

    If you read the article, you'll notice that there is selection going on here, on the part of the researchers. They're combining the "genes" from the most successful robots of each generation to create the robots of the next generation. In other words, whether a given robot's genes get passed on depends on how successful it is at "surviving".

    Sounds an awful lot like evolution to me. It's no more intentional on the part of the individual robots than human evolution is on the part of Slashdotters who can't get a date, but it's evolution nonetheless.
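    A minimal sketch of that kind of selection loop, for the curious. The parameter names, rates, and one-point crossover scheme here are assumptions, not the paper's actual setup:

        # Score each robot, keep the fittest, and recombine their "genes"
        # (here, flat vectors of floats) with occasional mutation.

        import random

        def next_generation(population, fitness, elite=0.2,
                            mutation_rate=0.05, rng=random.Random(0)):
            """population: list of gene vectors (lists of floats);
            fitness: callable scoring one gene vector."""
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:max(2, int(len(ranked) * elite))]
            children = []
            while len(children) < len(population):
                mom, dad = rng.sample(parents, 2)
                cut = rng.randrange(1, len(mom))          # one-point crossover
                child = mom[:cut] + dad[cut:]
                child = [g + rng.gauss(0, 1) if rng.random() < mutation_rate
                         else g for g in child]           # rare Gaussian mutation
                children.append(child)
            return children

    Iterating this with a fitness function that rewards reaching food and penalizes poison is all the "intention" the robots ever have; the deceptive signaling simply falls out of the selection pressure.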
  • by noidentity ( 188756 ) on Saturday January 19, 2008 @04:19PM (#22111456)
    Yes, I hate it when people imagine their values to exist where they don't. These bots have just found another way to make food accessible. Boulder in the way? Move it aside. Other bots in the way? Flash my light away from the food and they go away. It's just one more way the environment responds, and they make use of it.
