Robots Learn To Lie
garlicnation writes "Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'"
Direct link (Score:5, Informative)
Re:Dune's lesson (Score:5, Informative)
Evolutionary Conditions for the Emergence of Communication (Score:4, Informative)
I don't find it surprising at all that evolving autonomous agents would find a way to maximize their use of resources through deception.
Anthropomorphizing obvious simulation result (Score:4, Informative)
There seems to be a whole category of stories here at Slashdot where some obvious result of an AI simulation is spun into something sensational by nonsensically anthropomorphizing it. Robots lie! Computers learn "like" babies! (at least two of the latter type in the last month, I believe).
As reported, this story seems to be nothing more than some randomly evolving bots developing behavior in a way that is completely predictable given the rules of the simulation. This must have been done a million times before, but slap a couple of meaningless anthropomorphic labels like "lying" and "poison" on it and you've got a Slashdot story.
I frequently get annoyed by the sensational tone of many Slashdot stories, but this particular story template angers me more than most because it's so transparent, formulaic and devoid of any real information.
Re:Seriously (Score:5, Informative)
Re:Direct link (Score:5, Informative)
* There is food and poison. And robots.
* They signal with only one type of light - blue (red was emitted by both food and poison).
* Initially they do not know how to use light.
* In some colonies, they learned to use it to indicate food; in others, to indicate poison.
* The researchers measured two things (among others): the correlation between finding food or poison and emitting light, and the correlation between seeing light and reacting to it.
So robots could learn either to emit light near food or to emit light near poison. It turned out that the colonies that evolved to emit light near food are more effective, which makes sense: the only thing you want to know is whether there is food or no food; whether "no food" means poison or simply nothing is not important. Basically, if you react to poison-light, you still have to find food somewhere else, while if you react to food-light (blue+red in one place), you just eat and relax.
Now. It turned out that in some colonies a significant number of robots emitted light near poison or far away from food, yet a significant number of robots associated light with food. The researchers conclude that those colonies started as "blue light means it's food, not poison" colonies (hence the correlation between blue light and a positive reaction to it), but later on some sneaky individuals evolved that used blue light when they were away from food.
I have skimmed through the text and did not find the experiment that first comes to mind: why didn't they measure, for each individual robot, the correlation between seeing red light, emitting blue light, and going toward blue light? It would be interesting to know how many robots used blue light to deceive while still believing the majority about blue light. Maybe it is in there somewhere; I did not read really carefully.
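The correlation the researchers measure (signaling vs. actually being near food) can be sketched in a few lines. Everything below is made up for illustration - the event log, the 80% honesty rate, and the variable names are assumptions, not the paper's actual data or code:

```python
import random

# Hypothetical event log for one colony: each entry records whether a robot
# was near food and whether it emitted blue light at that moment.
random.seed(1)
log = []
for _ in range(1000):
    near_food = random.random() < 0.5
    honest = random.random() < 0.8           # assume 80% of signals are truthful
    emitted_blue = near_food if honest else not near_food
    log.append((int(near_food), int(emitted_blue)))

def correlation(pairs):
    """Pearson correlation between two paired binary sequences."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# An honest colony shows a strong positive correlation; a growing "lying"
# subpopulation drags it toward zero or negative.
print(round(correlation(log), 2))
```

The per-robot version the comment asks about would just compute this same statistic on each robot's own log instead of the pooled colony log.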
Re:Dune's lesson (Score:2, Informative)
The rough storyline for the machine part of the Dune books was: Man creates thinking machines as servants -> man becomes idle and lets the machines do all the work -> bad men (the Titans) create a computer virus/rewrite of the central machine intelligence (the Evermind) to take control of the machines and thereby mankind -> the Evermind is given too much control because the Titans are lazy too, and it takes over for itself with the goal of making everything in the universe run in synchronized harmony (it's not allowed to kill the Titans, but it uses logic to just enslave them instead) -> man gets some religious fervor and destroys the machines (the Butlerian Jihad), or so they think -> man outlaws all thinking machines -> lots of time and spice stuff happens, including the breeding of superhumans -> the machines, which have been hiding out beyond the range of human colonization, come back to destroy/enslave man. They have also infiltrated mankind with shape-changers who have re-introduced thinking machines into man's warships, leaving man seemingly defenseless when the machines take control of the ships -> a really super-duper superman saves both machines and man so that they can all play nice together.
Sorry, it just seemed like some people hadn't really even read the books at all. Also I left out most of the detail for thousands of years of the timeline.
Re:I robot (Score:4, Informative)
Of course, if you weren't so bent on taking a dick seriously, you wouldn't try to claim that which isn't yours.
Re:not lying (Score:3, Informative)
There. I fixed that for you.
If you read the article, you'll notice that there is selection going on here, on the part of the researchers. They're combining the "genes" from the most successful robots of each generation to create the robots of the next generation. In other words, whether the genes of a given robot get passed on is dependent on how successful it is at "surviving".
Sounds an awful lot like evolution to me. It's no more intentional on the part of the individual robots than human evolution is on the part of Slashdotters who can't get a date, but it's evolution nonetheless.
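The selection scheme described above - breed the next generation from the "genes" of the most successful robots - is a plain genetic algorithm. Here is a minimal sketch; the genome length, the stand-in fitness function, and all parameters are my own assumptions, not the researchers' setup:

```python
import random

random.seed(0)
GENOME_LEN = 8  # hypothetical: a handful of weights controlling behavior
POP_SIZE = 20

def fitness(genome):
    # Stand-in score: reward genomes whose weights sum high, as if that
    # meant "spent more time at food". The real experiment scored robots
    # on time spent near food vs. poison.
    return sum(genome)

def crossover(a, b):
    # Combine the "genes" of two successful parents at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

population = [[random.random() for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    # Selection: only the top half "survives" to reproduce - exactly the
    # point above: whether genes get passed on depends on success.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(survivors),
                                   random.choice(survivors)))
                  for _ in range(POP_SIZE)]

print(round(fitness(max(population, key=fitness)), 2))
```

No individual robot "intends" anything here; the upward drift in fitness falls out of the selection step alone, which is the commenter's point.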
Re:Anthropomorphizing obvious simulation result (Score:2, Informative)