
'Tit for Tat' Defeated In Prisoner's Dilemma Challenge

colonist writes "Tit for Tat, the reigning champion of the Iterated Prisoner's Dilemma Competition, has been defeated by a group of cooperating programs from the University of Southampton. The Prisoner's Dilemma is a game with two players and two possible moves: cooperate or defect. If the two players cooperate, they both have small wins. If one player cooperates and the other defects, the cooperator has a big loss and the defector has a big win. If both players defect, they both have small losses. Tit for Tat cooperates in the first round and imitates its opponent's previous move for the rest of the game. Tit for Tat is similar to the Mutual Assured Destruction strategy used by the two nuclear superpowers during the Cold War. Southampton's programs executed a known series of 5 to 10 moves which allowed them to recognize each other. After recognition, the two Southampton programs became 'master and slave': one program would keep defecting and the other would keep cooperating. If a Southampton program determined that another program was non-Southampton, it would defect." Update: 10/14 15:08 GMT by J : If anyone wants to try writing their own PD strategy and see how it fares in a Darwinian contest, I'll host a tournament of Slashdot readers. Here are the docs, sample code, notes on previous runs, and my email address.
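For readers who want a concrete picture of the two strategies described above, here is a minimal sketch in Python. The Tit for Tat logic follows the description in the story; the Southampton handshake sequence, the `role` parameter, and the function names are hypothetical, since the actual code and recognition sequence were not published.

```python
# Moves: 'C' = cooperate, 'D' = defect.

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    if not their_history:
        return 'C'
    return their_history[-1]

# Hypothetical fixed opening used as a recognition signal (the article
# says only that it was a known series of 5 to 10 moves).
HANDSHAKE = ['C', 'D', 'C', 'C', 'D', 'D']

def colluder(my_history, their_history, role):
    """Southampton-style agent (illustrative): play the fixed opening,
    then check whether the opponent played the same opening."""
    turn = len(my_history)
    if turn < len(HANDSHAKE):
        return HANDSHAKE[turn]
    if their_history[:len(HANDSHAKE)] == HANDSHAKE:
        # Recognised a teammate: the 'master' always defects to harvest
        # points while the 'slave' always cooperates to feed it.
        return 'D' if role == 'master' else 'C'
    # Non-teammate: defect every round to act as a spoiler.
    return 'D'
```

Against a teammate, the colluder settles into the master/slave pattern; against anything else (including Tit for Tat) it defects every round.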
  • Scary Stuff (Score:5, Funny)

    by mfh ( 56 ) on Thursday October 14, 2004 @10:07AM (#10523620) Homepage Journal
    FTA:
    • If you confess and your partner denies taking part in the crime, you go free and your partner goes to prison for five years.
    • If your partner confesses and you deny participating in the crime, you go to prison for five years and yor [sic] partner goes free.
    • If you both confess you will serve four years each.
    • If you both deny taking part in the crime, you both go to prison for two years.
    This sounds pretty much like the RIAA might be involved. I would deny everything if I were you!
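Taken at face value, the quoted sentences already show why the one-shot game is a dilemma: confessing (defecting) strictly dominates, even though mutual denial is better for the pair. A quick sanity check in Python, using only the years quoted above:

```python
# Years in prison for (my_move, partner_move), from the quoted article.
# 'confess' = defect, 'deny' = cooperate.
YEARS = {
    ('confess', 'deny'):    (0, 5),
    ('deny',    'confess'): (5, 0),
    ('confess', 'confess'): (4, 4),
    ('deny',    'deny'):    (2, 2),
}

# Whatever the partner does, confessing earns you strictly fewer years:
for partner in ('confess', 'deny'):
    assert YEARS[('confess', partner)][0] < YEARS[('deny', partner)][0]
```

Yet both confessing (4 years each) is worse for the pair than both denying (2 years each), which is the dilemma.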
    • Re:Scary Stuff (Score:2, Interesting)

      by parvenu74 ( 310712 )
      Or under the Patriot Act, whether you confess or not you go to jail for an undetermined period of time, during which charges may or may not be filed against you...
    • I generally hope that knowledge of the prisoner's dilemma will never become a practical factor in my life.
      • Re:Practicality (Score:5, Insightful)

        by Daniel_Staal ( 609844 ) <DStaal@usa.net> on Thursday October 14, 2004 @10:29AM (#10523914)
        Sorry, you've probably already lost that one. The prisoner's dilemma is quite useful in normal life, or at least the thinking that gives rise to the solution is. It applies any time there is significant advantage to be gained by working together, but also much advantage to be the one 'cheating'.

        For /., try this interpretation:
        If we both share our source code, we will both be more productive.
        If I share my source code and you don't, you can be more productive. (Assuming you can use mine.)
        If neither of us shares, we both will have to re-create each other's work...
        • Re:Practicality (Score:3, Informative)

          by naoursla ( 99850 )
          Unfortunately, PD has a dominant strategy to cheat your opponent. No matter what your opponent does, you can do better by cheating.

          Iterated PD is more interesting since it lets your opponent punish you for cheating. So you get into some interesting social issues.

          How well a given strategy does depends on the strategies others in its community are using. If the population is heavily cheater-based, then agents that cooperate will lose big time. However, if there are enough cooperators, then they will form a coalition of sorts, and even though they will lose to the cheaters, in the end they will come out on top.
          • Re:Practicality (Score:3, Informative)

            "[...]How well a given strategy does depends on the strategies other in its community are using. If the population is heavily cheater based, then agents that cooperate will lose big time. However, if there are enough cooperators, then they will form a coalition of sorts, and even though they will lose to the cheaters, in the end they will come out on top."

            I remember reading about the tournament in a "Science" magazine article, back when the original tournament was done; Tit for Tat won irrespectively of
        • Re:Practicality (Score:5, Insightful)

          by spitzak ( 4019 ) on Thursday October 14, 2004 @01:51PM (#10526628) Homepage
          Yes, I agree that public domain code is very much the same as the prisoner's dilemma.

          The GPL is an attempt to make it *not* the prisoner's dilemma by forcing the other side to cooperate if you do. This eliminates the losing part of the cooperation choice, and thus it is no longer a dilemma.
  • by Perianwyr Stormcrow ( 157913 ) on Thursday October 14, 2004 @10:09AM (#10523644) Homepage
    In other words, an in-group can work vs. tit for tat if it outnumbers it. I'd like to see a trial with a slow trickle of immigration of tit for tats into a large population of S/M programs. That might be illuminating. I suspect the outcome would be that tit for tat still does well.
    • by Minwee ( 522556 ) <dcr@neverwhen.org> on Thursday October 14, 2004 @10:18AM (#10523754) Homepage
      I was with you up until you started talking about tits and S/M... Am I still on slashdot, or did I wander onto alt.com by mistake?
    • by Anonymous Coward on Thursday October 14, 2004 @10:20AM (#10523790)
      What's being ignored is that the total profit of all the colluding algorithms is less than that of Tit-for-Tat, which makes the solution unviable in real-world Prisoner's Dilemma situations (bidding on large construction projects under certain auction formats, etc.).

      As an analogy of unprofitable collusion, I could win the World Series of Poker by hiring enough shills and paying their time and entry fees. I would lose money by doing this, probably more than I could recoup with post-tournament income via endorsements/books/whatever.

      The parent is correct. Tit-for-Tat is still superior in equal numbers, and a modified Tit-for-Tat that can spoof the recognition algorithm of colluders will trounce them.
      • That's what I thought while reading the article... also, making use of that code assumes meta-knowledge of the game, i.e., some way you could have prepared for that particular and specific instance of the game, where the specific problem is the prisoner's dilemma with its known simple outcomes. Whereas tit-for-tat is a much more generic theory that can apply to a game which you don't know about yet (e.g., say, one that has N possible outcomes rather than only 4), by simply stating "try to choose the mu
    • by Anonymous Coward on Thursday October 14, 2004 @10:42AM (#10524044)
      The length of the code is one of the largest problems to overcome. Performing any signal other than all-cooperate produces a net loss of 1 or 4 points per round for your team in traditional (0,1,3,5) IPD. Simple signalling, i.e., a 4th-round defect, was very effective. While the master/slave aspect was amazingly effective in my research, the "spoiler" was not. A small population of master/slaves could invade an arbitrarily large block of TitForTat if evolution was by duplicating the winner and removing the loser after n iterations. The population of "spoilers" stagnates very quickly in a large TFT population. TFT should be considered a friend, not an enemy, because they are a positive growth environment. Going "spoiler" on any non-TFT/ally was quite effective, as any bot not prone to cooperate posed the only real risk of the "master" losing.
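The poster's 1-or-4-point figure checks out under the standard (0,1,3,5) payoffs. A small sketch (`team_points` is just an illustrative helper, not anything from the tournament):

```python
# Standard IPD payoffs: sucker 0, punishment 1, reward 3, temptation 5.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def team_points(a, b):
    """Combined points the two players earn in one round."""
    return sum(PAYOFF[(a, b)])

baseline = team_points('C', 'C')                # mutual cooperation: 6 per round
assert baseline - team_points('D', 'C') == 1    # one-sided signal costs the team 1
assert baseline - team_points('D', 'D') == 4    # mutual defection costs the team 4
```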
    • by harrkev ( 623093 ) <<kfmsd> <at> <harrelsonfamily.org>> on Thursday October 14, 2004 @10:45AM (#10524080) Homepage
      Yup! It is the "outnumbers" thing which (in my opinion) makes things unfair.

      Had this been an actual prisoner's dilemma, this winning strategy would require recruiting a large number of thugs who LIKE going to prison and are willing to "take one for the team."

      Although cooperation is not explicitly defined as being against the rules, IMHO, it goes against the "spirit" of the competition. The point is that each algorithm is supposed to act in a greedy manner.

      This will no doubt spark a LOT of discussion, but to me, they "cheated." (OK. Maybe "worked the system" is a better phrase).
      • by Q2Serpent ( 216415 ) on Thursday October 14, 2004 @11:12AM (#10524391)
        They "cheated", and the other guy didn't, so they won big! Wasn't that the whole premise?

        -Serpent
        • by kisrael ( 134664 ) * on Thursday October 14, 2004 @11:45AM (#10524850) Homepage
          They "cheated", and the other guy didn't, so they won big! Wasn't that the whole premise?

          Well, they kind of went for a win on the "metalevel", utilizing the circumstances of the competition rather than solving the originally stated issue in an abstract way. On the one hand that's cool because evolution can work like that sometimes, but on the other hand, it really isn't answering the original question any more. (The question is "what's probably the best strategy for any given individual in the Prisoner's Dilemma" and they changed the question to "how can we get some individuals to be super-players with the way this prisoner's dilemma simulator is set up".)
            "the question is "what's probably the best strategy for any given individual in the Prisoner's Dilemma" and they changed the question to "how can we get some individuals to be super-players with the way this prisoner's dilemma simulator is set up""

            How very true. I doubt that this solution is applicable in real life, if only for the fact that one of the assumptions would be that a subset of a winning team consistently and repeatedly wants to be defeated. Mother Nature took care of those long ago.
              subset of a winning team consistently and repeatedly wants to be defeated. Mother Nature took care of those long ago.

              Well, it's an interesting philosophical point. It depends on how you define "win" and "lose"... certainly some species have formed partnerships with other, often larger species, and if you define "win" as something besides "just survive", they might be seen as subjugating themselves to the other creature so that the partnership prospers, even if their life doesn't seem that swell.

              In othe
      • The omerta, or code of silence, is the ideal that the mob works toward when caught. If you get caught, you simply clam up and take whatever's thrown at you as a point of honor. It is instructive, however, that this of course does not apply universally (everyone knows that the mob is rife with snitches.)
      • I think this is a fairly stunning result that can explain why aristocracy and monarchy proved pretty successful for thousands of years.

        If most sacrifice themselves for a few "chosen" ones, they can do better than most others against the rest of the "world". Pretty convincing.

        If this strategy succeeds, we have a fairly sound theory why and how monarchy evolved from simple tribal structures. Secret societies, hierarchies and everything else would suddenly seem logical.

        Does not leave a feelgoo
    • by jamie ( 78724 ) <jamie@slashdot.org> on Thursday October 14, 2004 @10:59AM (#10524281) Journal
      If you'd like to try your hand at it, please feel free to email me [mailto] the code of any program you'd like to try. (Or just describe the strategy you want to take and I can probably write code for it.)

      I've written perl scripts to run Prisoner's Dilemma tournaments [mccarthy.vg] on my website (the perl source code for all the species in the last run is on that page) and new submissions for the next tournament are welcome. Since my tournament does Darwinian selection on every agent that plays in it -- agents which don't earn points eventually die off, those which earn a lot of points reproduce -- self-sacrificing pairs of strategies won't do very well.

      I haven't yet written code which can trickle in new members over time, but that really wouldn't be very hard to add (I could probably set it up in an hour if you are really interested :)
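The Darwinian selection jamie describes -- agents that earn no points die off, high scorers reproduce -- can be sketched in a few lines of Python (illustrative only, not jamie's actual perl code; `next_generation` is a hypothetical name):

```python
import random

def next_generation(scores, size=None):
    """One Darwinian round: draw the next population in proportion to
    points earned, so zero-scoring strategies die off."""
    names = list(scores)
    weights = [scores[n] for n in names]
    return random.choices(names, weights=weights, k=size or len(names))

# A strategy that earned nothing cannot appear in the next generation:
random.seed(0)
assert next_generation({'tft': 10, 'always_defect': 0}, size=5) == ['tft'] * 5
```

This fitness-proportional draw is why self-sacrificing slave programs fare poorly in such a tournament: their low scores translate directly into extinction.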

  • The rules are similar to those of the gameshow Friend or Foe [chris-lambert.com].

    Dan East
  • by Stile 65 ( 722451 ) on Thursday October 14, 2004 @10:11AM (#10523666) Homepage Journal
    ...fraternities and secret societies work so well!

    I'm off to join the Freemasons. Be back in a few.
    • Yes, actually (Score:4, Informative)

      by Perianwyr Stormcrow ( 157913 ) on Thursday October 14, 2004 @10:18AM (#10523757) Homepage
      But the proper test is really whether the master half of these programs can do better than tit for tat on a large scale. I suspect that the S/M program will still do worse, simply because it plays a pattern during the interaction phase which is likely to result in tit for tat still coming out ahead. If there is only one tit for tat, it won't do so well, since the costs of being tit for tat are relevant if you don't know the master sign and most of those you interact with are expecting to hear it. But that's already well known. If tit for tat's numbers start growing, it does better. You see, tit for tat has an identification mechanism too, which is simply that it always starts out nice and immediately gets nasty if it gets fucked. If the number of tit for tats increases to a reasonable critical mass, they can have enough positive reactions to do very well. In fact, they'd become a secret society within the S/Ms!

      In short, if tit for tat is isolated, it won't do so well since everyone is fucking with it. If there are just a few tit for tats out there, their power increases significantly with each one added.
      • I think you've missed the point of the S/M. It can play tit for tat with everyone except opponents identifying themselves as slaves. It will always win in this situation, unless someone cracks the code of the slaves and starts abusing them too, or it gets its own slavebots to identify and target the opponent's master. Also, it is possible for slaves to try to identify what strategies opponents are using and communicate that to the master, who can adjust to exploit it.

        It means that a new evolution in g
    • I'm off to join the Freemasons.

      If you plan to run for the presidency, don't forget to join Skull and Bones [bbc.co.uk].


      -Colin [colingregorypalmer.net]
  • by jjeffries ( 17675 ) on Thursday October 14, 2004 @10:14AM (#10523702)
    HOW ABOUT A NICE GAME OF CHESS?

    Lameness filter encountered. Post aborted!
    Reason: Don't use so many caps. It's like YELLING. It's not at all like a TALKING COMPUTER. You are a bad man. Go away.

  • by DoorFrame ( 22108 ) on Thursday October 14, 2004 @10:14AM (#10523705) Homepage
    I mean, the whole point of the Prisoner's Dilemma is that you don't have all the information. You don't know what your partner/opponent is going to do, and you have to decide based entirely on what little information you have from your history with your partner/opponent. What these people are doing is creating a pattern to be recognized by another player, and then working as a team. And it's not like they're people, where one person might change their mind and decide to defect unilaterally... they're programs. Once they've locked onto each other as the same program, that's it. They'll play to their advantage until the end.

    The real trick is to find a program that can beat other DIFFERENT programs, not beat itself. This seems really stupid, or am I missing something?
    • by Urban Garlic ( 447282 ) on Thursday October 14, 2004 @10:23AM (#10523824)
      Well, part of the interest is that these programmers found a way, within the rules, to get more information, by means of their "secret handshake". The important lesson (to my mind) is that the environment can be manipulated in surprising ways to get a desired result. That's creativity and innovation doing its thing.

      Interestingly, this strategy is also fairly "brittle", I think, in that simple rule-changes could foil it. Requiring only one submission per team, for example, or scoring teams according to the total (or average) scores of all their programs, would complicate any strategy of collusion.
      • Well, part of the interest is that these programmers found a way, within the rules, to get more information, by means of their "secret handshake"

        So in the beginning, it was essentially 'everyone for himself'. Then innovation: devise a way team members can recognise each other, and work in groups to defeat opponents.

        Next logical step: cheating. How about behaving like a member of such a team, but then double-cross them later? It would be interesting to see how such a strategy would work out.

    • That's why this is the Iterated PD Challenge, not just the classic PD. If the competitors only played ONE round of PD each, the contest would not be very interesting. Their past performance in the contest itself is part of the "history" they are using to evaluate their choices.
    • by julesh ( 229690 ) on Thursday October 14, 2004 @10:27AM (#10523866)
      I suspect you could come up with a solution that beats this system by mimicking it, then changing its behaviour suddenly.

      That's why this isn't a good answer to the problem -- not because it's somehow "cheating", but because it's a strategy that only works in limited circumstances and fails spectacularly in others. Kind of like chasing down losses when gambling.
    • by Minwee ( 522556 ) <dcr@neverwhen.org> on Thursday October 14, 2004 @10:29AM (#10523905) Homepage
      I don't see it as cheating. It's a lot like Bridge -- The rules say that you can't show your partner your hand and you can't tell them what you have, but you are allowed to use prearranged bidding conventions to pass information across the table. All that the Southampton agents did was use a bidding strategy. They did act as a team, but they had no out-of-game way of knowing that they were up against a team member. That doesn't break any rules, and it did work. The Southampton team took the top three spots in the competition. If you insist on comparing the entrants to people, consider this. They worked as a team, for the good of the team, knowing that at least some of them would win even if the others bombed. People do that kind of thing all the time outside of competitions. Why should it be so out of place here?
      • It's a lot like Bridge -- The rules say that you can't show your partner your hand and you can't tell them what you have, but you are allowed to use prearranged bidding conventions to pass information across the table.

        Actually, the rules I understand for bridge specifically disallow prearranged bidding conventions that aren't well known standard conventions. But the rules of IPD don't, so I would guess it's OK there.

        But it doesn't work, of course, because all you need to do to beat this strategy is mimi
        • With a name like BridgeBum, how could I not reply to this? :-)

          I'll address the tournament rules of bridge, rather than the Laws of Contract Bridge, which are different.

          Basically, the governing bodies in power running tournaments can and do restrict agreements (known as 'conventions') that are allowed between partnerships. However, these restrictions are not so stifling as to only allow known agreements...if that were the case, no invention could happen.

          The rules relating to conventions basically fall in
  • by Snowspinner ( 627098 ) * <philsand@ufl.eTIGERdu minus cat> on Thursday October 14, 2004 @10:14AM (#10523707) Homepage
    This seems to me to be an unfair way to "win." The point of the PD simulation is to talk about whether, in the absence of any social consequences, it is better to screw someone over for money or to work cooperatively with them. It's not a perfect model for that question, but that is still the question that makes us care about the PD in the first place.

    All this has done is make a meta-PD game in which the two programs create a meta-game in which they agree to cooperate. That is to say, this is a solution to the PD problem that relies on the cooperation of a cohort (Someone to keep choosing loyalty while you defect and get all the money). Which is exactly not the point of PD.

    So the real headline, I think, is "Trivial flaw found in definition of Prisoner's Dilemma problem. University of Southampton wastes money demonstrating flaw instead of writing a goddamn paper like a normal person would."
  • ...by cheating! (Score:5, Insightful)

    by The-Bus ( 138060 ) on Thursday October 14, 2004 @10:15AM (#10523712)
    If the program recognized that another player was not a Southampton entry, it would immediately defect to act as a spoiler for the non-Southampton player. The result is that Southampton had the top three performers -- but also a load of utter failures at the bottom of the table who sacrificed themselves for the good of the team. Another twist to the game was the addition of noise, which allowed some moves to be deliberately misrepresented. In the original game, the two prisoners could not communicate. But Southampton's design lets the prisoners do the equivalent of signaling to each other their intentions by tapping in Morse code on the prison wall. Kendall noted that there was nothing in the competition rules to preclude such a strategy, though he admitted that the ability to submit multiple players means it's difficult to tell whether this strategy would really beat Tit for Tat in the original version. But he believes it would be impossible to prevent collusion between entrants.


    Yeah, that's not the Prisoner's Dilemma. Or even the Iterated PD [wikipedia.org]. This whole "signalling Morse code" on the prison walls is nonsense, because it was not part of the original plan. Just because it's not in the rules doesn't mean you can do it. In Chess there's no rule specifically against me bringing a SuperGrape(TM) onto the board. The SuperGrape(TM) immediately destroys all pawns on a color of my choosing.

    No, it doesn't work that way.

    While this is an interesting experiment, it's not a true victory.
    • I don't see how this is really against the rules, but it also doesn't give a very interesting solution that tells us anything more than we already knew.

      I'd say that a real test would involve players vs. 10 times as many tit for tats as there are entrants.
    • Err... what's the difference between this and iterated prisoner's dilemma?
    • Re:...by cheating! (Score:3, Informative)

      by dr_labrat ( 15478 )
      I'm not sure you read the article correctly.

      This is not cheating. What is happening is that, throughout the iterations, the programs can exchange the equivalent of Morse code in the patterns of defections and co-operations, in the form of the penalties.

      i.e.: 4 yrs, 4 yrs, free, free, 2 years.

      As this is iterative, with no actual lifespan (in human terms), the pattern can become evident quickly. Kind of like a "penalty handshake".

      In this way one program can detect the intentions of what is happening, without actually co

    • Ah, you're using the UK SuperGrape rules. I think most competitions nowadays use the Russian rules (SuperGrape takes _all_ pieces except King on a given color).

      Of course in USA play the SuperGrape is almost worthless as your opponent can simply invoke the Citrus Rule next turn (which usually destroys the whole board).

    • See I was trying to bring a supergrape in to play against my grandfather and he didn't believe me.. thank god i have someone to back me up now!
    • Couldn't agree more. The whole issue behind the prisoner's dilemma is trust and payoff. The Southampton strategy wins because it changes the rules of the game by adding in other factors. It's like saying you won at Monopoly by bribing two of the other three players to play poorly. It works, but it doesn't exactly count as a winning strategy.
    • Re:...by cheating! (Score:5, Informative)

      by billbaggins ( 156118 ) on Thursday October 14, 2004 @10:47AM (#10524100)
      Actually, this is exactly the sort of thing that the organizers were hoping would happen. From the FAQ, question 12:
      But we don't want to [impose limits on the number of entries] as it will be interesting to see if people can come up with strategies that cooperate with themselves within the whole population.
    • Re:...by cheating! (Score:3, Informative)

      by Kupek ( 75469 )
      The whole reason people are interested in the Prisoner's Dilemma (and the Iterated version) is that they are models for situations encountered in science. This isn't a competition like chess, where we're trying to see who (as in which humans) is the best at it. We're trying to see what interesting results come of it. This is an interesting result.
  • How difficult is something like this, really? One of my graduate-level friends did some sociology work on a game theory system like this, and if you know the rules, you can really beat the system.


    Just curious, that's all. Anyone have any experience in the field?

  • by Dr. Manhattan ( 29720 ) <sorceror171@@@gmail...com> on Thursday October 14, 2004 @10:17AM (#10523739) Homepage
    From TFA:"Our initial results tell us that ours is an evolutionarily stable strategy -- if we start off with a reasonable number of our colluders in the system, in the end everyone will be a colluder like ours," he said.

    It's not clear to me how the entries determined who would be the 'master' and who would be the 'slave'. It seems that if you had lots of 'colluders' around who could be induced to 'suicide' for another's benefit, you'd very quickly get cheaters who worked to be the 'master' in all situations.

    This strikes me as a lot more reminiscent of the Hawk/Dove [washington.edu] situation.

  • Tit for Tat (Score:5, Funny)

    by alexo ( 9335 ) on Thursday October 14, 2004 @10:17AM (#10523745) Journal
    Why should Tat get all the fun?
  • Southampton's programs executed a known series of 5 to 10 moves which allowed them to recognize each other. After recognition, the two Southampton programs became 'master and slave': one program would keep defecting and the other would keep cooperating.

    Am I the only one who thinks this is just kind of obvious and silly?

    Also, what kind of "moves" can be made by a "prisoner" that can be seen by the other prisoner?

    Pat
  • Spirit of the PD (Score:5, Interesting)

    by johnthorensen ( 539527 ) on Thursday October 14, 2004 @10:19AM (#10523770)
    Not precisely cheating, as the rules are set up to play this way...but this certainly violates the spirit of the original Prisoner's Dilemma. Why?

    Real prisoners only get to choose ONCE.

    By taking advantage of the multiple-iteration aspect of the simulation with this sort of 'portknocking' strategy, the winning programs kind of take a cheap shot at the original PD.

    Of course, it's all hypothetical anyway, and come to think of it Tit For Tat technically takes advantage of the multiple-iteration aspect as well by doing whatever the opponent did the last time...

    Ah well, at least the Wikipedia entry makes a distinction between regular "Prisoner's Dilemma" and "Iterated Prisoner's Dilemma".
    • Re:Spirit of the PD (Score:4, Informative)

      by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Thursday October 14, 2004 @10:39AM (#10524007)
      Err, of course this is the iterated prisoner's dilemma. It is quite easy to do the optimal thing in the non-iterated case: defect. You couldn't make a competition out of that.
      • Re:Spirit of the PD (Score:3, Informative)

        by b1t r0t ( 216468 )
        Yep. If you know that a given move is the final move, the optimal action is to defect. When it's not iterated, there is only one move, so of course it's going to be the last move.

        I presume there is no way the entrants can know how many moves will be in a given round, or Tit-for-Tat could be slightly beaten by a modified TfT which always defected in the final round.
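The modified TfT described above is easy to state in code (a sketch; it assumes the agent knows the match length, which is presumably exactly what the tournament hides):

```python
def tft_with_endgame(my_history, their_history, rounds_total):
    """Tit for Tat, except always defect on the known final move,
    where no retaliation is possible."""
    if len(my_history) == rounds_total - 1:
        return 'D'                      # final round: defect for free
    if not their_history:
        return 'C'                      # open nicely, like plain TfT
    return their_history[-1]            # otherwise mirror the opponent
```

Against plain Tit for Tat over a known n-round match, this earns the temptation payoff on the last move instead of the mutual-cooperation payoff, which is the "slight" edge mentioned above.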

  • by aug24 ( 38229 ) on Thursday October 14, 2004 @10:19AM (#10523778) Homepage
    "The result is that Southampton had the top three performers -- but also a load of utter failures at the bottom of the table who sacrificed themselves for the good of the team."

    J.
  • by nels_tomlinson ( 106413 ) on Thursday October 14, 2004 @10:21AM (#10523798) Homepage
    The article got it wrong: they compared the tit-for-tat strategy for the iterated prisoner's dilemma to mutual assured destruction. That's wrong, since nuclear war is usually considered to be a one-time game: once you've blown each other up, there is no next round. Tit-for-tat requires that there always be a following round.

    Repeated games have radically different outcomes than one-time games. It's long been known that where cooperation is possible, cooperation can beat solitary strategies in repeated games. I really don't think there's anything surprising here.

    • by Anonymous Coward on Thursday October 14, 2004 @10:36AM (#10523972)
      "The article got it wrong: they compared the tit-for-tat strategy for the iterated prisoner's dilemma to mutual assured destruction. That's wrong, since nuclear war is usually considered to be a one-time game: once you've blown each other up, there is no next round. Tit-for-tat requires that there always be a following round."

      The nuclear MAD comparison is apt, because of the time lag between launch detection and detonation. During the flight time of the first launch, there is time for several rounds to occur.

      Actually, the nuclear standoff could be considered an ongoing PD game with both sides playing Tit-For-Tat strategies. The rounds occur every few minutes with both sides asking "did the other side screw us yet" and responding "no, so we won't screw them yet". This PD game has consisted of millions and millions of turns already, with both players using historical knowledge to influence their current choices.
    • Tit for tat has a secret handshake too, but it's a code of ethics. It is robust in any iterated situation. That's what makes it neat.
  • Asian mentality (Score:4, Interesting)

    by pubjames ( 468013 ) on Thursday October 14, 2004 @10:26AM (#10523859)

    ok, here's a weird thought. In many Asian countries, the mentality is to work as a group rather than individually, with the individual sacrificing themselves for the group if necessary. In the USA and most of the "western" world, we tend to act more as individuals. We tend to think our system is better, but what if we're wrong? Perhaps, as this experiment shows, the Asian mentality may actually be the superior strategy?

    China has most consistently been the biggest superpower over mankind's history, and it looks like it's going to be that way again in a couple of decades. Perhaps these things are related...
    • Re:Asian mentality (Score:4, Insightful)

      by Perianwyr Stormcrow ( 157913 ) on Thursday October 14, 2004 @10:31AM (#10523932) Homepage
      China has usually been cultured but not powerful. Chinese history is a long sequence of conquests by powerful outsiders (Manchurians, Mongols, Europeans.)
      • Re:Asian mentality (Score:3, Insightful)

        by Anonymous Coward
        Actually they have been powerful, many, many times. But always at the hight of the current dynasty.

        The cycles usually work like this: there's a period of chaos (warring states, etc.), usually ending with some external power coming in and conquering. Then a majority kingdom is established (it didn't always own all of what we now call China, or sometimes owned more than present day, but whatever). Then there'd be a period of heightened trade. The influence of external nations would prompt both an interest in othe
    • Re:Asian mentality (Score:3, Insightful)

      by Jerf ( 17166 )
      Perhaps, as this experiment shows, the Asian mentality may actually be the superior strategy?

      Oh, this is a bad time to get all multicultural.

      Sure, it works out great for the Masters, who get to the winners circles on the backs of their Slaves.

      Meanwhile, if you want to call Tit-for-Tat the Western strategy, everybody mostly wins after a while, even though few do really well.

      I don't believe either categorization. I'm just pointing out that if you're going to base your argument on this article, you are sa
    • Re:Asian mentality (Score:3, Insightful)

      by mdfst13 ( 664665 )
      "In many Asian countries, the mentality is to work as a group, rather than individually, with the individual sacrificing themselves for the group if necessary."

      But that isn't what this system does. Individuals do not sacrifice themselves for the good of the group; the group sacrifices itself to build up individuals. It is more like a feudal joust. If the king enters, all his opponents withdraw, making him the defacto winner.
  • It is not the first (Score:5, Interesting)

    by Flyboy Connor ( 741764 ) on Thursday October 14, 2004 @10:28AM (#10523898)
    Axelrod never claimed that Tit-for-Tat was the best PD-playing program. He just stated that Tit-for-Tat would play well against any other combination of programs. Actually, IIRC, in the second tournament he organised Tit-for-Tat came in second. There was a different program that managed to exploit faults in other programs.

    It is easy to score better than Tit-for-Tat in Axelrod's (original) tournament. He included a program that played random moves. It is not difficult to recognise this program after, say, ten moves have been played. You can always defect against random, because its moves are unrelated to its history. So, a program that plays Tit-for-Tat by default, but always defects against Random, scores better than Tit-for-Tat.

    Does this dilute Tit-for-Tat's accomplishment? Of course not. Tit-for-Tat still plays well. And it is such a simple strategy that it can be programmed in two lines ("C on move 1, then copy opponent's previous move"), which none of the other programs achieve. Tit-for-Tat is simple, elegant, and strong. It's beautiful.
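    The "two lines" claim checks out. A minimal sketch in Python (the function name and the list-of-"C"/"D"-moves history encoding are just illustrative conventions, not anything from Axelrod's tournament code):

```python
def tit_for_tat(my_history, opp_history):
    # C on move 1, then copy the opponent's previous move
    return "C" if not opp_history else opp_history[-1]
```

    The first call of a game sees empty histories, so the strategy opens with cooperation.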

    Southampton's entries, on the other hand, are complex, sneaky, and cheat against (perhaps unwritten, but nonetheless agreed-upon) rules. They're ugly. They only prove that backstabbing, cheating bastards may defeat the just-and-fair if the referee is looking the other way for a moment.

    • by jaaron ( 551839 )
      Southampton's entries, on the other hand, are complex, sneaky, and cheat against (perhaps unwritten, but nonetheless agreed-upon) rules. They're ugly. They only prove that backstabbing, cheating bastards may defeat the just-and-fair if the referee is looking the other way for a moment.

      Sorry to be the one to break it to you, but sometimes life is just that way. :)
  • by CyberGarp ( 242942 ) <Shawn@Garbett . o rg> on Thursday October 14, 2004 @10:34AM (#10523955) Homepage
    Communication between secret partners has been one of the most undefeatable strategies in cards for a long time. It didn't take a computer to figure that out. Someone just figured out how to do it within the rules given for this competition.
  • What I want to know is:
    What is tat?
    Where do I get it?
    And how do I exchange it for the other thing?

    --Dennis Miller (IIRC)

  • by Baldrson ( 78598 ) on Thursday October 14, 2004 @10:35AM (#10523964) Homepage Journal
    This is a clever demonstration of kin selection among groups of competing algorithms.

    A mathematical treatment of population genetics in groups was given by W. D. Hamilton in "Innate Social Aptitudes of Man [geocities.com]". In the last sentence of that paper, Hamilton, the originator of modern kin selection theory, states:

    One hears that game theorists, trying to persuade people to play even two-person games like 'Prisoner's Dilemma', often encounter exasperated remarks like: 'There ought to be a law against such games!' Some of the main points of this paper can be summarized as an answer to this comment: that often, in real life, there is a law, and we can see why, and that sadly we also see the protean nature of this Dilemma, which, when suppressed at one level, gathers its strength at another.
    What Hamilton is referring to is the fact that in any structure of components vs composite, there is the opportunity to defect. An individual gene can defect against the organism within which it resides via, say, meiotic drive [ucl.ac.uk]. An individual may defect against his tribe made up of his close relatives. A tribe may defect against the others making up a nation. A nation may defect against others making up a geographic race. A geographic race may defect against others making up humanity as a whole.

    It is indeed a dilemma, but it isn't without a rigorous treatment within genetic theory.

    Steve Sailer [isteve.com] has written an excellent review [vdare.com] of the politically touchy issue of ethnic nepotism given from Hamilton's group-selective perspective.

  • by G4from128k ( 686170 ) on Thursday October 14, 2004 @10:36AM (#10523976)
    This story illustrates the power of groups and societies to coordinate to the detriment of individuals and outsiders. The Southampton team used a "secret handshake" to recognize members of the society and discriminate against outsiders. It is a natural explanation for people's fear of closed/secret societies -- people fear the group's ability to break the rules of individualistic "fair play."

    If the agents in the game were capable of higher order reasoning and could see these coordinated actions between members, then they would become paranoid -- all the Southampton team members were "out to get them."
  • by jafiwam ( 310805 ) on Thursday October 14, 2004 @10:39AM (#10524017) Homepage Journal
    Except Tit for Tat is more robust than other plans: it deals well with a wide variety of opponents, is easy for opponents to "figure out", and is "forgiving", so it does not easily get caught in endless loops of mutual punishment.

    Being that, beating Tit for Tat isn't that big of a deal. Doing BETTER than Tit for Tat consistently _IS_ a big deal.

    The game is a positive sum game, so it pays off to end up in a cooperative (or semi-cooperative) sequence over repeated "defections".

    For some good reading on the Prisoner's Dilemma Game and how it fits in some biological systems read;

    "The Evolution of Cooperation" by Robert Axelrod (and newer books)

    "The Selfish Gene" by Richard Dawkins

    There may be more recent books too; it's been a while since I studied the subject.

    Having one plan that can beat Tit for Tat
    • Being that, beating Tit for Tat isn't that big of a deal. Doing BETTER than Tit for Tat consistently _IS_ a big deal.

      I think you missed some points of the research - the idea was to find "how many colluders do I need to beat Tit for Tat?"

      Tit for Tat is quite possibly the best "single person" strategy. Against any other opponents not working together, Tit for Tat will typically win in a long enough iteration.

      Now, we know that Tit for Tat isn't the best in cooperating environments - as someone else here p
  • by Wind_Walker ( 83965 ) on Thursday October 14, 2004 @10:41AM (#10524030) Homepage Journal
    Wouldn't an algorithm that defected every time (call it Traitor) beat the Tit for Tat program?

    1st iteration - Traitor defects, TfT cooperates, TfT loses and Traitor wins.
    Nth iteration - both defect, minor losses for both

    Thus Traitor beats TfT... What am I missing?

    • That's right (Score:4, Informative)

      by Anonymous Coward on Thursday October 14, 2004 @10:52AM (#10524163)
      That's right, traitor (hawk) beats TfT in any given trial.

      BUT, in an environment made up of a few players playing each strategy, then you have the following matchups:

      Hawk vs Hawk. Horrible horrible loss for both of them.
      TfT vs Hawk. Hawk wins, but only by a single round.
      TfT vs TfT. Both TfT 'win' - neither betray the other.

      So, overall, TfT does better than hawk.

      The interesting part isn't beating TfT (which, as you point out, isn't THAT hard to do) but in doing consistently better than it against a wide variety of programs. Which is what TfT has long been the baseline for.
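      The matchups above are easy to check with a small round-robin simulation. A sketch, assuming the standard Axelrod payoffs (T=5, R=3, P=1, S=0) and 200 rounds per match; the player names are made up:

```python
# Round-robin iterated PD; payoffs are the standard Axelrod values (an assumption)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]

def hawk(mine, theirs):
    return "D"  # always defect

def match(p1, p2, rounds=200):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h1, h2), p2(h2, h1)
        d1, d2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + d1, s2 + d2
        h1.append(m1)
        h2.append(m2)
    return s1, s2

players = [("TfT-1", tit_for_tat), ("TfT-2", tit_for_tat),
           ("Hawk-1", hawk), ("Hawk-2", hawk)]
totals = {name: 0 for name, _ in players}
for i in range(len(players)):
    for j in range(i + 1, len(players)):
        (n1, f1), (n2, f2) = players[i], players[j]
        s1, s2 = match(f1, f2)
        totals[n1] += s1
        totals[n2] += s2
print(totals)  # TfT-1/TfT-2: 998 each, Hawk-1/Hawk-2: 608 each
```

      Hawk wins every head-to-head against TfT by a whisker (204 to 199 here), yet each TfT's total (998) dwarfs each Hawk's (608), because Hawks shred each other while TfT pairs cooperate.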
    • You're right that in a one-on-one matchup, always defect would beat TfT. However, the point of TfT isn't that it would win every single one-on-one matchup but that it does extremely well versus any number of other strategies. Your "always defect" would beat TfT, but if you played Grim Trigger then you wouldn't do that well, whereas TfT would do very well playing with Grim Trigger (Grim Trigger is a strategy that cooperates until the opponent defects, and then it defects forever).

      As has been stated in thi
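      Grim Trigger, as defined above, is as short as Tit for Tat. A sketch under the same list-of-moves convention (the function name is illustrative):

```python
def grim_trigger(my_history, opp_history):
    # cooperate until the opponent defects once, then defect forever
    return "D" if "D" in opp_history else "C"
```

      Against Grim Trigger, a single defection buys you one temptation payoff and then permanent mutual defection, which is why TfT's sustained cooperation with it scores so well.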
    • This isn't a masochistic game, you win by scoring above 0, and lose by scoring less than 0. In Tit for Tat v Traitor, they both lose.
  • Since the two Southampton programs work together, I would say they both earn half of their combined winnings. From that point of view, Southampton lost big. Of course, if you are allowed to completely discount your losing entries, it is easy to defeat Southampton: simply create one master with TWO slaves.

    Once I "won" a PD-tournament by fiddling with the organising engine. The friendly engine supplied the programs with the recent histories, and I simply inserted all cooperates in my opponent's table. The s

  • In an organism as ancient and lowly as the slime mold, a genetic feedback mechanism evolved so that cheating would balance with altruism... You should hope your species hangs around as long as the slime mold [> 1 billion years!]; see this article at BetterHumans [betterhumans.com] and elsewhere.
  • So...

    In a championship of iterated games where I got to play against myself occasionally, I was able to win the aggregated championship score due to my "insightful" ability to optimize playing against myself.

    Did I get that right?

  • by jamie ( 78724 ) * <jamie@slashdot.org> on Thursday October 14, 2004 @10:50AM (#10524141) Journal
    It's pretty trivial that if two or more Dilemma agents are able to recognize each other, they have an advantage over those which cannot. I've got a Prisoner's Dilemma simulation [mccarthy.vg] running on my website -- I wrote some code for it over the summer and have been playing around with it on and off.

    Once I experimented with letting the agents recognize which "species" they were in and which "species" their opponent was. The runaway winner, of course, was the one which always cooperated with itself, and was less nice to every other species. (In my version, "less nice" meant playing Tit-For-Tat, but the idea's the same.)

    Being able to do this is like having the teacher's edition. If recognizing which species other agents belong to is allowed, that's a pretty trivial strategy. It's not called cooperation. It's called xenophobia, or to put it into the most familiar anthropomorphization, racism.

    (The life lesson, if I may go out on a limb, is that in an environment where some recognize a quality called "race" and discriminate based on it, being unable to see that quality is a liability. Being truly color-blind means you are unable to recognize not only race but racism, which means you will be taken advantage of.)

    When I ran my first tournament [mccarthy.vg] and got some interesting results based on this, I realized that knowledge of what "species" an agent belongs to is too powerful; it throws a monkey wrench into the works. So I scrapped it and moved on to stuff I found more interesting [mccarthy.vg].

    But the winner of this PD tournament was even craftier; he submitted a ton of entries, all of which were xenophobic in this way, except that they all recognized one "species" as the top dog. The other "species" essentially committed suicide to give the highest score to the top dog. That wouldn't have worked in my tournament, since they literally would have committed suicide (my agents starve to death if they don't score high enough) and that would have shaped the resulting environment. Every tournament is artificial in some way, and the human submitting entries to this one was clever enough to take advantage of these particular artificialities.

    Since it's now been shown that inter-agent communication is possible, that's going to be fair game for every tournament from now on. The next step is going to be designing tournaments to work with this trick, not against it. As I wrote to this tournament's organizers:

    Since that's such a powerful strategy, I think the next step in PD tournaments is not to try to overcome it, but to embrace it: allow agents to communicate, not just with their own species, but with whoever they're playing against. My guess is that mere xenophobia would be eclipsed by the much more powerful strategy of joining the ongoing discussion about which agents can and can't be trusted. That's the next big feature I want to try.

  • Hmm, super-cooperation. They are cooperating outside of the problem to achieve a goal outside of the problem. I think this is just a cheat. The programs are not out for personal gain at all and so are not truly participating.
  • Curse these researchers, now black hats will be using this technique to let exploit code escape from chroot prisons!
  • by bigHairyDog ( 686475 ) on Thursday October 14, 2004 @10:52AM (#10524171)
    Tit for Tat is outperformed by "Tit for Two Tats", because it is better at avoiding long runs of damaging mutual recrimination. That was 5 years ago. The performance of any of these strategies is only determined by the opponent strategies that they face, which is arbitrary. It is therefore meaningless to talk of one strategy being 'better' than another - most advanced strategies can beat Tit for Tat given the right opponents.
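    For reference, Tit for Two Tats only retaliates after two consecutive defections, which is what lets it avoid the one-off recrimination loops described above. A sketch (the name and list-of-"C"/"D"-moves encoding are illustrative):

```python
def tit_for_two_tats(my_history, opp_history):
    # defect only if the opponent defected on both of the last two moves
    return "D" if opp_history[-2:] == ["D", "D"] else "C"
```

    A single accidental defection is forgiven outright, where plain Tit for Tat would retaliate and risk an echo of mutual punishment.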
  • I don't really get this; looking at the rules, it seems the best outcome is to screw people over. Cooperation gives small wins for both... is this just to illustrate a zero-sum game, or is it supposed to actually model real-life social interaction? I do not believe that in reality, if everyone cooperated, there would be a small payoff for all, and that the secret to life is to screw unto others. I know the situation is two prisoners being interrogated, which is not the same as, say... two neighbors fighting
  • Southampton's programs executed a known series of 5 to 10 moves which allowed them to recognize each other
    The whole idea behind the prisoner's dilemma is that neither party is privy to what the other party is currently doing.

    By using this "recognition system", the program is capable of "knowing" in a deterministic fashion what some of the other programs will do in advance.

    In other words, at the very least, a cheat.
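    The recognition scheme being objected to can be sketched as follows. The actual Southampton signature and code were not published in the article, so the 5-move handshake sequence and the names below are purely hypothetical:

```python
HANDSHAKE = ["C", "D", "C", "C", "D"]  # hypothetical 5-move signature

def make_colluder(role):
    # role: "master" reaps defection payoffs, "slave" feeds the master
    def strategy(my_history, opp_history):
        n = len(my_history)
        if n < len(HANDSHAKE):
            return HANDSHAKE[n]  # still transmitting the signature
        if opp_history[:len(HANDSHAKE)] == HANDSHAKE:
            return "D" if role == "master" else "C"  # partner recognised
        return "D"  # outsider: defect to drag down their score
    return strategy

master, slave = make_colluder("master"), make_colluder("slave")
h_m, h_s = [], []
for _ in range(10):
    m, s = master(h_m, h_s), slave(h_s, h_m)
    h_m.append(m)
    h_s.append(s)
print(h_m[5:], h_s[5:])  # after the handshake: master all "D", slave all "C"
```

    After the handshake rounds, the master defects every turn against its cooperating slave; against anyone who fails the signature check, it simply defects.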

  • Not a cheat (Score:3, Interesting)

    by Carmody ( 128723 ) <slashdot@@@dougshaw...com> on Thursday October 14, 2004 @11:21AM (#10524506) Homepage Journal
    I've been giving talks on the Prisoner's Dilemma for a few years. (No original research, just following the thing and explaining the game to the Youth)

    It is kind of an orthodoxy in the literature: Tit for Tat always ties or loses by a little bit, but in tournaments, it is the best strategy.

    Well - it ain't. Someone found a way around it. Instead of urging rule-changes to prevent this new challenger, we should all be happy and excited that PD tournaments have just got MORE INTERESTING.

    I can't wait to see what happens next: what new programs will emerge that combine the advantages of Tit for Tat with the ability to defend against master-and-slave programs that communicate with each other.

    The game has changed - now let's leave it alone and watch.
  • nested bogosity (Score:3, Interesting)

    by nusratt ( 751548 ) on Thursday October 14, 2004 @12:13PM (#10525259) Journal
    Everyone is posting about how this is bogus because it's really not the same game as PD.

    But even if you don't agree with that view, another important question is:
    in what meaningful sense is this new strategy a "victory"?
    After all, it achieves "victory" for half of the cooperators, at the cost of sacrificing the other half.

    To use one nuclear-war analogy, it's a choice between strategy "A",
    where you acquiesce to the death of half of your populace, with the reward that the remaining populace is completely unaffected --
    and strategy "B", with the guaranteed result that no one dies but everyone is injured.

    Which populace would *you* choose to join on the eve of war?

  • The next game (Score:3, Interesting)

    by Gyorg_Lavode ( 520114 ) on Thursday October 14, 2004 @12:22PM (#10525408)
    I think what will become more interesting is this: now that we know the best lone player (Tit for Tat) can be defeated by players working together, can we write players that look for another player trying to communicate, so as to take advantage of it? Could my player play Tit for Tat against normal players but, when it spots a master/slave player, convince that player to play slave for my gain?
  • For all of those saying, "Isn't this just cheating?" I say this:

    Creativity is "just cheating." Creativity is breaking the rules in a novel way that sheds new light on reality. And isn't that the holy grail of AI?

    So, was this just cheating? Hell yes. And it's fantastic.
  • Missing option... (Score:5, Insightful)

    by balaam's ass ( 678743 ) on Thursday October 14, 2004 @12:51PM (#10525895) Journal
    I agree that this definition of the "Prisoner's Dilemma" is no more than a "meta-game," and not really a problem of philosophical ethics (though it may appear to be to some people).

    What I find disturbing is that the way the problem is framed presupposes no underlying system of ethics. To wit:
    * If you confess and your partner denies taking part in the crime, you go free and your partner goes to prison for five years.
    * If your partner confesses and you deny participating in the crime, you go to prison for five years and your partner goes free.
    * If you both confess, you will serve four years each.
    * If you both deny taking part in the crime, you both go to prison for two years.

    What do you do?

    How about: Tell the truth? Regardless of what your partner does, tell the truth. I find it disturbing that the problem is framed in a way that the actual truth of the matter is irrelevant. (i.e. the problem would be unchanged if I replaced "You and your partner have committed a crime and are caught" with "You and a friend have been accused of a crime which you may or may not have committed.")

    I'm not trolling or off-topic here. I'm dead serious. This formulation of the PD is ethically doomed from the get-go, and thus its results may be of interest to mathematical game theorists studying this particular game, but I find it unwise to think the results have any significant implications for ethics (or anything else, for that matter).

    Someone will counter that since this is a "Prisoner's" dilemma, the person involved must be a criminal with no "ethical" principles other than an interest in self-preservation (i.e. the person is already so debased as to contribute nothing meaningful on the subject of ethics! ;-) ). I'd say that just because someone committed a crime does not mean they necessarily want to continue committing crimes...
  • by DrRobin ( 33359 ) on Thursday October 14, 2004 @01:09PM (#10526135)
    As a microbiologist with an interest in evolution, I have followed this field from afar for years. Looking over the results, I was surprised at how relatively poorly "Pavlov" (win-stay, lose-shift) did, since it performs so strongly in noisy, evolutionary versions of the game. [see:
    http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?holding=npg&cmd=Retrieve&db=PubMed&list_uids=8316296&dopt=Abstract]
    It was also a bit dismaying to see how well "Grim" (hold a grudge forever) did in both games. In evolutionary versions of the game, Pavlov helps keep down the population of "suckers" (thereby decreasing the food supply for more predatory and parasitic strategies) while still rewarding "provokable" cooperators (thereby increasing the total aggregate "reward" of the ecosystem).
    Also, one essential part of the payoff structure that deserves emphasis: the payoff for mutual cooperation has to be more than the average of the defector's and the sucker's payoffs, else both players benefit by simply alternating exploitation each turn. This is a little like what the winners did here, where they got the top spots at the cost of a lower total take for their "team". One real-world example of Slashdot interest where this might make sense: take these losses in order to eliminate your rivals from the game, then reap monopoly benefits once you control the game (not to mention any names...).
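    That condition (in the standard T > R > P > S notation, 2R > T + S) is easy to check numerically. The values below are the usual Axelrod payoffs, assumed here since the article doesn't state the tournament's exact numbers:

```python
# Usual Axelrod payoffs (an assumption): T temptation, R reward, P punishment, S sucker
T, R, P, S = 5, 3, 1, 0

# Over any two rounds, mutual cooperation should beat taking turns exploiting each other
per_player_cooperating = 2 * R      # both cooperate twice: 6 each
per_player_alternating = T + S      # exploit once, get exploited once: 5 each
assert 2 * R > T + S                # so alternation never pays
```

    With these values the condition holds (6 > 5); shrink R to 2 and alternating exploitation would become the better deal.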
    Maybe someone who has analyzed the results in more detail could comment on how the various well known strategies fared and why.
  • by BobaFett ( 93158 ) on Thursday October 14, 2004 @02:04PM (#10526767) Homepage
    While entering a team into a tournament scored for individuals and then sacrificing the whole team for one player is by no means a new idea, what makes it so remarkably successful here is the existence of a "guaranteed draw" strategy (in this case, always defect). The best individual response to "always defect" is to defect yourself; anything else is suicide. So if you always defect, you can force a draw. Then your whole team loses to one team member, and he is the winner.

    Compare this with, for example, a chess tournament. You could secretly enter a team and have them all lose to you. While this will keep you from ending last, it won't assure victory unless all players are roughly equal. If there is a very strong player, he'll win against all your team, yourself included. So you can cheat by redistributing wins among players of comparable strength, but at least you can't rob a clear champion of his deserved victory.

    This is not the case in the PD tournament. But let's redefine the problem slightly: say, if both sides cooperate, each gets a dollar. If they defect, each pays a dollar. The sucker's reward is paying 10 dollars. Now the Southampton team's strategy boils down to using the tournament to give all their money to one player while paying a hefty tax in the process. There is a cheaper way to do this: just give all your money to one guy outside the tournament :) But now we can gauge any strategy: enter one player or a team, recognize your own team members or not, transfer money between team members as you wish, but can you make money, overall, from this tournament?
  • by Thangodin ( 177516 ) <elentar&sympatico,ca> on Thursday October 14, 2004 @02:18PM (#10526927) Homepage
    Optimistic Tit-for-Tat models human behaviour well in a social setting: we give others the benefit of the doubt, and continue to cooperate when others do. When someone violates our trust, we stop trusting them and punish them, but if they act beneficially towards us again, we might be willing to forgive. Most notably, OTFT produces the best overall score, which in competition between social groups is the deciding factor.

    The Southampton strategy depends on large numbers of players who will sacrifice everything for the good of another individual, and not for the good of their community (the collective performance is worse than OTFT's). I can see sacrifice for the greater good, but this is sacrifice to another person without hope of recompense or an increase in general well-being. This does happen in human societies (I think it's happening now in some political systems), but only when the winner has managed to convince the losers that it's all in everyone's best interest. What Southampton has added to this mix is a capacity for extreme self-delusion that directly contravenes the economic assumption of informed choice and self-interest. For purposes of economic modelling, Southampton should probably be disqualified, or these assumptions dropped. But this should also tell you something about what could happen to those nice economic models when they hit the messy world of human beings, who for the most part aren't very informed and often work against their own best interests as a result.

    The consequence of a societal group running Southampton against an OTFT group would be the defeat of the Southampton group every time. Selection works at individual AND group levels. So the challenge should probably be two-tier: run the programs individually against each other, and run them as tribes against each other.
  • Kobayashi Maru (Score:3, Insightful)

    by Bugmaster ( 227959 ) on Thursday October 14, 2004 @02:35PM (#10527155) Homepage
    So, essentially, the winning program(s) hacked (or exploited, if you prefer) the game in order to win? That's pretty clever, but does it count as a true victory? It's sort of like what Captain Kirk did to rig his Kobayashi Maru scenario. Sure, he won on a technicality, but in doing so he missed the whole point of the challenge.
