
Cracking Go (328 comments)

prostoalex writes "IEEE Spectrum looks at current trends in artificial intelligence to crack the ancient Chinese game of Go, which theoretically has 10^60 potential endings. Is conquering the game via exhaustive search of all possibilities possible? 'My gut feeling is that with some optimization a machine that can search a trillion positions per second would be enough to play Go at the very highest level. It would then be cheaper to build the machine out of FPGAs (field-programmable gate arrays) instead of the much more expensive and highly unwieldy full-custom chips. That way, university students could easily take on the challenge.'"
This discussion has been archived. No new comments can be posted.

  • by techpawn ( 969834 ) on Wednesday October 10, 2007 @04:15PM (#20931621) Journal
    A computer will never beat a kid possessed by the ghost of an Edo period Spirit! []
  • But don't expect to finish the game yourself.
  • Exhaustive? (Score:4, Insightful)

    by SavedLinuXgeeK ( 769306 ) on Wednesday October 10, 2007 @04:17PM (#20931665) Homepage
    Using an exhaustive method is silly in a realm of 10^60 possibilities. Obviously human players, with far more limited resources (computationally, that is), do a much better job. Why not spend the time analyzing the board better and mimicking the decision-making patterns of the masters? I realize this has already been tried, and is still being worked on, but exhaustive search just seems like a waste of time and effort. My .02
    • Re:Exhaustive? (Score:4, Interesting)

      by UbuntuDupe ( 970646 ) * on Wednesday October 10, 2007 @04:26PM (#20931791) Journal
      Ditto. As long as we're going with gut feelings, my "gut feeling" is that this is a futile endeavor.

      So you can brute force a victory on a game with 10^60 possible endings? Okay, then I'll invent a game with a much bigger space to search and a bigger decision tree. Then what?

      You've found a way to weed out fruitless branches, so it collapses into a problem that requires only 10^12 flops to solve? And this method isn't in use yet? Submit it to a journal, and add it to the corpus of Go algorithms.

      So what's the breakthrough here?
    • Re:Exhaustive? (Score:5, Informative)

      by smallfries ( 601545 ) on Wednesday October 10, 2007 @04:32PM (#20931869) Homepage
      Oddly enough this very question forms the basis of the article that you haven't read... :)

      The basic answer is that we can optimise something simple and replicate it many times far, far better than we can optimise something complex and replicate it a few times. Which, if you think about it, is the approach that the brain / the evolution of the brain opts for.

      The "brute-force" approach described is actually midway between a pure brute-force (exhaustive) search and a "smart" search, using quite sophisticated techniques to prune the tree. The author talks about alpha-beta pruning, null-move generation and hypothesis testing. These techniques are still pure-search (brute-force) techniques -- i.e. you could apply them to any game tree -- but their effects are similar to how a smart (tailored to the specific game) approach would work.
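The pruning techniques named above are generic tree-search devices. As a rough illustration, here is a minimal alpha-beta sketch in Python; the `children`/`evaluate` callbacks are placeholders for a real game's move generator and evaluation function, not anything from the article:

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta cutoffs: prune branches that provably
    cannot change the value at the root."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:       # beta cutoff: minimizer avoids this line
                break
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:           # alpha cutoff: maximizer avoids this line
            break
    return value
```

Null-move and other forward-pruning tricks layer on top of this same skeleton; they change which branches get searched, not the interface.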
      • I'd like to feed this problem to one of the new IBM System S machines and give it a crack. This is exactly the kind of problem-set the Sys/S boxes were designed for...
        • That sounds interesting but kind of hard: the basic search is expanding out a tree as you test different moves to yield different positions. Doesn't stream computing work better on datasets with fixed dimension? We've got a Blue Gene upstairs that would be sweet to try this out on, but there's a long queue and somehow I doubt I would get priority :(
          • Actually the System S, as described to me, was designed to take non-linear and dimensionless (rather, undefined raw data without predefined dimensions) and do heuristic analysis looking for trends. The trick would be to describe the technical rules and the potential end game results, and let the System run for a while playing against itself in order to come up with the basic analysis, then let it watch a bunch of games played by humans and try to predict the results as the game plays.

            Just curious - a Blue
    • Re:Exhaustive? (Score:5, Insightful)

      by eniac42 ( 1144799 ) on Wednesday October 10, 2007 @04:38PM (#20931949) Journal
      but exhaustive search just seems like a waste of time and effort

      Not so. The exhaustive search approach provides a testable benchmark for true AI to exceed before it can prove it is productive AI. The famous Chess 4.6 program demonstrated that a simple "technical" tree-search program could perform better than selective "intelligent" algorithms that tried to prune the search tree aggressively. With Go, this strategy fails - but the "intelligent" pruning mechanism is still eluding everyone..

      There is still a golden prize in game-search algorithms - solve this, and other problems in AI can be solved too..
      • Re:Exhaustive? (Score:5, Insightful)

        by CodeBuster ( 516420 ) on Wednesday October 10, 2007 @06:31PM (#20933483)
        but the "intelligent" pruning mechanism is still eluding everyone.

        It is not the pruning mechanism itself which is still eluding everyone, but rather the difficulty in formulating a sound evaluation function for use in the alpha-beta pruning [] optimization of the brute force search. The game of Go [] is interesting because it is often the case that a particular board configuration does not give much concrete information about its future potential. In other words, a seemingly disadvantageous board in the short run might actually lead to a better endgame than a seemingly more valuable board in the present. A small situation in one part of the board might affect the larger game in unexpected and interesting ways dozens of moves into the future. It is precisely this complexity which makes Go intriguing for players and difficult for computers.
    • Re:Exhaustive? (Score:4, Interesting)

      by zeromorph ( 1009305 ) on Wednesday October 10, 2007 @04:44PM (#20932043)

      but exhaustive search just seems like a waste of time and effort.

      ...and is just awfully boring. The whole chess/go playing thing, as far as I am concerned, is a means to get new intelligent solutions to decision-making/inference/... problems.

      A not so successful program that would actually incorporate traditional strategic concepts [] would be an interesting solution. For example, if a program would actually be able to handle a concept like aji [], roughly (latent) potential - now, that would be elegant, that would be interesting.

    • I read an article to this effect in an AI class. Apparently the Latest Thing in solving things like Go is to take a good random sampling of just a few thousand (or tens of thousands of) possibilities in the future, and try to see what sort of move looks better, rather than trying to be exhaustive about the entire thing. I thought this was Quite Interesting. Meanwhile, the exhaustive search is really the least interesting way you could possibly do it, and won't likely provide you with much insight on Go, or
      • by pclminion ( 145572 ) on Wednesday October 10, 2007 @06:34PM (#20933517)

        Meanwhile, the exhaustive search is really the least interesting way you could possibly do it, and won't likely provide you with much insight on Go, or related matters. *yawn*

        Exhaustive search may be boring, but it is also the only theoretically sound method. Aggressive forward pruning of game trees always has a danger factor (you might prune a move that looks bad at depth 5 but is actually a win at depth 10), and this danger is massively increased in games with large branching factors like Go. Basically you are greatly amplifying the horizon effect.

        Methods such as Prob-Cut and Multi-Prob-Cut have proven useful in games with extremely high branching factors where the evaluation function has strong correlations between depths (games like Amazons, which has an initial branching factor of 2176, far larger than Go's 361). However there is always risk associated with statistical pruning methods.

        Really, the only theoretically sound method of forward pruning is to define a specific evaluation function and then produce proofs that certain board configurations cannot cause an increase in the evaluation. But these proofs are hard or impossible to construct and often they only apply in positions which are already clearly bad.

        In my opinion, it's not that computers are terribly bad at Go. It's that humans are unusually good at it. I am regularly whomped by my own Amazons-playing program and that program can't even look to ply 2 at the opening of the game. On the other hand, I destroy most of the simpler Go programs, even though I absolutely suck at Go.

    • by phantomfive ( 622387 ) on Wednesday October 10, 2007 @04:59PM (#20932293) Journal
      Read the article. Essentially, he has been spending all his time searching for ways to prune that tree to a reasonable level. He thinks that he has enough pruning techniques, and that computers have sped up enough, to solve the game of Go within the next 5 years.

      Unfortunately for him, he is working for Microsoft labs in China, and since 2008 is well known to be the projected year of linux on the desktop, Microsoft won't be around long enough to continue funding his project. Sigh.
    • Using an exhaustive method is silly in a realm of 10^60 possibilities.

      If it can be done in a reasonable time, exhaustive methods are actually hard to beat, provided they are carried out completely.

      Keep in mind that even though no human nor average computer today could come up with all the possible plays of Go in a reasonable amount of time, that doesn't mean it won't become possible - or isn't already possible - given the appropriate hardware.

      The thing is... If you know all possible moves in the game, you will know "the next best move
      The way humans deal with data is just different from how computers do. Tasks we find trivial they find hard, and vice versa. For example, if you show a human pictures of cats and dogs and ask them to classify them according to which animal they are, that's easy for us. For a computer, it's a very hard task. However, a computer has no problem dealing with and analyzing a 50,000-item-long table of numbers, whereas a human needs them converted to another form to make sense of them.

      Well, when it comes to solving g
  • yay! (Score:5, Insightful)

    by apodyopsis ( 1048476 ) on Wednesday October 10, 2007 @04:17PM (#20931667)
    My favorite game.

    First time I played online I had my ass handed to me by a precocious 12 year old. Ah well, the memories.

    *A lot* more complex and tactical than Chess, believe it.

    See the rules at: []

    Play online here: []

    I recommend installing glGo and having a go vs. computer (on the easiest setting there is).

    Enjoy, I do.

    • Re:yay! (Score:5, Insightful)

      by zippthorne ( 748122 ) on Wednesday October 10, 2007 @05:02PM (#20932359) Journal
      Pure number of game states does not equate to richness of experience.

      Would you say that "Diplomacy" is more or less tactical than "Axis and Allies?"
      • Game complexity (Score:5, Interesting)

        by Michael Woodhams ( 112247 ) on Wednesday October 10, 2007 @05:47PM (#20932955) Journal
        On complexity: perhaps the best definition (for matching our intuitive feeling for how complex a game is) I've come across for game complexity is this: Imagine a novice player who avoids glaringly wrong moves (e.g. sacrificing a queen for no benefit in chess) but otherwise randomly chooses from the legal moves available to them. Call this a 'level 0' player. Now a player who is smart enough to beat the level 0 player 2/3 of the time is a level 1 player, someone who beats a level 1 player 2/3 of the time is level 2, etc. Then the game's complexity is the level of the best human player. If I recall correctly, this gives noughts-and-crosses/tic-tac-toe a complexity of 1, chess was about 20 and go was about 40. Unfortunately, this is all from memory; I haven't found the right magic words to feed Google to get more information.

        So yes, the game-state complexity can be a very poor measure of actual complexity, but go still has a very strong claim to being the most complex common human game. (An extreme example of high game-state complexity but trivial actual complexity would be a large game of "Brussels Sprouts" [].)

        On the Diplomacy/Axis and Allies comparison. The three games being compared are very different in nature (although they are all zero-sum.)

        Go is two-player, complete-information, deterministic (as are chess, checkers, hex, and noughts-and-crosses/tic-tac-toe). Such a game can, in theory, be completely analysed.

        Diplomacy is multi-player, deterministic, incomplete information because moves are simultaneous. I'm not sure if or how this could be analysed.

        Axis and Allies is (effectively) two player, probabilistic. It could (theoretically) be completely analysed as a very large dynamic programming problem. From a game-theoretical point of view, it resembles Yahtzee.

        • Re:Game complexity (Score:4, Interesting)

          by evanbd ( 210358 ) on Wednesday October 10, 2007 @07:48PM (#20934269)

          This argument is intuitively appealing, and imho it has a lot to say for it. However, it has one major shortcoming:

          Define the game of 3-go to be playing 3 consecutive games of go; the winner of the 3-go match is the winner of the majority of the go games. If player A is 1 level better than player B at go, they will be more than one level better at 3-go, since their winning probability is up to 74% instead of 67%. So, by this definition 3-go is noticeably more complex than go. However, I can think of no meaningful way in which this is actually true.
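That 74% figure is easy to verify: with a per-game win probability of 2/3, winning a best-of-three means winning at least two of the three games. A quick check (plain probability arithmetic, nothing game-specific):

```python
# Per-game win probability for a player one level up, per the definition above.
p = 2 / 3

# Win a best-of-three: win all three games, or win exactly two of three.
best_of_3 = p**3 + 3 * p**2 * (1 - p)

print(round(best_of_3, 3))  # 0.741, i.e. 20/27
```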

          Also, if you count steps from a human player who knows the rules and has played a couple of games to a human expert, you get 40-50 ranks. But that human player is easily 30 ranks above the true "random but no suicides" computer player. It doesn't take much work to build a computer program 20 ranks stronger than the random player, but that is still a truly *awful* program.

          In the early days of go programming, a very simple algorithm was defined. This is from memory, so it's probably wrong, but it went something like this. If you can capture an opponent's stone, capture a group chosen at random from the largest capturable groups. Failing this, if you have a group in atari that you can rescue from atari, rescue the largest such group. Failing this, if an opponent has a group with less than 4 liberties, play one of its liberties. Failing this, play a random legal move. This turned out to be remarkably potent against early programs, and will absolutely destroy a random player -- yet still lose to even the weakest of human players.
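The priority ladder described above (which the poster already flags as half-remembered) could be sketched roughly like this; the board representation and helper names are invented for illustration, and legality subtleties such as suicide and ko are ignored:

```python
import random

def group_and_liberties(board, r, c):
    """Flood-fill the group containing (r, c); return (stones, liberties).
    `board` is an n x n list of lists holding 'B', 'W', or None."""
    color, n = board[r][c], len(board)
    stones, libs, stack = set(), set(), [(r, c)]
    while stack:
        y, x = stack.pop()
        if (y, x) in stones:
            continue
        stones.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < n and 0 <= nx < n:
                if board[ny][nx] is None:
                    libs.add((ny, nx))
                elif board[ny][nx] == color:
                    stack.append((ny, nx))
    return stones, libs

def choose_move(board, me, them):
    """Priority ladder from the comment: capture a largest capturable
    enemy group, else rescue a largest own group in atari, else play a
    liberty of a weak (<4 liberties) enemy group, else play randomly."""
    n = len(board)
    capture, rescue, crowd, seen = [], [], [], set()
    for r in range(n):
        for c in range(n):
            if board[r][c] is None or (r, c) in seen:
                continue
            stones, libs = group_and_liberties(board, r, c)
            seen |= stones
            if board[r][c] == them and len(libs) == 1:
                capture.append((len(stones), next(iter(libs))))
            elif board[r][c] == me and len(libs) == 1:
                rescue.append((len(stones), next(iter(libs))))
            elif board[r][c] == them and len(libs) < 4:
                crowd.extend(libs)
    if capture:
        return max(capture)[1]
    if rescue:
        return max(rescue)[1]
    if crowd:
        return random.choice(crowd)
    empties = [(r, c) for r in range(n) for c in range(n) if board[r][c] is None]
    return random.choice(empties) if empties else None
```

(Playing a group's last liberty is not always a real rescue, of course; the point is only the shape of the priority ladder.)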

      • Re: (Score:3, Interesting)

        by Laxitive ( 10360 )

        True, but the richness of Go comes from the interplay permitted by an extremely SIMPLE set of rules. There are exactly 2 major rules and one minor rule in Go:

        1. you take turns placing pieces.
        2. pieces or groups of pieces "surrounded" by other pieces are captured.
        3. the board state cannot be repeated in two successive moves.

        From this simple specification emerges an amazingly beautiful game. There are particular structures and patterns which emerge with concrete properties, advantages and disad
  • Not Any Time Soon (Score:5, Informative)

    by eklitzke ( 873155 ) on Wednesday October 10, 2007 @04:18PM (#20931675) Homepage
    I'm not an active Go player anymore, but I used to be. And the current state of artificial Go intelligence is pretty sad. The best junior players in the world (10-13 or so, IIRC) can beat the very strongest computer Go programs, and they're literally orders of magnitude worse than the best human players.

    Playing Go is more than just searching a lot of positions. The game is _very_ subtle. To even understand how a professional 9 dan player (i.e. the very highest ranked players) plays every move in a game, you really need to be at least a 7 dan or so -- anything less than that and the moves just won't make a lot of sense. I have the feeling that it will be at least a decade (but probably much longer) before computer opponents can beat professional players, and even longer than that until they can beat the very best human players.
    • Most (if not all games) can be boiled down to mathematical models (this does not apply to things like soccer, football, etc. of course). Right now, we simply don't have the computing power to evaluate all the possible Go game branches fast enough. Give it 5-10 years. Just as IBM's supercomputer was able to go head to head with a grandmaster, soon processing power (as well as memory and other support systems required) will push past what is needed for Go to be evaluated extremely fast in silicon.
      • Most (if not all games) can be boiled down to mathematical models (this does not apply to things like soccer, football, etc. of course)

        Sure it does. Both at the game and player levels. Where the confusion lies is that sports are not deterministic, i.e. there are not 64 squares on a football field like there are on a chessboard. But there are most certainly mathematical models.
        At the game level: Statistically, you can look at each of the players' statistics (not just hard stats but soft stats like life ev
      • by afabbro ( 33948 )
        (this does not apply to things like soccer, football, etc. of course)

        Sure it does. A game of football is nothing more than a few quintillion molecules moving around.

    • Re: (Score:3, Interesting)

      My favorite story is about Janice Kim (professional 3-dan). Back in 1997 the programmer of Handtalk was apparently boasting about the program's capabilities when Janice Kim stated flatly that he was wrong about it and she could defeat it easily with any handicap. Apparently this developed into a bit of a pissing match and Janice offered a TWENTY FIVE STONE handicap (the largest handicaps generally used in human play are nine stones, by comparison). Things looked bad at first with such an enormous handica
      • Re:Not Any Time Soon (Score:5, Interesting)

        by asparagus ( 29121 ) <koonce@gma[ ]com ['il.' in gap]> on Wednesday October 10, 2007 @04:59PM (#20932299) Homepage Journal
        In the last year the UCT algorithm has really come into its own. It's a Monte-Carlo variant with weighting for winning lines. On 9x9 boards the programs play at the dan level. 19x19 requires roughly 32x the computing resources for the same level of performance: current machines simply can't handle the load. But MC scales nicely with additional computing power. 32x is only 5 doublings by Moore's law: it's not unreasonable to think that within a decade computers will be giving the masters a run for their money.
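The heart of UCT is the UCB1 selection rule: at each node, descend into the child maximizing average win rate plus an exploration bonus. A minimal sketch of just that selection step (the function name and interface are illustrative, not from any particular engine):

```python
import math

def ucb1_select(children, parent_visits, c=1.4):
    """children: list of (wins, visits) pairs, one per child; return the
    index of the child to descend into. Unvisited children go first;
    otherwise maximize win_rate + c * sqrt(ln(parent_visits) / visits)."""
    best, best_score = 0, float("-inf")
    for i, (wins, visits) in enumerate(children):
        if visits == 0:
            return i        # always expand an untried move first
        score = wins / visits + c * math.sqrt(math.log(parent_visits) / visits)
        if score > best_score:
            best, best_score = i, score
    return best
```

A full UCT player wraps this in the usual select/expand/random-playout/backpropagate loop; the random playouts are where the Monte-Carlo part, and almost all of the computing time, goes.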
    • Re: (Score:3, Insightful)

      by ajs ( 35943 )

      Playing Go is more than just searching a lot of positions. The game is _very_ subtle.

      This is provably incorrect. Searching a lot of positions gets you a solution to the game. The problem is that "a lot" is defined as 10^60, which though it looks like a nice, friendly number isn't even remotely manageable. To give you a sense, the current U.S. national debt is around 9 trillion dollars. That's on the order of 10^12. That's a huge number, but the number of Go positions is 10^48 times larger than that!
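For what it's worth, the ratio is simple to check:

```python
go_endings = 10**60        # the figure quoted in the summary
national_debt = 10**12     # ~9 trillion dollars, to order of magnitude

print(go_endings // national_debt)  # 10**48, a 1 followed by 48 zeros
```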

      This is an incomprehensibly

      • This is provably incorrect.

        Don't do that.
      • Notice that they say there are 10^60 potential ENDINGS. Go is not like Iagno etc. where pieces can only be added to the board. In Go, pieces are added and removed, in a sort of "tug of war" struggle, and the game only ends when both players decide the game is over. This means that there is an infinite number of paths to get to any given ending. Infinity > a lot.
        • There's not an infinite number of paths if you exclude those paths which contain cycles (which you really should do, since they're pointless). The board is finite, thus there are a finite number of possible configurations of the board. Therefore, you do not need to consider an infinite number of paths, only the best move in each of a finite number of states (which is still an incomprehensibly large assload). Remember a few months ago when checkers was solved? Technically, there are an infinite number o
        • Regardless of the number of endings, the board only has a finite number of states. A "solution" to the game simply maps each of these states to the optimal action. In any of your infinitely long games, there would have to be cycles - i.e. the board re-enters states it already encountered - and the cycles don't affect the outcome. That said, it is possible there is no forced win in go, so two optimal players would simply agree to a draw before the first move (a good strategy in thermonuclear warfare, it i
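The parent's point, that a "solution" is just a map from each finite board state to an optimal action, can be made concrete on a game small enough to enumerate. A memoized negamax over noughts-and-crosses (trivially small next to Go, but the same idea):

```python
from functools import lru_cache

# Rows, columns and diagonals of a 3x3 board stored as a 9-char string.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Negamax value for the side to move: +1 forced win, 0 draw, -1 loss.
    The cache is exactly the state -> value map a 'solution' consists of."""
    if winner(board):
        return -1           # the previous move completed a line
    if "." not in board:
        return 0
    other = "O" if player == "X" else "X"
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == ".")

print(value("." * 9, "X"))  # 0: perfect play from the empty board is a draw
```

Go resists exactly this treatment because the number of reachable states is astronomically larger, not because the idea is different.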
      • Re:Not Any Time Soon (Score:5, Interesting)

        by oni ( 41625 ) on Wednesday October 10, 2007 @05:21PM (#20932619) Homepage
        A much better comparison: A typical star contains *only* about 10^57 atoms []
        • by drDugan ( 219551 )
          Another good rule of thumb for orders of magnitude is this:

          Our best estimate of the total number of atoms in the observable universe: 10^81

    • Very true. (Score:3, Insightful)

      The game is _very_ subtle.

      Absolutely. Last weekend -- Thanksgiving here in Canada -- my family vacationed in Northern Ontario. We didn't bring a television, but somebody did bring a Go set. I set about teaching most of my family how to play it. But what's amazing to me is how long it takes to get an adequate grasp of the basic concepts. Life, death, influence, strength, weakness, playing in the corners and sides, even scoring is hard. I don't even have a good handle on the game, and I've been playing exten

  • Checking all possible moves is unwise. What you do is design a nice directed heuristic, say, a multi-objective algorithm, and train this to compute a reasonable set of possible moves from the current state. There is no need to describe an ideal end state, just a decent ongoing performance requirement.

    Multi objective algorithms are fearsomely hard creatures to get right, but when working they can do great things. Use one to train a neural network with temporal properties (bases current decisions on past input
    • Re: (Score:3, Interesting)

      by madcow_bg ( 969477 )

      ... What you do is design a nice directed heuristic, say, a multi-objective algorithm, and train this to compute a reasonable set of possible moves from the current state. There is no need to describe an ideal end state, just a decent ongoing performance requirement.

      Let me just tell you that a stone displaced by a mere one position can lead to a very, very sure loss. Besides, heuristics are good when we have a better understanding of the mechanics. All top books and players of go talk about "harmony", "beauty", "balance" ... let's just fix what the bleep we are talking about and THEN teach it to a computer, shall we? The theory is more of a meditation than a strategy guide.

      Multi-objective algorithms are fearsomely hard creatures to get right, but when working they can do great things. Use one to train a neural network with temporal properties (bases current decisions on past inputs at all positions simultaneously), and you have a means to achieve a decent setup. It would take a long time to get right though.

      Please give me sources for the things you mentioned, couldn't understand a thing. Go is a game

      • Let me just tell you that a stone displaced by a mere one position can lead to a very, very sure loss. Besides, heuristics are good when we have a better understanding of the mechanics. All top books and players of go talk about "harmony", "beauty", "balance" ... let's just fix what the bleep we are talking about and THEN teach it to a computer, shall we? The theory is more of a meditation than a strategy guide

        Or as I say, "If you want to do a brute force method, how about whole brain emulation?"
      • by PDAllen ( 709106 )
        NP-complete _does_ have something to do with game theory: some (a very few) games can be interpreted as NP-complete problems.

        More commonly, though, you find that a game (assuming it's easy to extend the rules to allow play on arbitrarily large boards, which is true for Go but not chess) is either polynomial-time soluble ('bad' games, in some sense) or PSPACE-complete (Go, Hex, etc.. try Hex sometime, it's proven the first player has a winning strategy but even the first move in that strategy isn't known on
    • Re: (Score:3, Informative)

      by Llywelyn ( 531070 )
      Go is not NP-complete. The Japanese rules are EXPTIME-complete. Go endgames are PSPACE-hard, and ladders are PSPACE-complete. Under the Chinese rules it is thought that the game might be EXPSPACE-complete, but this has not yet been proven.

      We also have yet to develop heuristics that even evaluate one position very well, especially during the opening or middle game. Machine learning is not a cure-all here: there's a combinatoric explosion that happens very quickly with this game which makes most of the techniques
    • by PDAllen ( 709106 )
      Go is not NP-complete, it's PSPACE-complete (at least on arbitrarily large boards... the question doesn't make sense on fixed size boards as then it's fixed-time soluble). PSPACE is generally thought to be a much larger class than NP (which in turn is larger than P), though that's not proven.

      NP problems usually do admit some kind of heuristic 'this is good' algorithm; it goes with approximate solutions (which you can often manage in poly-time). PSPACE-complete problems do not seem to admit either, in genera
  • by shawnmchorse ( 442605 ) on Wednesday October 10, 2007 @04:23PM (#20931747) Homepage
    Let's just take the final position of a game of Go. It takes a certain level of Go knowledge merely to determine which groups are alive, which are dead, which can be taken off the board, etc. in order to determine the final score. Beginning Go players will often not be able to determine this on their own and will have to "prove" to themselves that a group really is dead. And then there are the rare cases of dual life (seki), which beginning Go players are frequently not warned about at all. Programming this stuff is HARD. There's a reason why the scoring portion of playing a Go game online is still done interactively (each player taking turns marking dead stones)... it's because there are no algorithms as of yet that can do it accurately enough. With this in mind, I have no idea why they would believe that searching huge numbers of positions would be helpful when computers are extremely hard pressed to come up with useful information about the score of the game at any given stage.
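Even the "easy" part, counting territory once life and death has been agreed, takes real bookkeeping. A minimal sketch that flood-fills empty regions and credits each region bordered by a single colour to that colour, under the simplifying assumption that every stone on the board is alive (which is exactly the assumption that is hard to establish automatically):

```python
def score_territory(board):
    """Territory count: each maximal empty region bordered by stones of
    only one colour counts for that colour; regions touching both colours
    are neutral. `board` is a list of strings over 'B', 'W', '.'.
    Assumes all stones on the board are alive."""
    n = len(board)
    seen = set()
    score = {"B": 0, "W": 0}
    for r in range(n):
        for c in range(n):
            if board[r][c] != "." or (r, c) in seen:
                continue
            region, borders, stack = [], set(), [(r, c)]
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < n and 0 <= nx < n:
                        if board[ny][nx] == "." and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                        elif board[ny][nx] != ".":
                            borders.add(board[ny][nx])
            if len(borders) == 1:
                score[borders.pop()] += len(region)
    return score
```

Everything hard about automatic scoring lives in the assumption: deciding which stones are actually alive before this count can run.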
    • by PDAllen ( 709106 )
      Because you can search the entire game tree if you really want to. Your 'final position' is when a pair of humans agree on what is dead; a computer wouldn't agree, and would have to check the game tree from that point, but it will eventually get the right answer. Fine, most of those games will end at a point far beyond the time when human players would have stopped and counted up the dead stones, but you can do the search.

      Of course, it will take a very, very long time.
  • "IEEE Spectrum looks at current trends in artificial intelligence to crack the ancient Chinese game of Go, which theoretically has 10^60 potential endings. Is conquering the game via exhaustive search of all possibilities possible?"

    Even if it were possible to conquer the game via an exhaustive search of all possibilities, it would still be nowhere near Artificial Intelligence. Besides that, other board-game-playing computer programs do not get their strength from an exhaustive search, but from a large database of previous games. In fact, even world champions of chess do not think ahead more than 5 or 6 turns.

    Wake me up when they come up with a computer program that can beat all human Go players without an exhaustive search...

    • by ctid ( 449118 )

      Besides that, other board games-playing computer programs do not get their strength from an exhaustive search, but from a large database of previous games. In fact, even world champions of chess do not think ahead more than 5 or 6 turns.

      I don't know where you're getting this information, but you are dead wrong. To take chess as example, strong computers always have an opening book but that's only used during the opening of the game. As for the "5 or 6 turns" comment, that makes no sense at all. Even if by

    • Exactly my thought.

      Plus, why is it so important to "crack" Go, anyway? These researchers have come up with a solution that is essentially a brute-force attack for each game position, requiring a supercomputer. We all know that supercomputers can do amazing things that require unfathomable amounts of calculations. It sounds like they know that this approach would work, too -- they just aren't sure that they can build a machine powerful enough to tackle the problem in a reasonable amount of time. So how does
      • This is exactly how professional chess programs work--exhaustive searches on supercomputers--with the addition of opening books and endgame tablebases. What does that solve / help in computer science? I don't know, not a lot?

        On the other hand, it could tell us some things about the game of Go--is it a "fair" game, etc.

        More to the point, why not? It's a challenge, design a program / hardware solution to beat Go. By your logic, why do anything?
    • Even if it were possible to conquer the game via an exhaustive search of all possibilities, it would still be nowhere near Artificial Intelligence.

      Wow, I don't think you really have any idea of what "Artificial Intelligence" means. For that matter, why did you capitalize it--the study of artificial intelligence involves many areas from mathematics, to computer science, to biology, neuroscience, etc--it's not some mystical ... I don't know what, but whatever you seem to think it is!

      You might be shocked, but, given the current state of things, a VERY large part of AI is exactly what you belittle--search questions.

      You're also 100% WRONG about most other

      • by SL Baur ( 19540 )

        I would also strongly disagree with your assertion that "even world champions of chess" don't think more than 5-6 ply ahead--I've seen guesses of several times that number.

        There's no need to guess. Adriaan de Groot [] did scientific studies of it. In certain positions, masters would look ahead more than 10 moves (>20 ply). His book is fascinating reading, by the way.

        In short, I basically agree with what you say.

    • Game playing is, by definition, AI. The ultimate goal of AI is to build Lt. Cmdr. Data, but that doesn't mean other challenges are not AI.
  • by headkase ( 533448 ) on Wednesday October 10, 2007 @04:27PM (#20931805)
    It's a running joke in artificial intelligence that to solve a problem you almost have to know the answer already. When you consider all the possible ways to be intelligent as a search space, the process of evolution has "narrowed down" what it is "almost" like to be a human, and we as individuals fine-tune the rest of the answer starting from birth. The human mind is a remarkable pattern-processing machine, and the day a computer can play go against a human master and win is the day we truly understand how to replicate sentience in a machine.
    • I'm not entirely sure but didn't people use to say the same thing about chess?
      • chess isn't about patterns.
      • The difference is that it is much more difficult to measure whether or not a move is "good" or "bad" in go.
  • Logical Extreme (Score:3, Interesting)

    by digitaldc ( 879047 ) * on Wednesday October 10, 2007 @04:30PM (#20931847)
    Carried to its logical extreme, the tree would grow until it exhausted every legal continuation, leaving the program nothing to do but examine the end positions to see which of them were wins--that is, checkmates--and which were draws, then work backward along the branching structure to choose the line that led to the best outcome, assuming that both sides play perfectly.

    This is exactly the method that Spock would use while playing 3-D Chess, [] and he would always win. However, due to his reputation for winning by always going to this logical extreme, no one ever wanted to play with him.
    In the end, it was deemed a highly illogical method, since it killed his ability to play the game.
    • In the end, it was deemed a highly illogical method, since it killed his ability to play the game.

      WOPR always had the same problem, but for slightly different reasons.
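      The backward-induction procedure quoted above can be sketched on a game small enough to actually exhaust. This Python sketch uses a toy take-1-2-or-3 subtraction game (the game and the `can_win` helper are illustrative assumptions, nothing from TFA): expand every legal continuation, then propagate win/loss values back up the tree.

```python
from functools import lru_cache

# Toy stand-in for the "logical extreme": exhaustively expand every legal
# continuation of a tiny take-1-2-or-3 subtraction game (last stone wins),
# then work backward from the end positions to value every state.
@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win with `stones` left."""
    if stones == 0:
        return False  # no move available: the previous player took the last stone and won
    # A position is winning if ANY move leads to a losing position for the opponent.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

# Positions whose stone count is a multiple of 4 are losses for the mover.
print([n for n in range(1, 13) if not can_win(n)])   # → [4, 8, 12]
```

      For 10^60 Go endings the same procedure is hopeless, which is the whole debate; the sketch only shows what "working backward along the branching structure" means.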

  • FPGAs? (Score:2, Informative)

    by g-san ( 93038 )
    Yes, FPGAs are easy to program, but there is a tradeoff: speed. And since this is a brute-force application, you probably don't want to use FPGAs.
    • As another poster said, if this was simply brute force, ASIC hardware would be your best bet (custom hardware, same as that which the EFF used to show DES to be extremely weak in this day and age). With FPGAs, you could not only do brute force searches, but as the algorithms involved evolved (i.e. learned better methods), the FPGAs could be reconfigured by the control software (thereby increasing the speed of analysis). The question is, would the algorithm evolve to the point where it would be faster using
    • Re: (Score:2, Informative)

      by Shalth ( 1171767 )
      Actually, if you are going for speed you do want to use an FPGA. Even though there is an inherent iteration problem in determining the best move in Go, you can create a very parallel algorithm that searches many different paths at the same time. Now if you are referring to the clock speed of FPGAs, that is not an issue either. Even though most modern dual-core and quad-core processors can run at 2.4+ GHz, they are usually limited to several levels of pipelining. This limits ea
  • The fact that we're talking about trillion move/second machines rather than even attempting to do it the way humans do it ought to tell us what the current trend in artificial intelligence is.

    Basically, the current trend is the same as it was 20/30/50 years ago. There IS NO science of artificial intelligence. We don't have a freaking clue how intelligence works. I think we will someday, but it's going to take a fundamental breakthrough in theory. A "Principia Intelligentsia" (I'm probably mutilating the Latin) by some genius that throws out all the fumbling and finally Explains It All.

    • That's exactly it. This is brute force, not AI at all.
  • ...theoretically has 10^60 potential endings. Is conquering the game via exhaustive search of all possibilities possible?


    A typical star is made up of ~10^57 hydrogen atoms. If you could store all the information you needed about a single ending position in a single atom, you'd need storage with the mass of thousands of stars.

    • by 26199 ( 577806 ) *

      Storage isn't a problem, particularly... you can always throw away the results and recalculate every time. There, storage problem is gone :)

      Run time, now, that's a problem...

    • by 2short ( 466733 )
      You do not need to store, (or even consider) every ending position.

      At some intermediate position, you determine that you can force victory if you make a particular move. There is no need to ever consider any other move proceeding from that position.
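      The saving the parent describes can be made concrete with the same kind of toy game (take 1-3 stones, taking the last stone wins; the game and node counters are illustrative assumptions, not a Go solver): once one move is proven to force a win, its siblings are never expanded.

```python
# Contrast full enumeration with the parent's point: once one move is proven
# winning, the remaining sibling moves need never be examined.

def count_exhaustive(stones):
    """Visit every node of the full game tree; return (mover wins?, nodes visited)."""
    nodes = 1
    if stones == 0:
        return False, nodes
    win = False
    for take in (1, 2, 3):
        if take <= stones:
            opp_win, n = count_exhaustive(stones - take)
            nodes += n
            if not opp_win:
                win = True        # note: keep looping anyway, visiting everything
    return win, nodes

def count_pruned(stones):
    """Stop at the first move that forces a win; return (mover wins?, nodes visited)."""
    nodes = 1
    if stones == 0:
        return False, nodes
    for take in (1, 2, 3):
        if take <= stones:
            opp_win, n = count_pruned(stones - take)
            nodes += n
            if not opp_win:
                return True, nodes   # one forced win suffices; skip the siblings
    return False, nodes

# Same verdict, far fewer nodes visited.
print(count_exhaustive(12), count_pruned(12))
```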

  • by Anonymous Coward
    This is one of the worst articles about computer Go I've ever read. The author seems to completely overlook the recent advances in the field, which made huge gains possible by replacing n-ply reading and evaluation with Monte-Carlo sampling of possible end positions. This is indeed a sort of brute force, but it avoids the problem of evaluating a partial position, which is very hard.

    Still far from the best human players, but at least improving :)
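    The Monte-Carlo approach the parent mentions replaces direct position evaluation with the win rate of many random games played out to the end. A minimal hedged sketch, with a made-up race-to-20 game standing in for Go (the game rules, scores, and playout count are all illustrative assumptions; real engines play random legal Go moves instead):

```python
import random

# Monte-Carlo evaluation: score a position by playing many uniformly random
# games to completion and recording how often the side to move wins.
# Stand-in "game": first player to reach `target` points wins; each turn
# adds a random 1-3 points.

def random_playout(my_score, opp_score, target, my_turn, rng):
    while my_score < target and opp_score < target:
        if my_turn:
            my_score += rng.randint(1, 3)
        else:
            opp_score += rng.randint(1, 3)
        my_turn = not my_turn
    return my_score >= target

def evaluate(my_score, opp_score, target=20, playouts=5000, seed=0):
    """Estimated win probability for the player about to move."""
    rng = random.Random(seed)
    wins = sum(random_playout(my_score, opp_score, target, True, rng)
               for _ in range(playouts))
    return wins / playouts

# A position we've already all but won evaluates to 1.0; a lost one to 0.0.
print(evaluate(18, 5), evaluate(5, 18))   # → 1.0 0.0
```

    Note how no heuristic ever judges a *partial* position; only finished games are scored, which is exactly the problem this approach sidesteps.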
  • "IEEE Spectrum looks at current trends in artificial technology..."

    Is that in contrast to natural technology?
    • Is that in contrast to natural technology?

      The irony is that Go does use natural technology - stone and wood.

  • More like an RTS (Score:2, Interesting)

    by analog_line ( 465182 )
    I don't claim to be anything approaching good at Go, but I've watched a lot of games on IGS PandaNet, and I really think the game of Go has far more in common with a real-time strategy game than something like chess or checkers, because of the sheer number of paths through the game (never mind the end states) and the possibility that the board can change so drastically during the game. You never really run out of stones in Go as long as you have places to put them on the board. In chess and checkers, yo
  • Here we go again... (Score:2, Interesting)

    by tiluki ( 74844 )

    It isn't, and never will be, a question of optimisation - no matter how stochastic, fuzzy, or hardware-based. Chess was only "solved" in this way - it wasn't solved the way we actually play it.

    Go... is all about pattern matching.

    The tiger's mouth, the ladder, the bamboo joint, the turn... that's how I play.

    Just as the brain makes sense of our world. We make sense of go through the patterns we see.

    And no - I don't mean just throw an ANN at it:-) And yet, something stirs in the heart of AI... Something wit

    • Re: (Score:3, Interesting)

      by igomaniac ( 409731 )
      I'd say it's about pattern matching until you reach 1 dan; then you know almost all of the patterns and it becomes a game of high-level strategic thinking.
  • by Surt ( 22457 ) on Wednesday October 10, 2007 @04:52PM (#20932187) Homepage Journal
    The best FP devices top out under 10^15 operations/second in a computer-sized box.
    Let's assume we can design a Go-specialized device that outperforms that by another factor of 10^20 (ridiculous, but let's imagine).
    We build 10^9 such devices (a billion!), and wire them all together to play Go.
    It will still take 10^16 seconds (317,097,919 years) to figure out Go.
    So no, it won't happen that way.
    Worse, I don't think go is that simple: []
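    The parent's arithmetic can be checked directly (all numbers taken from the comment; the 10^20 speedup is the poster's deliberately absurd assumption):

```python
endings = 10**60                      # ending space claimed in the summary
per_device = 10**15 * 10**20          # 10^15 ops/s times the imagined 10^20 speedup
devices = 10**9                       # a billion devices wired together
seconds = endings // (per_device * devices)
years = seconds // (365 * 24 * 3600)  # 365-day years, matching the comment's figure
print(seconds, years)                 # → 10000000000000000 317097919
```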

  • Checkers was just "cracked" recently; it took over 10 years, I think. I would think only games similar in magnitude would be worth computing / possible to store.

    You'd have to find a new universe to do something similar with Chess.
    Just to physically store everything would be impossible. There are more possibilities in chess than there are atoms in the known universe.

    And if Go has more possibilities than chess, go find another universe full of universes... and find a way to store things on a single atom.
  • A good interview [] with Will Wright that talks a little bit about Go.

    Go, at its heart, is a game of cellular automata. From simple local rules, incredible complexity emerges. To date there have been a few academic papers that deal with creating cellular-automata rules for playing, but with little success: very strong local play, but lousy global play.

    I think the trick to a successful Go playing program is to harness the local cellular automata knowledge in a higher order algorithm. Perhaps to use the
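    One way to read "local cellular automata knowledge" is the influence map many Go programs compute: each stone repeatedly diffuses a fraction of its value to neighbouring points, and the sign of the resulting field suggests who controls each region. A hedged sketch (board size, decay factor, and iteration count are arbitrary illustrative choices, not any published CA rule set):

```python
# Cellular-automaton-style "influence" pass: black stones radiate +1,
# white stones -1, and each iteration every point leaks a fraction of its
# value to its four neighbours.

SIZE = 5
EMPTY, BLACK, WHITE = 0, 1, -1

def influence(board, iterations=4, decay=0.5):
    """Return a field of diffused stone influence for a SIZE x SIZE board."""
    field = [[float(board[y][x]) for x in range(SIZE)] for y in range(SIZE)]
    for _ in range(iterations):
        nxt = [row[:] for row in field]
        for y in range(SIZE):
            for x in range(SIZE):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx_ = y + dy, x + dx
                    if 0 <= ny < SIZE and 0 <= nx_ < SIZE:
                        nxt[ny][nx_] += decay * field[y][x]
        field = nxt
    return field

board = [[EMPTY] * SIZE for _ in range(SIZE)]
board[2][1] = BLACK
board[2][3] = WHITE
f = influence(board)
# Near the black stone the field is positive, near the white stone negative,
# and the midpoint between them cancels to zero by symmetry.
print(round(f[2][0], 2), round(f[2][2], 2), round(f[2][4], 2))
```

    The "higher order algorithm" the parent wants would then read strategy off this field, which is exactly the part nobody has cracked.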
  • As a go developer (Score:3, Interesting)

    by jshriverWVU ( 810740 ) on Wednesday October 10, 2007 @04:59PM (#20932307)
    It's definitely an interesting problem. I spent years researching Chess engines eventually ditching it for Go. If you want to really be on the bleeding edge check out the computer-go mailing list. We're all there. []
  • I'm wondering if for this one an evolving algorithm will end up figuring it out, with the original human programmers not understanding how the final product works. If the mechanics of play can be programmed and the mechanics of deciding who won can be programmed, then let the algorithms mutate for a hundred/thousand generations and see what progress has been made. Can the final product play better than the original? If so, keep going; if not, tweak the mutations and try again. Some of the creative ways th
  • by igomaniac ( 409731 ) on Wednesday October 10, 2007 @05:03PM (#20932381)
    I'm a reasonably strong Go player myself (4 dan) and I've studied computational complexity. My gut feeling is that computers will not beat humans at Go in my lifetime, even if Moore's law continues to apply. The only possibility is that someone comes up with a completely new approach, a bit like when alpha-beta search was invented for chess. At the moment the most interesting approach is Monte-Carlo methods applied to Go, which have so far led to a gain of at least two stones in strength for the current programs. The strongest Monte-Carlo based program is currently MoGo by Sylvain Gelly et al., which you can find more information about at [].
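    The Monte-Carlo programs mentioned above couple random playouts with bandit-style move selection (UCT, built on UCB1). A hedged sketch of just the selection rule, with made-up Bernoulli "playouts" standing in for real Go simulations (the win rates, constant c, and trial count are illustrative assumptions):

```python
import math
import random

# UCB1 move selection: balance moves that have won often (exploitation)
# against rarely tried ones (exploration).

def ucb1_choose(wins, visits, total, c=1.4):
    """Pick the move maximizing mean win rate plus an exploration bonus."""
    best, best_score = None, -1.0
    for move in range(len(visits)):
        if visits[move] == 0:
            return move                      # try every move at least once
        score = (wins[move] / visits[move]
                 + c * math.sqrt(math.log(total) / visits[move]))
        if score > best_score:
            best, best_score = move, score
    return best

rng = random.Random(42)
true_winrates = [0.3, 0.5, 0.7]              # hidden quality of three candidate moves
wins, visits = [0, 0, 0], [0, 0, 0]
for total in range(1, 2001):
    m = ucb1_choose(wins, visits, total)
    visits[m] += 1
    wins[m] += rng.random() < true_winrates[m]

# Playout effort concentrates on the genuinely strongest move.
print(visits)
```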
  • For this type of thing, I'd probably at least look at using a software-configurable processor like Stretch's lineup. It's much, much easier to reconfigure than an FPGA (and obviously, a custom fab) would be. Once everything was locked in and if speed proved to be an issue, then I'd start moving to firm/hardware.
  • by absynthe49 ( 1163563 ) on Wednesday October 10, 2007 @05:20PM (#20932613)
    I have played a lot of go and am a programmer. This will not be solved anytime soon unless there is a very big breakthrough in AI.

    Go is challenging to program for many reasons:

    1) There is a ridiculous number of moves. As mentioned in the article, there are on the order of 361 factorial move sequences as the game progresses. Many of these would never be played (a player would not move on the outermost edge of the board in the early stages). Some extra moves can happen as you kill groups and play new stones into their positions. And of course, both players pass well before the game is over, so the board will never be filled up. So there are some limits on this huge number, but whatever it turns out to be, it is vastly bigger than the number of possible chess games.

    2) The rule set is much less restricted. In chess you are very restricted: each of the 6 types of pieces has very specific rules, and all except the knight are blocked by other pieces. In Go you can play almost anywhere you want (it may not be the best idea, but there are no rules against moving to an empty spot). I say almost because you cannot kill your own stone by placing it on a spot where it would immediately die, and such spots are few compared to the number of spots you can play.

    3) It is extremely difficult to decide the current state of the game with regard to one position being better than another. In the article he mentioned being able to shorten the move search tree by neglecting paths that led to worse positions. The earlier parts of the game are extremely difficult to judge; in fact, there is probably no correct answer. This is probably one of the biggest issues with getting a computer to play effectively. There are probably spots that the computer, with proper algorithms, could deem not so good, but what about the 500,000 other positions that may turn out good or may turn out bad? The computer faces a difficult choice when it has so many positions that it cannot grade as good or bad.

    4) The delicate balance between whole board and small area (strategy vs. tactics). He mentions in the article that the computer could focus on a small subsection of the board at a time. This is fine and dandy when the computer can magically know which of the small subsections is vital. As a 19x19 game progresses (especially early in the game) there are, for example, 20 zones of the board that could become hot very fast. If one player takes the initiative in a section, they may play only 3-8 moves there and then jump to a completely different section of the board. These two positions will (potentially) collide, and the way the player built the two sections and the way they collide is very important. This greatly hinders a computer that simply focuses on a subset of the board; that would be to focus on tactics alone, while strategy and tactics are constantly balancing back and forth in the human player's mind.

    5) The human player makes new algorithms as they learn more about the game. One of the trickiest things about Go is that there is no set of rules that defines a good move. There are "proverbs", which are sometimes philosophical about the game, and sometimes even contradictory. I doubt very much that people will find the golden set of algorithms that describes good play, at least in a way that a modern computer can interpret.

    In the end, I think one day we will have a computer that is better than people at Go. The only thing is, I am very sure it will be a very long time from now, or after a breakthrough happens in AI. Anyone who has played at least 20 games on a 19x19 board will have seen the amazing explosion of possible moves. They will have felt how sometimes there are 10 hotspots where you wish you could play, but you have only one move. They will have felt what it is like to be focused on one area of the board and suddenly, due to their opponent's craftiness, jump to a whole different area which suddenly becomes vital. A single move can make a relatively safe area dreadfully
    • Speaking of how hard it is to even decide if a board position is good or bad, there are three more things I can add:

      1. One of the key concepts of Go is "influence". Stones literally radiate influence. Different groupings of stones, in different regions of the board, and in opposition to other groupings, radiate differing amounts of influence. And in a Heisenberg-ish kind of way (at least during the early stages of a game) pushing too strongly for influence will create less space and pushing too strongly for
  • I wrote Honinbo Warrior in UCSD Pascal and sold it for the Apple II. Sadly, I don't have the code anymore except for a printed listing. Anyway, it played a poor game, slowly.

    I wrote a white paper a few years ago on how I would do 'real AI' and Go (PDF file: []). Really not much content; rather, I was just laying out how I would go about it if someone funded me to work on AI Go for the rest of my life :-)

Houston, Tranquillity Base here. The Eagle has landed. -- Neil Armstrong