It is instructive (and important for understanding the significance of AlphaGo in AI research overall) to know the differences between the nature of chess and go that make playing go well a fundamentally different challenge. The most important differences are:
- It is very difficult (still impossible, and likely to remain so) to hand-craft a set of rules to evaluate whether a particular go board position is good. In chess, simply counting up the value of the pieces on the board (a queen as 9, a rook as 5, and so on) gives a good rough estimate, which can be refined by recognising other factors such as passed pawns, king safety, and inactive versus active pieces; a minimal evaluator of this kind is sketched after this list. In go, each stone has equal value (simplistically speaking), and moving a single stone can often completely change who is winning, via effects that only materialise many moves later.
- The branching factor in go is far greater than in chess, even before the challenge of judging whether a position is good. This means that merely examining all possible positions a few moves ahead becomes infeasible; the arithmetic after this list makes the gap concrete. In chess, the relative ease of writing an evaluation function (cited above) allows obviously hopeless lines to be pruned, so deep search can select the best line very accurately; in go, the lack of such a function forced earlier programs to fall back on Monte Carlo techniques.
- Chess programs can have an opening book that records known good early moves (the same is true in go, to a lesser extent); a minimal book-lookup sketch also appears below. After that, however, a major difference appears. In chess the position simplifies as pieces are captured. Indeed, once down to about seven pieces in total, a chess program can use an endgame database to play perfectly without any further calculation. Go, in contrast, is an additive game: the position typically keeps increasing in complexity for at least the first 80 moves by each player.
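To make the first difference concrete, here is a minimal sketch of the kind of hand-crafted material count that works in chess but has no go equivalent. The board representation and function name are illustrative assumptions, not taken from any particular engine.

```python
# A minimal material-count evaluator for chess, as described above.
# Piece values follow the conventional rough scale (queen 9, rook 5, ...);
# real engines refine this with terms for passed pawns, king safety, etc.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(pieces):
    """Rough evaluation from White's point of view.

    `pieces` is assumed to be an iterable of piece letters, uppercase
    for White and lowercase for Black (e.g. ["Q", "P", "r", "n"]).
    A positive score means White is ahead in material.
    """
    score = 0
    for piece in pieces:
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# White queen + pawn vs Black rook + knight: 9 + 1 - (5 + 3) = +2
print(material_score(["Q", "P", "r", "n"]))  # 2
```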
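The branching-factor gap in the second point is easy to quantify. Using the commonly cited averages of roughly 35 legal moves per position in chess and 250 in go (approximate figures, not from the text above), the game trees diverge by orders of magnitude within a few moves:

```python
# Rough game-tree sizes a few plies (half-moves) ahead, using commonly
# cited average branching factors: ~35 for chess, ~250 for go.
CHESS_BRANCHING, GO_BRANCHING = 35, 250

for plies in (2, 4, 6):
    chess_nodes = CHESS_BRANCHING ** plies
    go_nodes = GO_BRANCHING ** plies
    print(f"{plies} plies: chess ~{chess_nodes:.1e}, go ~{go_nodes:.1e}, "
          f"go/chess ratio ~{go_nodes / chess_nodes:.0e}")

# At 6 plies: chess ~1.8e+09 nodes, go ~2.4e+14 -- about five orders
# of magnitude more, before even considering how to evaluate a leaf.
```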
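Finally, the opening book in the third point is, at its simplest, a lookup table from positions to known good replies, with search used only once the table runs out. The sketch below is a hypothetical minimal version; the position keys and moves are illustrative.

```python
# A minimal opening-book sketch: map a position key to a known good move,
# falling back to search once the book runs out. The keys are simplified
# FEN-like strings and the moves are illustrative, not verified theory.

OPENING_BOOK = {
    # starting position -> a common first move
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w": "e2e4",
    # after 1.e4 -> a common reply
    "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b": "c7c5",
}

def choose_move(position_key, search_fn):
    """Return the book move if the position is known, else search for one."""
    move = OPENING_BOOK.get(position_key)
    if move is not None:
        return move                 # instant reply, no calculation needed
    return search_fn(position_key)  # out of book: compute a move the hard way
```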
A chess grandmaster can, indeed, explain why a particular move is good, usually by demonstration. Even where the benefits cannot be shown directly, there is established theory, known to be sound, to justify it. Crucially, a grandmaster cannot improve his knowledge of chess by examining the moves of a chess program whose superiority rests only on greater calculation and storage capacity.
Top go professionals mostly cannot explain in a clearly irrefutable way why certain moves are good. Often they can only say that they instinctively feel a move is right. There is a 3,000-year-old repository of theory (which has been upended twice before in history, first via innovations about 300 years ago, and again around 70 years ago), but this received wisdom is not known to be totally correct. In fact, the evidence from AlphaGo's play is that much of the existing theory is wrong. The top go professionals find this extremely exciting: as they begin to understand the logic behind AlphaGo's new moves, their play is already changing to incorporate the new knowledge it is allowing them to learn.
There were reasons why AI and go experts believed it would be another 20 years before a go program could best the top professionals. The AI techniques that made it possible are immensely exciting because they are mostly not go-specific and are directly applicable to work on artificial general intelligence.