You have no idea what this machine has just done. It has leapt forward some 10-20 years in computer Go-playing capability in one fell swoop. The numbers involved in Go are so huge that brute-force search, even to a limited depth, is simply impossible within the time limits of a match.
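To get a feel for the scale, here's a rough back-of-envelope comparison in Python. The branching factors and game lengths below are commonly cited ballpark figures, not exact values:

```python
# Rough game-tree size comparison: chess vs. Go.
# Branching factors and game lengths are ballpark averages, not exact values.
chess_branching, chess_moves = 35, 80     # ~35 legal moves per turn, ~80-ply games
go_branching, go_moves = 250, 150         # ~250 legal moves per turn, ~150-ply games

chess_tree = chess_branching ** chess_moves
go_tree = go_branching ** go_moves

print(f"Chess: ~10^{len(str(chess_tree)) - 1} positions in the full game tree")
print(f"Go:    ~10^{len(str(go_tree)) - 1} positions in the full game tree")
# Prints roughly 10^123 for chess and 10^359 for Go --
# both far beyond brute force, Go absurdly so.
```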
And it isn't being given programmed hints, because Go is just too complex a game for that beyond amateur play. There's a handful of hard-and-fast rules about what is and isn't a stupid move; everything else interacts SO MUCH with the rest of the board and with future plays that it's almost impossible even to tell who's winning most of the time!
As such, this system, no matter the power behind it, is doing something that dumb, brute-force, play-the-game AI written by world experts in Go, AI, and game theory wasn't expected to achieve within the next decade. And it gets there primarily because it learns from information fed to it.
For those who are more involved in AI research, it is not so surprising. Similar general approaches to learning have been used in the "cognitive" branch of AI research for the last 15 years or so; the buzzword has simply changed from "cognitive" to "deep learning" recently.
The key to AlphaGo's success is the position-evaluation function, which is learned from data. The surprise here is that learning from the game outcomes of internet Go players, plus somewhat informed computer-vs-computer games, is enough to train an evaluation function with the predictive power to beat the world champion. In the old days of AI, an expert-designed heuristic function would have been used instead, with a kind of smart position-tree search doing the heavy lifting. But that obviously didn't work for Go, due to the combinatorial explosion and the very difficult evaluation of positions in the opening and middle stages of the game.
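To make that concrete, here is a minimal sketch of what "an evaluation function learned from data" can look like, written in PyTorch. This is not AlphaGo's actual architecture (the real system also used policy networks and Monte Carlo tree search); the input planes, layer sizes, and training loop below are purely illustrative assumptions:

```python
import torch
import torch.nn as nn

# Minimal sketch of a learned position-evaluation ("value") function.
# NOT AlphaGo's architecture; feature planes, layer sizes, and the
# training setup are illustrative assumptions only.

class ValueNet(nn.Module):
    def __init__(self, planes: int = 3):  # e.g. own stones / opponent stones / empty
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(planes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 19 * 19, 128), nn.ReLU(),
            nn.Linear(128, 1),  # scalar: estimated chance the side to move wins
        )

    def forward(self, board: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.body(board)))

# Each training example is (board features, final game outcome 0/1), so the
# network learns to predict "who wins from here" directly from finished games,
# replacing the hand-written heuristic of classical game-tree search.
def train_step(net, optimizer, boards, outcomes):
    loss = nn.functional.binary_cross_entropy(net(boards).squeeze(1), outcomes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of the sketch is just that the "who's winning?" judgment, which is nearly impossible to hand-code for Go, gets estimated by a function fitted to a large pile of finished games rather than written down by an expert.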