As most websites are no longer self-contained, but depend on numerous other sites for data, content, analytics, and JS libraries, China's gated internet will become more isolated from the rest of the world.
Perhaps Hong Kong will face similar issues with regard to net access and online freedom in the near future? There has been talk about that recently.
Maybe web developers will need to write a "China mode" for front-end sites, in addition to "Desktop" and "Mobile" modes, that uses only an old-school 1990s-style HTML look and feel. Bring back the frames!
In addition to very strict gun laws (pretty much the only people with hunting licenses got them more than 50 years ago), there are other laws that are much stricter than in other countries.
For example, if a gaijin resident is caught with even a small amount of marijuana, it means jail time or deportation. Drinking and driving, even after one beer, will cause someone to lose their job in a country that prides itself on lifelong employment.
The methodology DeepMind used for training the game player is based on a classical reinforcement learning algorithm called Q-learning (http://en.wikipedia.org/wiki/Q-learning), developed in the late 1980s. This approach of selecting the action in the current state that maximizes expected future rewards has some parallels with studies of how the basal ganglia region of our brain conducts reward learning.
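For the curious, the core of tabular Q-learning fits in a few lines. This is a minimal sketch with made-up toy sizes and hyperparameters (the state/action counts, alpha, gamma, and epsilon here are illustrative, not DeepMind's):

```python
import random

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# The classic look-up table: one Q value per (state, action) pair.
Q = [[0.0] * n_actions for _ in range(n_states)]

def update(s, a, r, s_next):
    """One Q-learning step: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])

def choose_action(s):
    """Epsilon-greedy: mostly exploit the table, occasionally explore."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[s][a])
```

Run `update` on each (state, action, reward, next state) transition the agent experiences and the table converges toward the expected future rewards the comment above describes.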
What has been done is to approximate the action-value function Q (which originally used a look-up table) with a more general function, in order to handle problems with much larger (or infinite) numbers of states. The approach here was to use a function that can fit large amounts of data, in this case a multi-layered neural network (with convnet layers to preprocess the raw image input and extract features), to attempt to learn the game.
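To make the "replace the table with a function" step concrete, here is a sketch where a simple linear model over state features stands in for the deep convnet DeepMind used. The feature size, hyperparameters, and function names are all illustrative; the point is that the same Q-learning target now drives a gradient step on weights instead of a table write:

```python
n_features, n_actions = 4, 2
alpha, gamma = 0.01, 0.9

# One weight vector per action: Q(s, a) ≈ w[a] . features(s)
w = [[0.0] * n_features for _ in range(n_actions)]

def q_value(features, a):
    return sum(wi * fi for wi, fi in zip(w[a], features))

def update(features, a, r, next_features):
    """Semi-gradient Q-learning: same target as the tabular rule,
    but the TD error updates the approximator's weights."""
    target = r + gamma * max(q_value(next_features, b) for b in range(n_actions))
    td_error = target - q_value(features, a)
    for i in range(n_features):
        w[a][i] += alpha * td_error * features[i]
```

Swap the linear model for a deep network (with convolutional layers producing the features from raw pixels) and you are in the neighborhood of what the DeepMind paper does.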
This was actually done a while ago by Tesauro (now at IBM Research), who used the same approach to create a Q-learning agent that played backgammon at an advanced level.
The reason this is new is that in recent years we can employ cheap GPUs to learn far more quickly than on conventional CPUs, and can construct much larger and deeper networks to learn from more complicated systems. Many new 'tricks' have also been developed to optimize learning (sigmoid activations replaced by the simpler rectified linear function, dropout, etc.), so we are going to see better and more amazing uses for this relatively old technology.
"When the going gets tough, the tough get empirical." -- Jon Carroll