It's not even a good example of image recognition, because the images to be processed don't have to be "understood" to be used. On top of that, the graphics of the games in question were very simple and primitive compared to what image recognition software deals with.
Add to that the repetitive nature of old video games, which were 99% reaction time and 1% strategy, and you can just flat-out colour me "unimpressed" with this "research".
Back in university, my AI project was a game player (a simple strategy game whose name I forget). As it turned out, the entire game mapped down to a pre-determined set of decisions, so after playing only a dozen games, the "AI" would win every time, and that was just with a simple weighted-algorithm system of play. All I really learned from that project was that some problems are eminently suited to "AI", but it was a useful lesson on the difference between optimizing a decision tree and actual "intelligence".
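Incidentally, the "simple weighted-algorithm system of play" I mean is nothing more exotic than a linear score over a few hand-picked features of each legal move, with the weights nudged after every game. A minimal sketch in Python, assuming made-up feature names (captures, centre control, exposure), since I no longer remember the real ones:

```python
# A toy weighted-evaluation game player. The feature names are
# illustrative assumptions, not the original project's features.
FEATURES = [
    lambda state, move: move["captures"],        # material taken this move
    lambda state, move: move["center_control"],  # positional gain
    lambda state, move: -move["exposure"],       # risk of a reprisal
]

def evaluate(weights, state, move):
    """Weighted sum of feature values -- the entire 'AI'."""
    return sum(w * f(state, move) for w, f in zip(weights, FEATURES))

def choose_move(weights, state, legal_moves):
    """Greedy play: pick whichever legal move scores highest."""
    return max(legal_moves, key=lambda m: evaluate(weights, state, m))

def adjust(weights, avg_features, won, rate=0.1):
    """After a game, nudge each weight toward features that co-occurred
    with a win and away from those that co-occurred with a loss."""
    sign = 1 if won else -1
    return [w + sign * rate * f for w, f in zip(weights, avg_features)]

# Usage: score two candidate moves with uniform starting weights.
weights = [1.0, 1.0, 1.0]
moves = [
    {"captures": 2, "center_control": 1, "exposure": 3},
    {"captures": 0, "center_control": 4, "exposure": 1},
]
print(choose_move(weights, {}, moves))  # -> the second move (score 3 vs 0)
```

Once the weights stop changing, the player is just a frozen lookup over the decision tree, which is exactly why it felt so little like "intelligence".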
Until someone comes up with a system that can deal with bad and erroneous inputs as well as humans can, I will continue to be unimpressed. Yet at the same time, I don't consider it necessary for a computer to be able to think and understand per se to be considered an "intelligence." It just needs to be able to make decisions and choose between alternatives faster than its human counterparts, and with fewer errors, in order to be useful.
I have little faith in "neural networks." They place too much emphasis on emulating simple biological components and not enough on the "art" of understanding. Neural networks basically take the approach that "if it's big enough, we'll maybe get lucky and it will start to think." That's not "solving a problem." That's "playing the lottery."