The point I was making was that game devs of the time weren't even trying to build an intelligent, learning system that would adapt to player behavior or environmental changes; they simply took the lazy/easy path of peeking at player input and using that asymmetric information to appear smarter than they actually were.
In other words, once you slightly change the rules about how the AI is supposed to work, the problem turns out to be so easy that the developer doesn't need to bother with any formal AI approaches.
It's also worth noting that the developer actually solved the problem. Excessive problem description and feature creep is a notorious killer of academic projects, and not just in the AI world. The business world occasionally falls prey to it as well, but as we see here, not always.
I am a little confused, though, on how either of these points leads you to the conclusion that 'Academic Techniques' aren't adequate for real-world problems. Some of the best and most exciting work being done in the 'real world' by big companies is built solidly on academic techniques. Go read about Google's machine translation work, for example. It is built on a neural net model and is making some pretty amazing progress.
First, on your machine translation example, "amazing progress" compared to what? Both neural nets and machine translation have been around for decades. The "wow" factor of Google's efforts comes from the infrastructure that has been built up (being able to copy/paste something to be translated over the internet effortlessly and throw orders of magnitude more CPU cycles at it) rather than from the algorithm.
What I consider a more relevant case of doing something new with neural networks is Google's Deep Dream, where a neural network trained to find certain images (say, images of buildings) is used to iteratively perturb an image (like a mundane landscape photo) to bring out those patterns, ending up with a weird, psychedelic image with buildings crammed into every part of it.
Unfortunately, there's not a lot of academic precedent for that. The related research articles heavily emphasize classification and detection improvements, not the wow of turning a boring image into piles of buildings. Moving to games, this would be an excellent way for a neural network to create themed maps and art on the fly: train a neural net to spot the desired sort of maps or artwork, and then, starting with a sufficiently stimulating pile of mush, iteratively bring out the desired patterns in the mush.
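The iterative "bring out the patterns" step is essentially gradient ascent on the input image rather than on the network's weights. Here's a toy numpy sketch of that idea (all names here are hypothetical, and the "detector" is a single fixed filter standing in for a trained network; a real Deep Dream run backpropagates through a deep convolutional net):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained net's pattern detector (e.g. a "buildings" filter).
detector = rng.standard_normal((8, 8))

# The "pile of mush" starting image: low-amplitude noise.
image = rng.standard_normal((8, 8)) * 0.1

def response(img):
    # How strongly the detector "fires" on the image. A dot product here;
    # a real network would stack many such filters with nonlinearities.
    return float(np.sum(detector * img))

step = 0.1
before = response(image)
for _ in range(50):
    # For a dot-product response, the gradient w.r.t. the image is just
    # the detector itself, so each ascent step mixes a bit of the
    # detector's pattern into the image.
    image += step * detector

after = response(image)
print(after > before)  # the detector's pattern has been amplified
```

The same loop works for any differentiable "does this look like what I want?" score, which is why the approach seems promising for generating themed game assets: swap the detector for a net trained on the target style of map or art.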
Finally, if you hope that using my own opinions about the state of AI will somehow shore up your opinion of academic AI techniques, I will be the first to claim that I am a talented amateur at best. Build your arguments on my thoughts on the topic, and you are truly building a house on sand!
You made the claim that academics are at least a century away from building anything resembling human-level or higher AI. That says right there that you don't think they have much to say about the subject now. This brings up my second point: your beliefs are inconsistent. We don't need any validation of my beliefs when the conflicts in yours are more than ample to defeat your assertions.
The most obvious source of AI development is completely missed here. It's not academics, CEOs, or secretive government agencies. It's computers. Once the creation of human-level or better AI has been fully automated, it's not going to need a century to get there. It might not even need a day.
Bootstrapping more sophisticated algorithms from existing ones that have sufficient power to improve themselves is the great missing step here, I think. And modern AI research simply isn't going that way at present. I think at some point that will change, and then we'll have more relevant concerns than how many more centuries we'll wait for humanity to do this thing.