That echoes my point, somewhat. It is pretty easy to design an AI for a lot of video games that can beat a human (without cheating).
Yes, albeit, for a slightly strained definition of "without cheating" :)
Whether you use the moniker "constraints" or call it "dumbing down",
I think the distinction is important. Dumbing it down would be deliberately sabotaging its ability to make good decisions. The constraints certainly have the same effect, but we aren't sabotaging the decision making itself; we're merely restricting the information it has to work with down to human levels.
No perfect clock. No perfect positioning. etc.
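One way to picture "constraining to human levels" rather than dumbing down: degrade what the bot *perceives*, not how it *decides*. A minimal sketch (the helper name and noise figures are my own invention, not from any actual game):

```python
import random

def humanize_observation(true_pos, true_time, pos_error=0.5, clock_jitter=0.05):
    """Hand the bot a noisy view of the world instead of perfect state.

    Hypothetical helper: rather than exact coordinates and a perfect clock,
    the bot gets Gaussian-blurred values, so like a human it must constantly
    correct from feedback instead of dead-reckoning.
    """
    noisy_pos = tuple(c + random.gauss(0, pos_error) for c in true_pos)
    noisy_time = true_time + random.gauss(0, clock_jitter)
    return noisy_pos, noisy_time

noisy_pos, noisy_time = humanize_observation((10.0, 4.0, 0.0), 62.0)
```

The decision-making code is untouched; only its inputs are blurred, which is the distinction being drawn above.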
It is odd that the hard part about making a game AI would be making an AI that isn't too competitive, but that's where we are.
Not really. It's just that we've devised a game that's difficult and interesting for humans, but very easy and trivial for the bot, especially given that by default it has world state information we just don't have. (It knows exactly what time it is and exactly what its x/y/z position is.)
It can execute a script like:
advance 4 units at 1 unit per second,
turn 28.5 degrees at 5 degrees per second
retreat 12 units at 11 units per second
turn 19 degrees at 11.4 degrees per second
advance 72 units at 15 units per second
Such a script can be arbitrarily long, is trivial for the bot to execute, requires no sensory input or feedback, and can even be replayed backwards.
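The command list above is pure open-loop dead reckoning, and the "replayed backwards" part really does fall out for free. A toy sketch (all names are mine; rates are dropped since only the integrated result matters here):

```python
import math

# Each step is (command, amount): distances in units, turns in degrees.
script = [
    ("advance", 4.0),
    ("turn", 28.5),
    ("retreat", 12.0),
    ("turn", 19.0),
    ("advance", 72.0),
]

def run(script, x=0.0, y=0.0, heading=0.0):
    """Integrate the commands blindly -- no sensing, no correction."""
    for op, amount in script:
        if op == "turn":
            heading += amount
        else:
            d = amount if op == "advance" else -amount
            x += d * math.cos(math.radians(heading))
            y += d * math.sin(math.radians(heading))
    return x, y, heading

def reversed_script(script):
    """Invert each command and reverse the order to retrace the path."""
    inverse = {"advance": "retreat", "retreat": "advance", "turn": "turn"}
    return [(inverse[op], -amount if op == "turn" else amount)
            for op, amount in reversed(script)]

end = run(script)
home = run(reversed_script(script), *end)  # back at the origin, exactly
```

No human can do this, because we have no equivalent of `run`: we don't integrate our own motion precisely enough to skip the feedback loop.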
I can't do that. No human can. We aren't that precise. We maintain position and navigate only approximately, with constant correction from sensory input. And if I'm playing a game where perfect navigation is demonstrably a very valuable skill, then I can't compete with a bot that is allowed to do that. (And my performance against bots drops considerably on open or lava-catwalk Quake 3 maps vs closed tunnel maps, due to their inhuman ability to navigate exactly where they wish to go, whether they are looking where they are going or not...)
That's not "advanced AI"; that's "giving it rudimentary AI paired with superhuman ability". That's not much fun.
But here's the philosophical question: does the motivation behind someone's actions really matter, or is what they actually do the only thing that actually counts?
No, it doesn't matter, if they do the same thing. But they don't do the same thing. The current process of tweaking the AI's superhuman abilities and tossing a wrench into its decision making, so it's not always making the optimal choice, to make it appear more human-like, is not the same as the AI actually being human-like in response to human constraints, and it does not result in the same behavior.
Instead of being truly human-like, they end up acting like superhumans with narcolepsy: brilliantly efficient, but occasionally they just fall asleep. Worse, they can usually be relied upon to fall asleep at the wheel in a given set of circumstances that can be manufactured by the opponent (aka exploited).
For example, in an RTS with harvesters, a human player might trap a bunch of the AI's harvesters -- a human opponent would not be fooled once he saw what happened; he'd build more, and suicide/free/destroy the trapped units. But the AI? The average AI will just starve, because it hasn't been programmed to recognize that scenario or how to respond when it happens. It has enough harvesters, so it won't build more... it has sent them orders to harvest... and they aren't under attack... so mission accomplished. Resources will arrive soon... any minute now... still waiting... oh, my base is under attack... hope those resources show up soon... man, where are my resources... shit, I lost.
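The brittleness described here is easy to make concrete. A hypothetical sketch (not any real game's code) of the naive check that starves, next to a slightly more human-like one that notices resources have stopped arriving:

```python
def naive_should_build_harvester(harvesters, target=6):
    # The brittle check: trapped harvesters still "exist" and still have
    # harvest orders, so the count looks fine and the AI never rebuilds.
    alive = [h for h in harvesters if h["alive"]]
    return len(alive) < target

def robust_should_build_harvester(harvesters, income_rate, target=6,
                                  min_income=1.0):
    # More human-like: if resources stop arriving, something is wrong,
    # no matter how many harvesters are nominally on the job.
    alive = [h for h in harvesters if h["alive"]]
    return len(alive) < target or income_rate < min_income

# Six harvesters, all alive and ordered to harvest -- but trapped, so
# income is zero.
trapped = [{"alive": True, "order": "harvest"} for _ in range(6)]
naive_should_build_harvester(trapped)                    # never rebuilds
robust_should_build_harvester(trapped, income_rate=0.0)  # rebuilds
```

The point isn't that the fix is hard; it's that someone has to have anticipated the scenario, and the exploit lives in every scenario nobody anticipated.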
All manner of pathfinding exploits are common in RTS games: getting the AI's units to stumble over themselves and get in their own way, funnelling them into kill zones, etc.