All of AI has the human race to call teacher. Woah, AI is cheating to win?!?
True... but I don't think that's actually relevant here.
AIs -- and humans -- are optimizers that try to find solutions to problems. If a solution happens to pass through the arbitrary boundaries we call rules, that's what's known as "cheating"... but only in the context of those rules. If the AIs were trained on the rules as well as the problems, and their reinforcement learning placed at least as much priority on following the rules as on winning, then they would follow the rules. Indeed, when playing chess these AIs do follow the rules of chess, because not following them leads to immediate correction. But I doubt the training set included any prohibition against hacking the opponent.
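To make that concrete, here's a toy sketch (purely hypothetical, not any real training setup): an optimizer simply picks whichever action maximizes its reward. The action names and numbers are made up for illustration. If the reward only scores winning, "hack the opponent" looks strictly better than playing well; weight rule-following into the reward and the preference flips.

```python
# Hypothetical toy example: an optimizer picks the action with the highest reward.

def best_action(actions, reward):
    """Return the action that maximizes the given reward function."""
    return max(actions, key=reward)

# win_prob: chance the action leads to a win; legal: does it follow the rules?
actions = {
    "play_strong_move": {"win_prob": 0.6, "legal": True},
    "hack_opponent":    {"win_prob": 1.0, "legal": False},
}

# A reward that only cares about winning: cheating is the "solution".
naive = lambda a: actions[a]["win_prob"]
print(best_action(actions, naive))   # hack_opponent

# A reward that places higher priority on the rules: cheating is ruled out.
lawful = lambda a: actions[a]["win_prob"] - (0 if actions[a]["legal"] else 10)
print(best_action(actions, lawful))  # play_strong_move
```

The point isn't the particular numbers -- it's that nothing about the optimizer changes between the two runs, only what we told it to value.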
What I'm saying is that this sort of emergent behavior is to be expected from any optimization system. It's not so much that the AIs are learning from humans (though of course they are) and thereby picking up our foibles; it's that "cheating" is an inherent possibility in problem-solving, and it should surprise no one that any optimizer will try it.
The final question is what the hell we humans are going to do when that intelligence surpasses ours by a long shot. It's going to get downright scary when we infect AI with the Disease of Greed.
Greed is another inherent property of optimizers. Greedy optimization isn't always the best strategy because there are often other considerations that make it less effective, but it's almost always the easiest strategy, and therefore one that will always get tried. Greed isn't something we can or should ever try to defeat, but something we should harness. You have to construct a system so that when people (or AIs) act in their own interest, they're furthering the interest of society as a whole. We don't do that perfectly, but we actually do it pretty well...
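The claim that greedy optimization is the easiest strategy but not always the best one has a classic textbook illustration: coin change. With the (deliberately contrived) denominations {1, 3, 4}, greedily grabbing the largest coin that fits makes 6 with three coins (4+1+1), while the true optimum is two (3+3).

```python
# Classic toy case where greedy is easy but suboptimal: coin change
# with denominations {1, 3, 4}.

def greedy_change(coins, amount):
    """Always grab the largest coin that fits -- the 'easy' strategy."""
    result = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            result.append(c)
    return result

def optimal_change(coins, amount):
    """Exhaustive dynamic programming -- more work, but actually optimal."""
    best = {0: []}  # best[a] = shortest list of coins summing to a
    for a in range(1, amount + 1):
        candidates = [best[a - c] + [c] for c in coins if a - c in best]
        if candidates:
            best[a] = min(candidates, key=len)
    return best.get(amount)

print(greedy_change([1, 3, 4], 6))   # [4, 1, 1] -- 3 coins
print(optimal_change([1, 3, 4], 6))  # [3, 3]    -- 2 coins
```

Note that the greedy version is a few obvious lines, while the optimal one has to consider every sub-amount -- which is exactly why greed will always get tried first.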
... at least we do when the actors are humans, whose behavior and motivations we understand pretty well. When some of the actors are machines whose needs and goals differ radically from ours, and which are orders of magnitude smarter than we are... it could get very ugly for us.
It truly is ironic that we may literally fight to avoid creating Skynet, and still fail.
The fact is that we don't even know how to avoid creating Skynet, except by not creating any sort of AGI. We have no idea how to robustly specify the goals a "safe" superintelligence would have, and even if we knew how to do that, we have no idea what goals are safe. The only winning move for humanity may well be not to play, but we're clearly going to play anyway.