No, it's not.
Poker is an easy game to describe. You can get perfect statistics for chances and what remains in the deck (if applicable).
What you couldn't do up until now was the BETTING in poker. When you have $64k of chips in front of you, the optimal amount to bet is not obvious or easily found by brute force. The sheer size of the potential "game-tree" created by that betting was the main obstacle.
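To make the blow-up concrete, here is a toy sketch (not any real poker solver — the function and its parameters are made up purely for illustration) that counts the distinct betting sequences in a single round, assuming each player may fold, call, or pick one of several raise sizes, with a cap on raises:

```python
# Toy illustration of why poker's betting tree explodes.
# Counts distinct betting sequences in one round when each action is
# fold, call, or one of `n_raise_sizes` raise amounts, with at most
# `raises_left` further raises allowed.

def count_sequences(raises_left: int, n_raise_sizes: int) -> int:
    # Fold and call each end the sequence (one line each);
    # every raise size re-opens the action for the opponent.
    total = 2  # fold, call
    if raises_left > 0:
        total += n_raise_sizes * count_sequences(raises_left - 1, n_raise_sizes)
    return total

# Even with only 10 allowed bet sizes and a 4-raise cap, a single round
# already has tens of thousands of distinct lines.
print(count_sequences(4, 10))  # 22222
```

Real no-limit play allows essentially any bet size over multiple streets, so the true tree is astronomically larger than this sketch suggests.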
Go was the same - the game tree is huge, and we now have heuristics that can cull it earlier and better than before, but it still can't be exhaustively searched in any reasonable time. However, we now do it well enough to beat top human players.
But NONE of this is related to image recognition or translation, which are still as flaky as they ever were. Game theory is almost entirely tree search over a limited set of options. Once we can cope with the number of options available, the game tree is parseable, you win often enough that you can't be beaten, and you can reproduce the results consistently. Game theory is a science and a branch of mathematics.
But the other "AI" stuff is still in the realm of guess-and-train: plugging heuristics and millions of examples into an algorithm that tries to categorise and find a limited set of patterns. These systems are not reliable, reproducible, or even very scientific at all, and most of the AI field is software engineering and heuristic analysis. Do not trust your car to recognise an image of a child running across the road, because it WILL NEVER see any of the training images ever again, even if you perfectly reproduce the circumstances, and so it's always guesswork.
Computers - and "AI" as the movies would have you think of it - are not good at that kind of thing. It's why CAPTCHAs exist. Yes, you can target and beat a specific CAPTCHA, but only by having humans tune heuristics or by feeding millions of example images into a simple algorithm, so it becomes unreliable again. It may be reliable "enough" to get you into a website, but you don't want to be using it for anything important whatsoever.
And translation is still just as laughable as ever. Ask any foreigner to run something through Google Translate: you still end up with completely obvious transcription and comprehension errors, and you get only literal translations or nonsense.
We're nowhere near a point that AI is a risk to humans unless - and this is important - we start thinking that the AI we have now is anything more than it is and start relying on it. Self-driving cars are a prime example.
We do NOT have AI. We have heuristics (human-created and tweaked rules) plugged into statistical systems, trained on a set of data that they will never encounter in real life. Something whose categorisation is OBVIOUS to a human is in no way guaranteed to be categorised the same way by even the best-trained AI on the planet. It has literally never seen it before, and its answer is no better than a guess. And at any point while it's acting on unseen-before data (which is all the time in such systems), its actions are unpredictable and - worse - undiagnosable and unfixable. When it makes a mistake, you can't correct it, or even necessarily work out WHY it made that mistake, even with the complete source code and training data. You can "request" but not instruct that it might want to categorise such things differently next time. And it might still just not understand, no matter how many times you do that.
Think of it this way. Are you training the AI on the SHAPE of, say, a cat and an understanding of 3D space and how it transforms with movement and different viewing angles? No. You're training it on a bunch of pictures of a cat (or translated texts, or game positions or whatever) and hoping that it finds some correlation.
But you have ZERO idea what correlation it's finding. It's basically totally unanalysable in that respect. For all you know, it's adding up the number of blue-ish pixels and saying it's a cat if there aren't many. And though you might realise that and then train that trait out, it will then move on to ANY OTHER correlation it can find. The point at which it "understands" what a cat is, in terms of a 2D image of a 3D moving animal, is THOUSANDS OF YEARS later in its training. By humans. If they can tell what it's doing.
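The blue-pixel trap above can be shown in a few lines. This is a deliberately contrived sketch (the images are just fake pixel lists, and the threshold is hand-picked): a "classifier" that separates its training set perfectly using a feature that has nothing to do with cat-ness, then confidently fails on the first unseen example:

```python
# Contrived illustration of a spurious correlation: classify "cat" vs
# "dog" purely by how blue the picture is.

def blueness(image):
    # Fraction of pixels whose blue channel dominates red and green.
    return sum(1 for (r, g, b) in image if b > r and b > g) / len(image)

# Fake training set: indoor cat photos (little blue), outdoor dog
# photos (lots of sky). Each image is a list of (r, g, b) pixels.
train = [
    ([(120, 100, 80)] * 90 + [(40, 60, 200)] * 10, "cat"),
    ([(110, 90, 70)] * 95 + [(30, 50, 180)] * 5, "cat"),
    ([(60, 80, 90)] * 40 + [(50, 90, 220)] * 60, "dog"),
    ([(70, 70, 100)] * 30 + [(60, 100, 230)] * 70, "dog"),
]

THRESHOLD = 0.3  # hand-picked because it happens to split the training set

def classify(image):
    return "cat" if blueness(image) < THRESHOLD else "dog"

# 100% accuracy on the training data...
assert all(classify(img) == label for img, label in train)

# ...but a cat photographed against the sky is confidently called a dog.
outdoor_cat = [(120, 100, 80)] * 40 + [(90, 130, 240)] * 60
print(classify(outdoor_cat))  # dog
```

A real network's learned features are vastly more complex than a blue-pixel count, but the failure mode is the same in kind: perfect scores on the training distribution say nothing about what correlation is actually being used.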
Such systems are not AI. And certainly not maintainable, predictable or reliable.
But we can win at Poker and Go because we can brute-force the game tree with some clever pruning of it. It's an entirely different kind of system.
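The kind of pruning meant here can be sketched with the textbook alpha-beta cutoff on top of minimax. This is a minimal illustration, not anything like a real poker or Go engine - the "game" is just a nested list of leaf scores:

```python
# Minimax with alpha-beta pruning over a toy game tree.
# Inner lists are decision nodes; numbers are terminal scores.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: a terminal score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will never allow this line,
                break                       # so prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree, True))  # 6, found without visiting every leaf
```

The cutoff means whole subtrees are never examined at all, which is exactly what makes otherwise intractable game trees searchable "well enough" in practice.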