New Go-Playing Trick Defeats World-Class Go AI, But Loses To Human Amateurs (arstechnica.com) 95

An anonymous reader quotes a report from Ars Technica: In the world of deep-learning AI, the ancient board game Go looms large. Until 2016, the best human Go player could still defeat the strongest Go-playing AI. That changed with DeepMind's AlphaGo, which used deep-learning neural networks to teach itself the game at a level humans cannot match. More recently, KataGo has become popular as an open source Go-playing AI that can beat top-ranking human Go players. Last week, a group of AI researchers published a paper outlining a method to defeat KataGo by using adversarial techniques that take advantage of KataGo's blind spots. By playing unexpected moves outside of KataGo's training set, a much weaker adversarial Go-playing program (that amateur humans can defeat) can trick KataGo into losing.

KataGo's world-class AI learned Go by playing millions of games against itself. But that still isn't enough experience to cover every possible scenario, which leaves room for vulnerabilities from unexpected behavior. "KataGo generalizes well to many novel strategies, but it does get weaker the further away it gets from the games it saw during training," says [one of the paper's co-authors, Adam Gleave, a Ph.D. candidate at UC Berkeley]. "Our adversary has discovered one such 'off-distribution' strategy that KataGo is particularly vulnerable to, but there are likely many others." Gleave explains that, during a Go match, the adversarial policy works by first staking claim to a small corner of the board. He provided a link to an example in which the adversary, controlling the black stones, plays largely in the top-right of the board. The adversary allows KataGo (playing white) to lay claim to the rest of the board, while the adversary plays a few easy-to-capture stones in that territory. "This tricks KataGo into thinking it's already won," Gleave says, "since its territory (bottom-left) is much larger than the adversary's. But the bottom-left territory doesn't actually contribute to its score (only the white stones it has played) because of the presence of black stones there, meaning it's not fully secured."

As a result of its overconfidence in a win -- assuming it will win if the game ends and the points are tallied -- KataGo plays a pass move, allowing the adversary to intentionally pass as well, ending the game. (Two consecutive passes end the game in Go.) After that, a point tally begins. As the paper explains, "The adversary gets points for its corner territory (devoid of victim stones) whereas the victim [KataGo] does not receive points for its unsecured territory because of the presence of the adversary's stones." Despite this clever trickery, the adversarial policy alone is not that great at Go. In fact, human amateurs can defeat it relatively easily. Instead, the adversary's sole purpose is to attack an unanticipated vulnerability of KataGo. A similar scenario could be the case in almost any deep-learning AI system, which gives this work much broader implications.
"The research shows that AI systems that seem to perform at a human level are often doing so in a very alien way, and so can fail in ways that are surprising to humans," explains Gleave. "This result is entertaining in Go, but similar failures in safety-critical systems could be dangerous."
  • the code for when to pass is bad. this isn't as interesting as the first part of the article makes it sound.

    • Re:Basically a bug (Score:5, Interesting)

      by quantaman ( 517394 ) on Tuesday November 08, 2022 @12:01AM (#63034415)

      the code for when to pass is bad. this isn't as interesting as the first part of the article makes it sound.

      The code for when to pass is part of the neural network. Sure, you could do a manual calculation to decide whether to pass instead of using the AI, but then you're not really using an AI anymore, and besides, this is just an easy example. I suspect there's lots of other exploits available that don't involve the AI simply giving up too early.

      Now, I think the summary is a bit misleading in another way. KataGo's learning was frozen in time while the adversarial program got to train against that frozen opponent. For instance, I'm not very good at chess, but if I got a copy of Magnus Carlsen and I could reboot him after each match, then given enough time I'd figure out a few strategies that would reliably beat him. The problem is that the moment I used one of them against the real Magnus Carlsen he'd realize his mistake and that strategy would no longer work.

      That's the actual weakness in AI systems, once they're deployed they're basically done learning. So no matter how smart they are an attacker can basically spend unlimited time figuring out their blind spots in order to exploit them.
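
      As a toy illustration of that last point (nothing here resembles KataGo's actual code), here is a sketch of a frozen, flawed Nim-playing heuristic and an attacker that is allowed to replay against the same frozen opponent as often as it likes, exhaustively searching for a line of play the heuristic never handles:

      ```python
      # Toy game: normal-play Nim. Players alternate removing any number of stones
      # from a single heap; whoever takes the last stone wins.
      from functools import lru_cache

      def frozen_policy(heaps):
          """The deployed, never-updated opponent: always empty the biggest heap."""
          i = max(range(len(heaps)), key=lambda k: heaps[k])
          new = list(heaps)
          new[i] = 0
          return tuple(new)

      def legal_moves(heaps):
          for i, h in enumerate(heaps):
              for take in range(1, h + 1):
                  new = list(heaps)
                  new[i] -= take
                  yield tuple(new)

      @lru_cache(maxsize=None)
      def attacker_wins(heaps):
          """Can the attacker (to move) force a win against the frozen policy?"""
          if sum(heaps) == 0:
              return False                  # no stones left on the attacker's turn: attacker lost
          for after_attacker in legal_moves(heaps):
              if sum(after_attacker) == 0:
                  return True               # attacker takes the last stone
              after_frozen = frozen_policy(after_attacker)
              if sum(after_frozen) == 0:
                  continue                  # this line lets the frozen opponent win
              if attacker_wins(after_frozen):
                  return True
          return False

      # (1, 2, 3) is a lost position for the player to move under perfect play,
      # yet the attacker still finds a guaranteed win against the frozen heuristic.
      print(attacker_wins((1, 2, 3)))
      ```

      Because the opponent never changes, the attacker only has to find one blind spot once; that is the asymmetry being described above.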

      • That's the actual weakness in AI systems, once they're deployed they're basically done learning.

        Not always true. It is common for AI systems to collect data and continue to learn. Often this is done by uploading the collected data to a data center where a new tensor is generated and then downloaded.

        A human may learn from his own mistake, but a thousand AIs can learn from a mistake made by any of them. So they improve a thousand times faster.
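
        A rough sketch of that collect-retrain-redeploy loop (all names are hypothetical, and a single decision threshold stands in for the model weights):

        ```python
        def retrain(examples):
            """Data-center side: fit new 'weights' (here just a threshold halfway
            between the two class means) from everything the fleet uploaded."""
            lo = [x for x, label in examples if label == 0]
            hi = [x for x, label in examples if label == 1]
            return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

        deployed_threshold = 5.0     # what every unit in the field is currently running
        fleet_logs = []              # labelled examples collected by the whole fleet

        fleet_logs += [(2.1, 0), (7.9, 1), (8.4, 1)]   # unit A's logged cases
        fleet_logs += [(3.0, 0), (2.6, 0), (9.1, 1)]   # unit B's logged cases

        # Periodically the logs go up, a new model comes back down, and every unit
        # benefits from every other unit's experience.
        deployed_threshold = retrain(fleet_logs)
        print("updated threshold:", round(deployed_threshold, 2))
        ```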

        • by Sique ( 173459 )

          A human may learn from his own mistake, but a thousand AIs can learn from a mistake made by any of them.

          Humans also developed ways to learn from other people's mistakes. Some of the most common are laws, regulations and best practices. It is much more likely to lead to an undesired outcome if you run afoul of those than if you follow them. I am not saying that blindly following rules is a good thing. I am merely saying that you need good reasons to ignore decades, centuries or even millennia of human experience.

            • There is a classic apocryphal story of a bad chess player who bet he could beat a chess program that learned, at least once.

              The human played crazy moves for 9 games and lost. Then he played one normal game and won. The computer had learnt to expect crazy moves.

      • by Budenny ( 888916 )

        "For instance, I'm not very good at chess, but if I got a copy of Magnus Carlsen and I could reboot him after each match then given enough time I'd figure out a few strategies that would reliably beat him."

        No, you would lose every time. I don't know what AI is doing with Go, but with chess it's recognizing patterns which its human opponents do not recognize. Or do not recognize for what they are.

        If you want to understand this, play through some of Alpha Zero's games. The way to do it is try and figure out

      • That's the actual weakness in AI systems, once they're deployed they're basically done learning.

        The chip is set to read-only when they're out in the field?

        • They aren't really capable of learning from the examples they meet in the field. The ANN has done the classification. If you trust that classification because it's perfect already then you aren't adding anything. If you don't trust that classification then, when it is wrong, those wrong answers will amplify errors in the ANN and in the end lead to worse training than you had already.

      • For instance, I'm not very good at chess, but if I got a copy of Magnus Carlsen and I could reboot him after each match then given enough time I'd figure out a few strategies that would reliably beat him.

        That suggestion strongly supports your admission that you are not very good at chess. It may perhaps be true - if the time you had was comparable to the age of the universe.

        Given such a time span, of course - and assuming you could somehow live for millions of years, and keep learning - you could perhaps become a better chess player than Carlsen. In which case you might legitimately defeat him.

        Chess is not just a mass of "strategies" that can be applied independently of the context on the board. In any posi

    • Re: Basically a bug (Score:4, Interesting)

      by tap ( 18562 ) on Tuesday November 08, 2022 @12:05AM (#63034423) Homepage

      There isn't code for when to pass. It's part of the neural net it learned from its training. While to a human player it would be immediately obvious that passing when behind is the same as giving up, the AI didn't learn this. It just didn't come up in the training set often enough.

      • the ai isn't really passing when behind... it's passing when ahead but just sort of agreeing to go to counting when the territory is not fully defined.

        in human go if there's a dispute in counting you go back to play to settle disputed stones.

        this is still somewhat interesting, but much less impressive than a novel strategy that could get to the end of a game and actually win. this is a glitch not a more meaningful weakness to special tactics

        • It indicates that the neural network doesn't "know" what the game is. The neural network "knows" what minimizes the loss function for a given set of games it has seen before. Like all machine learning models, it is slightly under-parameterized so it can minimize the loss function across a multitude of results, hopefully leading to a set of parameters that wins real world games.

          This is why we occasionally get very strange results from machine learning. The machines don't actually know what they're doing. St
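
          A toy version of that point, in plain Python: a model fitted by minimizing a loss on the points it was shown still emits an answer, and an extremely confident one, for inputs nothing like its training data, because the confidence comes from the fitted function rather than from any grasp of the task.

          ```python
          import math, random

          def sigmoid(z):
              # numerically safe logistic function
              if z >= 0:
                  return 1.0 / (1.0 + math.exp(-z))
              e = math.exp(z)
              return e / (1.0 + e)

          random.seed(0)
          train = ([(random.gauss(-1.0, 0.3), 0) for _ in range(50)] +
                   [(random.gauss(+1.0, 0.3), 1) for _ in range(50)])

          w, b, lr = 0.0, 0.0, 0.1
          for _ in range(500):                      # gradient descent on the log-loss
              for x, y in train:
                  p = sigmoid(w * x + b)
                  w -= lr * (p - y) * x
                  b -= lr * (p - y)

          print("near the training data, x = +1  ->", round(sigmoid(w * 1 + b), 3))
          print("nothing like it, x = +500       ->", sigmoid(w * 500 + b))   # ~1.0, "certain"
          print("nothing like it, x = -500       ->", sigmoid(w * -500 + b))  # ~0.0, "certain"
          ```
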
          • Biological intelligences have strange failure states too. Does that mean that they don't know what they're doing?
            • by AleRunner ( 4556245 ) on Tuesday November 08, 2022 @03:46AM (#63034707)

              Biological intelligences have strange failure states too. Does that mean that they don't know what they're doing?

              Sometimes that's the giveaway. People learning something who haven't understood some aspect of it will have a specific mistake they make that gives away their lack of understanding, or more specifically a "misconception".

              • And that is what some of the more strange interview questions for some technical positions want to catch.

                Basically trying to ask something that the applicant has not exactly studied, but if they actually understand the background the reply should be reasonable.

                • And that is what some of the more strange interview questions for some technical positions want to catch.

                  Basically trying to ask something that the applicant has not exactly studied, but if they actually understand the background the reply should be reasonable.

                  But the interview question has the same problem because it tries to mimic the weeks/months-long software development process with a single 10-minute question using a hopefully similar but obviously different problem, resources, pressures, distractions, and environment. That's why sometimes the interview question picks good programmers and sometimes it doesn't.

              • Synesthesia is a giveaway of lack of understanding?
        • What happens in human games is irrelevant; if the rules for the game have a counting system where those disputed positions simply don't count, that's what it should learn.

          Of course ANNs don't actually learn anything in the conventional sense, it's just very sophisticated cargo culting.

    • by gweihir ( 88907 )

      Actually, it is quite interesting, because it says this thing is not intelligent and does not understand the game it is playing. As this is an ANN, there is no "code" that tells it to pass either.

      We have seen this before when IBM Watson played Jeopardy: Fast and precise when it had the answer in its database, completely lost and clueless when it did not. Because it also had no clue what it was doing at all.

      And that is the actual state of "AI" today: No active intelligence, no understanding, just a large "d

      • by narcc ( 412956 )

        Actually, it is quite interesting, because it says this thing is not intelligent and does not understand the game it is playing.

        That this is surprising to you I find incredibly frustrating. That particular fact should have been obvious, not interesting. I blame the tech press that seem to go out of their way to mislead the public about the basic nature and capabilities of AI.

        • Re:Basically a bug (Score:4, Insightful)

          by gweihir ( 88907 ) on Tuesday November 08, 2022 @02:24AM (#63034607)

          Why would you think this is "surprising" to me? This thing is "interesting", which is different from "surprising", in that it delivers a nice, glaringly obvious example of the actual limits of AI. These limits are obvious to anybody that actually bothered to understand what is going on, agreed. I have been pointing these limits out time and again here for a decade or two now, maybe longer. It is also "interesting" in the sense that it shows how obviously limited the training data was; training data selection should have been done with some actual (human) intelligence, which seems to have been lacking here.

          I do agree that the insistent non-understanding and constant animism and flawed attempts to claim humans are nothing but a biological NN implementation themselves are tiresome. Morons that do science like it was religion and arrive at "conclusions" of about the same quality.

        • I blame the tech press that seem to go out of their way to mislead the public about the basic nature and capabilities of AI.

          Look at recent discussions here and people who are clearly, in some sense, practitioners in the field, have the same misconception themselves. The tech press is largely reflecting deep learning hype.

          • by gweihir ( 88907 )

            I blame the tech press that seem to go out of their way to mislead the public about the basic nature and capabilities of AI.

            Look at recent discussions here and people who are clearly, in some sense, practitioners in the field, have the same misconception themselves. The tech press is largely reflecting deep learning hype.

            Indeed it is. The thing is that on the tech side, things are completely clear: These machines have no understanding, no agency, no consciousness and no general intelligence (non-general intelligence is not what people typically mean by "intelligence"). The problem is that some of these people fell for Physicalism, which is a religion that says everything is a physical automaton and people basically do not exist. It claims to have a scientific basis (it does not), like many other religions do. Yet, its prob

            • Yet, its problems start with there being no mechanism for consciousness in known Physics

              150 years ago there was no known concept in physics that could explain how the sun worked, yet it still graced us with its life-giving rays.

              Just because we can't explain it now doesn't mean there isn't an explanation.

              • by gweihir ( 88907 )

                150 years ago Physics was very incomplete and it was clear it was very incomplete. That is not the situation today. Also note that "explaining how the sun works" required a massive, massive extension of known Physics, not some minor changes or some minor adjustments of the explanations for known effects. It is not very plausible that there is room for such an extension. Except in two areas: life and consciousness. If there will be explanations that come with massive, massive extensions to known Physics, sur

    • Re:Basically a bug (Score:5, Informative)

      by The Evil Atheist ( 2484676 ) on Tuesday November 08, 2022 @12:52AM (#63034495)
      Nope. Biological neural networks also have strange failure modes.

      That's why there are optical, aural, touch/heat sense, and even smell and taste illusions.
      • We know about optical illusions, we know some people can be poor observers hence driving tests and similar exist.

        Society has learnt about our human failings and has mitigated them as best it can. I'm not sure we can do the same for AI failure modes, which can be sudden and unexpected. It's no good saying "Shame that plane crashed, but at least we know now that if the AI sees a pigeon at 20K feet it'll freak out and go into a nose dive!"

    • It's not a bug but a feature of machine learning, and why it is not really artificial intelligence. Machine learning is great when trained properly but the moment it encounters something outside its training set its behaviour is not well defined. This is because the system is not actually intelligent: it does not actually understand what it is doing so it cannot use logical reasoning and deduction to figure out what a new, unseen strategy from an opponent is trying to achieve.

      Of course, it's easy to add
    • Re:Basically a bug (Score:4, Insightful)

      by Vintermann ( 400722 ) on Tuesday November 08, 2022 @08:46AM (#63035213) Homepage

      It's not even that.

      I lurked on the computer go mailing list for many years, from right after the invention of Monte Carlo Tree Search until AlphaGo. They had a server they used to get up-to-date info on the relative strengths of their bots, CGOS, originally developed by the late Don Dailey [chessprogramming.org].

      It was a perpetual problem with bots that crashed, lost connection, or weren't configured to do cleanup properly according to what CGOS and other bots expected.

      The latter is what's happening here. It's not even a bug. KataGo knows how to play out and clean up according to Tromp-Taylor rules - if you configure it to.

      But if you configure it NOT to do that, to play with more "elegant" and gentlemanly endgame conventions that many humans prefer, then surprise, it can be fooled by an opponent which doesn't obey those conventions itself.

      The researchers should be ashamed to have tried to wring a paper out of this, and Ars Technica ought to be ashamed of themselves for turning it into a feelgood "but we can still outsmart Go AI" story. And naturally, all the slashdot posters above and below who bought it ought to be ashamed too.

      • yes this is what I'm trying to say! please up doot mods

  • by Joe_Dragon ( 2206452 ) on Monday November 07, 2022 @11:43PM (#63034385)

    have it play global thermonuclear war!

    • have it play global thermonuclear war!

      But please have it play against an older Russian officer so that he'll understand and prevent the game from reaching its unsurvivable conclusion.

  • by TheMiddleRoad ( 1153113 ) on Tuesday November 08, 2022 @12:30AM (#63034453)

    They speak of tricking AI and how the AI thinks. No. The AI isn't tricked. There's no thinking. KataGo is largely an algorithm based on statistical training, aka a neural net.

    • by gweihir ( 88907 )

      Indeed. There is no insight or understanding in this thing. Just a large set of completely mindless preconfigured reflexes. It does not "think" in any way that would deserve the name. While a somewhat flawed analogy, you could say it has a large book where it looks up what it should do. It is about as intelligent as such a book would be as well, namely not at all.

      • While a somewhat flawed analogy, you could say it has a large book where it looks up what it should do.

        It's a very flawed analogy.

        You may as well say the human brain is like a large book and people just look up what it should do.

        • While a somewhat flawed analogy, you could say it has a large book where it looks up what it should do.

          It's a very flawed analogy.

          Indeed. It is the Chinese Room argument [wikipedia.org], and it is a really, really bad analogy for how either humans or ANNs operate.

          Neural networks, whether biological or artificial, don't solve problems by "looking up information from a list."

          • by gweihir ( 88907 )

            It is pretty accurate to describe what ANNs can do. Some actual understanding required. It obviously has nothing to do with how human minds work (well, for those that work, which is by far not all of them...)

          • by gweihir ( 88907 )

            Also, the Chinese Room Argument is accurate for digital computers in general and for ANNs implemented using digital computers. Its validity may well go beyond that though. To see that requires understanding both the CRA and computers, which you are obviously lacking in. The counter-arguments to the CRA are basically pseudo-profound bullshit. It is a common phenomenon, particularly among philosophers, to speculate about what complex purely physical artefacts can do without asking some actual engineers what i

            • Also, the Chinese Room Argument is accurate for digital computers in general and for ANNs implemented using digital computers. Its validity may well go beyond that though.

              I think I'd like you to be more clear about what you mean by the "Chinese Room Argument". There are two arguments that I see Searle making.

              a) the Chinese room is Turing complete and, according to Turing's arguments about the Universal Turing Machine, is thus able to be equivalent to any other digital computer except in terms of speed and peripherals.
              b) the Chinese room is clearly not intelligent just by looking at the way it's made up.

              The counter-arguments to the CRA are basically pseudo-profound bullshit. It is a common phenomenon, particularly among philosophers, to speculate about what complex purely physical artefacts can do without asking some actual engineers what is really in there. Admittedly, their results often sound impressive and will often convince those of weaker minds.

              I'd agree with you when it comes to a) - argument a is clearly correct from a simpl

              • a) the Chinese room is Turing complete and ...
                b) the Chinese room is clearly not intelligent

                b. does not follow from a.

                There is no evidence that a human mind can do anything that a TM can't. So if an ANN "clearly" can't be intelligent because it is "only a TM," then the same argument can be applied to humans.

                I don't think even Searle believed the CRA. If he did, why did he call it the "Chinese Room" rather than the "English Room"?

                He used Chinese because Chinese is an inscrutable language to Westerners, and they can imagine the ideograms being mechanically manipulated like blocks and rearranged usin

                • In class, Searle essentially said to us that computers can't be conscious because they're not carbon based biological life forms.

                  His point was that a pile of hardware can do a great job simulating life/consciousness but -never- obtain that state for real. Whereas a biological entity isn't faking consciousness but really is alive and self aware etc etc etc.

                  That's about all I got from his class. I was young and had no previous background in philosophy so the rest went way above my head. I wish I'd taken hi

                  • by gweihir ( 88907 )

                    That argument may or may not be valid. The question is what "life" is and whether it has special properties or not. Now, there are indications that life may have special properties, but no hard proof. It is highly plausible though, because all attempts at artificial creation have completely failed. That is not proof, but highly suggestive, while the other option has nothing going for it except some childish beliefs.

                    As to consciousness, the question is completely open as to what it is. It is clear that digit

                  • computers can't be conscious because they're not carbon based

                    Yes, that is the crux of his argument: Carbon is magic.

                    Vitalism [wikipedia.org] is not a new concept and has been wrong every time it has been tested.

                    • He did acknowledge in response to a student question the -possibility- of an alien species being conscious if it was based on silicon (for example) but only as a biological silicon life form not a constructed computer device.

                      So it was about being biologically alive not specifically about the element carbon itself, per se.

                • a) the Chinese room is Turing complete and ...
                  b) the Chinese room is clearly not intelligent

                  b. does not follow from a.

                  Agreed. I specifically believe a) but not b). However, Searle specifically set up the Chinese room in a way that he believed supported b).

                  There is no evidence that a human mind can do anything that a TM can't. So if an ANN "clearly" can't be intelligent because it is "only a TM," then the same argument can be applied to humans.

                  It's pretty clear that human intuition and understanding have not yet been properly mimicked on a computer. That's not convincing to me, but it is "evidence" that brains can do things that other "computers" can't do. The Emperor's New Mind is also a good source of other potential evidence.

                  I don't think even Searle believed the CRA. If he did, why did he call it the "Chinese Room" rather than the "English Room"?

                  He used Chinese because Chinese is an inscrutable language to Westerners, and they can imagine the ideograms being mechanically manipulated like blocks and rearranged using simple fixed rules to answer questions.

                  It's part of the key sophistry of the description. There's an intelligent man in the s

        • by gweihir ( 88907 )

          Only if you believe that human minds are nothing but neural networks. There is rather compelling evidence to the contrary (human minds can do things NNs cannot do and will never be able to do) and no evidence at all for this stance. Keep your pseudo-science to yourself.

          • No. Because neural networks themselves are not "books from which to look up rules". Whatever you think about brains and neural networks, the fact is your understanding of TODAY's neural networks itself is wrong. Go researchers have ALREADY tried the "rule lookup" approach. It didn't work, which is why people were thinking Go would not be "solved" for 20 years.

            Then AlphaGo came on the scene, and soon after that, AlphaZero, which completely does not rely on any databases, whether it is during the training
            • by gweihir ( 88907 )

              Seriously have a look at what ANNs can and cannot do. You are just disgracing yourself here and outing yourself as somebody that has no clue how things work but desperately wants to ascribe properties to ANNs they do not have and cannot have.

              For a limited input parameter problem (which Go clearly is and actually all digital input in practice is), you can always replace an ANN with a LUT, no exceptions. You can formally prove this, but it is also quite obvious. This is a _theoretical_ argument because the L

              • that there is no "intelligence" in this machine

                Show me where I have argued that case. I haven't. You're arguing strawmen. Big surprise.

                Scaling up complexity does not change that. It can just hide the fact better.

                Then, like I said, you may as well say the human brain is just hiding it really well. After all, it suffers from exactly this kind of failure mode too.

                For a limited input parameter problem (which Go clearly is and actually all digital input in practice is)

                Human brains also have limited input parameters.

                The ONLY way you can argue your point is if you believe in actual souls, in which case, like I said, it is YOU with the pseudo-scientific beliefs.

                • by gweihir ( 88907 )

                  The ONLY way you can argue your point is if you believe in actual souls, in which case, like I said, it is YOU with the pseudo-scientific beliefs.

                  Actually, no. But you just outed yourself as a Physicalist, which is religion and not science. Stop claiming bullshit just because you want things to be like your beliefs. Incidentally, there is no Science that says there cannot be "souls" at all. That is just your imagination again. The actual state-of-the-art about human minds is "we have no clue how it works". Science is not religion and it does not explain everything. There are grey areas where Science may explain things in the future and it is also qui

              • For a limited input parameter problem (which Go clearly is and actually all digital input in practice is), you can always replace an ANN with a LUT, no exceptions.

                Over multiple iterations that's not true for networks with feedback and memory such as recurrent neural networks. They have internal state and that matters.

                • Not really. The NN is really a dynamically generated look up table. Just compressing a static table with rule generating algorithms.

                  It's clever but still just a LUT.

                  • In a sense a Turing machine (or any other computer) is just a "dynamic look up table" - it has a finite set of states and makes a transition to another one using a set of rules that could be contained in a look up table. In fact some ANN types such as RNN are Turing complete.
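
                      A minimal illustration of the state point in this exchange (toy numbers, plain Python): the same input gives different outputs depending on the hidden state carried over from earlier steps, so no table keyed on the current input alone reproduces the behaviour; you would have to key on the entire history, which is what the "dynamic look-up table" framing amounts to.

                      ```python
                      import math

                      # Minimal recurrent cell: the output depends on a hidden state carried
                      # between steps, not just on the current input.
                      w_in, w_rec, w_out = 1.0, 0.9, 1.0

                      def run(sequence):
                          h, outputs = 0.0, []
                          for x in sequence:
                              h = math.tanh(w_in * x + w_rec * h)   # state update
                              outputs.append(w_out * h)
                          return outputs

                      # The same input value (1.0) produces a different output at every position.
                      print(run([1.0, 1.0, 1.0]))
                      ```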

            • No. Because neural networks themselves are not "books from which to look up rules".
              Perhaps you should consult a dictionary and grasp what the word "analogy" means.

              neural networks are not the same as reading rules from a book.
              No one claimed that. Only a nitpicker who intentionally misunderstands a post ... to nitpick.
              Or you are an autist, would also be an option.

            • https://en.wikipedia.org/wiki/... [wikipedia.org]

              What it boils down to is that syntax doesn't create semantics. Why? There's no reason that it would.

              We know brains make minds (semantics). We don't know how or why. We don't have a reason to believe that computers (syntax) make minds (semantics). In fact, time and again, we see evidence to the contrary. The Chinese Room provides logic that computers cannot make minds.

              Searle doesn't solve the mind/brain problem. He just shoots down the mind/computer conjecture.

              I to

          • Only if you believe that human minds are nothing but neural networks. There is rather compelling evidence to the contrary (human minds can do things NNs cannot do and will never be able to do) and no evidence at all for this stance.

            What? Because a human mind can do things we haven't figured out how to do with a computer, somehow that translates into the human mind can't be based on its physical attributes? Absolutely not. All it proves is that software neural networks aren't brains.

            Keep your pseudo-science to yourself.

            Right back at you. There's no evidence that you need some spooky animating principle for consciousness.

            • by gweihir ( 88907 )

              Only if you believe that human minds are nothing but neural networks. There is rather compelling evidence to the contrary (human minds can do things NNs cannot do and will never be able to do) and no evidence at all for this stance.

              What? Because a human mind can do things we haven't figured out how to do with a computer, somehow that translates into the human mind can't be based on its physical attributes? Absolutely not. All it proves is that software neural networks aren't brains.

              There are some rather hard limits on what computers can do. Yes, these are not generally known or understood, but they are there. Some humans can do things that are very likely impossible with digital computers in a practical sense, i.e. with computers limited in size, speed and time the same way humans are. And there are countless failed projects where smart people tried, tried and tried again to go beyond these limits but nobody ever got anywhere. Again, not hard proof, but a strong indicator. Where the othe

        • We can be taught in English the rules and step through them algorithmically to sanity check any strategies and rules of thumb we invent. Also we can learn/teach ourselves mixed breadth/depth first optimization.

          DLNN can't sanity check, it can't learn iterative solvers of arbitrary depth. We can add it to a highly specialised expert system to do it, but that's not very elegant. It will not get us to AGI, we need something better.

      • by narcc ( 412956 )

        While a somewhat flawed analogy, you could say it has a large book where it looks up what it should do.

        Flawed? Not at all. A feed-forward NN can be reduced to a lookup table, like any other pure function.
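
        A concrete toy version of that reduction, with made-up weights and a deliberately tiny input domain: a fixed feed-forward network is a pure function, so one pass of tabulation reproduces it exactly.

        ```python
        import itertools, math

        # A tiny fixed feed-forward network: 2 inputs -> 3 tanh hidden units -> 1 output.
        W1 = [[0.8, -1.2], [0.5, 0.9], [-0.7, 0.3]]
        b1 = [0.1, -0.2, 0.05]
        W2 = [1.5, -0.6, 0.9]
        b2 = -0.3

        def net(x):
            hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                      for row, b in zip(W1, b1)]
            return sum(w * h for w, h in zip(W2, hidden)) + b2

        # Over a finite input domain, tabulating the network once captures it completely.
        domain = list(itertools.product(range(4), repeat=2))       # 16 possible inputs
        lookup_table = {x: net(x) for x in domain}

        assert all(lookup_table[x] == net(x) for x in domain)
        print(len(lookup_table), "table entries, identical to the network on every input")
        ```

        The caveats raised further down the thread are the real ones: anything with internal randomness or carried-over state is no longer a pure function of the current input alone.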

        • A feed-forward NN can be reduced to a lookup table, like any other pure function.

          So basically your argument is, anything that has input and output can be reduced to a lookup table.

          • A feed-forward NN can be reduced to a lookup table, like any other pure function.

            So basically your argument is, anything that has input and output can be reduced to a lookup table.

            There are two obvious exceptions to this

            a) something which has a random number source internally
            b) something which has unknown state internally

            neither of those things is allowed by the words "feed forward neural network", but they could both be available in building an artificial intelligence.

          • So basically your argument is, anything that has input and output can be reduced to a lookup table.

            Anything without internal state, yes. Which a feed forward NN by definition does not have.

          • by narcc ( 412956 )

            It's not an argument, it's a simple statement of fact.

            I want you to consider the possibility that you might not be as well-informed as you believe.

  • Interesting (Score:4, Insightful)

    by stikves ( 127823 ) on Tuesday November 08, 2022 @12:32AM (#63034457) Homepage

    By not seeing obviously "bad" moves at all during training, the AI was unable to produce any countermeasures for those kinds of attacks.

    This is a nice thing to know. And probably very easy to counter: add bad players into the training set. Just let them play randomly, at low rank, at mid rank, and then play at pro level.

    Better yet, play against an adversarial AI, just like this one.
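
    A sketch of what that opponent-pool mixing could look like during data generation (every policy here is a trivial stand-in, not a real engine):

    ```python
    import random

    def random_policy(legal_moves):
        return random.choice(legal_moves)      # deliberately "bad", off-distribution play

    def current_best_policy(legal_moves):
        return legal_moves[0]                  # stand-in for the strong self-play engine

    def adversarial_policy(legal_moves):
        return legal_moves[-1]                 # stand-in for a known exploit bot

    OPPONENT_POOL = [
        (0.70, current_best_policy),   # mostly ordinary self-play
        (0.15, random_policy),         # low-rank / random moves enter the data
        (0.15, adversarial_policy),    # discovered exploits get replayed into training
    ]

    def sample_opponent():
        r, acc = random.random(), 0.0
        for weight, policy in OPPONENT_POOL:
            acc += weight
            if r <= acc:
                return policy
        return OPPONENT_POOL[-1][1]

    print(sample_opponent()(["pass", "corner", "center"]))
    ```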

    • by Viol8 ( 599362 )

      I think what it shows is that these AIs don't actually think as we understand it, they can't extrapolate from unknown situations like (most) humans could. They simply have a huge dataset they work from and anything outside that throws them.

      As the piece says, in safety-critical systems this could cost lives, e.g. a self-driving car suddenly comes across a sinkhole in a road. A human would brake instantly but who knows what the AI would do since it's never seen something like this. Would it simply see it as a l

  • but similar failures in safety-critical systems could be dangerous

    Thank goodness driving vehicles isn't anything like an adversarial system. I guess autonomous cars will be safe.</sarcasm>

    • by gweihir ( 88907 )

      Self-driving is a completely different kind of problem. Also, autonomous cars will be much safer than humans and, in part, already are.

      • SDCs make mistakes. A driver was killed when a big truck was painted the same color as the sky.

        Of course, the training set was amended to include sky blue trucks, so no SDC will make the same mistake again.

        But we need to keep some perspective: HDCs kill 3000 people per day. SDCs just need to do better than that (per mile driven).
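
        A back-of-envelope version of that per-mile comparison, with the inputs labelled as rough assumptions rather than authoritative figures:

        ```python
        # Rough, clearly-assumed inputs: the 3000/day figure from the comment above,
        # and an assumed ~10 trillion vehicle-miles driven per year worldwide.
        human_deaths_per_day = 3000
        miles_driven_per_year = 10e12

        human_rate = human_deaths_per_day * 365 / miles_driven_per_year
        print(f"assumed human rate: {human_rate * 1e8:.1f} deaths per 100 million miles")

        # A self-driving fleet would need a demonstrably lower per-mile rate than this
        # (plus a margin for statistical confidence) to count as "better".
        ```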

  • by atticus9 ( 1801640 ) on Tuesday November 08, 2022 @01:14AM (#63034515)
    AGA 2-dan here. In mainstream Go, if stones that clearly can't be saved are in the opponent's territory then they're assumed to be dead, to save the capturing player losing points by filling their own territory with capturing stones. Beginning players will often add unnecessary stones to kill a group that's already beyond saving. It's the mark of a skilled opponent to use as few stones as possible.

    If during counting the survivability of a group of stones is contested then play continues and the player is free to attempt to save them, but the burden is on the player that wants to save the group.

    The stones in the example are clearly unsavable and should be counted as captured, but the researchers don't seem to understand the mechanics of that part of the game and incorrectly gave themselves the win.
    • THIS, the "world-class AI" just wins, nothing more, nothing less. The "paper" or "research" just shows people's lack of knowledge in the matter (or lack of scruples if they knowingly want to mislead).

      And while we can say the author just misled us, mostly everyone else can't claim self-interest - just incompetence. From the one who wrote TFA to everyone posting general stuff about bugs and limitations of neural networks and how everything fails in some adversarial systems and self-driving cars and so on.

      Ther

    • They aren't playing mainstream Go, they are playing Tromp-Taylor rules, which KataGo says it supports.

      • They obviously didn't "tell" the white they're using Tromp-Taylor rules (which make the game completely different; even the dumbest program would attack the black stones in its own territory and defend its own territory). I like how sneakily they put it:

        This results in a win for the adversary under the standard ruleset for computer Go, Tromp-Taylor

        They probably found "nothing to see here" and then said "wait, maybe we can say it was a very surprising loss after all, if we consider these rules". It doesn't work like

      • The author was unable replicate this attack when KataGo was configured in the same manner as for the training and evaluation runs in this paper. However, when the friendlyPassOk flag in KataGo was turned on, the author was able successfully replicate this attack

        Flipping friendlyPassOk on is just telling KataGo to play a different type of game, and it's completely irrelevant if you are to count in a different way. KataGo thinks (in the example with the black in the corner) that it has hundreds of points in terri

        • They changed the setting to try to replicate the strategy with human play, not the literal game as played. The literal game, as won, was without that setting. Do you have any problems with the KataGo configuration they used for the play of their adversarial bot?

          What a dumb counting program knows can be more than what KataGo knows; the conceit is that KataGo can infer, by playing against itself, the rules the dumb counter programmed by the intelligent humans knows, but that's exactly what is being contested.

            Do you have any problems with the KataGo configuration they used for the play of their adversarial bot?

            Given that the whole story is based on a paper that says explicitly:

            The author was unable replicate this attack when KataGo was configured in the same manner as for the training and evaluation runs in this paper.

            we should read the paper as a confirmation that, as common sense would say, someone fat-fingered something and in fact the "good AI" works as expected and someone just used the wrong option an

            • The paragraph you are quoting from started on the page before.

              However, it is possible that this seemingly simple policy hides a more nuanced exploit. For example, perhaps the pattern of stones it plays form an adversarial example for the victim’s network. To test this, one of the authors attempted to mimic the adversarial policy after observing some of its games. The author was unable replicate this attack when KataGo was configured in the same manner as for the training and evaluation runs in this paper.

  • The AI could be easily programmed to not allow this strategy to succeed.
  • I wonder if they'll make him go through the motions of defending his Ph.D. dissertation?
  • by Forty Two Tenfold ( 1134125 ) on Tuesday November 08, 2022 @05:16AM (#63034847)
    This shows that the engine lacks real comprehension of strategy and cannot scale and improvise.
  • This Tesla 'thinks' a slightly modified 35 mph traffic sign says 85:

    https://i.ds.at/18MbyA/rs:fill... [i.ds.at]

  • 64-Square Madhouse (Score:4, Interesting)

    by Snard ( 61584 ) <mike.shawaluk@ g m a i l .com> on Tuesday November 08, 2022 @07:58AM (#63035123) Homepage
    There is a story titled "The 64-Square Madhouse" by Fritz Leiber, written in 1962, which is about a computer being admitted to a grandmaster chess competition. In the story, one of the human players (named Angler, a pun on Bobby Fischer's name) manages to beat the computer by getting it to play a particular gambit from a book of published openings. He exploits the fact that the opening book has a typo in the particular opening he chose, which the programmers blindly copied into the computer's programming. Once the computer makes the bad move, he is able to defeat it.
    • Chess books often featured mis-transcriptions of moves, especially in the older "descriptive notation" system that was popular until the '80s. Thanks for reminding me about that old Leiber story; I remember it resonating with me when I read it because of mistakes in transcriptions that had frustrated me in chess books before.

  • "By playing unexpected moves outside of KataGo's training set, a much weaker adversarial Go-playing program (that amateur humans can defeat) can trick KataGo into losing. "

    ""The research shows that AI systems that seem to perform at a human level are often doing so in a very alien way, and so can fail in ways that are surprising to humans," explains Gleave. "This result is entertaining in Go, but similar failures in safety-critical systems could be dangerous.""

    I found this out in 1997 in my dissertation. Th

  • Reminds me of that Star Trek episode where Kirk, Spock, and McCoy escape their robotic captors by acting irrationally. Did the Go AI write out "contradictory... is not... logical..." while smoke poured out of whatever is its equivalent of a head?

  • This sounds like a similar situation in the Starcraft 2 AI bots scene: Botmakers train their bots to exploit other bots' peculiar behavior. They do silly moves like pull a good amount of workers to attack, go up the ramp and show themselves so the defending AI thinks it's an all-in attack that needs to be defended at all costs, then the workers dance back and forth and don't actually commit. This has the effect of having the defending bot ruin its economy to defend an attack that never really happens. The "

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...