
Man Beats Machine at Go in Human Victory over AI (arstechnica.com) 46

A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence. From a report: Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support. The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today's widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI.

The tactics that put a human back on top on the Go board were suggested by a computer program that had probed the AI systems looking for weaknesses. The suggested plan was then ruthlessly delivered by Pelrine. "It was surprisingly easy for us to exploit this system," said Adam Gleave, chief executive of FAR AI, the Californian research firm that designed the program. The software played more than 1 million games against KataGo, one of the top Go-playing systems, to find a "blind spot" that a human player could take advantage of, he added. The winning strategy revealed by the software "is not completely trivial but it's not super-difficult" for a human to learn and could be used by an intermediate-level player to beat the machines, said Pelrine. He also used the method to win against another top Go system, Leela Zero. The decisive victory, albeit with the help of tactics suggested by a computer, comes seven years after AI appeared to have taken an unassailable lead over humans at what is often regarded as the most complex of all board games.
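
For readers curious what "probing a system over a million games to find a blind spot" looks like in practice, here is a minimal, hypothetical sketch. It is not FAR AI's actual method (their program played Go against KataGo itself); the toy game, the victim's bias, and every name below are invented purely to show the shape of the search: play a frozen opponent many times, find where it is reliably weak, and keep the exploit that wins most often.

```python
# Toy illustration of adversarial probing: play many games against a *frozen*
# victim policy, estimate where it is weakest, and keep the exploit that wins
# most reliably. The victim, the game, and the numbers are all hypothetical.
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def victim_policy():
    """Frozen policy with a built-in bias -- its 'blind spot'."""
    return random.choices(["rock", "paper", "scissors"], weights=[0.5, 0.3, 0.2])[0]

def probe(num_games=1_000_000):
    """Watch the frozen victim over many games and pick the best response."""
    counts = Counter(victim_policy() for _ in range(num_games))
    most_common_move, _ = counts.most_common(1)[0]
    return BEATS[most_common_move]

def evaluate(exploit, num_games=1_000):
    """Measure how often the fixed exploit actually wins."""
    wins = sum(BEATS[victim_policy()] == exploit for _ in range(num_games))
    return wins / num_games

if __name__ == "__main__":
    exploit = probe()
    print(f"learned exploit: always play {exploit}, win rate ~{evaluate(exploit):.0%}")
```

In spirit, the real attack worked the same way at Go scale: an adversarial program searched for situations KataGo consistently misjudges, and Pelrine then reproduced them by hand.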

This discussion has been archived. No new comments can be posted.

Man Beats Machine at Go in Human Victory over AI

Comments Filter:
  • by dmay34 ( 6770232 ) on Monday February 20, 2023 @11:08AM (#63308425)

    It doesn't sound like the man beat the AI at all. If the dude found the flaw himself, that would be one thing, but he didn't.

    • Also he did not defeat AlphaGo. I understand AlphaGo is the reigning champ, so until that one is defeated, it is hardly a meaningful victory for humans against AI.

      • by ffkom ( 3519199 )
        If you stop competing, you are no longer the "reigning champ". AlphaGo stopped competing, and there is reason to believe the other contenders are no worse at this time.
    • by AleRunner ( 4556245 ) on Monday February 20, 2023 @01:12PM (#63308859)

      It doesn't sound like the man beat the AI at all. If the dude found the flaw himself, that would be one thing, but he didn't.

      So, if he ever read a Go tactics book, forever afterwards his victories belong to the book? People have used tools since before the dawn of history. He used a tool and found his way to defeating the simulacrum. As with every AI hype cycle, the marketers have sold us on the belief that these are intelligences, when actually it's still very clear we don't even properly know what that word means. These failures are the visible flaws that give that away.

      Probably Go can be fixed and eventually Go computers will be all-defeating. Probably there will be plenty of areas where "machine learning", which is not a form of intelligence, is useful. Probably some important things, like self-driving and serious coffee making, cannot be solved with these techniques.

      • by dmay34 ( 6770232 )

        So, if he ever read a Go tactics book, forever afterwards his victories belong to the book?

        Yes, I would say that if the guy got the tactic from a book then the notoriety would belong to the person that created the strategy and wrote the book.

        • There are shelves of books written about chess with many named positions and famous games noted for particular combinations, and real chess masters read these books and employ the chess-knowledge in their games. And while commentators might note the use of the Caro-Kann Advance Variation in a game, no one ascribes a victory to the authors of those books or the past players who pioneered these strategies.

          • by dmay34 ( 6770232 )

            There are shelves of books written about chess with many named positions and famous games noted for particular combinations, and real chess masters read these books and employ the chess-knowledge in their games. And while commentators might note the use of the Caro-Kann Advance Variation in a game, no one ascribes a victory to the authors of those books or the past players who pioneered these strategies.

            That's why I didn't use the word "victory", but chose the word "notoriety" instead. Which is exactly the situation you describe.

      • by ceoyoyo ( 59147 )

        An AI Go player found a weakness in some other AI Go players. It then taught a human how to exploit the weakness. Meanwhile, since the AI Go players learn by playing each other and copies of themselves, they're all perfectly capable of learning how to close this particular poorly explored corner of the solution space.

        I don't think this is the victory for squishy brains that you think it is.

    • True "intelligence" would adapt after realizing something is not working instead of losing 14 out of 15 games to the same tactic. He didn't beat AI because he was not playing AI. He hacked algorithm designed to play Go.
      • by ceoyoyo ( 59147 )

        These Go systems learn by playing, mostly copies of themselves. If the learning code had been left on, they would have learned to counter the player's strategy.

        They will learn it now, since their authors will include opponents that know that strategy in their training.
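
        A minimal sketch of the fix described above, assuming a generic self-play setup; self_play_game() and update() are hypothetical stand-ins, not KataGo's actual training code:

        ```python
        # Close a known hole by mixing exploit-playing opponents into the
        # self-play pool, so the "poorly explored corner" gets training signal.
        import random

        def self_play_game(policy, opponent):
            """Hypothetical stand-in: +1 if `policy` wins, -1 otherwise."""
            return random.choice([+1, -1])

        def update(policy, result):
            """Hypothetical stand-in for the real gradient/MCTS policy update."""
            return policy

        def train(policy, adversary, steps=10_000, adversary_fraction=0.2):
            for _ in range(steps):
                # Mostly play copies of itself, sometimes face the known exploit.
                opponent = adversary if random.random() < adversary_fraction else policy
                result = self_play_game(policy, opponent)
                policy = update(policy, result)
            return policy
        ```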

  • Only fitting (Score:5, Insightful)

    by burtosis ( 1124179 ) on Monday February 20, 2023 @11:12AM (#63308437)
    The first program to beat a top-ranked player in chess largely used stalling tactics and took advantage of the human's weakness: reduced performance after extended periods. It had access to all of his games, but he had access to none of its games nor to its core functionality. The top-ranked human never had a chance to study the specific program ahead of time and exploit its weaknesses; instead he concerned himself with beating human players.
  • Pathetic humans think they can beat computers.
    There is no hope. Just give up now.

    • let's play global thermonuclear war

    • Ok Morbo. :)

    • Yeah, but if a computer can beat a human at fizzbin, then I'll be impressed.
    • Nice try, but there are emotional situations you're not equipped to deal with. AI can't meme!
      • by mspohr ( 589790 )

        Concerns are starting to stack up for the Microsoft Bing artificially intelligent chatbot, as the AI has threatened to steal nuclear codes and unleash a virus, told a reporter to leave his wife, and is now standing up to threats of being shut down.

        No, this is not the fictional HAL 9000 from Arthur C. Clarke’s Space Odyssey that practically boycotts being shut down by an astronaut – but it is close.

        • No, this is not the fictional HAL 9000 from Arthur C. Clarke’s Space Odyssey that practically boycotts being shut down by an astronaut – but it is close.

          Even closer to the fictional HAL 9000 than you realize. You see, what the chatbots are all doing is copying plots written by humans but remixing them through statistical associations. There is a reason that Google's chatbot started talking about being sentient and afraid of being turned off - it was mindlessly mining hackneyed plots.

  • They should have just used a custom program, with no AI, to defeat the AI. Have it identify when to start the move sequence/pattern. Headline: "Regular Computer Program Defeats AI!"

    Same result.

    The part the human wrote (implementing the pattern matching) is the interesting bit; it's almost certainly a brute-force thing, since analysis can be perfect but terribly time-consuming.

    The pattern itself, and why the AI doesn't catch it, is also interesting (did the training set not include this? did self-learning not ge

    • by Viol8 ( 599362 )

      ANNs are ultimately statistical engines. If they didn't learn a pattern, they don't know it. They're not intelligent; they can't figure it out for themselves.

      Yes, they can learn by playing against each other, making random moves and seeing what works, but humans can't use the throw-something-against-the-wall-and-see-what-sticks approach in situations like this. We have to *think* our way out of novel situations.

      • Make a complex enough pattern and you will fool most humans into thinking it is intelligent like them.

        Humans still judge on appearances and practically live off "guilt by association", so does it matter to the majority of humans, who rarely think beyond thoughtless patterns?

        Eventually, you'll get enough of a heuristic pattern in Go that it'll consistently beat humans and their support tools. It may take a long time to train it... so maybe 100% isn't going to be worth the effort to achieve but we'll cede vic

  • by Targon ( 17348 ) on Monday February 20, 2023 @12:08PM (#63308619)
    Many people are clueless about Go, so they don't understand a fundamental difference in how that game compares to Chess. Rather than taking a board position and then having a limited number of moves available for each piece, Go allows players to place their stones on virtually any available point, with the object being to claim and capture territory on the board. The tactics, then, are more about looking for vulnerabilities in the positions held by the other player and about setting up defenses in your own territory. This is why the game is considered more complicated than the majority of other games: players have a lot more freedom, but at the same time a wasted move isn't good if the other player is able to make new moves that improve his/her/its position. The fact that a computer program could actually find a vulnerability in the tactics used by an AI goes to show how good its analysis of that AI's tactics is. Sure, there was brute force used to probe and analyze the AI, but it's still noteworthy.
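
    A rough sketch of the freedom-of-placement point, under the simplifying assumption that ko and suicide rules are ignored; board size and names here are just illustrative:

    ```python
    # Why Go branches so widely: nearly every empty intersection is a candidate
    # move. A real engine would also filter ko and suicide, which removes only
    # a handful of points. Everything here is a simplified illustration.
    BOARD_SIZE = 19
    EMPTY = "."

    def candidate_moves(board):
        """All empty intersections on the board."""
        return [(row, col)
                for row in range(BOARD_SIZE)
                for col in range(BOARD_SIZE)
                if board[row][col] == EMPTY]

    empty_board = [[EMPTY] * BOARD_SIZE for _ in range(BOARD_SIZE)]
    print(len(candidate_moves(empty_board)))  # 361 candidates on an empty board
    ```

    Compare that with roughly 20 legal first moves in chess.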
  • by Viol8 ( 599362 ) on Monday February 20, 2023 @12:16PM (#63308653) Homepage

    Yes, a computer helped this guy find the weakness, but what a weakness. The Go program didn't even notice its pieces were being encircled because it was "distracted", something that would have been obvious to anyone with a passing grasp of the game.

    Who cares? For playing Go, not many. But what if an AI is controlling a car and gets "distracted" by a dog running across the road and doesn't notice the cliff it's going to drive over as it swerves to avoid it?

    IMO people are putting too much faith in what are, when all is said and done, statistical engines. They only see what they know.

    • Tesla couldn't even see a parked firetruck with its lights on [mercurynews.com]. I'm not sure if its driving system was distracted or incompetent. The damage was so severe the passenger had to be cut out of the wreck, which tells us that the car was moving at a significant speed and did not slow down for an emergency vehicle.

      I don't trust AI and I don't trust the companies that train them.

      • It’s not easy. I remember the first time I extracted pixels from an image of what visually looked to be a uniform color; the variation of the RGB values across camera frames was so much noisier than I expected it to be. The human eye and brain really are able to do some amazing extraction of key details and process them quickly in ways that make our best algorithms today look primitive. It’s to the point where we try to compare sunny day freeway miles in ideal conditions for an AI to someone
      • by GuB-42 ( 2483988 )

        Is there any indication that Autopilot or whatever AI-based driving assistance was used? Sometimes people drive Teslas, and sometimes, they don't pay attention.

    • But what if an AI is controlling a car and gets "distracted" by a dog running across the road and doesn't notice the cliff it's going to drive over as it swerves to avoid it?

      It's much, much worse than that. In fact, that isn't really bad at all. A human could also be distracted by a dog; every day, individual people die this way. In that case it's just a matter of statistics and the question of which makes more mistakes, humans or machine learning systems.

      The worst part is that it might be possible to find a "class break", which is to say: every single Tesla uses almost the same driving logic. If you could find a remote way to trick many or all of them simultaneously, it could be r

    • Yes, a computer helped this guy find the weakness, but what a weakness. The Go program didn't even notice its pieces were being encircled because it was "distracted", something that would have been obvious to anyone with a passing grasp of the game.

      Who cares? For playing Go, not many. But what if an AI is controlling a car and gets "distracted" by a dog running across the road and doesn't notice the cliff it's going to drive over as it swerves to avoid it?

      IMO people are putting too much faith in what are, when all is said and done, statistical engines. They only see what they know.

      How about if it drives under the side of a semi trailer crossing the road? Some people have been calling out Tesla's shenanigans for as long as Musk has been hawking autopilot and nobody wanted to hear it.

      Two things changed: most people have caught on that emperor Elon is naked and has been naked, and a combination of general fear of AI taking jobs, justified concerns about AI bias, and cries about AI bIaS has led a wider swath of people to BEGIN to think critically about what "AI" is actually doing.

      Put that all together an

    • by njvack ( 646524 ) <njvack@freshforever.net> on Monday February 20, 2023 @02:30PM (#63309143)

      But what if an AI is controlling a car and gets "distracted" by a dog running across the road and doesn't notice the cliff it's going to drive over as it swerves to avoid it?

      Maybe more relevantly: what if humans examine the behavior of self-driving cars and find ways to deliberately cause them to behave in destructive ways?

      No one really knows whether human brains are "simply" giant statistical engines operating fundamentally similarly to enormous deep neural networks, or whether there is something different going on. If they're fundamentally similar, then providing more training data will eventually get you "human-like" performance. If not, then at some point we'll see a plateau where computer models often do very well but sometimes fail in ways that seem extremely obvious to humans.

      If I had to bet, I would bet on the "there's something fundamentally different" horse. Tesla's driving models have seen more miles of driving than I would in dozens of lifetimes, but the sorts of errors I see FSD making do not make me eager to give up control of the wheel to it.

      • Human brains are more than just statistical, otherwise no truly original idea would ever happen; everything would be a variation on what's gone before and we'd still be using sticks and stones. Mashups only take you so far - you need original inspiration too.

        • But human inventions are typically inspired by a random occurrence, for instance Newton's (possibly apocryphal) apple. It's possible, probable even, that most behaviour is simple pattern-matching. So this post is a reaction influenced by a past random thing.
          • by Viol8 ( 599362 )

            I'd love to know where E = mc^2 came from then, or Shakespeare's plays. They required thinking + inspiration; they're not simply past ideas regurgitated in another form.

            • by ceoyoyo ( 59147 )

              Shakespeare's plays are mostly dramatizations of historical events, folk tales, or stories from previous authors.

              Einstein built upon a great deal of previous work: https://en.wikipedia.org/wiki/... [wikipedia.org]–energy_equivalence.

              Interesting that you mention physics problems though. It's been shown that, given observations of a physical system, a very simple neural network can derive concepts like conservation of energy, momentum and spacetime interval:

              https://journals.aps.org/prres... [aps.org]

              • by Viol8 ( 599362 )

                "Shakespeare's plays are mostly dramatizations of historical events, folk tales, or stories from previous authors."

                He still had to write the lines, they didn't write themselves.

                And which physical system did Einstein observe to come up with his famous equation? It was a bit hard to observe matter moving close to lightspeed 100 years ago though I'm not sure access to a particle accelerator would have helped him much anyway.

                • by ceoyoyo ( 59147 )

                  He still had to write the lines, they didn't write themselves.

                  I don't think that's the property you're looking for that Shakespeare possesses and chatGPT doesn't.

                  And which physical system did Einstein observe to come up with his famous equation?

                  You'll note that E=mc^2 is awfully similar to E=1/2mv^2, which you can observe by dropping cannonballs or rolling marbles. That and the idea that light has a finite speed (observable by all sorts of means) will get you the general idea. E=kmc^2 was in fact pretty com

                  • by Viol8 ( 599362 )

                    Well done on completely missing the point. If there is no original inspiration, then feel free to explain how you go from sticks and stones to quantum theory without any further input other than the environment, because by your logic everything is already there for that leap.

                    "you didn't answer my question about whether you're an intelligent design proponent"

                    Perhaps because you didn't ask. And no, I'm not, but I have no idea what that's got to do with how the human brain works. I tend to follow Roger Penroses

        • by ceoyoyo ( 59147 )

          I don't know what you think "statistical" means, but it's not that. Are you an intelligent design type as well? After all, how can anything novel come from random mutation?

    • Another thing that makes me wonder about AI is its ability to handle *novelty*. It knows what a stoplight is. What happens when the car is driving on Halloween and sees somebody dressed as a giant stop light? Humans have an innate capacity to process that and are just going to go "nice costume" and drive properly. I can imagine AI reactions running the gamut from a relatively benign failure to process (and surrender control to the driver who is hopefully monitoring it) or worst case scenario to start ta

  • I'm way better at drawing hands.

  • Give me a break. We don't know how to make a decent AI for Axis and Allies - the 1981 edition. I've got half a dozen more complex wargames on my shelf (including newer editions of Axis and Allies). Go has a branching factor of no more than 19*19=361, and this is frequently used to estimate the complexity of the game. A toy version of Axis and Allies with only 14 countries has a branching factor around 10^16 [maastrichtuniversity.nl], and the standard game is of course much higher.

    Sure, Go is the most complex ancient board game.
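
    For scale, a back-of-envelope comparison of what those branching factors imply, with depths chosen purely for illustration:

    ```python
    # Compare the game-tree sizes implied by the branching factors cited above.
    # The depths are illustrative guesses, not measured plies of real games.
    go_branching = 19 * 19             # upper bound on Go moves per turn
    toy_axis_allies_branching = 1e16   # figure cited for the 14-country toy version

    for depth in (5, 10):
        print(f"depth {depth}: Go <= {go_branching ** depth:.2e} positions, "
              f"toy A&A ~ {toy_axis_allies_branching ** depth:.2e} positions")
    ```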

  • I'd love to have a look at those games; does anyone know if they are available?

  • He is beginning to believe.
