AI

Garry Kasparov: The World Should Embrace Artificial Intelligence (bbc.com) 114

"Chess champion Garry Kasparov was beaten at his game by a chess-playing AI," writes dryriver. "But he does not think that AI is a bad thing." From Kasparov's interview with the BBC: "We have to start recognizing the inevitability of machines taking over more and more tasks that we used to do in the past. It's called progress. Machines replaced farm animals and all forms of manual labor, and now machines are about to take over more menial parts of cognition. Big deal. It's happening. And we should not be alarmed about it. We should just take it as a fact and look into the future, trying to understand how we can adjust."
Kasparov has given the issue a lot of thought -- last month he released a new book called Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. But he also says that the IBM machine that beat him "was anything but intelligent. It was as intelligent as your alarm clock. A very expensive one, a $10 million alarm clock, but still an alarm clock. Very powerful -- brute force, with little chess knowledge. But chess proved to be vulnerable to brute force. It could be crunched once hardware got fast enough and databases got big enough and algorithms got smart enough."
This discussion has been archived. No new comments can be posted.

  • Go was supposed to be a much tougher challenge, one not expected to be dominated by machines for decades. I wouldn't call it an outright win just yet for the AIs, but the pool of humans who are even capable of holding their own against AlphaGo has likely dropped below 1,000, out of 7 billion.

    • by ShanghaiBill ( 739463 ) on Sunday June 18, 2017 @12:59PM (#54643059)

      Go was supposed to be a much tougher challenge, not expected to be dominated by machines for decades

      I don't think many people keeping up with advances in machine learning were surprised. There were several teams working on Go, and they were making rapid progress. The hardware was also improving rapidly, and much more historical game data was available.

      the pool of humans who are even capable of holding their own against AlphaGo has likely dropped to below 1000, out of 7 billion

      No, the number is zero. No human will ever again beat the best Go program.

      There will still be human Go tournaments, just like forklifts haven't done away with human weightlifting contests.

      • by ranton ( 36917 ) on Sunday June 18, 2017 @01:15PM (#54643117)

        I don't think many people keeping up with advances in machine learning were surprised.

        Most people, even those involved with AlphaGo, were surprised at how quickly it was able to dominate human Go champions. From what I have read, only Hassabis was confident they could do it within a few years. Even AI researchers are often wrong about how quickly AI is improving.

        Humans are not very good at comprehending exponential increases in capability, even in their chosen fields. People have been spending too much time worrying about the end of Moore's law, and ignoring that exponential increase in algorithm performance has been much faster than even Moore's law.

        There will probably be some things we assume are easy which will still elude us in 50 years (like flying cars). But most things we think will take 100 years will probably take less than 20.

        • There will probably be some things we assume are easy which will still elude us in 50 years (like flying cars).

          Flying cars have not eluded us, we have chosen not to make them.

          It is not a question of how hard the problem is; it is a question of how valuable the end result is (what is the user experience?). The designs end up being too much of a compromise, too expensive, or too heavily regulated compared to having both a car (or cars) and a plane (or planes).

        • In most cases even AI researchers are often wrong about how quickly AI is getting better.

          Perhaps correct for a particular technical problem, but AI experts since the very beginning have been predicting major advances just around the corner. Organizers and promoters of the well-known Dartmouth AI conference in 1956 thought they could solve many of the major problems of AI (natural language processing, creativity, adaptability, etc.) with just a few dozen smart people sitting around talking to each other for a few weeks. Obviously that didn't happen (though the conference was productive).

          Mean

        • There will probably be some things we assume are easy which will still elude us in 50 years (like flying cars). But most things we think will take 100 years will probably take less than 20.

          I get that you said "most". One exception, sadly, seems to be space exploration. I'm pretty sure if we could go back to, let's say, 1965, get President Johnson and the very top NASA and private-industry space experts in a room, and tell them the following:

          "I've got good news and bad. The good news is that we're going to get men on the moon in 1969 and bring them safely back multiple times. (Sounds of cheers from the room)
          The bad news is that the last time we'll go will be 1972 and we won't try

      • When AI is applied against a human, it is daunting. But what I find worthy of investigation are those problems oriented towards assisting a human.
      • by ceoyoyo ( 59147 )

        They were. Go is quite resistant to the brute-force and move-dictionary techniques used in the past on checkers and chess, which is why people wax poetic about the complexity of Go.

        AlphaGo is trained using reinforcement learning, which, frankly, is such a twitchy thing that it's still surprising how well it can work.

        Kasparov was beaten by a big computer programmed to play chess. AlphaGo is a very different thing.

        • AlphaGo is a very different thing.

          Indeed. Deep Blue played chess very differently than a human, and it was very specifically programmed to play chess.

          AlphaGo plays Go very similarly to how a human plays, and what was learned about configuring and training ANNs is applicable to many other tasks.

      • and much more historical game data was available.

        Historical game data makes up a tiny percentage of the games Alpha Go trained with. By last year, most of its training was playing millions of games against modified versions of itself.

      • by houghi ( 78078 )

        There will still be human Go tournaments, just like forklifts haven't done away with human weightlifting contests.

        But not as many.
        The reason people are afraid is that they fear for their jobs and income, and worry that they might not be able to provide for their loved ones.

        He would not need to worry about that, so he can welcome the replacement of the workforce.
        If this fear is legit or not is something the future will decide. I think it is, unless there are serious social changes and those will b

    • by Anonymous Coward

      Per recent results, it has dropped to zero out of 7 billion. It demolished the best player in the world 3 games to 0. At the Future of Go summit AlphaGo was 60:0 against professional players. There's little doubt it was better than any human player in its last incarnation.

  • In 20 years people will just make a down payment on a loan for a self-driving car, and then that car will drive for Uber to make money for its master, whose job will consist of keeping it in good running order. Bored? Just design some fashions, print out a batch on 3D printers in the basement, and trade with neighbors. After all, robots don't care that they are exploited.... or so we will keep telling ourselves.

  • by dryriver ( 1010635 ) on Sunday June 18, 2017 @12:44PM (#54643013)
    The world should embrace Garry Kasparov. I like a man who gets beaten by an AI, but then embraces AI. =)
    • Maybe one day Kasparov will embrace natural intelligence and reject Fomenko.
      • Maybe one day Kasparov will embrace natural intelligence and reject Fomenko.

        Ditto. Kasparov was a great chess player but he's also nuts. A total crank. I don't think anyone really wants Kasparov endorsing anything, except a book on chess.

        • I think you have Kasparov confused with Bobby Fischer. Kasparov was the sane one.

          • by chihowa ( 366380 )

            ...the sane-ish one. Overall, there's a bit of a trend here.

            "New Chronology [wikipedia.org] is a great area for investing my intellect...My analytical abilities are well placed to figure out what was right and what was wrong."

            • Re: (Score:3, Informative)

              by hazardPPP ( 4914555 )
              You should also read some of Kasparov's "geopolitical analysis". He's a Putin critic, so people give him the benefit of the doubt, but once you read it you realize he's crazy.
  • Almost everyone likes the idea of machines taking over grunt work like laundry and driving, but our society is NOT designed to distribute the benefits of AI evenly enough: many will get screwed, career-wise.

    It's not so much about AI versus jobs, but how society adjusts (or doesn't). Change can be painful, especially if done wrong.

    If the current trend continues, the owners of the technology will get really rich, and the rest will struggle or fail, fighting bitterly over the remaining scraps in ever uglier "c

    • Computers, and more broadly information tech and the internet, have been changing the workforce and economy for decades. You'll be in error if you project the present into a future where the only thing that changes is computers doing work. There are breakthroughs in energy production, biology, and yes, even info tech that will make all sorts of new jobs even as we get robots.

      • by Tablizer ( 95088 )

        that will make all sorts of new jobs

        It's pretty safe to say that most of those "new jobs" will come with fairly hefty education requirements. Our current education system is not up to the task.

        Bernie S. is right in that a college education (or equiv.) is now a necessity in the current economy the way a high-school education was in the recent past.

        • Correction re: "...the way a high-school education was in the recent past."

          Rewrite: "...the way a high-school education has been since the recent past."

        • Oh, so there will be immense pressure to improve education, and for a larger section of the populace to take education more seriously? I don't see that necessity as bad; only the failure to meet it would be bad.

  • by Anonymous Coward

    Humans playing chess is like a dog riding a bicycle: it can be done, but it's not what the organism was designed for. Same is true for Go. The old AI idea of playing games was just a way to show that computers could show SOME intelligent behavior. The Turing test does not involve a game of chess, checkers, Go, or tic-tac-toe. Ultimately, tightly constrained domains with well-defined rules but complex search trees are fertile ground for machine dominance.

    The harder problems are involved in what humans do withou

    • by alexo ( 9335 )

      Humans playing chess is like a dog riding a bicycle: it can be done, but it's not what the organism was designed for.

      The organism was not designed, it evolved.
      And the only thing it evolved for is to survive long enough to replicate under a narrow (on a cosmic scale) set of conditions.

      • And the only thing it evolved for is to survive long enough to replicate under a narrow (on a cosmic scale) set of conditions.

        And chess is mostly played by men to show that they can dominate other men, and become more attractive as a mate.

    • by gweihir ( 88907 )

      Indeed. That nicely sums up why computers playing chess or Go are pretty meaningless stunts.

      Of course, the AI fanatics will not even understand what you are talking about.

    • by ceoyoyo ( 59147 )

      Perception is harder than human-level chess, but not Go.

      Now we've got systems that perform perception tasks AND play Go better than humans.

      • But not one that drives a car, or can cross the road.
        • by ceoyoyo ( 59147 )

          Both of those. Do you not follow the news? There are self-driving cars driving around all over the place. Any number of robots could cross a road as well, such as the combat robots Google makes.

          • I do follow the news. None of these things work yet. Personally, I doubt they ever will, without extensive infrastructure, and even then only in very specific situations. Self-driving cars will not be driving around inner city streets. Robots will not be walking kids to school. End of story.

            Also - VR is a dead-end technology that no-one wants, and 3D tv was a terrible idea.

            Also - Google making 'combat robots', does this alarm anyone else? I'm sure we'll make robots that are good at killing people, but that

            • by ceoyoyo ( 59147 )

              Well, that's an opinion all right. Now it's in your posting history. Come back in five to ten years and review.

              • I will. Unless a car being auto-driven carrying someone playing a VR game while driving their 3D TV home kills me while I'm following a helpful robot across the street, of course.
  • by tekrat ( 242117 ) on Sunday June 18, 2017 @01:26PM (#54643143) Homepage Journal

    Here's the thing about 'brute force' in computing. Computers can go through millions of computations and thousands of strategy scenarios in a second. As we are seeing today, a computer can brute force its way through encryption by trying *everything* until it gets the desired result, simply because the machines are so damn fast.
    Brute force can be an exceptionally powerful way of doing something if it is tweaked to and pointed at a particular problem; in Kasparov's case, it was chess.
    Yes, the computer wasn't intelligent, but then again, neither are half the people I meet. Those people are simply brute forcing their way through life, without a single thought in their heads.....

     

    • by gweihir ( 88907 ) on Sunday June 18, 2017 @02:38PM (#54643405)

      You seem to be unaware of the state of the art in encryption. Today, you want > 250 bits of key entropy to be long-term secure. These are infeasible to break with digital computers in this universe (not enough matter, energy, and time until heat-death) and even with quantum computers (should they ever be useful for anything; currently they are not, and they may well scale so badly that they never will be).

      The one thing you can brute-force in modern crypto done right is bad passwords. But that is about it.
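      A back-of-the-envelope calculation backs this up. The figures below are my own illustrative assumptions, not the parent's: grant the attacker a billion machines each testing a trillion keys per second, which is far beyond any real hardware, and a 250-bit keyspace still doesn't budge.

      ```python
      # Rough feasibility check for brute-forcing a 250-bit key.
      # Assumed attacker (wildly optimistic, for illustration only):
      # 1e9 machines x 1e12 key guesses per second each.

      keyspace = 2 ** 250                   # candidate keys to try
      keys_per_second = 10**9 * 10**12      # 1e21 guesses/s combined
      seconds_per_year = 3600 * 24 * 365

      years = keyspace // (keys_per_second * seconds_per_year)
      print(f"{years:.2e} years")           # roughly 6e46 years
      ```

      For comparison, the universe is about 1.4e10 years old, so even this fantasy attacker falls short by dozens of orders of magnitude.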

    • Indeed. I'd go further: it's not even clear to me why something that solves problems by brute force can't be intelligent. It could well be that our brains are also brute-forcing, but in parallel rather than in series. We don't know that they're not.
  • Forget AI (Score:2, Insightful)

    by reboot246 ( 623534 )
    What we desperately need is wisdom. There's very little of it in the world, and I doubt a machine will ever be wise.
    • by gweihir ( 88907 )

      Indeed. The problem is not having high intelligence. Many people have that. The problem is what to apply it to and in which fashion. That is a problem _outside_ of intelligence, as intelligence cannot simply be applied to everything. The pre-selection is critically needed or intelligence gets overloaded and becomes useless. Yet most people, including highly intelligent ones, routinely fail at this task.

    • Wisdom is intelligently applied knowledge. Computers are already great at storing knowledge, but they've been lacking the intelligence to apply it. That is now starting to change.

    • I doubt a human will ever be wise. We're still searching, and every guru or prophet has been disappointing so far.
  • by turkeydance ( 1266624 ) on Sunday June 18, 2017 @01:40PM (#54643189)
    will it be able to edit?
  • What would he know about AI, outside of chess? I suppose he's got opinions about economics next.

  • One thing is certain... there are going to be a lot fewer paying 'knowledge work' jobs very soon. What happens then - do we invent Futurama's Suicide Booths?
    • If there's a population crash from suicide, it'd make labor valuable again. The whole point of this is to devalue labor and put all power back in the hands of capital. Now get back to baby-making, slave.
  • by Archtech ( 159117 ) on Sunday June 18, 2017 @02:10PM (#54643293)

    These issues are very deep and potentially deceptive. Even the cleverest of people can get hopelessly misled.

    In Genna Sosonko's excellent book "Russian Silhouettes", a series of in-depth sketches of great chess players whom Sosonko knew personally, there is a very instructive anecdote about Mikhail Moiseyevich Botvinnik, multiple world champion and considered the "father" of the mighty Soviet School of Chess.

    As well as being a superb chess player - although an amateur by modern standards, as he strictly limited the time he devoted to the game - Botvinnik's "day job" was electrical engineering. He launched projects to study the potential of computers for a wide range of important types of work. Sosonko tells the following instructive story.

    [Botvinnik declared that] "... to write a program for managing the economy is easier than for chess, because chess is a two-sided game, antagonistic. The players hinder each other, and the devil knows what that means, whereas in economics that is not the case, and everything is simpler".

    It's not so often that one catches a world-class expert in such an utterly mistaken declaration. Today in 2017 computers play chess better than any human, but the problem of managing the economy is still not understood at all. And until it is understood, it cannot be programmed.


    • And until it is understood, it cannot be programmed.

      That's a common fallacy. We're doing a lot of stuff now that people don't understand. See for instance Q-Learning: https://en.wikipedia.org/wiki/... [wikipedia.org] What's required is a value that indicates the amount of progress at each point in time, and the system can learn how to make progress by trial and error, finding patterns between input, actions, and results by itself. The system can then apply those patterns in different but similar circumstances.
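      For readers who haven't seen it, here is a minimal tabular Q-learning sketch (a toy of my own, not from the linked article): an agent in a five-cell corridor learns to walk right purely from a reward signal at the last cell. Nobody programs the policy; it emerges from trial and error, which is exactly the "progress value plus patterns" loop described above.

      ```python
      import random

      N_STATES = 5
      ACTIONS = [-1, +1]                 # step left or step right
      ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

      Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

      def step(state, action):
          """Move along the corridor; reward 1 only for reaching the last cell."""
          nxt = min(max(state + action, 0), N_STATES - 1)
          return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

      random.seed(0)
      for _ in range(2000):                      # episodes of trial and error
          s = random.randrange(N_STATES - 1)     # start anywhere but the goal
          for _ in range(50):                    # cap episode length
              if random.random() < EPS:          # explore occasionally
                  a = random.choice(ACTIONS)
              else:                              # otherwise act greedily
                  a = max(ACTIONS, key=lambda act: Q[(s, act)])
              s2, r = step(s, a)
              # Q-learning update: nudge Q(s,a) toward reward + discounted best future value
              Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
              s = s2
              if s == N_STATES - 1:
                  break

      # The learned greedy policy heads right (+1) from every non-goal cell.
      policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
      print(policy)
      ```

      The point of the toy: nothing in the code encodes "go right"; the behavior is recovered from the reward signal alone, the same way AlphaGo's self-play needed only win/loss outcomes rather than an understanding of Go.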

      • by ceoyoyo ( 59147 )

        I think your example reinforces his point. Q-learning, or more generally reinforcement learning, is a learning algorithm. You don't program the system, you set up some basic infrastructure and then train it by example. We've learned that such systems can learn to do things we don't understand, and cannot program.

    • by gweihir ( 88907 )

      Well, this person was not a world-class expert at strong AI. Highly capable experts in one field can make completely ridiculous statements when they lose sight of the limits of their expertise.

      However, I think he was talking about soviet-style "plan economy" (does not work), and that may indeed have been easier to implement than playing chess.

      • I think he was talking about soviet-style "plan economy" (does not work), and that may indeed have been easier to implement than playing chess.

        Precisely my point! The Soviet leaders may have believed that economic planning is a great deal easier than it really is. Otherwise they would never have attempted to make plans for a system that even our Western "free enterprise capitalist" system has been getting badly wrong of late.

        As for not being "a world-class expert at strong AI", he was speaking in the 1960s when there was no AI (strong or weak) and hence no experts in it.

  • Headlines with the phrase "The World Should Embrace Artificial Intelligence" seem a little... surreal...to put it mildly...
  • With a famous name, but no clue what he is talking about when it comes to AI. I find this really pathetic. Whatever happened to actually listening to the experts in that subject area?

    • by skam240 ( 789197 )

      But if Slashdot (like all news sources) didn't have lighter fluff pieces, what would you have to complain about?

    • by Anonymous Coward

      What did he say that's not correct? Are *you* an expert in AI that you would even recognize where his knowledge about the subject fails?

      I'd also like to suggest that experts in AI are, by definition, embracing AI. Since, you know, they have devoted significant time in their lives to becoming experts in the subject. Can you name a single "expert" in AI that doesn't "embrace AI"? What would that even look like?

      • by gweihir ( 88907 )

        I'd also like to suggest that experts in AI are, by definition, embracing AI. Since, you know, they have devoted significant time in their lives to becoming experts in the subject. Can you name a single "expert" in AI that doesn't "embrace AI"? What would that even look like?

        Hahahaha, you are soooo badly off about this one. This is Science, not Religion.

    • by Anonymous Coward

      I'd also like to point out that he *literally* wrote a book on the subject.

  • I'll happily embrace AI when it has been neutralized.

  • I just finished re-reading "The Two Faces of Tomorrow," the first novel in "Cyber Rogue" [amzn.to] by James P. Hogan, one of my favorite SF stories, where scientists set up an advanced AI to manage a space station and the military went to war to determine whether or not they could pull the plug if the AI determines that humans are a nuisance. Be careful about embracing the AI. The AI just might embrace back.
  • The world should embrace its demise at the hands of the soulless plutocracy and their machine slaves!

    AI in the hands of the people would be a different story, which is the vision the average proponent pastes over reality while humming a merry tune (while their head is on fire)

  • These machines do not have motivations. As they replace human thinkers, they decrease the number of human thinkers in that particular area of human thought, interrupting the stream of advance in thought in that area. What will happen is that thought in a particular area will freeze at some level. Because machines have no motivation array, they have no creative thought. They advance nothing on their own.
  • Sure, beating him in chess could be considered brute force. How does he explain Jeopardy? I don't think we can classify that as brute force.

    Could be an exciting time for mankind. Could also be a harbinger of evil. If we let them control too much.
