Garry Kasparov: The World Should Embrace Artificial Intelligence (bbc.com) 114
"Chess champion Garry Kasparov was beaten at his game by a chess-playing AI," writes dryriver. "But he does not think that AI is a bad thing." From Kasparov's interview with the BBC:
"We have to start recognizing the inevitability of machines taking over more and more tasks that we used to do in the past. It's called progress. Machines replaced farm animals and all forms of manual labor, and now machines are about to take over more menial parts of cognition. Big deal. It's happening. And we should not be alarmed about it. We should just take it as a fact and look into the future, trying to understand how can we adjust."
Kasparov has given the issue a lot of thought -- last month he released a new book called Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. But he also says that the IBM machine that beat him "was anything but intelligent. It was as intelligent as your alarm clock. A very expensive one, a $10 million alarm clock, but still an alarm clock. Very powerful -- brute force, with little chess knowledge. But chess proved to be vulnerable to the brute force. It could be crunched once hardware got fast enough and databases got big enough and algorithms got smart enough."
Did Kasparov not hear about AlphaGo? (Score:2)
Go was supposed to be a much tougher challenge, not expected to be dominated by machines for decades. I wouldn't call it an outright win just yet for the AIs, but the pool of humans who are even capable of holding their own against AlphaGo has likely dropped to below 1000, out of 7 billion.
Re:Did Kasparov not hear about AlphaGo? (Score:5, Interesting)
Go was supposed to be a much tougher challenge, not expected to be dominated by machines for decades
I don't think many people keeping up with advances in machine learning were surprised. There were several teams working on Go, and they were making rapid progress. The hardware was also improving rapidly, and much more historical game data was available.
the pool of humans who are even capable of holding their own against AlphaGo has likely dropped to below 1000, out of 7 billion
No, the number is zero. No human will ever again beat the best Go program.
There will still be human Go tournaments, just like forklifts haven't done away with human weightlifting contests.
Re:Did Kasparov not hear about AlphaGo? (Score:5, Interesting)
I don't think many people keeping up with advances in machine learning were surprised.
Most people even involved with AlphaGo were surprised at how quickly they were able to dominate human Go champions. From what I have read, only Hassabis was confident they could do it in a few years. Even AI researchers are often wrong about how quickly AI is getting better.
Humans are not very good at comprehending exponential increases in capability, even in their chosen fields. People have been spending too much time worrying about the end of Moore's law, and ignoring that the exponential increase in algorithm performance has been much faster than even Moore's law.
There will probably be some things we assume are easy which will still elude us in 50 years (like flying cars). But most things we think will take 100 years will probably take less than 20.
Re: (Score:3)
There will probably be some things we assume are easy which will still elude us in 50 years (like flying cars).
Flying cars have not eluded us, we have chosen not to make them.
It is not a question of how hard the problem is, it is a question of how valuable the end result is (what is the user experience?). The designs end up being too much of a compromise or too expensive or just too heavily regulated compared to having both a car (or cars) and a plane (or planes).
Re: (Score:2)
Even AI researchers are often wrong about how quickly AI is getting better.
Perhaps correct for a particular technical problem, but AI experts since the very beginning have been predicting major advances just around the corner. Organizers and promoters of the well-known Dartmouth AI conference in 1956 thought they could solve many of the major problems of AI (natural language processing, creativity, adaptability, etc.) with just a few dozen smart people sitting around talking to each other for a few weeks. Obviously that didn't happen (though the conference was productive).
Mean
Re: (Score:3)
There will probably be some things we assume are easy which will still elude us in 50 years (like flying cars). But most things we think will take 100 years will probably take less than 20.
I get that you said "most". One exception sadly seems to be space exploration. I'm pretty sure if we could go back to, let's say, 1965, get President Johnson and the very top NASA and private-industry space experts in a room, and tell them the following:
"I've got good news and bad. The good news is that we're going to get men on the moon in 1969 and bring them safely back multiple times. (Sounds of cheers from the room)
The bad news is that the last time we'll go will be 1972 and we won't try
Re: (Score:2)
Re: (Score:3)
They were. Go is quite resistant to the brute-force and play-dictionary techniques used in the past on checkers and chess, which is why people wax poetic about the complexity of Go.
AlphaGo is trained using reinforcement learning, which, frankly, is such a twitchy thing that it's still surprising how well it can work.
Kasparov was beaten by a big computer programmed to play chess. AlphaGo is a very different thing.
Re: (Score:2)
AlphaGo is a very different thing.
Indeed. Deep Blue played chess very differently than a human, and it was very specifically programmed to play chess.
AlphaGo plays Go very similarly to how a human plays, and what was learned about configuring and training ANNs is applicable to many other tasks.
Re: (Score:2)
and much more historical game data was available.
Historical game data makes up a tiny percentage of the games AlphaGo trained with. By last year, most of its training was playing millions of games against modified versions of itself.
Re: (Score:2)
Re: (Score:1)
Per recent results, it has dropped to zero out of 7 billion. It demolished the best player in the world 3 games to 0. At the Future of Go summit AlphaGo was 60:0 against professional players. There's little doubt it was better than any human player in its last incarnation.
Re: (Score:2, Interesting)
Current research is mostly centered on "weak AI", that is machines and algorithms that tackle a specific set of problems. As such, it cannot take over the world, but it can allow the elite/1%/whatever to get to the point where they no longer need other humans for anything.
Although the end result will likely be the same for you and me.
Re: (Score:2)
What you mean is 'narrow AI', which is AI applied to specific tasks, like driving and personal assistance - and of course, specific games. All those commentators who sneered about AI never coming to fruition badly underestimated how these narrow AI applications are transforming the way we live.
Re: (Score:2)
Correct. Pretty much all the AI systems now in use are based on narrow AI. In almost any situation where you have sufficient training data and a limited number of variables, you can develop narrow AIs that will outperform humans on specific tasks. In the specific domain, a modern narrow AI does feel like a super-intelligent human, exhibiting intuition and creativity. The ultimate objective of DeepMind is to solve intelligence properly, building artificial general intelligence. To do this, we need to fin
Re: (Score:2)
Current research is mostly centered on "weak AI", that is machines and algorithms that tackle a specific set of problems. As such, it cannot take over the world, but it can allow the elite/1%/whatever to get to the point where they no longer need other humans for anything.
Well, yes and no. They're trying to find general tools to train specific problem solvers. The concepts are quite generic: you need a goal (win, score, performance, speed, cost, weight, size, etc.), some rules (legal moves in games, physics in many other cases) and some tools (pieces in chess, building materials in construction, boxes in shipping and so on). The goal is not to program the solution; it's to make the system find the solution, so you don't want to be writing rules about how you think it should pla
New slave labor! (Score:1)
In 20 years people will just make a down payment on a loan for a self-driving car and then that car will drive for Uber to make money for the master, whose job will consist of keeping it in good running order. Bored? Just design some fashions, print out a batch on 3D printers in the basement and trade with neighbors. After all, robots don't care that they are exploited.... or so we will keep telling ourselves.
This Is IBM Deep Blue Speaking (Score:3)
Re: (Score:2)
Re: (Score:2)
Maybe one day Kasparov will embrace natural intelligence and reject Fomenko.
Ditto. Kasparov was a great chess player but he's also nuts. A total crank. I don't think anyone really wants Kasparov endorsing anything, except a book on chess.
Re: (Score:2)
I think you have Kasparov confused with Bobby Fischer. Kasparov was the sane one.
Re: (Score:3)
...the sane-ish one. Overall, there's a bit of a trend here.
"New Chronology [wikipedia.org] is a great area for investing my intellect...My analytical abilities are well placed to figure out what was right and what was wrong."
Re: (Score:3, Informative)
Re:BF != SA (Score:4, Informative)
A combination of both. The better the algorithm, the less brute force it needs.
Think of it as a lever.
Re: (Score:2)
The better the algorithm, the less brute force it needs.
Exactly. Deep Blue, when it was playing Kasparov in 1997, searched 200 million positions per second. A modern chess engine running on a desktop PC would easily beat Deep Blue while looking at only 1-2 million positions per second. The brute-force speed is lower, but the amount of chess knowledge is much higher.
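To make the "less brute force, more chess knowledge" trade-off concrete, here is a toy alpha-beta search in Python. It is only a sketch of the general technique, not Deep Blue's or any real engine's code; the tiny hand-made game tree, the legal_moves() and evaluate() helpers, and the leaf scores are all invented for illustration.

```python
# Toy alpha-beta search: just the shape of a chess engine, not a real one.
# The hand-made game tree and leaf scores are invented for illustration; in a
# real engine, evaluate() is where the "chess knowledge" lives and
# legal_moves() is the move generator.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF_SCORES = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

def legal_moves(pos):
    return TREE.get(pos, [])

def evaluate(pos):
    return LEAF_SCORES.get(pos, 0)  # static evaluation of a position

def alphabeta(pos, depth, alpha, beta, maximizing):
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return evaluate(pos)
    if maximizing:
        best = float("-inf")
        for m in moves:
            best = max(best, alphabeta(m, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the minimizing side will never allow this line
        return best
    best = float("inf")
    for m in moves:
        best = min(best, alphabeta(m, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break  # prune
    return best

print(alphabeta("root", 2, float("-inf"), float("inf"), True))  # prints 3
```

The evaluate() function is where a program's "knowledge" lives, and the alpha >= beta cutoff is what lets a program with better evaluation and move ordering examine far fewer positions while still finding the best line -- exactly the trade-off between Deep Blue and a modern engine.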
Re: (Score:2)
Your understanding is correct. Chess is very much _not_ vulnerable to brute force. It is far too complex for that, and you cannot model the other side; i.e., the problem itself is not subject to a winning strategy computed by brute force. My guess is that whoever wrote that has no clue what "brute force" means in CS.
Re: (Score:3)
What chess playing programs do can pretty much be described as brute force, not to an end of game solution, but choosing the best move based on examining all plausible lines, and using an evaluation function to determine how good each line is.
What is exciting about the (still narrow) AIs developed recently, based primarily on multi level neural networks, is that they can work in situations where no one knows how to create a hand crafted evaluation system. Basically, the system works out for itself what are
Re: (Score:2)
No, they cannot. Chess programs use "quality" metrics (evaluation functions) to decide which move to make. These are not compatible with "brute force" approaches. A "brute force" algorithm just tries everything and can only recognize success or failure, nothing in between. The problem here may be that in CS, the term "brute force" has a well-defined meaning, while in general usage it does not.
Social puzzle (Score:2)
Almost everyone likes the idea of machines taking over grunt work like laundry and driving, but our society is NOT designed to distribute the benefits of AI evenly enough: many will get screwed, career-wise.
It's not so much about AI versus jobs, but how society adjusts (or doesn't). Change can be painful, especially if done wrong.
If the current trend continues, the owners of the technology will get really rich, and the rest will struggle or fail, fighting bitterly over the remaining scraps in ever uglier "c
Re: (Score:2)
Computers, and more broadly information tech and the internet, have been changing the workforce and economy for decades. You'll be in error if you project the present into a future where the only thing that changes is computers doing work. There are breakthroughs in energy production, biology, and yes, even info tech, that will make all sorts of new jobs even as we have robots.
Re: (Score:1)
It's pretty safe to say that most of those "new jobs" will come with fairly hefty education requirements. Our current education system is not up to the task.
Bernie S. is right in that a college education (or equiv.) is now a necessity in the current economy the way a high-school education was in the recent past.
Re:Social puzzle [correction] (Score:1)
Correction re: "...the way a high-school education was in the recent past."
Rewrite: "...the way a high-school education has been since the recent past."
Re: (Score:2)
Oh, so there will be immense pressure to improve education, and for a larger section of the populace to take education more seriously? I don't see that necessity as bad; only the failure to meet it would be bad.
Re: (Score:1)
That's one of the reasons why many big co's are cash-rich: they don't expand because there are not enough consumers (with money) to buy their products if they expanded. Thus, they sit on cash, using it as an emergency or future strategic fund.
Re: (Score:2)
That's one of the reasons why many big co's are cash-rich: they don't expand because there are not enough consumers (with money) to buy their products if they expanded. Thus, they sit on cash, using it as an emergency or future strategic fund.
That cannot possibly be correct because there are no companies which have exhausted all potential customers in every potential industry. They sit on the money because they have not identified a way to create a competitive advantage to gain market share against current or potential competitors.
If they did have more customers for their existing profitable products, those new customers would only give them a larger cash stockpile.
Re: (Score:1)
It's not a matter of possible/potential; it's a matter of whether they see the risk as justifiable enough. I'm sure Apple is thinking that the auto-drive-car biz is a big gamble given that they don't have experience there, and they did display hesitation.
Re: (Score:2)
If people do not have the money to buy products, the companies will not survive. When virtually nobody is working and nobody has money, the corporate world will cease to exist. What happens then?
I don't think corporations actually care about the money itself very much. It is the absolute control of resources they're after.
What happens after everyone is unemployed? The corporate overlords give the autonomous flying solar-powered drone armies orders to fire on starving, rioting civilians and remotely shut down all public transport, which renders everyone immobile since non-electric motor vehicles have been banned. Megalopolises will be depopulated in short order.
Money will become worthless, so all the crazy wealthy and powerful people will become just ordinary people, since their wealth will be worth nothing.
Money being worthless isn't real
Re: (Score:3)
What happens after everyone is unemployed? The corporate overlords give the autonomous flying solar-powered drone armies orders to fire on starving, rioting civilians and remotely shut down all public transport, which renders everyone immobile since non-electric motor vehicles have been banned. Megalopolises will be depopulated in short order.
I think I saw that movie. "They just want some food, for God's sake!"
A lot of people here seem to have dystopian predictions like this. I'd argue that history is against you, though: so far, technology and automation have improved the human condition immensely. I'm not quite going to predict a Star Trek-like utopia, but I think there will be enough benefits to outweigh most of the negatives.
One of the reasons I don't believe people will become all unemployed is that people will simply find work to do, a
50 years of nonsense (Score:1)
Humans playing chess is like a dog riding a bicycle: it can be done, but it's not what the organism was designed for. The same is true for Go. The old AI idea of playing games was just a way to show that computers could exhibit SOME intelligent behavior. The Turing test does not involve a game of chess, checkers, Go, or tic-tac-toe. Ultimately, tightly constrained domains with well-defined rules but complex search trees are fertile ground for machine dominance.
The harder problems are involved in what humans do withou
Re: (Score:3)
Humans playing chess is like a dog riding a bicycle: it can be done, but it's not what the organism was designed for.
The organism was not designed; it evolved.
And the only thing it evolved for is to survive long enough to replicate under a narrow (on a cosmic scale) set of conditions.
Re: (Score:2)
And the only thing it evolved for is to survive long enough to replicate under a narrow (on a cosmic scale) set of conditions.
And chess is mostly played by men to show that they can dominate other men, and become more attractive as a mate.
Re: (Score:2)
Indeed. That nicely sums up why computers playing chess or Go are pretty meaningless stunts.
Of course, the AI fanatics will not even understand what you are talking about.
Re: (Score:2)
Perception is harder than human-level chess, but not Go.
Now we've got systems that perform perception tasks AND play Go better than humans.
Re: (Score:2)
Re: (Score:2)
Both of those. Do you not follow the news? There are self-driving cars driving around all over the place. Any number of robots could cross a road as well, such as the combat robots Google makes.
Re: (Score:2)
I do follow the news. None of these things work yet. Personally, I doubt they ever will without extensive infrastructure, and even then only in very specific situations. Self-driving cars will not be driving around inner-city streets. Robots will not be walking kids to school. End of story.
Also - VR is a dead-end technology that no one wants, and 3D TV was a terrible idea.
Also - Google making 'combat robots', does this alarm anyone else? I'm sure we'll make robots that are good at killing people, but that
Re: (Score:2)
Well, that's an opinion all right. Now it's in your posting history. Come back in five to ten years and review.
Re: (Score:2)
Re: (Score:2)
You keep mentioning VR and 3d TVs. Are you familiar with the red herring fallacy?
Never Underestimate Brute Force (Score:3)
Here's the thing about 'brute force' in computing. Computers can go through millions of computations and thousands of strategy scenarios in a second. As we are seeing today, a computer can brute-force its way through encryption simply by trying *everything* until it gets the desired result, because the machines are so damn fast.
Brute force can be an exceptionally powerful way of doing something if it is tuned to and pointed at a particular problem; in Kasparov's case, that problem was chess.
Yes, the computer wasn't intelligent, but then again, neither are half the people I meet. Those people are simply brute-forcing their way through life, without a single thought in their heads...
Re:Never Underestimate Brute Force (Score:5, Informative)
You seem to be unaware of the state of the art in encryption. Today, you want > 250 bits of key entropy to be long-term secure. These are infeasible to break with digital computers in this universe (not enough matter, energy, and time until heat-death) and even with quantum computers (should they ever be useful for anything; currently they are not, and they may well scale so badly that they never will be).
The one thing you can brute-force in modern crypto done right is bad passwords. But that is about it.
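For a rough sense of the scale involved, here is a back-of-the-envelope calculation in Python. The guess rate used below (10^18 keys per second) is an invented, wildly optimistic assumption rather than any real benchmark; the point is only how the cost explodes with key length.

```python
# Back-of-the-envelope brute-force cost. The guess rate is an invented,
# wildly optimistic assumption (10^18 keys/second), not a real benchmark.
guesses_per_second = 10 ** 18
seconds_per_year = 60 * 60 * 24 * 365

for bits in (56, 128, 256):
    keyspace = 2 ** bits
    years = keyspace / guesses_per_second / seconds_per_year
    print(f"{bits}-bit key: about {years:.2e} years to exhaust the keyspace")

# 56-bit (old DES-sized): well under a second at this rate -- genuinely brute-forceable.
# 128-bit: roughly 1e13 years, far longer than the age of the universe.
# 256-bit: roughly 4e51 years. "Just try everything" stops being a strategy.
```

Even granting the absurd guess rate, 128-bit keys are already out of reach, and 256-bit keys are not in the same universe.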
Re: (Score:2)
Password hashing (e.g. with Argon2 or, to a lesser degree, PBKDF2) brings this down quite a bit. Sure, if you want to memorize an encryption key directly, you would need to go to 256 bits or so. But for a password, "absolute security" against brute-forcing starts somewhere around 100 bits. Your 91-bit-entropy password will not be broken. It may get observed or circumvented, though.
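For anyone who wants to estimate where a given password lands on that scale: a rough upper bound on entropy is just length times log2 of the charset size, assuming every symbol is chosen uniformly at random (which human-chosen passwords are not). A small sketch:

```python
import math

def entropy_bits(length, charset_size):
    # Assumes each symbol is chosen uniformly at random from the charset;
    # real human-chosen passwords have far less entropy than this bound.
    return length * math.log2(charset_size)

print(entropy_bits(8, 26))     # ~37.6 bits: 8 random lowercase letters
print(entropy_bits(16, 94))    # ~104.9 bits: 16 random printable-ASCII characters
print(8 * math.log2(7776))     # ~103.4 bits: 8-word diceware-style passphrase
```

Key stretching with Argon2 or PBKDF2 then multiplies the attacker's cost per guess, which is why roughly 100 bits of genuine entropy is already out of brute-force range.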
Re: (Score:2)
Indeed. One problem is that you are typically not aware of all paths of attack or that the level you can secure different paths is different. Hence it boils down to risk management. For small aspects, you can get 100% security though. For example a 100 bit entropy password with Argon2 and reasonable parameters is unlikely to be breakable, ever. But it does nothing about the alternate attack path a keylogger offers.
Re: (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2)
Indeed. The problem is not having high intelligence. Many people have that. The problem is what to apply it to and in which fashion. That is a problem _outside_ of intelligence, as intelligence cannot simply be applied to everything. The pre-selection is critically needed or intelligence gets overloaded and becomes useless. Yet most people, including highly intelligent ones, routinely fail at this task.
Re: (Score:2)
Wisdom is intelligently applied knowledge. Computers are already great at storing knowledge, but they've been lacking the intelligence to apply it. That is now starting to change.
Re: (Score:2)
Very powerful -- brute force (Score:3)
But Kasparov is a chess player. (Score:2)
What would he know about AI, outside of chess? I suppose he's got opinions about economics next.
Yes, Buggy Whip Responses Coming (Score:2)
Heck no (Score:2)
Botvinnik got this wrong too (Score:4, Interesting)
These issues are very deep and potentially deceptive. Even the cleverest of people can get hopelessly misled.
In Genna Sosonko's excellent book "Russian Silhouettes", a series of in-depth sketches of great chess players whom Sosonko knew personally, there is a very instructive anecdote about Mikhail Moiseyevich Botvinnik, multiple world champion and considered the "father" of the mighty Soviet School of Chess.
As well as being a superb chess player - although an amateur by modern standards, as he strictly limited the time he devoted to the game - Botvinnik's "day job" was electrical engineering. He launched projects to study the potential of computers for a wide range of important types of work. Sosonko tells the following instructive story.
[Botvinnik declared that] "... to write a program for managing the economy is easier than for chess, because chess is a two-sided game, antagonistic. The players hinder each other, and the devil knows what that means, whereas in economics that is not the case, and everything is simpler".
It's not so often that one catches a world-class expert in such an utterly mistaken declaration. Today in 2017 computers play chess better than any human, but the problem of managing the economy is still not understood at all. And until it is understood, it cannot be programmed.
Apologies for the typos (Score:2)
Sorry, I typed the parent too fast and made at least two typos. I'd correct them if I could.
Re: (Score:2)
And until it is understood, it cannot be programmed.
That's a common fallacy. We're doing a lot of stuff now that people don't understand. See for instance Q-Learning: https://en.wikipedia.org/wiki/... [wikipedia.org] What's required is a value that indicates the amount of progress at each point in time, and the system can learn how to make progress by trial and error, finding patterns between input, actions, and results by itself. The system can then apply those patterns in different but similar circumstances.
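To see the "progress value plus trial and error" idea in runnable form, here is a minimal tabular Q-learning sketch. The five-cell corridor environment, the constants, and the reward scheme are all made up for illustration; this is just the textbook update rule, not anything from an actual production system.

```python
# Minimal tabular Q-learning on a toy problem: a 5-cell corridor where the
# only reward is reaching the rightmost cell. The agent acts, observes a
# progress signal (the reward), and updates its value estimates.
import random

N, ACTIONS = 5, (0, 1)                 # actions: 0 = step left, 1 = step right
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N)]

def greedy(qvals):
    best = max(qvals)
    return random.choice([a for a, v in enumerate(qvals) if v == best])

for episode in range(300):
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(Q[s])
        s2 = max(0, min(N - 1, s + (1 if a else -1)))
        r = 1.0 if s2 == N - 1 else 0.0
        # Core update: move Q(s, a) toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

for s, (left, right) in enumerate(Q[:-1]):
    print(f"state {s}: left={left:.2f} right={right:.2f}")  # "right" should dominate
```

Nobody programs the solution ("always step right"); the agent finds it because every update nudges Q(s, a) toward the observed reward plus the discounted value of the best next state.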
Re: (Score:2)
I think your example reinforces his point. Q-learning, or more generally reinforcement learning, is a learning algorithm. You don't program the system, you set up some basic infrastructure and then train it by example. We've learned that such systems can learn to do things we don't understand, and cannot program.
Re: (Score:2)
Well, this person was not a world-class expert at strong AI. Highly capable experts in one field can make completely ridiculous statements when they lose sight of the limits of their expertise.
However, I think he was talking about soviet-style "plan economy" (does not work), and that may indeed have been easier to implement than playing chess.
Re: (Score:2)
I think he was talking about soviet-style "plan economy" (does not work), and that may indeed have been easier to implement than playing chess.
Precisely my point! The Soviet leaders may have believed that economic planning is a great deal easier than it really is. Otherwise they would never have attempted to make plans for a system that even our Western "free enterprise capitalist" system has been getting badly wrong of late.
As for not being "a world-class expert at strong AI", he was speaking in the 1960s when there was no AI (strong or weak) and hence no experts in it.
As someone binge-watching the show "Person of Interest" (Score:2)
Another non-expert (Score:2)
With a famous name, but no clue what he is talking about when it comes to AI. I find this really pathetic. Whatever happened to actually listening to the experts in that subject area?
Re: (Score:2)
But if Slashdot (like all news sources) didn't have lighter fluff pieces, what would you have to complain about?
Re: (Score:1)
What did he say that's not correct? Are *you* an expert in AI that you would even recognize where his knowledge about the subject fails?
I'd also like to suggest that experts in AI are, by definition, embracing AI. Since, you know, they have devoted significant time in their lives to becoming experts in the subject. Can you name a single "expert" in AI that doesn't "embrace AI"? What would that even look like?
Re: (Score:2)
I'd also like to suggest that experts in AI are, by definition, embracing AI. Since, you know, they have devoted significant time in their lives to becoming experts in the subject. Can you name a single "expert" in AI that doesn't "embrace AI"? What would that even look like?
Hahahaha, you are soooo badly off about this one. This is Science, not Religion.
Re: (Score:1)
I'd also like to point out that he *literally* wrote a book on the subject.
Re: (Score:2)
There are tons of books around that are filled with complete nonsense.
Not going to happen. (Score:1)
I'll happily embrace AI when it has been neutralized.
You gonna need a bigger space station... (Score:2)
GREAT idea!! (Score:1)
The world should embrace its demise at the hands of the soulless plutocracy and their machine slaves!
AI in the hands of the people would be a different story, which is the vision the average proponent pastes over reality while humming a merry tune (while their head is on fire).
It is a big deal. (Score:2)
Didn't think hard enough (Score:1)
Sure, beating him in chess could be considered brute force. How does he explain Jeopardy? I don't think we can classify that as brute force.
Could be an exciting time for mankind. Could also be a harbinger of evil, if we let them control too much.