It's discouraging (Score:5, Informative)
It's discouraging reading this. Especially since I knew some of the Cyc [cyc.com] people back in the 1980s, when they were pursuing the same idea. They're still at it. You can even train their system [cyc.com] if you like. But after twenty years of their claiming "Strong AI, Real Soon Now", it's probably not happening.
I went through Stanford CS back when it was just becoming clear that "expert systems" were really rather dumb and weren't going to get smarter. Most of the AI faculty was in denial about that. Very discouraging. The "AI Winter" followed; all the startups went bust, most of the research projects ended, and there was a big empty room of cubicles labeled "Knowledge Systems Laboratory" on the second floor of the Gates Building. I still wonder what happened to the people who got degrees in "Knowledge Engineering". "Do you want fries with that?"
MIT went into a phase where Rod Brooks took over the AI Lab and put everybody on little dumb robots, at roughly the Lego Mindstorms level. Minsky bitched that all the students were soldering instead of learning theory. After a decade or so, it became clear that reactive robot AI could get you to insect level, but no further. Brooks went into the floor-cleaning business (Roomba, Scooba, Dirt Dog, etc.) with the technology, with some success.
Then came the DARPA Grand Challenge. Dr. Tony Tether, the head of DARPA, decided that AI robotics needed a serious kick in the butt. That's what the DARPA Grand Challenge was really all about. It was made clear to the universities receiving DARPA money that if they didn't do well in that game, the money supply would be turned off. It worked. Levels of effort not before seen on a single AI project produced some good results. Stanford had to replace many of the old faculty, but that worked out well in the end.
This is, at last, encouraging. The top-down strong AI problem was just too hard. Insect-level AI, with no world model, was too dumb. But robot vehicle AI, with world models updated by sensors, is now real. So there's progress. The robot vehicle problem is nice because it's so unforgiving. The thing actually has to work; you can't hand-wave around the problems.
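The contrast between reactive control and a sensor-updated world model can be sketched in a toy example. This is a hypothetical 1-D range-keeping loop of my own, not any Grand Challenge team's code: the reactive policy acts on each raw, noisy reading alone, while the model-based controller acts on a filtered estimate that improves with every measurement.

```python
# Toy contrast: reactive control vs. control with a sensor-updated
# world model. Hypothetical 1-D example; no real vehicle code implied.

import random

random.seed(42)

def noisy_range_sensor(true_distance):
    """Simulated range sensor: true distance to an obstacle plus noise."""
    return true_distance + random.gauss(0, 2.0)

def reactive_policy(reading):
    """Insect-style control: act on the raw reading alone, no memory."""
    return "brake" if reading < 10.0 else "cruise"

class WorldModel:
    """Minimal world model: a filtered estimate updated by each reading."""
    def __init__(self):
        self.estimate = None

    def update(self, reading, alpha=0.3):
        if self.estimate is None:
            self.estimate = reading
        else:
            # Blend prior belief with the new measurement (crude filter).
            self.estimate = (1 - alpha) * self.estimate + alpha * reading
        return self.estimate

true_distance = 12.0
model = WorldModel()
readings = [noisy_range_sensor(true_distance) for _ in range(20)]

# The reactive policy flips between decisions as the noise jitters;
# the world model settles near the true distance.
reactive_decisions = {reactive_policy(r) for r in readings}
for r in readings:
    model.update(r)

print("reactive decisions seen:", sorted(reactive_decisions))
print("model estimate: %.1f (true %.1f)" % (model.estimate, true_distance))
```

The filter here is just an exponential moving average; a real vehicle would fuse multiple sensors with something like a Kalman filter, but the architectural point is the same: decisions come from the model, not the raw reading.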
The classic bit of hubris in AI, by the way, is to have a good idea and then think it's generally applicable. AI has been through this too many times - the General Problem Solver, inference by theorem proving, neural nets, expert systems, neural nets again, and behavior-based AI. Each of those ideas has a ceiling which has been reached.
It's possible to get too deep into some of these ideas. The people there are brilliant, but narrow, and the culture supports this. MIT has "Nerd Pride" buttons. As someone recruiting me for the Media Lab once said, "There are fewer distractions out here." (It was sleeting.) It sounds like that's what happened to these two young people.
Re: (Score:2)
I don't know whether they're actively still trying to get "true AI" or just milking what they've got; but, assuming the former, some things in science take a really long [aps.org] time [nobelprize.org]. It seems pretty obvious that any intelligence requires a vast amount of knowledge to be useful, and that takes a lot of time.
Re: (Score:3, Informative)
Re: (Score:2)
Bose-Einstein condensates and the solar neutrino problem were details of much broader overarching theories which had long since proven their utility. A scientific theory has to be falsifiable; that is, it has to predict something that can fail. On the other hand, old approaches can become practical when a critical mass of computational power appears. Frankly, I suspect the answers will come out of search and spam filters, in which better data and algorithms result in tangible benefits (i.e. $$$).
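The spam-filter point can be made concrete with a minimal naive Bayes classifier, the standard statistical trick behind early spam filtering. The toy corpus and the Laplace smoothing choice below are mine, invented for illustration; the "tangible benefit" is that every additional labeled message sharpens the word statistics.

```python
# Minimal naive Bayes spam filter: better data directly improves the
# word statistics, hence the classifier. Toy corpus invented here.

from collections import Counter
import math

spam = ["buy cheap pills now", "cheap pills cheap", "win money now"]
ham  = ["meeting notes attached", "lunch at noon", "project notes for the meeting"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace smoothing so unseen words don't zero out the product.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    # Priors omitted: the two toy corpora are about the same size.
    s = log_likelihood(msg, spam_counts, sum(spam_counts.values()))
    h = log_likelihood(msg, ham_counts, sum(ham_counts.values()))
    return "spam" if s > h else "ham"

print(classify("cheap pills"))    # words seen mostly in spam
print(classify("meeting notes"))  # words seen mostly in ham
```

With three messages per class this is a toy, but the same scheme scales: more labeled mail means tighter per-word estimates, which is exactly the "better data wins" dynamic.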
The ceiling has not been reached in Neural Nets (Score:1)
Saying that the ceiling in neural nets has been reached just because no huge breakthroughs have occurred lately ignores the fact that our own thinking processes can very probably be modelled by a sufficiently complex neural net. We only know how to teach current neural nets a few tricks. But saying that we have reached the end of the road is an oversimplification. After all, understanding of thought processes in the brain is still in its infancy. We don't know how our own "thinking machines" work (which are themselves vast neural networks).
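One concrete way to see both a "trick" and a ceiling is the classic single-layer perceptron, sketched from scratch below (a toy, not a brain model): the perceptron learning rule reliably learns any linearly separable function such as AND, but no setting of its weights can ever represent XOR. That was the hard limit of the first neural-net wave, and it was a limit of the architecture, not of neural nets as such, which is the commenter's point.

```python
# Single-layer perceptron from scratch: learns AND (linearly separable)
# but can never learn XOR, the classic first-wave ceiling.

def step(z):
    return 1 if z > 0 else 0

def train(data, epochs=10, lr=1):
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = target - y
            # Perceptron rule: nudge weights toward each mistake.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predictions(w, b, data):
    return [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train(AND)
and_preds = predictions(w, b, AND)

w, b = train(XOR)
xor_errors = sum(p != t for p, (_, t) in zip(predictions(w, b, XOR), XOR))

print("AND learned:", and_preds)                  # [0, 0, 0, 1]
print("XOR errors after training:", xor_errors)   # > 0: not linearly separable
```

Adding a hidden layer removes this particular ceiling, which supports the reply's argument: the limits reached so far belong to specific architectures and training tricks, not to the idea itself.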
Re: (Score:1)
Mycin performed epidemiology at a level beaten only by a panel of expert epidemiologists. The problem with expert systems isn't their performance: it's the brutally hard task of feeding them information to use in inference. The "knowledge acquisition bottleneck" was one of the driving motivations behind (some researchers') movement into machine learning.
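The bottleneck is easy to see in miniature. Below is a toy forward-chaining rule engine in the Mycin mold; the rules and facts are invented for illustration, not from Mycin itself. The inference loop is trivial. The expensive part is that every rule had to be elicited from a human expert by hand, and real systems needed hundreds of them.

```python
# Toy forward-chaining inference engine. The point is not the inference
# (trivial) but that every rule below must be hand-elicited from an
# expert -- the "knowledge acquisition bottleneck". Rules are invented.

RULES = [
    ({"gram_negative", "rod_shaped"}, "likely_e_coli"),
    ({"likely_e_coli", "urinary_site"}, "suggest_uti"),
    ({"gram_positive", "clusters"}, "likely_staph"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"gram_negative", "rod_shaped", "urinary_site"}, RULES)
print(sorted(derived))
```

Machine learning's appeal was precisely that the equivalent of RULES gets induced from data instead of typed in one interview at a time.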