
These sorts of articles that pop up from time to time on Slashdot are so frustrating to those of us who actually work in the field. We take an article written by someone who doesn't actually understand the field, about a contest that has always been no better than a publicity stunt*, and it triggers a whole bunch of speculation by people who read Godel, Escher, Bach and think they understand what's going on.
The answer is simple. AI researchers haven't forgotten the end goal, and it's not some cynical ploy to advance an academic career. We stopped asking the big-AI question because we realized it was an inappropriate time to ask it. By analogy: these days physicists spend a lot of time thinking about the big central unify-everything theory, and that's great. In 1700, that would have been the wrong question to ask: there were too many phenomena we didn't understand yet (energy, EM, etc.). We realized 20 years ago that we were chasing ephemera and not making real progress, and redeployed our resources toward understanding what the problem really was. It's too bad this doesn't fit our SciFi timetable; all we can do is apologize. And PLEASE do not mention any of that "singularity" BS.
I know, I know, -1 flamebait. Go ahead.
*Note I didn't say it was a publicity stunt, just that it was no better than one. Stuart Shieber at Harvard wrote an excellent dismantling of the idea 20 years ago.
[snip]
I suppose that your own articles written on 2nd order neural nets were part of your 'junky period' then, right?
Yep. And I paid for that too. While I was doing my PhD, someone at MIT was doing very similar work, but instead of using 2nd order NNs because they were cool, he had formulated his work with a solid mathematical base.
Guess whose dissertation was better received.
In my defense, I started in a time when the whole world was gaga over NNs and I was swept up in the hype. That's why I (like the Ancient Mariner) roam the earth issuing warnings to others.
Well, lots of reasons, the simplest being that it's a hard problem. But that's a cop-out.
One issue we've had is that because intelligence is an observed phenomenon, not a defined one, it's easy to think you're much closer to a solution than you are. The usual process is to observe intelligent behavior, infer a formal problem from it, and then try to solve that problem. The problem eventually gets solved, and we discover we didn't ask the right question. Each failure has moved us closer in many important ways, just not directly at the target. It's like a predator unsure of exactly where the prey is, circling and closing in rather than heading straight for the target.
That's the root cause in my opinion. The details would fill a book...
As has already been mentioned, Artificial Intelligence: A Modern Approach by Russell and Norvig (or AIMA) is essentially the only choice for serious study of AI. Your relative algorithmic naivete will make it a bit of a struggle, but there is a long history of smart physicists moving into AI.
Unfortunately, there is also a long history of smart outsiders getting trapped in "junk AI". These are the branches of AI that exist more because the metaphor is compelling than because of the results or prospects. They include: Neural Networks, Genetic Algorithms, Ant Colony Optimization, etc. I won't claim there is no good work in these areas, but there is too much fascination with the techniques themselves over the results, such that research constantly "solves" problems that would be done better with other techniques, yet is somehow "interesting" because a neural net does it. The mainstream of AI is mystified why anyone would be interested in a technique that works 80% as well as the state of the art just because some guy in the 50s attached the word "neural" to it.
If you want to simulate brains, you should study neuroscience. If you want to know what's going on in mainstream AI, you should bone up on probability, statistics, and linear algebra (if you're the right kind of physicist, you already have the math you need).
Before you mod me as flamebait, please note that I do know what I'm talking about. My PhD is in AI and I'm a professor in a CS department at an undergraduate engineering school, where I teach AI and Robotics. I was once the maintainer of the comp.ai FAQ, and I have published several papers on neural networks and genetic algorithms.