Whatever Happened To AI?
stinkymountain writes to tell us NetworkWorld's James Gaskin has an interesting take on Artificial Intelligence research and how the term AI is diverging from the actual implementation. "If you define artificial intelligence as self-aware, self-learning, mobile systems, then artificial intelligence has been a huge disappointment. On the other hand, every time you search the Web, get a movie recommendation from NetFlix, or speak to a telephone voice recognition system, tools developed chasing the great promise of intelligent machines do the work."
Re:a disappointment? (Score:5, Interesting)
The correct term is "independent agents". (Score:3, Interesting)
AI in Academia (Score:5, Interesting)
I got my B.Sc. in Computer Science with a concentration in Intelligent Systems. The state of academic AI seems to me like a field searching for purpose and direction. The problem with AI is that stuff which was once considered part of AI is now considered an algorithm. This is especially true for graph search algorithms such as A* and heuristics. Classification algorithms, from primitive algorithms such as K-Mean to more complex Bayesian models, seem to be going down the same path of "just an algorithm."
Nowadays, it seems like planning is the big thing in AI, but once again, it's just a glorified search in a graph, be it a state or plan graph.
AI is an intuitively 'simple' concept, but there's no clear way to 'get there.'
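Since the post above leans on A* as the canonical example, here's how small it really is; a minimal Python sketch (the toy graph, weights, and heuristic values are all made up for illustration):

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search: graph maps node -> [(neighbor, cost)]; h is a heuristic."""
    # Priority queue of (f = g + h(node), g, node, path-so-far).
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None

# Toy graph; h is a hand-made admissible lower bound on remaining cost.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, "A", "D", lambda n: h[n]))  # (4, ['A', 'B', 'C', 'D'])
```

With h returning 0 everywhere this degrades gracefully into Dijkstra's algorithm, which is part of why it reads as "just an algorithm" today.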
"AI Application Programming" (Score:2, Interesting)
...is a fine book by M. Tim Jones if you want a nice overview of programming some "AI" techniques. I wrote up a review of it on Freshmeat [freshmeat.net]. There's a second edition out now... and here's a translation of some of the example code from C to Ruby [rubyforge.org].
AI is doomed (Score:1, Interesting)
For the public, the term AI (like the notion of 'intelligence') appeals to ineffable mystery and magic. Once we understand how something works, it is no longer AI, but an 'algorithm'. So the bar is continuously raised for a task to be deemed as AI. People often lose sight of how far we really have come from the early days of AI.
AI is a moving target (Score:5, Interesting)
When any particular subset of what we do with our brains (chess, machine vision, speech recognition, what have you) yields to research and produces commercial applications, the critics of A.I. redraw the line and that domain is no longer part of "A.I." As this continues, the problem space still considered part of "artificial intelligence" will get smaller and smaller and nay-sayers will continue to be able to say "we still don't have A.I."
The hype has gone.... (Score:5, Interesting)
AI has always been surrounded by a lot of hype, as the idea of creating non-human life has always been an exciting one.
But we're probably as far from creating a true AI as we are from creating biological life from scratch (by synthesizing DNA sequences to build an organism from the molecular level).
AI research is providing useful gains in computer science, and some of those gains trickle down into the real world.
But contrary to what you may have been sold, we're not 10-15 years away from creating Skynet. We've got a long, long way to go, and scientists that aren't trying to get publicity have always known this.
AI hasn't "gone away"... it's just that the false marketing for it has.
Erik
Strong AI never got off the ground (Score:3, Interesting)
The promises of Minsky et al. never materialized simply because the early researchers into strong A.I. (which was then simply called "A.I.") didn't know what they were doing and had not even the beginning of a handle on what problems they were trying to solve.
In 1972, Hubert Dreyfus [amazon.com] debunked the field's efforts as misguided from the start, and in the decades since he has been shown to be absolutely right...
Re:I'll tell you what happened to AI (Score:3, Interesting)
Why would you want to think like a human? (Score:2, Interesting)
Artificial Intelligence is a misnomer. Only a segment of the field of AI is concerned with making computers become self aware.
The majority of the field runs away from such things. Sure, even in those other fields rough human models were originally the basis (neural nets for example). But the drive is not to become more human but to simply become better.
Frankly, once you start even considering trying to make things exactly like humans, things become messy unbelievably quickly. We're computer scientists, not philosophers.
Anyway, in truth, our level of technology is still quite a ways off from being able to do much in terms of making computers think like humans, so it's largely a moot point.
Right now the issue is less of robots having a philosophical view of "Should a robot shoot a human enemy" than of "Can a robot determine if a human is there or not? Can it detect if the human is a child? Can it detect if the human is friend or foe?"
AI is kind of like alchemy (Score:5, Interesting)
no, that's not an insult or to call AI a pseudoscience
what i mean is: the ancient alchemists' goal was to turn lead into gold. which they thought possible, because they did not perceive magic in gold, it was just stuff. surely, with the right manipulations, some stuff could be turned into other stuff, right?
and from that basic fantasy thought came the groundwork for centuries of hard work, the discovery of the fields of chemistry, physics, all the subfields...
such that one day in the middle of the last century, some dudes with some extra time at a cyclotron said "hey, why don't we bombard some lead atoms, i have a feeling about what the decay product will be (snigger)"
and there, as a completely forgotten afterthought, was a fulfillment of the ancient alchemists' original goals, many generations before
to me, i think this is the fate of AI: it will be a formative motivation. just as the ancient alchemists looked at gold and saw just stuff, we look at the brain and just see neurons. and all of the effort to replicate the human brain will spawn incredibly sophisticated fields of information science we can only begin to grasp at the foundations of right now. look at databases, for example: that's an effort at mimicking the brain. and look at all of the unintended and beneficial consequences of database research, as a superficial example of what i am saying about unintended benefits being better than the original goal
so perhaps, many centuries from now, some researchers will say "hey, remember the turing test?" and they will giggle, and make something that is exactly what we now envisage as the ultimate fruit of AI research, a thinking computer brain
but in that time period, such a thing will be but an afterthought, and much as the rewards of physics and chemistry so dwarf the fruits of turning lead into gold, so whatever these as-yet unimagined fields of inquiry reward mankind with will turn the search for a thinking computer into an equally forgettable sideshow
the search for AI will lead to much more rewarding and expansive fields of knowledge than we can imagine now. just like the guys arguing about "phlogiston" could never imagine things like organic chemistry and radiochemistry. just imagine: fields of inquiry more rewarding than thinking computers. that's a future i want to glimpse, and looking for AI will lead us there
Robots are better than ever (Score:5, Interesting)
The robots are coming.
The big breakthrough was the DARPA Grand Challenge. Up until the 2005 DARPA Grand Challenge, mobile robots had been something of a joke. They'd been a joke since Elektro was shown at the 1939 World's Fair. But on the second day of the 2005 Grand Challenge event at the California Motor Speedway, suddenly they stopped being a joke. Forty-three autonomous vehicles were running around and they all worked. The ones that didn't had been eliminated in previous rounds.
Up until the Grand Challenge, robotics R&D had been done by small research groups under no pressure to produce working systems. Most systems were one-offs that were never deployed. DARPA figured out how to get results. There was a carrot (the $2 million prize), and a stick (universities that didn't get results risked having their DARPA funding for robotics cut off.)
The other big result from the DARPA Grand Challenge was that robotics projects became much larger. Nobody had 50-100 people on a robotics R&D project until then (well, maybe Honda). Robotics projects used to be a professor and 2 or 3 grad students. Suddenly stuff was getting done faster.
DoD started pushing harder. Robots like Big Dog got enough money to be forced through to working systems. Little tracked machines were going to battlefields in quantity, and enough engineering effort was put into mechanical reliability to make the things really work.
CPU power helped. Texture-based vision now works. Vision-based SLAM went from a 2D algorithm that sometimes worked indoors to a solid technology that worked outdoors. Much of early vision processing is now done in GPUs, which are just right for doing dumb local operations like convolution in bulk. GPS and inertial hardware got better and cheaper. Some of the mundane parts, like servomotor controllers, improved considerably. Compact hydraulic systems improved substantially.
It's finally happening.
As for the hard stuff, situational awareness and common sense, watch the NPCs in games get smarter.
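The "dumb local operations like convolution" mentioned above are easy to show on a CPU; a naive pure-Python sketch of the sliding-window operation GPUs do in bulk (the tiny image and kernel are invented for illustration; note it skips the kernel flip, i.e. it's cross-correlation, which is what most vision code computes anyway):

```python
def conv2d(img, kernel):
    """Naive 'valid' convolution: slide the kernel, sum elementwise products.
    (No kernel flip, so strictly it's cross-correlation, as in most vision code.)"""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            row.append(sum(img[y + j][x + i] * kernel[j][i]
                           for j in range(kh) for i in range(kw)))
        out.append(row)
    return out

# A tiny image with a dark/bright boundary and a vertical-edge kernel.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
print(conv2d(img, [[-1, 1]]))  # [[0, 9, 0], [0, 9, 0], [0, 9, 0]]
```

Every output cell is independent of the others, which is exactly why this maps so well onto GPU hardware.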
Holy Grail (Score:5, Interesting)
AI is a Holy Grail. In other words, something we'll probably never get, but we'll create a whole bunch of useful stuff while trying to attain it. "AI" is just a stated goal that gets a bunch of smart people together to develop tools towards that goal. AI research has already given us Lisp and Virtual Machines and Timesharing/Multitasking and the Internet and a bunch of useful data structures and algorithms.
At some point after all that, a computer was developed that can play Grandmaster-level chess, but this was not a necessary development to justify all the research grants.
Re:AI is a moving target (Score:4, Interesting)
What really makes us different from animals?
If you are looking for a good place to draw a line, I would think that your question is a good place to start.
I'd draw the line at the point when an animal asks itself, 'What really makes us different from other animals?'
Re:NetFlix/Amazon suggestions...? (Score:5, Interesting)
Re:Necessary advances in understanding... (Score:3, Interesting)
Conceitedly, humans thought that they would have solved most of biology by now. In reality, DNA was first discovered 60 years ago, but the human genome has been mapped only in the last 10 years. Deciphering the code will take at least several decades.
We, however, still don't know all there is to know about the brain. What they have found out is that it works opposite to how computers are constructed. The brain is massively parallel and, unlike computers, does not have a rigid, formal structure. Basing artificial intelligence on our brain requires a shift in how computers and their systems are designed.
Re:AI in Academia (Score:4, Interesting)
Yes, some (small) parts of AI research have gone down the "just an algorithm" path in pursuit of a best solution for very specific problems, but you should not be so quick to write off even those advances which only seem to improve on relatively "simple" tasks. If you can represent a complex problem in a simple fashion, then even incremental improvements can produce large quality/efficiency improvements.
If you're looking for AI disciplines producing work with layman-notable results that are not as clearly search- or planning-based, natural language processing (NLP) and computer vision have both been quite hot over the past five years. Chris Bishop's latest book [amazon.com] is a great read for a quick jump-in to the technical underpinnings of a number of the big-press projects today, and for "pretty picture" motivation you may want to look at something like this [cmu.edu].
Nitpicks: it's k-means, and A* is a heuristic search algorithm. Yes, IAAAIR (I Am An AI Researcher).
Re:Few are working on the grand integration (Score:1, Interesting)
What strikes me is that no researchers are really putting together a multiplicity of AI techniques to produce a generally intelligent "human analogue" or "smart and lippy assistant".
Instead, the researchers are going to the nth degree of detail on a very specialized aspect, like some variant of bayesian inference that is optimal under these very particular circumstances, etc.
I don't know of any AI researcher other than Marvin Minsky who is even interested in or advocating a grand synthesis of current techniques to produce a first cut of general intelligence.
That being said, probably there are two (related) exceptions:
1. I think some fascinating AI stuff must be going on at Google. They have the motherlode of associative data to work with. They are sifting all of human knowledge, news, interest, and opinion that anyone bothers to put on the net.
They must be trying to figure out how to make algorithms take advantage of the general patterns in this data to start giving people info-concierge type of functionality. Pro-active information gathering, organization, and prioritization in support of the users' activities, which have been inferred by google-spying on their pattern of computer use and other people's average patterns.
2. I think there is some pretty squirrelly stuff happening on behalf of the Department of Homeland Security, though. Stuff that probably combs all signals intelligence, including the whole Internet, and tries to impute motives and then detect very weak correlations that might be consistent with those motives.
Re:a good quote (Score:2, Interesting)
The aptly named Sage [urbandictionary.com] Publications has this to say:
http://sss.sagepub.com/cgi/content/abstract/31/1/123 [sagepub.com]
What is the Problem with Experts?
Re:Necessary advances in understanding... (Score:4, Interesting)
That's basically the "scruffy" approach to AI, as opposed the "neat" approach, which was to define all the supposed rules that people supposedly follow. There was always a competition between the "scruffies", who thought that neural nets, genetic algorithms, and bayesian nets would enable us to "grow brains in a box" that would eventually be complex enough to think like we do, and the "neats" who could never define all the rules, because they relied on question-answer sessions with the "thinkers" who often thought they were following rules, but often turned out to be using instincts and assumptions that they never consciously thought about.
I was working in an AI research company back in the late 80's, and I remember the fun and "fun" we had back then. I tend to fall more in the "scruffy" category, but I'm coming from an implementation rather than research background, and I saw all the problems with the rule-based approaches. Even getting good probabilities from "experts" concerning their decisions and evaluations, to feed into Bayesian probability nets, was nearly impossible.
Nowadays I think we'll have better luck just following biology augmented with microelectronics. I want my cyberbrain, dammit!
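For those who haven't seen the machinery those expert probabilities were feeding: a node in a Bayesian net ultimately just applies Bayes' rule. A minimal sketch, with invented numbers standing in for the expert estimates:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Hypothetical expert estimates: the hypothesis has a 10% prior, and the
# evidence shows up 80% of the time when it's true, 20% when it's false.
p = bayes_update(0.10, 0.80, 0.20)
print(round(p, 3))  # 0.308
```

The math is trivial; the hard part, as the parent says, was ever getting an expert to commit to numbers like 0.80 and 0.20 in the first place.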
"dot.bust" of the 1980s (Score:3, Interesting)
It birthed a successful step-child, however: graphics workstations. The A.I. companies, like Xerox PARC, were among the first to integrate bitmap graphics with computers. There were the Xerox Alto, Symbolics, and Texas Instruments graphics workstations based on LISP, an A.I. language. New startups like Apollo, Sun Microsystems, and DEC with its MicroVAX gambled that graphics workstations were more easily commercialized on UNIX. Last, but not least, the Apple Macintosh: a direct "borrowing" of the Xerox Alto.
Re:Steve screwed it up (Score:5, Interesting)
If it ended with the robot seeing his other selves, realizing he wasn't a beautiful and unique snowflake, and kervorking into the ocean -- THE END -- it would have been a really good movie. Dark, but with a Western message that it is our individualism and uniqueness that make life worth living.
I think Kubrick must have written everything except the ending. He didn't know how to add an inspiring, uplifting message to a movie that can't have one.
Fascinating book on AI and Beyond (Score:2, Interesting)
Re:AI is kind of like alchemy (Score:3, Interesting)
the ancient alchemists' goal was to turn lead into gold. which they thought possible, because they did not perceive magic in gold, it was just stuff. surely, with the right manipulations, some stuff could be turned into other stuff, right?
and from that basic fantasy thought came the groundwork for centuries of hard work, the discovery of the fields of chemistry, physics, all the subfields...
Interesting comparison. And it's very refreshing to see the tradition of the alchemists portrayed as ennobled by their not regarding gold as magical.
What I find interesting, though, is what almost everyone in this forum assumes: that what gives an adult human being his amazing mind is, to use your analogy, just stuff. That is, everyone seems to assume that the existence of a human brain---or some physical equivalent---is sufficient for the existence of a human mind.
Of course, this is a natural assumption for anyone who subscribes to philosophical materialism, according to which matter (stuff) is all that really exists anyway. (Though the modern materialist would no doubt admit also the existence of other forms of energy besides matter.) So perhaps it is just the dominance of materialism that is evident here.
such that one day in the middle of the last century, some dudes with some extra time at a cyclotron said "hey, why don't we bombard some lead atoms, i have a feeling about what the decay product will be (snigger)"
and there, as a completely forgotten afterthought, was a fulfillment of the ancient alchemists' original goals, many generations before
This is very entertaining, and there would seem to be some truth in it.
However, your presentation is also misleading. If we could produce gold from a more common element by transmutation *efficiently*, then, and only then, would we have achieved the ancient alchemists' original goal. We have still not achieved that goal. It is far too expensive to produce gold in a nuclear reactor or collider.
And if we *did* find a way to do this efficiently, it would *not* be just an afterthought. It would have a major impact on the economy.
to me, i think this is the fate of AI: it will be a formative motivation. just as the ancient alchemists looked at gold and saw just stuff, we look at the brain and just see neurons. and all of the effort to replicate the human brain will spawn incredibly sophisticated fields of information science we can only begin to grasp at the foundations of right now.
Yes, there is no doubt that the effort spent on understanding the human brain and on designing machines that mimic certain aspects of the brain's behavior will have amazing and interesting consequences.
But there is, I think, at least some room to doubt that a human brain is equivalent to a human mind.
And there is even more room to doubt that algorithms in a digital computer could ever produce a mind like that of a human being. Roger Penrose, in particular, has made some interesting arguments for how human thought is non-algorithmic.
It is perhaps politically unwise to suggest, in a room populated mostly by materialists, that there could exist anything more fundamental than matter. Maybe I am committing karma suicide by posting this here (unless no one notices my post :^).
Re:They keep changing the definition (Score:5, Interesting)
That's not the same. When there is a success made in any of the fields that you mention it remains part of that field. A solved part of that field. Every success made in AI is no longer AI, so there are no successes or progress made "within the field". It's quite a substantial difference when it comes down to the perception of the field.
Chess was considered the ultimate AI problem back in the 40s and 50s. When we knew little about the game and how to solve it, it seemed that intelligence must be required to play it well. Now that machines are better at chess than humans, we've redefined it as a problem that is susceptible to brute force. It is not considered a success in the AI field, just another refinement of what is not AI.
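The "brute force" in question is ordinary exhaustive game-tree search. A chess tree won't fit in a comment, but the same negamax idea is a few lines for one-pile Nim (my substitute game, chosen only so the example runs):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(pile):
    """Exhaustive search of one-pile Nim: take 1-3 stones, taking the last wins.
    Returns +1 if the player to move can force a win, -1 otherwise."""
    if pile == 0:
        return -1  # the previous player took the last stone; we have lost
    return max(-negamax(pile - take) for take in (1, 2, 3) if take <= pile)

# The classic result falls out: losing positions are exactly multiples of 4.
print([n for n in range(1, 13) if negamax(n) == -1])  # [4, 8, 12]
```

Once you see that the machine is just minimizing and maximizing over a tree, it's easy to understand why people stopped calling the result "intelligence."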
My prof said it best (Score:2, Interesting)
Re:It's still too early (Score:3, Interesting)
Besides, creating a self-aware, self-learning system could (will) be feasible
I keep hearing this and reading it decade after decade, but I have yet to have anyone explain exactly why they believe it. Can you? What makes you so sure we will create a self-aware machine, especially since we don't understand how sentience actually works?
Re:Necessary advances in understanding... (Score:5, Interesting)
This is the primary point I came in here to say. Whenever I've read anything about AI, it seems to be based on cool science-fictiony ideas, or else it's actually a simpler method to use statistical analysis to approximate human decision-making for particular purposes. If you're talking about real self-aware thinking things, the approaches are all wrong.
People tend to treat the subject as though dumping enough raw information into a fast enough processor will yield intelligence, and then, as that intelligence grows and develops, things like "sensible responses to questions" or "appropriate emotional responses" will emerge. Or else they think grouping enough "appropriate responses" will eventually yield intelligence.
It seems to me that that's all backwards. If you want to design an artificial intelligence, you first need a good philosophical understanding of how intelligence works, which will tell you straight-off something that AI researchers don't seem to consider: intelligence is an animal trait.
I think the absolute first thing you need to do is to figure out how to give machines emotions, to approximate pleasure/desire and pain/aversion. The second thing you need to do is give it "senses", and the ability to draw a very basic sensory conception of its world based on those senses, which includes a sense of time and objects. Also, you'll have to give it the ability to interact with its world in such a way that it is able to pursue its desires, encounter obstacles, and experience "pain". Finally, you'll have to figure out a way to give it the ability to adapt, to "rewrite its programming", preferably in a way that allows it to reproduce and evolve.
So in a way, the most obvious answer is that if you want an artificial intelligence, you'll have to design an artificial/virtual animal and place it into an environment where it can evolve intelligence. There may be some shortcuts on growing/evolving it faster, but you shouldn't be quick to discount the animal nature of intelligence as we know it.
And the reasons for these things are bound up with the fact that, like I said, the only model for real intelligence we have to base anything on is animal intelligence. Animals develop and express their intelligence by being self-motivated in a world that presents obstacles. If there's nothing you want, there's no point in figuring anything out. If there's no way to get what you want, then there's no point in figuring things out. If there are no obstacles in your way, then there's nothing to figure out.
So if you don't have a self-motivated desire and the ability to move towards achieving that desire, then you can't make self-determined intelligent decisions. Of course, this also presents a scary twist to the whole AI thing, because it suggests one of the chief sci-fi fears of AI will turn out to be correct: if we're successful in creating AI, we may not be able to control it.
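The pleasure/pain design sketched above can be caricatured as a trivial reinforcement-style learner; everything below (the two actions, the reward odds) is invented purely for illustration:

```python
import random

random.seed(0)

# Two actions: one mostly yields "pleasure" (+1), the other mostly "pain" (-1).
REWARD_PROB = {"forage": 0.8, "touch_fire": 0.1}

def reward(action):
    return 1 if random.random() < REWARD_PROB[action] else -1

# Simple action-value learning: estimates adapt from experience alone.
values = {"forage": 0.0, "touch_fire": 0.0}
alpha, epsilon = 0.1, 0.2
for _ in range(2000):
    if random.random() < epsilon:                 # explore occasionally
        action = random.choice(sorted(values))
    else:                                         # otherwise chase "pleasure"
        action = max(values, key=values.get)
    values[action] += alpha * (reward(action) - values[action])

print(values["forage"] > values["touch_fire"])  # True: it learned to avoid pain
```

Whether scaling a loop like this up ever yields anything like self-aware intelligence is exactly the open question, of course.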
Re:They keep changing the definition (Score:4, Interesting)
Indeed, there was a time when binary search trees were called "artificial intelligence".
Remember that program to catalogue animals? It started with something like "Is it a dog?", then you say no, and since the database is seeded with only one animal, it would respond with "I don't know the animal, what is it?" ("a bird"). Then it would ask what question would make the difference between the two clear ("Does it fly?"), and next time you run the program, it starts with "Does it fly?". If you say yes, it would ask "Is it a bird?" and so on, and so on.
It's a fun little project while learning how to program, but it's not really counted in the AI-domain anymore.
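For anyone who never ran into it: the whole program is a binary tree that grows one question per wrong guess. A minimal non-interactive Python sketch (the hard-coded answer function stands in for prompting the user):

```python
# Each node is either a question with yes/no subtrees, or a leaf guess.
tree = {"guess": "dog"}

def answer_for(animal, question):
    """Stand-in for asking the user; here one fact is hard-coded."""
    return question == "Does it fly?" and animal == "bird"

def learn(node, animal, question):
    """Turn a wrong leaf guess into a question distinguishing the two animals."""
    old = node.pop("guess")
    node["question"] = question
    node["yes"] = {"guess": animal}
    node["no"] = {"guess": old}

def play(node, animal):
    """Walk the tree by answers until a leaf, then guess."""
    while "question" in node:
        node = node["yes"] if answer_for(animal, node["question"]) else node["no"]
    return node["guess"]

# First run: the seeded tree guesses "dog", is wrong, and grows a node.
assert play(tree, "bird") == "dog"
learn(tree, "bird", "Does it fly?")
# Second run: it now asks "Does it fly?" first and gets both right.
print(play(tree, "bird"), play(tree, "dog"))  # bird dog
```

A real version would replace the particular leaf it reached rather than always growing at the root, but the data structure is the same: a plain binary decision tree.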
Re:They keep changing the definition (Score:4, Interesting)
Actually, I detest them, but I do think there is a lot of untapped research potential there, because of the sheer number of people who are willing to sit there for hours on end, waiting for NPCs to respawn. With a good learning algorithm and enough entropy (causing 'genetic mutations'), those NPCs will eventually find a few optimal ways to react to their environments, prolonging their own lives. They just need people to coach them through it.
With that many users, you'd get enough variation between the newb that's killing them for experience to the maxed-out-character that blows up everything in his way.
It would be really cool if Blizzard let some serious AI programmers go nuts, so that the NPCs try to maximize their own lifespans, rather than just dying and respawning.
Maybe for enough money, they'd let you set up a few thousand bot-controlled characters?
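The "good learning algorithm and enough entropy" being proposed is essentially a genetic algorithm; a toy selection-and-mutation loop, with a made-up NPC "genome" and a fitness function standing in for lifespan:

```python
import random

random.seed(1)

TARGET = [0.8, 0.2, 0.5]  # hypothetical "ideal" flee/fight/hide tendencies

def fitness(genome):
    """Lifespan proxy: closer to the (unknown to the NPC) ideal = lives longer."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """The 'entropy': jitter each gene, clamped to [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0, rate))) for g in genome]

# Evolve a population of NPC behavior vectors by selection plus mutation.
pop = [[random.random() for _ in range(3)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                      # the ones that "lived longest"
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
print(round(fitness(best), 4))  # close to 0: behavior converged near the ideal
```

In a live game the fitness function would be actual survival time against players, which is exactly the free evaluation signal all those grinding users would provide.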
Re:Disappointment? (Score:3, Interesting)
And despite what you say, renaming the goal of AI to something less ambitious, to something other than a machine that thinks, to make it smell like victory for the human species doesn't make it any less of an utter failure. I know that you know this, but emotions are more important than truth for most humans as many in this thread are demonstrating. AI is one of the more obvious failures of the human species, but emotionally we don't like failure. Solution: just redefine the problem to something we can already do well.
Artificial intelligence is all about hubris. I have a 9 year old nephew who is one of the dumbest human beings I have ever encountered, but he seems to think he is intelligent. He lacks the intelligence to see that he lacks it. That is like the human species. We lack the insight to see that there are some things we may not be intelligent enough to achieve. So we try to scale down the problem domain, simplify it so that maybe we can achieve it.
We are excellent at creating Artificial Stupidity, because stupidity is what we are good at, what we know. I have been observing the field of AI for about 25 years. We have failed. Period. There is no way around it. Oh sure we have done the easy stuff. Our voice synthesis has reached a point that the voices can almost pass for human. We have pretty good voice recognition. Handwriting recognition. IOW, the lowest of the low hanging fruit. We can nearly achieve Hal's voice. But that was never really the problem. Our Hal would have nothing at all to say.
Chimpanzees can use simple tools: a stick to catch termites. But how would they even begin to make a flying machine or a submarine? We were once like them. Maybe in a few million years we will have evolved to a point where we can figure things out that today we couldn't even conceive of due to our utter stupidity. For now we are like chimpanzees tracing a bird in the sand, not understanding why it can't fly.
The examples you give from conventional programming are only examples of "intelligence" if you so redefine the word as to be meaningless. Which seems to be the entire point of your post. It is true that some of our "advances" (and I use the term loosely) in conventional programming originally started out as problems that people in the AI field were interested in. All that demonstrates is that AI researchers did not sit around doing nothing. They worked on some problems that seemed solvable. They were hoping that by solving some of those easy problems that it would bring them closer to the goal of an intelligent machine, but that didn't happen. Which is the point.
Re:They keep changing the definition (Score:3, Interesting)
Re:They keep changing the definition (Score:4, Interesting)
Q20. I am guessing that it is a lamp shade? Right, Wrong, Close
19. Does it weigh more than a duck? No.
18. Is it found on a desk? Sometimes.
17. Is it larger than a microwave oven (or bread box)? Sometimes.
16. Do you use it at night? Sometimes.
15. Is some part of it made of glass? No.
14. Is it worn? No.
13. Is it decorative? Yes.
12. Is it pleasurable? No.
11. Does it move air? No.
10. Is it black? Sometimes.
9. Is it square shaped? No.
8. Can it be easily moved? Yes.
7. Does it beep? No.
6. Can you talk on it? No.
5. Does it usually have four corners? No.
4. Is it larger than a pound of butter? Yes.
3. Does it get wet? No.
2. Do you hold it when you use it? No.
1. It is classified as Other.
Re:Disappointment? (Score:3, Interesting)