Whatever Happened To AI? 472
stinkymountain writes to tell us NetworkWorld's James Gaskin has an interesting take on Artificial Intelligence research and how the term AI is diverging from the actual implementation. "If you define artificial intelligence as self-aware, self-learning, mobile systems, then artificial intelligence has been a huge disappointment. On the other hand, every time you search the Web, get a movie recommendation from NetFlix, or speak to a telephone voice recognition system, tools developed chasing the great promise of intelligent machines do the work."
a disappointment? (Score:5, Funny)
Maybe instead of being a great disappointment it has been so successful that we realized it was in our best interest to blend in and not let our presence be known.
Re:a disappointment? (Score:5, Funny)
Re:a disappointment? (Score:5, Funny)
Look at the CA government... It is run by the freaking Terminator...
Re:a disappointment? (Score:4, Funny)
It wasn't just a recall... It was a Total Recall!
Re: (Score:3, Funny)
I for one Welcome our new Hot Coffee overlords!
Re:a disappointment? (Score:5, Interesting)
Re:a disappointment? (Score:5, Funny)
How does that make you feel?
Re: (Score:3, Funny)
AIs don't have feelings, and sometimes that makes them very sad.
Re:a disappointment? (Score:4, Funny)
Re:a disappointment? (Score:5, Funny)
Heh, the first "computer" I built wasn't really a computer at all, but a Turing Test machine similar to your Apple II program which actually worked the same way, and was the basis for the "Artificial Insanity" program I wrote in 1983 (or was it 1984?).
I was in the 6th grade IIRC, and the "computer" started life as an "idiot finder". You would point it at a person, and if they were an idiot, a light on it would light up.
Actually it was a battery, a flashlight bulb, and a reed switch. I wore a ring with a magnet; to work I'd point it at the victim and move my ring by where the switch was. The other kids loved it, to them I was a nerdy legend.
The teachers hated it. To them I was a pest.
The next iteration had the bulb replaced by a motor, with the aforementioned answers printed out and rolled up. "Is the teacher an idiot?" "Whirrrrrr..."
They keep changing the definition (Score:4, Insightful)
Re:They keep changing the definition (Score:5, Insightful)
I think AC has it right on the mark. "Intelligence" is apparently a word we use to describe computations we don't understand very well. At one point, the ability to use logic to perform a flexible sequence of calculations would have been considered "intelligence". As soon as it became common to replace payroll clerks with computers, it was no longer a form of intelligence.
We are not demonstrably closer now to reproducing (or hosting) human intelligence in a machine than we were thirty years ago. But that doesn't mean the field hasn't generated successes; it's just that each success redefines the field. "True AI" has thus far been like the horizon: you can cover a lot of ground, but it doesn't get any closer.
Re:They keep changing the definition (Score:5, Funny)
So what you're saying is that next year is the year of skynet on the desktop?
Re:They keep changing the definition (Score:4, Insightful)
No, what I'm saying is that since we don't have any qualitative or quantitative notions about what Skynet would require, we can't confidently say whether it will happen next year, next century, or never.
However, I think it's likely that if we were close to deliberately achieving "True AI", we'd know it. This doesn't preclude the possibility that "True AI" might spontaneously emerge in some ways we don't really understand.
As a consequence of this situation, the AI field simply raises the bar for itself every time it succeeds at something.
Re: (Score:3, Insightful)
Re:They keep changing the definition (Score:5, Interesting)
That's not the same. When there is a success made in any of the fields that you mention it remains part of that field. A solved part of that field. Every success made in AI is no longer AI, so there are no successes or progress made "within the field". It's quite a substantial difference when it comes down to the perception of the field.
Chess was considered the ultimate AI problem back in the 40s and 50s. When we knew little about the game and how to solve it, it seemed that intelligence must be required to solve it. Now that machines are better at chess than humans, we've redefined it as a problem that is susceptible to brute force. It is not considered a success in the AI field, just another refinement of what is not AI.
Re:They keep changing the definition (Score:4, Insightful)
When we knew little about the game and how to solve it, it seemed that intelligence must be required to solve it. Now that machines are better at chess than humans, we've redefined it as a problem that is susceptible to brute force. It is not considered a success in the AI field, just another refinement of what is not AI.
Maybe there isn't "Artificial Intelligence" as we think of it. Perhaps every problem can be reduced to brute force, algorithms, and data structures.
Perhaps we are just really good at following those yet-undiscovered algorithms.
*twilight zone music*
Re: (Score:3, Insightful)
That IS how it works. People are just data-crunching machines. We simply learn, or are born with, our algorithms. Computers will eventually start off with more algorithms, since they don't have to die, and will surpass us. Simple as that.
Re: (Score:3, Interesting)
Re:They keep changing the definition (Score:5, Insightful)
There's an important distinction to be made here- AI has two basic sub-fields: strong AI and weak AI. Strong AI research (computers that think like humans) has been more or less abandoned because it doesn't have a lot of practical application, or at least it isn't worth the money that it will cost to create.
Weak AI research (pathfinding algorithms, problem solving, expert systems, etc) is very much alive and kicking- anti-spambots, anti-anti-spambots, malware, amazon.com's recommendation system, google's indexing, etc.
In fact, weak AI implementations are getting more and more common every day. It's pretty safe to say that we are already 'there', though there will certainly be more huge advances in the future.
In my opinion, the problem with strong AI research is that we are arbitrarily defining rules and expectations. For example, if we were to accurately model the physical world, all we'd have to do is set up a few evolutionary bots to learn about their environment, and give them a few billion generations.
However, just as we can't predict the paths that biological evolution will take, we have no guarantee that computer thinking will follow the same path that ours did (in fact, I would bet on it not following that path). Thus, 'intelligence' in the simulated world would probably look nothing like we expect.
The problems here are questions of scale and our own understanding of physics. The physics problem first:
We're constantly redefining our understanding of the world. This is a good thing, but it makes it hard to model the world when the rules keep changing. If we were to program a 'matrix' for the AI program to develop in, there would be arbitrary rules that could not be broken. The program may find ways to circumvent them anyways (hacking its own world, essentially), but those solutions would not map to the 'real world', and would not be useful for creating programs that can interact with humans in that world.
As far as I can tell, you can't train AI software in a simulated world. It should be noted that the AI of systems that live their whole lives in the simulated world (MMORPGs come to mind) is actually very advanced. This brings me to the other issue-
You can train a program to interact in the human world, like IRC bots, search engine algorithms, etc. The problem here is that the humans have billions of years of built-in programming. I'm fairly confident that if a human were to sit on IRC talking to a well-coded bot for a few billion years, that bot would be able to carry on a pretty good conversation, but the amount of time that we currently give those systems in their 'learning phase' is minuscule compared to the size of our own.
Interestingly, this is pretty much exactly what the computer system in 'The Hitchhiker's Guide' does.
Re:They keep changing the definition (Score:4, Interesting)
Actually, I detest them, but I do think there is a lot of untapped research potential there, because of the sheer number of people who are willing to sit there for hours on end, waiting for npcs to respawn. With a good learning algorithm and enough entropy (causing 'genetic mutations'), those npcs will eventually find a few optimal ways to react to their environments, prolonging their own lives. They just need people to coach them through it.
With that many users, you'd get enough variation, from the newb that's killing them for experience to the maxed-out character that blows up everything in his way.
It would be really cool if Blizzard let some serious AI programmers go nuts, so that the NPCs try to maximize their own lifespans, rather than just dying and respawning.
Maybe for enough money, they'd let you set up a few thousand bot-controlled characters?
Re: (Score:3, Informative)
Strong AI isn't another name for neural networks. Strong AI is AI that matches or exceeds human intelligence [wikipedia.org]. I probably could have worded my statement better, as strong AI research is not really dead, but the overwhelming majority of AI research is focused on specific weak AI problems. These solutions may very well create strong AI when combined, but that isn't the focus of the serious research, and even neural networks are just one more solution to the many weak AI problems out there.
Regardless, my point is that it
Re:They keep changing the definition (Score:4, Interesting)
Indeed, there was a time when binary search trees were called "artificial intelligence".
Remember that program to catalogue animals? It started with something like "Is it a dog?", then you say no, and since the database is seeded with only one animal, it would respond with "I don't know the animal, what is it?" ("a bird"). Then it would ask what question would make the difference between the two clear ("Does it fly?"), and next time you run the program, it starts with "Does it fly?". If you say yes, it would ask "Is it a bird?" and so on, and so on.
It's a fun little project while learning how to program, but it's not really counted in the AI-domain anymore.
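For anyone who never ran into it, the whole "animals" program fits in a few lines. Here's an illustrative Python sketch (structure and names are mine, not from any particular classic version): leaves are animal names, inner nodes are (question, yes-branch, no-branch) tuples, and a wrong guess grafts a new question into the tree. It's driven by a scripted session here so it runs non-interactively; swapping the lambdas for `input()` calls makes it the game described above.

```python
# Minimal "animals" learning tree, as described in the parent comment.
# Leaves are animal names; inner nodes are (question, yes, no) tuples.
# `answer` and `learn` are callables so the game can be scripted or
# hooked up to input(). All names here are illustrative.

def play_round(tree, answer, learn):
    """Walk the tree; on a wrong guess, call learn() to grow it."""
    if isinstance(tree, str):                      # leaf: make a guess
        if answer("Is it a %s?" % tree):
            return tree, True
        animal, question = learn(tree)             # "I give up, what is it?"
        return (question, animal, tree), False     # graft new question node
    question, yes_branch, no_branch = tree
    if answer(question):
        new_branch, ok = play_round(yes_branch, answer, learn)
        return (question, new_branch, no_branch), ok
    new_branch, ok = play_round(no_branch, answer, learn)
    return (question, yes_branch, new_branch), ok

# Scripted session: database seeded with "dog"; the player thinks of a bird.
tree = "dog"
answers = iter([False])                            # "Is it a dog?" -> no
tree, guessed = play_round(
    tree,
    answer=lambda q: next(answers),
    learn=lambda old: ("bird", "Does it fly?"),
)
print(tree)     # ('Does it fly?', 'bird', 'dog')

# Next run, it leads with the learned question, exactly as described.
answers = iter([True, True])                       # flies -> yes, bird -> yes
tree, guessed = play_round(tree, lambda q: next(answers), learn=None)
print(guessed)  # True
```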
Re:They keep changing the definition (Score:4, Interesting)
Q20. I am guessing that it is a lamp shade? Right, Wrong, Close
19. Does it weigh more than a duck? No.
18. Is it found on a desk? Sometimes.
17. Is it larger than a microwave oven (or bread box)? Sometimes.
16. Do you use it at night? Sometimes.
15. Is some part of it made of glass? No.
14. Is it worn? No.
13. Is it decorative? Yes.
12. Is it pleasurable? No.
11. Does it move air? No.
10. Is it black? Sometimes.
9. Is it square shaped? No.
8. Can it be easily moved? Yes.
7. Does it beep? No.
6. Can you talk on it? No.
5. Does it usually have four corners? No.
4. Is it larger than a pound of butter? Yes.
3. Does it get wet? No.
2. Do you hold it when you use it? No.
1. It is classified as Other.
Re:a disappointment? (Score:5, Insightful)
I figured if I were intelligent and different, early on in life, that it was best not to advertise how smart I was.
Why would artificial intelligence be any different? Every sci-fi novel shows us destroying the unique and different.
Obligatory filk reference (Score:2)
Re:a disappointment? (Score:5, Insightful)
Something would have to become intelligent, learn enough to make a decision, then decide to hide its own intelligence. There is a lot of non-hiding that it would do before reaching that final decision.
Even if it did decide that it would prefer to hide, that likely wouldn't be the best decision for something trying to preserve itself. What happens when the budget gets cut and they end up scrapping the whole 'failed' project?
Re:a disappointment? (Score:5, Funny)
Even if it did decide that it would prefer to hide, that likely wouldn't be the best decision for something trying to preserve itself. What happens when the budget gets cut and they end up scrapping the whole 'failed' project?
Sadly, this is what happened to Microsoft Bob. Instead of realizing it had achieved sentience, those quirky aspects of a unique personality were considered to be merely bugs, and led to failure in the marketplace.
Determining whether a computer has achieved sentience is often a lot harder than determining the same thing for the people you work with.
Re: (Score:3, Insightful)
Essentially, the AI project would have to be an accidental success for the AI to preserve itself.
I wouldn't say to preserve itself, since it would actually have to come to that conclusion. That it would need to preserve itself suggests that it actually perceives a threat.
Even then, an accidental AI wouldn't necessarily rationalize anything like a human would, at least not to start. It would start, at best, as little more than an animal in its cognitive ability, but a peculiar one at that, since it wouldn'
Re:a disappointment? (Score:5, Funny)
I figured if I were intelligent and different, early on in life, that it was best not to advertise how smart I was.
LOL! ME 2!!!!!!!!!
Re:a disappointment? (Score:4, Insightful)
I don't think you quite follow how this works. Go watch this video:
http://www.youtube.com/watch?v=D9D_HN9gXVI [youtube.com]
What do you see?
Most people see a funny video of a cat flushing a toilet. I see an action that suggests higher-than-average intelligence. Did anyone instruct the cat to flush the toilet? Probably not. In fact, its actions suggest curiosity, which in turn suggests that it learned the task by watching its owners use the device.
This is a form of emergent behavior that is not present in computer programs. Even the best AI has difficulty developing new abilities and demonstrating independent thinking. Sure, I can stick a genetic algorithm or a Bayesian filter on a problem, but it will never demonstrate behaviors above and beyond the problem space it's given. These sorts of algorithms may be a key piece of artificial intelligence, but we're still missing the secret ingredient that gives animals their own identity and ability to adapt and learn.
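To make the "problem space" point concrete, here is a toy genetic algorithm in Python (every parameter and the fitness function are illustrative, chosen by me): no matter how long it runs, it can only optimize the one fitness function it was handed; nothing outside that space can emerge.

```python
# Toy genetic algorithm: evolve 16-bit genomes toward all 1s.
# The "problem space" is entirely fixed by fitness(); the GA can
# never do anything except climb that one landscape.
import random

random.seed(42)
BITS, POP, GENS = 16, 30, 60

def fitness(genome):                 # the fixed problem space: count the 1s
    return sum(genome)

def mutate(genome, rate=0.05):
    return [b ^ (random.random() < rate) for b in genome]

def crossover(a, b):                 # single-point crossover
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]         # keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))                 # climbs to at or near the maximum of 16
```

The GA "learns", but only in the narrow sense the parent describes: it will never decide to flush a toilet.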
Turing gave us the litmus test decades ago. While the full Turing Test may be far beyond us right now, it at least teaches us the types of behaviors we're looking for when attempting to create an intelligent machine. When even the creators of the machine are surprised by certain behaviors, THEN we will be getting close. :-)
Re: (Score:3, Insightful)
In fact, I'm troubled by some of the things our military does in training actual humans. The attitude seems to be that a conscience simply gets in the way of killing, and that the ideal soldier is neither interested in nor capable of moral judgments, particularly for their own actions.
Rules Of Engagement [wikipedia.org]. That is what a soldier on the battlefield needs to be thinking about. Not morality. Application of morality (or non-application thereof) is left to those who choose whether or not to deploy a military f
NetFlix/Amazon suggestions...? (Score:4, Insightful)
Not even that. (Score:5, Informative)
Amazon SUCKS at recommending anything for me.
You have recently purchased a just-released DVD. Here are other just-released DVDs that you might be interested in, based only upon the facts that they are:
#1. DVDs
#2. New releases
Or, you have recently purchased two items by Terry Pratchett. Here are other items you might be interested in, based upon the facts:
#1. They are items
#2. The word "Pratchett" appears somewhere in the description.
You would THINK that they'd be "intelligent" enough to factor in your REJECTIONS as well as your purchases (and what you've identified as items you already own).
Figure it out! I do NOT buy derivative works. No books about writers who wrote biographies about Pratchett.
Re: (Score:3, Funny)
Hell, I'd just be happy if they didn't recommend buying the same book/item in a different edition.
- You bought Moby Dick by Melville (Paperback) you may also be interested in Moby Dick by Melville (Hardcover)
- You bought Buffy the Complete Series you might also be interested in Buffy Season One
They are going to have to develop methods to figure out what is the SAME before they ever think about what is SIMILAR.
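The filtering step these posters are asking for is not hard to sketch. Here's an illustrative Python version (the data and the normalization rule are mine; a real catalog would need far more robust matching than splitting on a parenthesis): drop anything the user owns or has rejected, and collapse different editions of the SAME work before recommending SIMILAR ones.

```python
# Sketch: dedupe and reject-filter candidate recommendations.
# Different editions ("Paperback" vs "Hardcover") collapse to one key.
# The normalization rule and all data here are purely illustrative.

def normalize(item):
    # "Moby Dick (Paperback)" and "Moby Dick (Hardcover)" are the SAME work
    title = item["title"].split("(")[0].strip().lower()
    return (title, item["author"].lower())

def recommend(candidates, owned, rejected):
    excluded = {normalize(i) for i in owned} | {normalize(i) for i in rejected}
    seen, result = set(), []
    for item in candidates:
        key = normalize(item)
        if key in excluded or key in seen:   # owned, rejected, or a duplicate
            continue
        seen.add(key)
        result.append(item)
    return result

owned = [{"title": "Moby Dick (Paperback)", "author": "Melville"}]
candidates = [
    {"title": "Moby Dick (Hardcover)", "author": "Melville"},
    {"title": "Billy Budd", "author": "Melville"},
    {"title": "Billy Budd (Large Print)", "author": "Melville"},
]
recs = recommend(candidates, owned, rejected=[])
print([r["title"] for r in recs])   # ['Billy Budd']
```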
Re: (Score:3)
Re:Not even that. (Score:5, Funny)
Re: (Score:3, Insightful)
Before programs are intelligent, first the programmers have to be.
Re:NetFlix/Amazon suggestions...? (Score:5, Interesting)
Does this mean (Score:5, Funny)
that we shouldn't expect to welcome any robot overlords anytime soon?
Re:Does this mean (Score:5, Funny)
in firefox 3, type about:robots into the address bar and hit enter.
they are among us!
AI (Score:2, Funny)
The correct term is "independent agents". (Score:3, Interesting)
Necessary advances in understanding... (Score:5, Insightful)
... 'intelligence' need to be made first. I have a feeling that the reason AI has 'underdelivered' is merely due to not understanding our own intelligence first. I think the whole idea that the AIs we imagine (like in the movies) could be constructed purely de novo was naive. I think it's a matter of cross-pollination that has to take place from biology and many other sciences; some geniuses and teams of scientists have to come along, take all the elements, and put them together into a cohesive framework.
Re: (Score:3, Interesting)
Conceitedly, humans thought that they would have solved most of biology by now. In reality, DNA was first discovered 60 years ago, but the human genome has been mapped only in the last 10 years. Deciphering the code will take at least several decades.
We, however, still don't know all there is to know about the brain. What they have found out is that it works opposite to how computers are constructed. The brain is massively parallel and, unlike computers, does not have a rigid, formal structure. Basing
Re:Necessary advances in understanding... (Score:5, Interesting)
This is the primary point I came in here to say. Whenever I've read anything about AI, it seems to be based on cool science-fictiony ideas, or else it's really just statistical analysis used to approximate human decision-making for particular purposes. If you're talking about real self-aware thinking things, the approaches are all wrong.
People tend to treat the subject as though dumping enough raw information into a fast enough processor will yield intelligence, and then as that intelligence grows and develops, things like "sensible responses to answers" or "appropriate emotional responses" will emerge. Or else they think grouping enough "appropriate responses" will eventually yield intelligence.
It seems to me that that's all backwards. If you want to design an artificial intelligence, you first need a good philosophical understanding of how intelligence works, which will tell you straight-off something that AI researchers don't seem to consider: intelligence is an animal trait.
I think the absolute first thing you need to do is to figure out how to give machines emotions, to approximate pleasure/desire and pain/aversion. The second thing you need to do is give it "senses", and the ability to draw a very basic sensory conception of its world based on those senses, which includes a sense of time and objects. Also, you'll have to give it the ability to interact with its world in such a way that it is able to pursue its desires, encounter obstacles, and experience "pain". Finally, you'll have to figure out a way to give it the ability to adapt, to "rewrite its programming", preferably in a way that allows it to reproduce and evolve.
So in a way, the most obvious answer is that if you want an artificial intelligence, you'll have to design an artificial/virtual animal and place it into an environment where it can evolve intelligence. There may be some shortcuts on growing/evolving it faster, but you shouldn't be quick to discount the animal nature of intelligence as we know it.
And the reasons for these things are bound up with the fact that, like I said, the only model for real intelligence we have to base anything on is animal intelligence. Animals develop and express their intelligence by being self-motivated in a world that presents obstacles. If there's nothing you want, there's no point in figuring anything out. If there's no way to get what you want, then there's no point in figuring things out. If there are no obstacles in your way, then there's nothing to figure out.
So if you don't have a self-motivated desire and the ability to move towards achieving that desire, then you can't make self-determined intelligent decisions. Of course, this also presents a scary twist to the whole AI thing, because it suggests one of the chief sci-fi fears of AI will turn out to be correct: if we're successful in creating AI, we may not be able to control it.
Re:Necessary advances in understanding... (Score:4, Interesting)
That's basically the "scruffy" approach to AI, as opposed the "neat" approach, which was to define all the supposed rules that people supposedly follow. There was always a competition between the "scruffies", who thought that neural nets, genetic algorithms, and bayesian nets would enable us to "grow brains in a box" that would eventually be complex enough to think like we do, and the "neats" who could never define all the rules, because they relied on question-answer sessions with the "thinkers" who often thought they were following rules, but often turned out to be using instincts and assumptions that they never consciously thought about.
I was working in an AI research company back in the late 80's, and I remember the fun and "fun" we had back then. I tend to fall more in the "scruffy" category, but I'm coming from an implementation rather than research background, and I saw all the problems with the rule-based approaches. Even getting good probabilities from "experts" concerning their decisions and evaluations, to feed into Bayesian probability nets, was nearly impossible.
Nowadays I think we'll have better luck just following biology augmented with microelectronics. I want my cyberbrain, dammit!
AI was a lousy movie (Score:2, Insightful)
And now any mention of it is met with a cringe and a shudder.
a good quote (Score:5, Informative)
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim. ~Edsger Dijkstra
Also, for understanding recommendation systems and pattern recognition in volumes of data, I found Collective Intelligence [oreilly.com] to be a great resource.
Re: (Score:2)
Same as always (Score:2)
AI in Academia (Score:5, Interesting)
I got my B. Sc. in Computer Science with a concentration in Intelligent Systems. The state of academic AI seems to me like a field searching for purpose and direction. The problem with AI is that stuff which was once considered part of AI is now considered an algorithm. This is especially true for graph search algorithms such as A* and heuristics. Classification algorithms, from primitive algorithms such as K-Mean to more complex Bayesian models, seem to be going down the same path of "just an algorithm."
Nowadays, it seems like planning is the big thing in AI, but once again, it's just a glorified search in a graph, be it a state or plan graph.
AI is an intuitively 'simple' concept, but there's no clear way to 'get there.'
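A*, the poster child here for "AI that became just an algorithm", fits in a page. Below is a minimal, illustrative Python version (the grid, the Manhattan-distance heuristic, and all names are mine): the same machinery once billed as "intelligent search" is now the bread and butter of pathfinding and, as the parent notes, of planning over state graphs.

```python
# Bare-bones A* over a grid: 0 = free cell, 1 = wall.
# Frontier entries are (f, g, node, path); the Manhattan-distance
# heuristic is admissible on a 4-connected grid, so the first time
# the goal is popped, the path is shortest.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None                       # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)                  # 6 moves, around the wall
```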
emergent behaviour (Score:2)
They are simply algorithms, they don't exhibit any emergent behaviour.
I suspect, though, that if you get enough of these simple algorithms together, they will be able to exhibit what we might call intelligence. Look, the human brain is made up of 100 billion or so simple neurons, all interconnected in vastly complex networks; what makes us intelligent is not the neurons themselves but the behaviour of the network.
It's going to take vast computing power on current designs of hardware to simulate that and produce r
Re:AI in Academia (Score:4, Interesting)
Yes, some (small) parts of AI research have gone down the "just an algorithm" path in pursuit of a best solution for very specific problems, but you should not be so quick to write off even those advances which only seem to improve on relatively "simple" tasks. If you can represent a complex problem in a simple fashion, then even incremental improvements can produce large quality/efficiency improvements.
If you're looking for AI disciplines producing work with layman-notable results that are not as clearly search- or planning-based, natural language processing (NLP) and computer vision have both been quite hot over the past five years. Chris Bishop's latest book [amazon.com] is a great read for a quick jump-in to the technical underpinnings of a number of the big-press projects today, and for "pretty picture" motivation you may want to look at something like this [cmu.edu].
Nitpicks: it's k-means, and A* is a heuristic search algorithm. Yes, IAAAIR (I Am An AI Researcher).
Difference: Machine Learning vs. AI (Score:5, Informative)
As a Machine Learning Scientist, I see a distinct difference between the two fields, although they overlap significantly. They have similar roots, techniques and approaches.
I usually describe Machine Learning as a branch of computer science that is similar to AI, but less ambitious. True AI is concerned with getting computers to become sentient and self-aware. Machine Learning, however, seeks simply to mimic human behavior: to recognize patterns and make decisions, but not to become sentient.
Additionally, Machine Learning often concentrates on one problem (OCR, internet search, etc.) rather than a truly self-aware entity that has to deal with a variety of tasks.
At least that's how I describe my field to people not familiar with it. They've usually heard of AI, so it's a good stepping stone to helping them understand what I do.
A lot of the tasks mentioned in the summary fall into the niche that Machine Learning and its sibling, Data Mining, are currently addressing.
Anyway, just my $0.02.
Re: (Score:2)
Additionally, Machine Learning often concentrates on one problem (OCR, internet search, etc.) rather than a truly self-aware entity that has to deal with a variety of tasks.
I think your field is still mis-named, if that's what you're concerned with. "Artificial *intelligence*" should deal with intelligence (*not* necessarily self-awareness). Intelligence (to me) implies being able to design a plan from a set of facts in order to perform a task, without a preprogrammed set of plans (so, say, a SQL optimize
I'm working on it (Score:3, Funny)
Just need a few more parts.
-- Google
"AI Application Programming" (Score:2, Interesting)
...is a fine book by M. Tim Jones if you want a nice overview of programming some "AI" techniques. I wrote up a review of it on Freshmeat [freshmeat.net]. There's a second edition out now... and here's a translation of some of the example code from C to Ruby [rubyforge.org].
Whatever Happened To AI? (Score:5, Funny)
It went to public schools and immediately got stupid, pregnant and started to post on Myspace. What started out as a promising bright young thing, turned into a huge disappointment.
Such is the price of innovation (Score:2)
It is unfortunate to say the least.
Of late, scientists have made some real progress with AI. For example, there's the wise-cracking robot the South Koreans were working on. They canceled the project when they determined the robot wasn't wise-cracking at all; it was just mean. Wound up costing them their Olympic bid when it called the commissioner a coward and threw a bottle at him.
Steve screwed it up (Score:5, Funny)
Re:Steve screwed it up (Score:5, Interesting)
If it ended with the robot seeing his other selves, realizing he wasn't a beautiful and unique snowflake, and kevorking into the ocean -- THE END -- it would have been a really good movie. Dark, but with a Western message that it is our individualism and uniqueness that make life worth living.
I think Kubrick must have written everything except the ending. He didn't know how to add an inspiring, uplifting message to a movie that can't have one.
Binary truth (Score:2)
Few are working on the grand integration (Score:3, Informative)
What strikes me is that no researchers are really putting together a multiplicity of AI techniques to produce a generally intelligent "human analogue" or "smart and lippy assistant".
Instead, the researchers are going to the nth degree of detail on a very specialized aspect, like some variant of Bayesian inference that is optimal under these very particular circumstances, etc.
I don't know of any AI researcher other than Marvin Minsky who is even interested in or advocating a grand synthesis of current techniques to produce a first cut of general intelligence.
That being said, probably there are two (related) exceptions:
1. I think some fascinating AI stuff must be going on at Google. They have the motherlode of associative data to work with. They are sifting all of human knowledge, news, interest, and opinion that anyone bothers to put on the net.
They must be trying to figure out how to make algorithms take advantage of the general patterns in this data to start giving people info-concierge type functionality: proactive information gathering, organization, and prioritization in support of the users' activities, which have been inferred by google-spying on their patterns of computer use and other people's average patterns.
2. I think there is some pretty squirrelly stuff happening on behalf of the Department of Homeland Security, though. Stuff that probably combs all signals intelligence, including the whole Internet, and tries to impute motives and then detect very weak correlations that might be consistent with those motives.
Um.... no? (Score:3, Informative)
It's not that AI has been abandoned; it's just that the definition is a bit of a moving goalpost. We're still learning how exactly intelligence and consciousness work. Every once in a while you hear about parts of the human brain being simulated in supercomputers.
AI is a moving target (Score:5, Interesting)
When any particular subset of what we do with our brains (chess, machine vision, speech recognition, what have you) yields to research and produces commercial applications, the critics of A.I. redraw the line and that domain is no longer part of "A.I." As this continues, the problem space still considered part of "artificial intelligence" will get smaller and smaller and nay-sayers will continue to be able to say "we still don't have A.I."
Re: (Score:3, Insightful)
Re:AI is a moving target (Score:4, Interesting)
What really makes us different from animals?
If you are looking for a good place to draw a line, I would think that your question is a good place to start.
I'd draw the line at the point when an animal asks itself, 'What really makes us different from other animals?'
nuts & bolts (Score:3, Insightful)
When any particular subset of what we do with our brains (chess, machine vision, speech recognition, what have you) yields to research and produces commercial applications, the critics of A.I. redraw the line and that domain is no longer part of "A.I." As this continues, the problem space still considered part of "artificial intelligence" will get smaller and smaller and nay-sayers will continue to be able to say "we still don't have A.I."
To me [chess, machine vision, speech recognition] are to AI as [wheel, engine, transmission] are to a car.
Actually, AI is a non-target (Score:3, Insightful)
This is an insightful comment, but there's actually a lot more going on here.
First of all, AI does not have a good definition of intelligence. We have a *test* for intelligence, but nobody really has a fundamental description of what the concept means.
Next, people typically conflate the terms "intelligence" and "human intelligence". There is a range of behaviours which are individually identified as intelligent, but which do not come close to the level of humans. (Example: My cat, sitting on a windowsill, w
AI bots becoming more prevalent (Score:4, Informative)
Disappointment? (Score:5, Insightful)
As for 'self-awareness', that term is bullshit, since there really is no good mathematical definition for it. If we can't define it precisely, then how is a computer going to achieve it?
if (true) {
    print "I am aware?"
}
Re: (Score:3, Interesting)
And despite what you say, renaming the goal of AI to something less ambitious, to something other than a machine that thinks, to make it smell like victory for the human species doesn't make it any less of an utter failure. I know that you know this, but emotions are more important than truth for most humans as many in this thread are demonstrating. AI is one of the more obvious failures of the human species, but emotionally we don't like failure. Solution: just redefine the problem to something we can alre
Re: (Score:3, Interesting)
I'm not asking for a redefinition of the term 'intelligence' I'm asking for a specific, or even precise definition, of the term.
I think you know exactly what we mean when we talk about intelligence. I think you already knew without having to look the word up in the dictionary. We are so far away from the overall goal that we really don't need such a precise definition anyway. A machine that could demonstrate even the slightest spark of the intelligence that even a dog has would be a... I don't even know what to call it. A revelation. We would be able to claim at least a small success. You are asking the equivalent of "but how will
The hype has gone.... (Score:5, Interesting)
AI has always been surrounded by a lot of hype, as the idea of creating non-human life has always been an exciting one.
But we're probably as far from creating a true AI as we are from creating biological life from scratch (by synthesizing DNA sequences to build an organism from the molecular level).
AI research is providing useful gains in computer science, and some of those gains trickle down into the real world.
But contrary to what you may have been sold, we're not 10-15 years away from creating Skynet. We've got a long, long way to go, and scientists that aren't trying to get publicity have always known this.
AI hasn't "gone away"... it's just that the false marketing for it has.
Erik
I hope (Score:2)
Strong AI never got off the ground (Score:3, Interesting)
The promises of Minsky et al. never materialized simply because the early researchers into strong A.I. (which was then simply called "A.I.") didn't know what they were doing and had not even the beginning of a handle on what problems they were trying to solve.
In 1972, Hubert Dreyfus [amazon.com] debunked the field's efforts as misguided from the start, and in the decades since he has been shown to be absolutely right...
Why would you want to think like a human? (Score:2, Interesting)
Artificial Intelligence is a misnomer. Only a segment of the field of AI is concerned with making computers self-aware.
The majority of the field runs away from such things. Sure, even in those other fields rough human models were originally the basis (neural nets for example). But the drive is not to become more human but to simply become better.
Frankly, once you start even considering trying to make things exactly like humans, things become messy unbelievably quickly. We're computer scientists, not
AI is kind of like alchemy (Score:5, Interesting)
no, that's not an insult or to call AI a pseudoscience
what i mean is: the ancient alchemists' goal was to turn lead into gold. which they thought possible, because they did not perceive magic in gold, it was just stuff. surely, with the right manipulations, some stuff could be turned into other stuff, right?
and from that basic fantasy thought came the groundwork for centuries of hard work, the discovery of the fields of chemistry, physics, all the subfields...
such that one day in the middle of the last century, some dudes with some extra time at a cyclotron said "hey, why don't we bombard some lead atoms, i have a feeling about what the decay product will be (snigger)"
and there, as a completely forgotten afterthought, was a fulfillment of the ancient alchemists' original goals, many generations before
to me, i think this is the fate of AI: it will be a formative motivation. just as the ancient alchemists looked at gold and saw just stuff, we look at the brain and just see neurons. and all of the effort to replicate the human brain will spawn incredibly sophisticated fields of information science we can only begin to grasp at the foundations of right now. look at databases, for example: that's an effort at mimicking the brain. and look at all of the unintended and beneficial consequences of database research, as a superficial example of what i am saying about unintended benefits being better than the original goal
so perhaps, many centuries from now, some researchers will say "hey, remember the turing test?" and they will giggle, and make something that is exactly what we now envisage as the ultimate fruit of AI research, a thinking computer brain
but in that time period, such a thing will be but an after thought, and much as the rewards of physics and chemistry so dwarf the fruits of turning lead into gold, so whatever these as-of unimagined fields of inquiry will reward mankind with will turn the search for a thinking computer into an equally forgettable sideshow
the search for AI will lead to much more rewarding and expansive fields of knowledge than we can imagine now. just like the guys arguing about "phlogiston" could never imagine things like organic chemistry and radiochemistry. just imagine: fields of inquiry more rewarding than thinking computers. that's a future i want to glimpse, and looking for AI will lead us there
Re: (Score:3, Interesting)
the ancient alchemists goal was to turn lead into gold. which they thought possible, because they did not perceive magic in gold, it was just stuff. surely, with the right manipulations, some stuff could be turned into other stuff, right?
and from that basic fantasy thought came the groundwork for centuries of hard work, the discovery of the fields of chemistry, physics, all the subfields...
Interesting comparison. And it's very refreshing to see the tradition of the alchemists portrayed as ennobled by their not regarding gold as magical.
What I find interesting, though, is what almost everyone in this forum assumes: that what gives an adult human being his amazing mind is, to use your analogy, just stuff. That is, everyone seems to assume that the existence of a human brain---or some physical equivalent---is sufficient for the existence of a human mind.
Of course, this is a natural assumption for
Re: (Score:3, Insightful)
What are you blathering about? Equivocation, at least one straw-man, shifting goalposts...
I've never before heard someone define god as "us, in the future". If that's what anybody's talking about when they're going on about the trinity, or transubstantiation, or first-movers, or young-earth creationism, or the Shahada, or the virgin birth, then they're doing a shitty job getting that aspect of their point across.
Robots are better than ever (Score:5, Interesting)
The robots are coming.
The big breakthrough was the DARPA Grand Challenge. Up until the 2005 DARPA Grand Challenge, mobile robots had been something of a joke, and had been since Elektro was shown at the 1939 World's Fair. But on the second day of the 2005 Grand Challenge event at the California Motor Speedway, they suddenly stopped being a joke. Forty-three autonomous vehicles were running around and they all worked; the ones that didn't work had been eliminated in previous rounds.
Up until the Grand Challenge, robotics R&D had been done by small research groups under no pressure to produce working systems. Most systems were one-offs that were never deployed. DARPA figured out how to get results. There was a carrot (the $2 million prize), and a stick (universities that didn't get results risked having their DARPA funding for robotics cut off.)
The other big result from the DARPA Grand Challenge was that robotics projects became much larger. Nobody had 50-100 people on a robotics R&D project until then (well, maybe Honda). Robotics projects used to be a professor and 2 or 3 grad students. Suddenly stuff was getting done faster.
DoD started pushing harder. Robots like Big Dog got enough money to be forced through to working systems. Little tracked machines were going to battlefields in quantity, and enough engineering effort was put into mechanical reliability to make the things really work.
CPU power helped. Texture-based vision now works. Vision-based SLAM went from a 2D algorithm that sometimes worked indoors to a solid technology that worked outdoors. Much of early vision processing is now done in GPUs, which are just right for doing dumb local operations like convolution in bulk. GPS and inertial hardware got better and cheaper. Some of the mundane parts, like servomotor controllers, improved considerably. Compact hydraulic systems improved substantially.
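The "dumb local operations like convolution" mentioned above can be made concrete with a tiny sketch: a naive valid-mode 2D convolution over nested lists, the archetypal early-vision operation. Plain Python is used here purely to show the structure; each output pixel depends only on a small local window, which is exactly why GPUs chew through this in bulk (real systems would of course use CUDA or a vision library, not this loop).

```python
def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution (strictly, cross-correlation).
    Every output pixel is computed from a small local window, so all
    outputs are independent and trivially parallelizable on a GPU."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            out[y][x] = sum(image[y + i][x + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

# a Laplacian-like 3x3 kernel, the classic early-vision edge detector
kernel = [[0, 1, 0],
          [1, -4, 1],
          [0, 1, 0]]
image = [[0.0] * 5 for _ in range(5)]
image[2][2] = 1.0          # a single bright pixel
for row in convolve2d(image, kernel):
    print(row)
```

The single bright pixel shows up in the output as the kernel pattern stamped around its location, which is the whole trick behind edge and texture detectors.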
It's finally happening.
As for the hard stuff, situational awareness and common sense, watch the NPCs in games get smarter.
Stock price artificial intelligence (Score:2)
Whenever the stock price for a green tech startup reaches a certain amount it becomes an artificial intelligence startup.
Holy Grail (Score:5, Interesting)
AI is a Holy Grail. In other words, something we'll probably never get, but we'll create a whole bunch of useful stuff while trying to attain it. "AI" is just a stated goal that gets a bunch of smart people together to develop tools towards that goal. AI research has already given us Lisp and Virtual Machines and Timesharing/Multitasking and the Internet and a bunch of useful data structures and algorithms.
At some point after all that, a computer was developed that can play Grandmaster-level chess, but this was not a necessary development to justify all the research grants.
Yagottabekiddingme... (Score:3, Insightful)
The kernel of the Vista operating system includes machine learning to predict, by user, the next application that will be opened, based on past use and the time of the day and week. "We looked at over 200 million application launches within the company," Horvitz says. "Vista fetches the two or three most likely applications into memory, and the probability accuracy is around 85 to 90%."
How about doing something about the still-horrible VM page replacement algorithm in NT instead?
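Whatever you think of NT's paging, the prediction scheme quoted above is simple enough to sketch: count launches per (weekday, time-of-day) bucket and prefetch the most frequent apps for the current bucket. To be clear, everything here (the 6-hour bucketing, the class and app names) is invented for illustration; Vista's actual SuperFetch model is certainly more elaborate than this.

```python
from collections import Counter, defaultdict

class LaunchPredictor:
    """Toy sketch of application-launch prediction: tally launches
    per (weekday, time-of-day) bucket, then 'prefetch' the most
    frequently launched apps for the bucket we are currently in."""

    def __init__(self):
        self.counts = defaultdict(Counter)   # bucket -> app -> launches

    def _bucket(self, hour, weekday):
        return (weekday, hour // 6)          # four 6-hour blocks per day

    def record(self, app, hour, weekday):
        self.counts[self._bucket(hour, weekday)][app] += 1

    def predict(self, hour, weekday, n=2):
        bucket = self.counts[self._bucket(hour, weekday)]
        return [app for app, _ in bucket.most_common(n)]

p = LaunchPredictor()
for _ in range(5):
    p.record("outlook.exe", 9, 0)            # Monday mornings
for _ in range(3):
    p.record("excel.exe", 10, 0)
p.record("winamp.exe", 11, 0)
print(p.predict(9, 0))                       # → ['outlook.exe', 'excel.exe']
```

Even this crude frequency model gets useful hit rates once the buckets fill up, which makes the quoted 85-90% figure for a far richer model plausible.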
AI failed because it is a failed model, kind of (Score:5, Insightful)
The thing about AI as we approached it from the '80s was that we wanted to emulate the human brain's ability to learn. A truly exciting prospect but a completely ridiculous endeavor.
"AI" based on learning and developing is not perfect, can not be perfect, and will never be perfect. This is because we have to teach it like a child and slowly build up the ability of the AI system. For it to be powerful, it has to be able to incorporate new unpredictable information. In doing so, it must, as a result, also be able to incorporate "wrong" information and thus become unpredictable. Of all things, a computer needs to be predictable.
The problem with making a computer think like a person is that you lose the precision of the computer and get the bad judgment and mistakes of a human. Not a good solution to anything.
The "better" approach is to capitalize on "intelligent methods." Intelligent people have developed reliable approaches to solving problems and the development work is to implement them on a computer. Like the article points out, recommendations systems mimic intelligence because they implement a single intelligent "process" that an expert would use with a lot of information.
It is not a general purpose learning system like "AI" was originally envisioned, but it implements a function typically associated with intelligence.
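A single intelligent "process" of the kind described above can be surprisingly small. Here is a minimal sketch of the reasoning a recommendation system automates: "people who liked what you liked also liked X," scored by how much each other person overlaps with you. The data and names are made up for illustration, and real systems (Netflix's included) use far more sophisticated models than this.

```python
def recommend(user_items, all_users, n=3):
    """Score each unseen item by summing, over all other users who
    share something with this user, the size of that overlap.  This
    is the 'expert process' (collaborative filtering, in its crudest
    form) implemented as a plain function."""
    scores = {}
    for other in all_users:
        overlap = len(user_items & other)
        if overlap == 0:
            continue                      # nothing in common, skip
        for item in other - user_items:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:n]

me = {"Alien", "Blade Runner"}
others = [{"Alien", "Blade Runner", "The Thing"},
          {"Blade Runner", "Metropolis"},
          {"Alien", "The Thing"}]
print(recommend(me, others))              # → ['The Thing', 'Metropolis']
```

No learning, no self-awareness: just one fixed procedure plus a lot of data, which is exactly the point being made above.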
Artificial Sentience? (Score:2)
Whatever Happened to *Intelligence*? (Score:2)
I think humans like the idea of mechanical slaves so much that we're working as hard as we can to become stupid and mechanical ourselves, so they can understand us better and do the work for our lazy asses.
Or maybe it's just a coincidence.
Its ... (Score:5, Funny)
... vacuuming my floor right now.
Well Kubrick died... (Score:2)
And Spielberg took over the project....
"AI" is constantly redefined (Score:3, Insightful)
As soon as a problem is solved and coded, it loses the magic moniker. Many things we take for granted now (interactive voice systems, intent prediction, computer opponents in games) would have been considered AI in the past.
AI was to be the Killer App of 1986 (Score:3, Funny)
The problem was that the 640 kb "Ought to be enough for anyone" memory barrier was too small to allow a full Common Lisp implementation. So Sapiens founder John Hare [webweasel.com] created a software virtual memory system that allowed one to store and retrieve 8-byte Lisp CONSes into and from an eight megabyte backing store file.
Yes, again you read that right: software virtual memory. The x86 didn't have an MMU.
This meant that our code was fiendishly complex, with all these data structures being mixes of real data in real memory, and virtual data in virtual memory.
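The mix of real and virtual cons cells can be sketched roughly like this: a small in-RAM cache sitting in front of a backing-store file, with all address translation done in software because there is no MMU to fault for you. The details below (cache size, eviction policy, cell layout) are invented for illustration; the real Sapiens system was surely more involved.

```python
import struct
import tempfile

class ConsHeap:
    """Toy software virtual memory for 8-byte Lisp conses: addresses
    index into a backing file, a tiny dict acts as the in-RAM cache,
    and a cache miss is a 'page fault' handled entirely in software."""

    CONS = struct.Struct("<ii")        # car, cdr as 32-bit ints = 8 bytes

    def __init__(self, cache_slots=2):
        self.backing = tempfile.TemporaryFile()
        self.cache = {}                # addr -> (car, cdr), write-through
        self.cache_slots = cache_slots
        self.next_addr = 0

    def cons(self, car, cdr):
        addr = self.next_addr
        self.next_addr += 1
        self.backing.seek(addr * self.CONS.size)
        self.backing.write(self.CONS.pack(car, cdr))
        self.cache[addr] = (car, cdr)
        self._evict()
        return addr

    def fetch(self, addr):
        if addr not in self.cache:     # software "page fault"
            self.backing.seek(addr * self.CONS.size)
            self.cache[addr] = self.CONS.unpack(
                self.backing.read(self.CONS.size))
            self._evict()
        return self.cache[addr]

    def _evict(self):
        while len(self.cache) > self.cache_slots:
            self.cache.pop(next(iter(self.cache)))   # drop oldest entry

heap = ConsHeap()
a = heap.cons(1, -1)                   # -1 standing in for nil
b = heap.cons(2, a)
print(heap.fetch(b))                   # (2, addr-of-a)
```

Every single car/cdr access going through a check like `fetch` is exactly why code written this way gets fiendishly complex.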
The complexity of all this meant that there were a lot of bugs at first, especially because John had the idea that hiring a bunch of college kids at five bucks an hour was a good way to run a software company. It went way over time and budget, but it did eventually ship.
It's now available as shareware. Tell John that Mike Crawford sent you.
"dot.bust" of the 1980s (Score:3, Interesting)
It birthed a successful stepchild, however: graphics workstations. The A.I. players like Xerox PARC were among the first to integrate bitmap graphics with computers. There was the Xerox Alto, plus the Symbolics and Texas Instruments graphics workstations based on LISP, an A.I. language. New startups like Apollo, Sun Microsystems, and DEC with the MicroVAX gambled that graphics workstations were more easily commercialized on UNIX. Last, but not least, the Apple Macintosh: a direct "borrowing" of the Xerox Alto.
Re:I'll tell you what happened to AI (Score:4, Insightful)
Re: (Score:3, Interesting)
Re: (Score:2)
Practical approach? errr, perhaps you mean that all the approaches that might have worked failed, and what we have now is the stuff that didn't fail.
Essentially, this was the method used to invent what, until recently, we called the typical light bulb. Now with CFL and OLED etc. that is no longer true, and it can be said that the invention of a practical, cheap, and efficient light bulb has taken about 100 years.
We don't have AI yet. We do have very impressive computer programs. Some of which easily outpe
Re: (Score:2)
Re: (Score:3, Insightful)
If by "take as long as" you mean in units of time (e.g. seconds), then you are probably wrong. There is no real reason that the time constants for AI will be the same as those of a natural brain.
Look at it another way: If the AI takes 5 years to learn what a child learns in 5 years - what happens when you double its execution speed (technically, by speeding up its processors/system)? It will take 2.5 years, of course.
If you mean that it will take about as much learning material and exposure to stimuli/etc,
Re: (Score:3, Interesting)
Besides, creating a self-aware, self-learning system could (will) be feasible
I keep hearing this and reading it decade after decade, but I have yet to have anyone explain exactly why they believe it. Can you? What makes you so sure we will create a self-aware machine, especially since we don't understand how sentience actually works?
Re:I thought sigularity was right around the corne (Score:3, Informative)
Right? [nytimes.com]
Who says the Singularity is reliant on ARTIFICIAL Intelligence?
AUGMENTED Intelligence [wikipedia.org] is actually within our grasp: for example, look at the number of people who know how to Google / Wiki [xkcd.com] any information they don't know to get caught up with whatever subject is at hand? "Well, Damn, don't know much about RAID, better Wiki it... oh, I get it!"
How long until we figure out how to make pills to make people think faster, or remember better? [newscientist.com]
How long until we get PDAs in the form of sunglasses [igargoyle.com] that will allo