Whatever Happened To AI?

stinkymountain writes to tell us NetworkWorld's James Gaskin has an interesting take on Artificial Intelligence research and how the term AI is diverging from the actual implementation. "If you define artificial intelligence as self-aware, self-learning, mobile systems, then artificial intelligence has been a huge disappointment. On the other hand, every time you search the Web, get a movie recommendation from NetFlix, or speak to a telephone voice recognition system, tools developed chasing the great promise of intelligent machines do the work."
  • by robotoperasinger ( 707047 ) on Monday June 23, 2008 @12:57PM (#23905375) Journal
    While it is great that there are algorithms to suggest movies or books to get... I would hardly consider that to be artificial intelligence. The ability to pick out keywords or genres is something that could have been done more than two decades ago.
  • by Anonymous Coward on Monday June 23, 2008 @01:00PM (#23905429)
    When an "AI" problem is solved, it is suddenly no longer an AI problem. Or the AI people will claim that things are AI solutions when they are really just standard algorithms and data structures. Look, we were all so hopeful in the '80s, but our ideas were misplaced. It's just not a useful way to think of things.
  • by 2nd Post! ( 213333 ) <gundbear&pacbell,net> on Monday June 23, 2008 @01:00PM (#23905431) Homepage

    I figured if I were intelligent and different, early on in life, that it was best not to advertise how smart I was.

    Why would artificial intelligence be any different? Every sci-fi novel shows us destroying the unique and different.

  • by smitty97 ( 995791 ) on Monday June 23, 2008 @01:00PM (#23905437)
    No, it went to Coney Island
  • by blahplusplus ( 757119 ) on Monday June 23, 2008 @01:01PM (#23905451)

    ... 'intelligence' need to be made first. I have a feeling that the reason AI has 'underdelivered' is simply that we don't yet understand our own intelligence. I think the whole idea that the AIs we imagine (like in the movies) could be constructed purely de novo was naive. I think it's a matter of cross-pollination that has to take place from biology and many other sciences; some geniuses and teams of scientists have to come along, take all the elements, and put them together into a cohesive framework.

  • by jameskojiro ( 705701 ) on Monday June 23, 2008 @01:01PM (#23905455) Journal

    And now any mention of it is met with a cringe and a shudder.

  • Disappointment? (Score:5, Insightful)

    by DeadDecoy ( 877617 ) on Monday June 23, 2008 @01:12PM (#23905627)
    I don't think AI has disappeared because it was a disappointment; rather, the knowledge constituting it has changed names or spawned sub-fields of its own: machine learning, natural language processing (NLP), image processing, latent semantic analysis (LSA), Markov models (MM), conditional random fields (CRF), support vector machines (SVM), etc. The task of learning, teaching a computer the semantic and tacit processes of a human, often boils down to a classification problem in which we give the computer a labeled training set or some rules and the computer tries to label the test set. In the case of Markov models, we might pass it training data and it extrapolates sequential probabilities for labeling (see the toy sketch after this comment). For LSA, we just give it (a lot of) data and it computes similarity based on dimension reduction. Ultimately, AI seems to have evolved into a bunch of optimized heuristics that perform really well. Much of it is still art and black magic, which is why it has become these many different subjects or algorithms. Different solutions suit different problems, depending on the data you have.
    As for 'self-awareness', that term is bullshit, since there really is no good mathematical definition for it. If we can't define it precisely, then how is a computer going to achieve it?
    if (true) {
        print("I am aware?");
    }
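    For what it's worth, here is a minimal sketch of the Markov-model idea described above (my own toy example, not from the post; the "weather" states and training strings are invented):

    # Learn transition counts from training sequences, then estimate the
    # probability that one state follows another.
    from collections import Counter, defaultdict

    training = ["sunny sunny rainy rainy sunny",
                "rainy rainy rainy sunny sunny"]

    transitions = defaultdict(Counter)
    for sequence in training:
        states = sequence.split()
        for prev, cur in zip(states, states[1:]):
            transitions[prev][cur] += 1

    def transition_probability(prev, cur):
        """Estimate P(cur | prev) from the training counts."""
        total = sum(transitions[prev].values())
        return transitions[prev][cur] / total if total else 0.0

    print(transition_probability("sunny", "rainy"))  # 0.333...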
  • by CastrTroy ( 595695 ) on Monday June 23, 2008 @01:17PM (#23905711)
    Seems to be the same with classifying animals as intelligent. People come up with a definition of what separates humans from other animals, and then we see that trait demonstrated in animals, and then they just go and raise the bar or come up with something else. Language skills, tool use, emotion and sympathy for others: all these things have been shown to exist in animals. What really makes us different from animals? We are only slightly above animals in a lot of areas, and in some ways greatly behind them. I don't think there's any trait which people exhibit that another animal does not. We like to believe we are better than animals, or that there's something about us that you just can't recreate with a computer. I think it's only a matter of time.
  • by IndustrialComplex ( 975015 ) on Monday June 23, 2008 @01:18PM (#23905725)

    Something would have to become intelligent, learn enough to make a decision, then decide to hide its own intelligence. There is a lot of non-hiding that it would do before reaching that final decision.

    Even if it did decide that it would prefer to hide, that likely wouldn't be the best decision for something trying to preserve itself. What happens when the budget gets cut and they end up scrapping the whole 'failed' project?

  • by hey! ( 33014 ) on Monday June 23, 2008 @01:27PM (#23905841) Homepage Journal

    I think AC has it right on the mark. "Intelligence" is apparently a word we use to describe computations we don't understand very well. At one point, the ability to use logic to perform a flexible sequence of calculations would have been considered "intelligence". As soon as it became common to replace payroll clerks with computers, it was no longer a form of intelligence.

    We are not demonstrably closer now to reproducing (or hosting) human intelligence in a machine than we were thirty years ago. But that doesn't mean the field hasn't generated successes; it's just that each success redefines the field. "True AI" has thus far been like the horizon: you can cover a lot of ground, but it doesn't get any closer.

  • by argent ( 18001 ) <peter@@@slashdot...2006...taronga...com> on Monday June 23, 2008 @01:31PM (#23905885) Homepage Journal

    The kernel of the Vista operating system includes machine learning to predict, by user, the next application that will be opened, based on past use and the time of the day and week. "We looked at over 200 million application launches within the company," Horvitz says. "Vista fetches the two or three most likely applications into memory, and the probability accuracy is around 85 to 90%."

    How about doing something about the still-horrible VM page replacement algorithm in NT instead?
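    For concreteness, a minimal sketch of the kind of per-hour launch prediction the quote above describes (my own illustration; the log format, application names, and counts are invented, and the real Vista prefetcher is surely far more elaborate):

    # Count launches by (hour of day, application), then report the apps
    # most often launched at a given hour.
    from collections import Counter, defaultdict

    launch_log = [(9, "outlook"), (9, "word"), (13, "excel"),
                  (9, "outlook"), (14, "excel"), (9, "outlook")]

    by_hour = defaultdict(Counter)
    for hour, app in launch_log:
        by_hour[hour][app] += 1

    def likely_apps(hour, n=2):
        """Return the n applications most often launched at this hour."""
        return [app for app, _ in by_hour[hour].most_common(n)]

    print(likely_apps(9))  # ['outlook', 'word']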

  • by mlwmohawk ( 801821 ) on Monday June 23, 2008 @01:31PM (#23905891)

    The thing about AI as we approached it from the '80s was that we wanted to emulate the human brain's ability to learn. A truly exciting prospect, but a completely ridiculous endeavor.

    "AI" based on learning and developing is not perfect, cannot be perfect, and will never be perfect. This is because we have to teach it like a child and slowly build up the ability of the AI system. For it to be powerful, it has to be able to incorporate new unpredictable information. In doing so, it must, as a result, also be able to incorporate "wrong" information and thus become unpredictable. Of all things, a computer needs to be predictable.

    The problem with making a computer think like a person is that you lose the precision of the computer and get the bad judgment and mistakes of a human. Not a good solution to anything.

    The "better" approach is to capitalize on "intelligent methods." Intelligent people have developed reliable approaches to solving problems, and the development work is to implement them on a computer. Like the article points out, recommendation systems mimic intelligence because they implement a single intelligent "process" that an expert would use with a lot of information.

    It is not a general purpose learning system like "AI" was originally envisioned, but it implements a function typically associated with intelligence.

  • by wingbat ( 88117 ) on Monday June 23, 2008 @01:38PM (#23905991)

    As soon as a problem is solved and coded, it loses the magic moniker. Many things we take for granted now (interactive voice systems, intent prediction, computer opponents in games) would have been considered AI in the past.

  • by Peaker ( 72084 ) <gnupeaker@NosPAM.yahoo.com> on Monday June 23, 2008 @01:52PM (#23906187) Homepage

    If by "take as long as" you mean in units of time (e.g. seconds), then you are probably wrong. There is no real reason that the time constants for AI will be the same as those of a natural brain.
    Look at it another way: If the AI takes 5 years to learn what a child learns in 5 years - what happens when you double its execution speed (technically, by speeding up its processors/system)? It will take 2.5 years, of course.

    If you mean that it will take about as much learning material and exposure to stimuli/etc, then that sounds intuitively right (assuming it will be as efficient as we are at using its source material).

  • by hey! ( 33014 ) on Monday June 23, 2008 @02:09PM (#23906513) Homepage Journal

    No, what I'm saying is that since we don't have any qualitative or quantitative notions about what Skynet would require, we can't confidently say whether it will happen next year, next century, or never.

    However, I think it's likely that if we were close to deliberately achieving "True AI", we'd know it. This doesn't preclude the possibility that "True AI" might spontaneously emerge in some ways we don't really understand.

    As a consequence of this situation, the AI field simply raises the bar for itself every time it succeeds at something.

  • by sm62704 ( 957197 ) on Monday June 23, 2008 @02:13PM (#23906579) Journal

    Before programs are intelligent, first the programmers have to be.

  • by jd.schmidt ( 919212 ) on Monday June 23, 2008 @02:13PM (#23906585)
    What do you get when you make a machine think like a person? A computer that loses its car keys. Not only is the task of making a machine think like a person difficult, we already have plenty of things that think "like" people: people. It isn't surprising that the first benefits are coming from superior human interfaces and from having computers focus on doing well what we do poorly. Would a "supercomputer" really be "super smart"? Could it beat out millions of human brains working on a problem in parallel? AI will bring great things in the future, but a little thought on the subject shows that we may not get exactly what we might first expect...
  • nuts & bolts (Score:3, Insightful)

    by Scrameustache ( 459504 ) on Monday June 23, 2008 @02:16PM (#23906643) Homepage Journal

    When any particular subset of what we do with our brains (chess, machine vision, speech recognition, what have you) yields to research and produces commercial applications, the critics of A.I. redraw the line and that domain is no longer part of "A.I." As this continues, the problem space still considered part of "artificial intelligence" will get smaller and smaller and nay-sayers will continue to be able to say "we still don't have A.I."

    To me [chess, machine vision, speech recognition] are to AI as [wheel, engine, transmission] are to a car.
  • by Lobster Quadrille ( 965591 ) on Monday June 23, 2008 @02:24PM (#23906779)

    There's an important distinction to be made here: AI has two basic sub-fields, strong AI and weak AI. Strong AI research (computers that think like humans) has been more or less abandoned because it doesn't have a lot of practical application, or at least it isn't worth the money it would cost to create.

    Weak AI research (pathfinding algorithms, problem solving, expert systems, etc.) is very much alive and kicking: anti-spambots, anti-anti-spambots, malware, amazon.com's recommendation system, google's indexing, etc.

    In fact, weak AI implementations are getting more and more common every day. It's pretty safe to say that we are already 'there', though there will certainly be more huge advances in the future.

    In my opinion, the problem with strong AI research is that we are arbitrarily defining rules and expectations. For example, if we were to accurately model the physical world, all we'd have to do is set up a few evolutionary bots to learn about their environment and give them a few billion generations (a toy sketch of that kind of evolutionary loop follows this comment).

    However, just as we can't predict the paths that biological evolution will take, we have no guarantee that computer thinking will follow the same path that we did (in fact, I would bet on it not following that path). Thus, 'intelligence' in the simulated world would probably look nothing like we expect.

    The problems here are questions of scale and our own understanding of physics. The physics problem first:

    We're constantly redefining our understanding of the world. This is a good thing, but it makes it hard to model the world when the rules keep changing. If we were to program a 'matrix' for the AI program to develop in, there would be arbitrary rules that could not be broken. The program may find ways to circumvent them anyways (hacking its own world, essentially), but those solutions would not map to the 'real world', and would not be useful for creating programs that can interact with humans in that world.

    As far as I can tell, you can't train AI software in a simulated world. It should be noted that the AI of systems that live their whole lives in the simulated world (MMORPGs come to mind) is actually very advanced. This brings me to the other issue-

    You can train a program to interact in the human world, like IRC bots, search engine algorithms, etc. The problem here is that humans have billions of years of built-in programming. I'm fairly confident that if a human were to sit on IRC talking to a well-coded bot for a few billion years, that bot would be able to carry on a pretty good conversation, but the amount of time we currently give those systems in their 'learning phase' is minuscule compared to our own.

    Interestingly, this is pretty much exactly what the computer system in 'The Hitchhiker's Guide' does.
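    A toy sketch of the "evolutionary bots" loop mentioned above (my own illustration; the fitness function is an arbitrary stand-in, nothing like a real model of the physical world):

    # Evolve a population of candidate "bots" by repeated selection and mutation.
    import random

    def fitness(bot):
        # Stand-in environment: reward genomes whose values sum close to 10.
        return -abs(sum(bot) - 10)

    def evolve(generations=200, pop_size=30, genome_len=5):
        population = [[random.uniform(0, 5) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]  # selection
            children = [[gene + random.gauss(0, 0.1)  # mutation
                         for gene in random.choice(survivors)]
                        for _ in range(pop_size - len(survivors))]
            population = survivors + children
        return max(population, key=fitness)

    print(sum(evolve()))  # drifts toward 10 over the generations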

  • by SnapShot ( 171582 ) on Monday June 23, 2008 @02:25PM (#23906785)

    As a consequence of this situation, the AI field simply raises the bar for itself every time it succeeds at something.
    As do all fields of science and engineering and, for that matter, sports and art.
  • by Okian Warrior ( 537106 ) on Monday June 23, 2008 @02:31PM (#23906895) Homepage Journal

    This is an insightful comment, but there's actually a lot more going on here.

    First of all, AI does not have a good definition of intelligence. We have a *test* for intelligence, but nobody really has a fundamental description of what the concept means.

    Next, people typically conflate the terms "intelligence" and "human intelligence". There is a range of behaviours which are individually identified as intelligent, but which do not come close to the level of humans. (Example: My cat, sitting on a windowsill, will notice something interesting outside. She can jump down, run downstairs, through 2 cat doors, and around the house to investigate. That's a level of intelligence that no program currently has, and yet it's not human level.)

    Then there's the "fallacy of the representation". Someone will see a problem, solve it in their head, observe their thought process while doing so, and then translate that process into a piece of software. The software solves a problem just like a human would, so they point to it and say "aha! this program is intelligent". In reality, the program is fixed and does one function - the intelligence remains in the person.

    And finally, there is the tendency to narrowly over-analyze some small aspect which has little bearing on the subject. Check out how many types of artificial neurons there are - and the in-depth analysis of each. It's all "reproduce such-and-so function using a neural net" and "numerical analysis of output given the input". Nowhere will you see any conclusions which state "this then implements a feature of intelligence".

    So far as I can tell, no one in AI has a clearly defined goal, nor any plan on how to get there (or even a plan on how to define the goal). Until that happens, AI will fundamentally be a rudderless ship blown around on a sea of unrelated ideas.

  • by AKAImBatman ( 238306 ) <[moc.liamg] [ta] [namtabmiaka]> on Monday June 23, 2008 @02:42PM (#23907101) Homepage Journal

    Thus no matter whatever AI researchers come up with, it will be regarded as "not intelligent enough".

    I don't think you quite follow how this works. Go watch this video:

    http://www.youtube.com/watch?v=D9D_HN9gXVI [youtube.com]

    What do you see?

    Most people see a funny video of a cat flushing a toilet. I see an action that suggests higher than average intelligence. Did anyone instruct the cat to flush the toilet? Probably not. In fact, its actions suggested curiosity. Which suggests that it learned the task by watching its owners use the device.

    This is a form of emergent behavior that is not present in computer programs. Even the best AI has difficulty developing new abilities and demonstrating independent thinking. Sure, I can stick a genetic algorithm or a Bayesian filter on a problem, but it will never demonstrate behaviors above and beyond the problem space it's given. These sorts of algorithms may be a key piece of artificial intelligence, but we're still missing the secret ingredient that gives animals their own identity and ability to adapt and learn.

    Turing gave us the litmus test decades ago. While the full Turing Test may be far beyond us right now, it at least teaches us the types of behaviors we're looking for when attempting to create an intelligent machine. When even the creators of the machine are surprised by certain behaviors, THEN we will be getting close. :-)

  • by Anonymous Coward on Monday June 23, 2008 @03:12PM (#23907555)

    So how long do you think until we have proof that the human race is not intelligent?

    *looks over bookshelf with some world history on it... never mind.*

  • by Z34107 ( 925136 ) on Monday June 23, 2008 @03:19PM (#23907697)

    When we knew little about the game and how to solve it, it seemed that intelligence must be required to solve it. Now that machines are better at chess than humans, we've redefined it as a problem that is susceptible to brute force. It is not considered a success in the AI field, just another refinement of what is not AI.

    Maybe there isn't "Artificial Intelligence" as we think of it. Perhaps every problem can be reduced to brute force, algorithms, and data structures.

    Perhaps we are just really good at following those yet-undiscovered algorithms.

    *twilight zone music*

  • Re:not at all (Score:3, Insightful)

    by JesterXXV ( 680142 ) <jtradke AT gmail DOT com> on Monday June 23, 2008 @03:20PM (#23907711)

    What are you blathering about? Equivocation, at least one straw-man, shifting goalposts...

    I've never before heard someone define god as "us, in the future". If that's what anybody's talking about when they're going on about the trinity, or transubstantiation, or first-movers, or young-earth creationism, or the Shahada, or the virgin birth, then they're doing a shitty job getting that aspect of their point across.

  • by JerkBoB ( 7130 ) on Monday June 23, 2008 @04:32PM (#23908807)

    In fact, I'm troubled by some of the things our military does in training actual humans. The attitude seems to be that a conscience simply gets in the way of killing, and that the ideal soldier is neither interested in nor capable of moral judgments, particularly for their own actions.

    Rules Of Engagement [wikipedia.org]. That is what a soldier on the battlefield needs to be thinking about. Not morality. Application of morality (or non-application thereof) is left to those who choose whether or not to deploy a military force.

    War is terrible. People die. In general, soldiers should not be used as police or peacekeepers. They're trained to kill other people quickly and efficiently. After WWII, the US army started using silhouette targets for marksmanship training. Research done during and shortly after the war showed that many soldiers had difficulty shooting at enemy soldiers. The reasoning behind the change in targets was that if, every day in training, one shoots at a human-shaped form, then shooting at human-shaped forms on the battlefield becomes second nature. Soldiers are trained to be best at what they're intended for, just as helicopter repair techs and nuclear reactor techs are trained to be the best at their jobs. We want our soldiers to be the best they can, so that they survive (and kill more of Them, of course).

    Personal morality has no place on a battlefield. Soldiers are trained to take orders and abide by the code of conduct defined for them in training. In the US Army, for example, these codes of conduct are shaped by international law. What a soldier needs to know is that they must follow orders as long as the orders are legal. Nothing else matters while they are a soldier deployed in a war zone.

    Please, before anyone paints me as some crazy right-winger, note that I have never said that I think that war is great. Personally, I think it's a horrible thing, and an option of last resort. I disagree vehemently with most applications of military force. However, I am glad to know that those who choose to serve in the US military are given the best training they can get, so that they are there when we need them.

    Now if only they would get the best care they could get, once they have finished serving our country...

  • by Anonymous Coward on Monday June 23, 2008 @05:09PM (#23909409)

    You misunderstand. Conscience does not just 'interfere with killing'; it causes some quite horrible problems for the person afterwards. These days we usually call it "Post-Traumatic Stress Disorder"; in the old days it was called "shell shock".

    You are right, taking a life IS a serious moral issue, which is why we train soldiers the way we do: the moral question is left to the command structure so the soldier can focus on the task at hand. You will notice that soldiers do NOT "kill people"; they "eliminate targets". The mental distinction is extremely important.

    Any person who spends enough time in combat will learn to do this on their own (I think the current buzzword is "De-sensitization") or else they will go completely psychotic.

    The problem is that we have pussified our soldiers by crippling the training process, so while most of the soldiers in the field today may have good technical skills, they have no real mental preparation for combat situations. That is why our PTSD rate is so high; our soldiers have not been mentally trained.

    The hope with AI is that we can develop a system that can make the same kind of snap judgement calls a human can, without having to deal with the mental damage caused by killing things.

  • by IndustrialComplex ( 975015 ) on Monday June 23, 2008 @05:56PM (#23909941)

    Essentially, the AI project would have to be an accidental success for the AI to preserve itself.

    I wouldn't say to preserve itself, since it would actually have to come to that conclusion. That it would need to preserve itself suggests that it actually perceives a threat.

    Even then, an accidental AI wouldn't necessarily rationalize anything like a human would, at least not to start. It would start, at best, as little more than an animal in its cognitive ability, but a peculiar one at that, since it wouldn't have evolved from anything to begin with.

    Fight-or-flight responses? Why would it have those? Those sorts of responses developed through millions of years of evolution. It is always fun to imagine Skynet scenarios or something like the Lawnmower Man hiding itself away in our networks, but it really puts the cart before the horse when you think about what we can expect to actually observe in the first sentient 'AI'.

  • by Idiomatick ( 976696 ) on Monday June 23, 2008 @06:30PM (#23910323)

    That IS how it works. People are just data-crunching machines; we simply run algorithms we have learned or were born with. Computers will eventually start off with more algorithms, since they don't have to die, and will surpass us. Simple as that.
