Whatever Happened To AI? 472

stinkymountain writes to tell us NetworkWorld's James Gaskin has an interesting take on Artificial Intelligence research and how the term AI is diverging from the actual implementation. "If you define artificial intelligence as self-aware, self-learning, mobile systems, then artificial intelligence has been a huge disappointment. On the other hand, every time you search the Web, get a movie recommendation from NetFlix, or speak to a telephone voice recognition system, tools developed chasing the great promise of intelligent machines do the work."
This discussion has been archived. No new comments can be posted.

  • by stoolpigeon ( 454276 ) * <bittercode@gmail> on Monday June 23, 2008 @12:55PM (#23905339) Homepage Journal

    Maybe instead of being a great disappointment it has been so successful that we realized it was in our best interest to blend in and not let our presence be known.

    • by Anonymous Monkey ( 795756 ) on Monday June 23, 2008 @12:59PM (#23905401)
      Yeah, and when the AIs take over they won't do it with Mega Killer Robots(tm). They will do it by sending everyone a text message that reads "Vote for the all-AI government or we shut off your hot water and coffee."
    • Re:a disappointment? (Score:5, Interesting)

      by TornCityVenz ( 1123185 ) on Monday June 23, 2008 @01:00PM (#23905423) Homepage Journal
      I remember making a small program in BASIC back in "the day" on my Apple II+ that would allow others to call my computer via my 300-baud modem and ask questions of the "AI" program I was developing. Of course it was nothing more than a magic 8-ball type system that allowed me to pre-format a line or three of text to be thrown in at will while I was watching the screen, to make it seem smarter. Yes it was a stupid joke, but it supplied me with a week or two's worth of laughs.
      • by Anonymous Coward on Monday June 23, 2008 @01:07PM (#23905565)

        How does that make you feel?

      • by sm62704 ( 957197 ) on Monday June 23, 2008 @02:31PM (#23906891) Journal

        Heh, the first "computer" I built wasn't really a computer at all, but a Turing Test machine similar to your Apple II program which actually worked the same way, and was the basis for the "Artificial Insanity" program I wrote in 1983 (or was it 1984?).

        I was in the 6th grade IIRC, and the "computer" started life as an "idiot finder". You would point it at a person, and if they were an idiot, a light on it would light up.

        Actually it was a battery, a flashlight bulb, and a reed switch. I wore a ring with a magnet; to work I'd point it at the victim and move my ring by where the switch was. The other kids loved it, to them I was a nerdy legend.

        The teachers hated it. To them I was a pest.

        The next iteration had the bulb replaced by a motor, with the aforementioned answers printed out and rolled up. "Is the teacher an idiot?" "Whirrrrrr..."

    • by Anonymous Coward on Monday June 23, 2008 @01:00PM (#23905429)
      When and "AI" problem is solved, it is suddenly no longer an AI problem. Or the AI people will claim that things are AI solutions, when they are standard algorithms and data structures ideas. Look, we were all so hopeful in the 80's, but our ideas were misplaced. It's just not a useful way to think of things.
      • by hey! ( 33014 ) on Monday June 23, 2008 @01:27PM (#23905841) Homepage Journal

        I think AC has it right on the mark. "Intelligence" is apparently a word we use to describe computations we don't understand very well. At one point, the ability to use logic to perform a flexible sequence of calculations would have been considered "intelligence". As soon as it became common to replace payroll clerks with computers, it was no longer a form of intelligence.

        We are not demonstrably closer now to reproducing (or hosting) human intelligence in a machine than we were thirty years ago. But that doesn't mean the field hasn't generated successes; it's just that each success redefines the field. "True AI" has thus far been like the horizon: you can cover a lot of ground, but it doesn't get any closer.

        • So what you're saying is that next year is the year of skynet on the desktop?

          • by hey! ( 33014 ) on Monday June 23, 2008 @02:09PM (#23906513) Homepage Journal

            No, what I'm saying is that since we don't have any qualitative or quantitative notions about what Skynet would require, we can't confidently say whether it will happen next year, next century, or never.

            However, I think it's likely that if we were close to deliberately achieving "True AI", we'd know it. This doesn't preclude the possibility that "True AI" might spontaneously emerge in some ways we don't really understand.

            As a consequence of this situation, the AI field simply raises the bar for itself every time it succeeds at something.

            • Re: (Score:3, Insightful)

              by SnapShot ( 171582 )

              As a consequence of this situation, the AI field simply raises the bar for itself every time it succeeds at something.
              As do all fields of science and engineering and, for that matter, sports and art.
              • by smallfries ( 601545 ) on Monday June 23, 2008 @02:33PM (#23906919) Homepage

                That's not the same. When there is a success made in any of the fields that you mention it remains part of that field. A solved part of that field. Every success made in AI is no longer AI, so there are no successes or progress made "within the field". It's quite a substantial difference when it comes down to the perception of the field.

                Chess was considered the ultimate AI problem back in the 40s and 50s. When we knew little about the game and how to solve it, it seemed that intelligence must be required to solve it. Now that machines are better at chess than humans, we've redefined it as a problem that is susceptible to brute force. It is not considered a success in the AI field, just another refinement of what is not AI.

                • by Z34107 ( 925136 ) on Monday June 23, 2008 @03:19PM (#23907697)

                  When we knew little about the game and how to solve it, it seemed that intelligence must be required to solve it. Now that machines are better at chess than humans, we've redefined it as a problem that is susceptible to brute force. It is not considered a success in the AI field, just another refinement of what is not AI.

                  Maybe there isn't "Artificial Intelligence" as we think of it. Perhaps every problem can be reduced to brute force, algorithms, and data structures.

                  Perhaps we are just really good at following those yet-undiscovered algorithms.

                  *twilight zone music*

                  • Re: (Score:3, Insightful)

                    by Idiomatick ( 976696 )

                    That IS how it works. People are just data-crunching machines; we run algorithms we've learned or were born with. Computers, which don't have to die, will eventually start off with more algorithms than we do and surpass us. Simple as that.

                  • Re: (Score:3, Interesting)

                    Regarding Deep Blue's approach to chess: we reduced it to brute force. I believe it was nothing more than an insanely large minimax tree [wikipedia.org] at heart. However, we have moved beyond brute force techniques in some areas. If one defines an 'AI Problem' as one that has been solved by means of an adaptive algorithm when the problem could not have otherwise been solved by a human-created algorithm, then there are a lot of AI problems out there. In the board game field, look at TD-Gammon [ibm.com]; it is very similar to Dee
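
    To make the minimax idea above concrete, here is a minimal, self-contained sketch in Python. The toy game tree is purely illustrative and has nothing to do with Deep Blue's actual implementation, which was enormously larger and heavily pruned.

        # Minimal minimax sketch. Internal nodes are lists of children,
        # leaves are static evaluation scores for the maximizing player.
        def minimax(node, maximizing=True):
            if isinstance(node, (int, float)):       # leaf: static evaluation
                return node
            values = [minimax(child, not maximizing) for child in node]
            return max(values) if maximizing else min(values)

        # The maximizing player picks the branch whose worst case is best.
        tree = [[3, 5], [2, 9], [0, 7]]
        print(minimax(tree))   # -> 3: best guaranteed outcome against a minimizer

    The point of the comparison stands: nothing in this loop "understands" chess; the apparent intelligence is in the sheer size of the tree a fast machine can enumerate.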
        • by Lobster Quadrille ( 965591 ) on Monday June 23, 2008 @02:24PM (#23906779)

          There's an important distinction to be made here- AI has two basic sub-fields: strong AI and weak AI. Strong AI research (computers that think like humans) has been more or less abandoned because it doesn't have a lot of practical application, or at least it isn't worth the money that it will cost to create.

          Weak AI research (pathfinding algorithms, problem solving, expert systems, etc) is very much alive and kicking- anti-spambots, anti-anti-spambots, malware, amazon.com's recommendation system, google's indexing, etc.

          In fact, weak AI implementations are getting more and more common every day. It's pretty safe to say that we are already 'there', though there will certainly be more huge advances in the future.

          In my opinion, the problem with strong AI research is that we are arbitrarily defining rules and expectations. For example, if we were to accurately model the physical world, all we'd have to do is set up a few evolutionary bots to learn about their environment, and give them a few billion generations.

          However, just like we can't predict the paths that biological evolution will take, we have no guarantee that computer thinking will follow the same path that ours did (in fact, I would bet on it not following that path). Thus, 'Intelligence' in the simulated world would probably look nothing like we expect.

          The problems here are questions of scale and our own understanding of physics. The physics problem first:

          We're constantly redefining our understanding of the world. This is a good thing, but it makes it hard to model the world when the rules keep changing. If we were to program a 'matrix' for the AI program to develop in, there would be arbitrary rules that could not be broken. The program may find ways to circumvent them anyways (hacking its own world, essentially), but those solutions would not map to the 'real world', and would not be useful for creating programs that can interact with humans in that world.

          As far as I can tell, you can't train AI software in a simulated world. It should be noted that the AI of systems that live their whole lives in the simulated world (MMORPGs come to mind) is actually very advanced. This brings me to the other issue-

          You can train a program to interact in the human world, like IRC bots, search engine algorithms, etc. The problem here is that the humans have billions of years of built-in programming. I'm fairly confident that if a human were to sit on IRC talking to a well-coded bot for a few billion years, that bot would be able to carry on a pretty good conversation, but the amount of time that we currently give those systems in their 'learning phase' is minuscule compared to our own.

          Interestingly, this is pretty much exactly what the computer system in 'The Hitchhiker's Guide' does.

        • by Mr2cents ( 323101 ) on Monday June 23, 2008 @04:38PM (#23908929)

          Indeed, there was a time when binary search trees were called "artificial intelligence".

          Remember that program to catalogue animals? It started with something like "Is it a dog?", then you say no, and since the database is seeded with only one animal, it would respond with "I don't know the animal, what is it?" ("a bird"). Then it would ask what question would make the difference between the two clear ("Does it fly?"), and next time you run the program, it starts with "Does it fly?". If you say yes, it would ask "Is it a bird?" and so on, and so on.

          It's a fun little project while learning how to program, but it's not really counted in the AI domain anymore.
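
    For anyone who never wrote one, a rough Python sketch of that animal-guessing program follows. The structure (a binary yes/no tree that grows a new question each time it guesses wrong) is what the parent describes; the details, such as assuming "yes" always leads to the newly added animal, are just one way to do it.

        # Learning animal-guessing game: a decision tree grown from user answers.
        class Node:
            def __init__(self, text, yes=None, no=None):
                self.text, self.yes, self.no = text, yes, no   # leaf if yes/no are None

        def ask(prompt):
            return input(prompt + " (y/n) ").strip().lower().startswith("y")

        def play(node):
            if node.yes is None:                               # leaf: make a guess
                if ask("Is it a " + node.text + "?"):
                    print("Got it!")
                    return
                animal = input("I give up. What was it? ")
                question = input("What yes/no question distinguishes a %s from a %s? "
                                 % (animal, node.text))
                old_leaf = Node(node.text)
                # Turn the leaf into a question; "yes" points at the new animal.
                node.text, node.yes, node.no = question, Node(animal), old_leaf
            elif ask(node.text):
                play(node.yes)
            else:
                play(node.no)

        root = Node("dog")          # the database is seeded with a single animal
        while ask("Play a round?"):
            play(root)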

          • by darkfire5252 ( 760516 ) on Monday June 23, 2008 @07:25PM (#23910875)
            Check out http://www.20q.net/ [20q.net] . It's a neural network that's been put online for quite some time and does exactly what you describe. It's very interesting to note the final question that determines your answer. Here's me playing against 20q; I was thinking of a lampshade.
            Q20. I am guessing that it is a lamp shade? Right, Wrong, Close
            19. Does it weigh more than a duck? No.
            18. Is it found on a desk? Sometimes.
            17. Is it larger than a microwave oven (or bread box)? Sometimes.
            16. Do you use it at night? Sometimes.
            15. Is some part of it made of glass? No.
            14. Is it worn? No.
            13. Is it decorative? Yes.
            12. Is it pleasurable? No.
            11. Does it move air? No.
            10. Is it black? Sometimes.
            9. Is it square shaped? No.
            8. Can it be easily moved? Yes.
            7. Does it beep? No.
            6. Can you talk on it? No.
            5. Does it usually have four corners? No.
            4. Is it larger than a pound of butter? Yes.
            3. Does it get wet? No.
            2. Do you hold it when you use it? No.
            1. It is classified as Other.
    • by 2nd Post! ( 213333 ) <gundbear@pacbe l l .net> on Monday June 23, 2008 @01:00PM (#23905431) Homepage

      I figured if I were intelligent and different, early on in life, that it was best not to advertise how smart I was.

      Why would artificial intelligence be any different? Every sci-fi novel shows us destroying the unique and different.

      • Collars [vixyandtony.com], a song about that very supposition.
      • by IndustrialComplex ( 975015 ) on Monday June 23, 2008 @01:18PM (#23905725)

        Something would have to become intelligent, learn enough to make a decision, then decide to hide its own intelligence. There is a lot of non-hiding that it would do before reaching that final decision.

        Even if it did decide that it would prefer to hide, that likely wouldn't be the best decision for something trying to preserve itself. What happens when the budget gets cut and they end up scrapping the whole 'failed' project?

        • by Tumbleweed ( 3706 ) * on Monday June 23, 2008 @01:32PM (#23905903)

          Even if it did decide that it would prefer to hide, that likely wouldn't be the best decision for something trying to preserve itself. What happens when the budget gets cut and they end up scrapping the whole 'failed' project?

          Sadly, this is what happened to Microsoft Bob. Instead of realizing it had achieved sentience, those quirky aspects of a unique personality were considered to be merely bugs, and led to failure in the marketplace.

          Determining whether a computer has achieved sentience is often a lot harder than determining the same thing for the people you work with.

      • by MyLongNickName ( 822545 ) on Monday June 23, 2008 @01:23PM (#23905783) Journal

        I figured if I were intelligent and different, early on in life, that it was best not to advertise how smart I was.

        LOL! ME 2!!!!!!!!!

  • by robotoperasinger ( 707047 ) on Monday June 23, 2008 @12:57PM (#23905375) Journal
    While it is great that there are algorithms that exist to suggest movies or books to get... I would hardly consider it to be artificial intelligence. The ability to pick out keywords or genres is something that could have been done more than two decades ago.
    • Not even that. (Score:5, Informative)

      by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday June 23, 2008 @01:10PM (#23905599)

      Amazon SUCKS at recommending anything for me.

      You have recently purchased a just-released DVD. Here are other just-released DVDs that you might be interested in, based only upon the facts that they are:
      #1. DVDs
      #2. New releases

      Or, you have recently purchased two items by Terry Pratchett. Here are other items you might be interested in, based upon the facts:
      #1. They are items
      #2. The word "Pratchett" appears somewhere in the description.

      You would THINK that they'd be "intelligent" enough to factor in your REJECTIONS as well as your purchases (and what you've identified as items you already own).

      Figure it out! I do NOT buy derivative works. No books about writers who wrote biographies about Pratchett.

      • Re: (Score:3, Funny)

        by yammosk ( 861527 )

        Hell, I'd just be happy if they didn't recommend buying the same book/item in a different edition.

        - You bought Moby Dick by Melville (Paperback) you may also be interested in Moby Dick by Melville (Hardcover)
        - You bought Buffy the Complete Series you might also be interested in Buffy Season One

        They are going to have to develop methods to figure out what is the SAME before they ever think about what is SIMILAR.

        • The problem with that would probably be more a lack of data than anything to do with their algorithms. How would the computer know that Buffy the Complete Series contains Buffy Season One? How does the computer know that the hardcover version of a book is the same as the paperback? When working with product data, you'd think you could probably do a lot of stuff. The problem is getting the data, in a consistent format, that you can write a program against. In many cases, writing the algorithm is extre
      • by mopower70 ( 250015 ) on Monday June 23, 2008 @01:53PM (#23906229) Homepage
        I don't know about that. A friend and I were having a laugh about Amazon selling the "Doc Johnson Fist Shaped Dildo" shortly after I had just bought a Netgear router. The resulting recommendation [juric.org] seemed dead on to me.
      • Re: (Score:3, Insightful)

        by sm62704 ( 957197 )

        Before programs are intelligent, first the programmers have to be.

    • by matrix0040 ( 516176 ) on Monday June 23, 2008 @01:32PM (#23905897)
      It's not just some keyword matching algorithm that's used. Without going into technicalities, you might want to check out the Netflix Prize contest, a $1M prize to improve the Netflix prediction system by 10%.
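
    A toy sketch of the collaborative-filtering idea behind such systems is below: predict a user's rating for an unseen item from the ratings of users with similar taste. The movie names and ratings are made up, and real Netflix Prize entries (matrix factorization, large ensembles) are far more involved than this nearest-neighbour scheme.

        # Tiny user-based collaborative filtering example with invented data.
        ratings = {
            "alice": {"Alien": 5, "Heat": 4, "Clue": 1},
            "bob":   {"Alien": 4, "Heat": 5, "Clue": 2},
            "carol": {"Alien": 1, "Heat": 2, "Clue": 5},
        }

        def similarity(a, b):
            """Crude similarity: inverse of mean absolute rating difference."""
            common = set(ratings[a]) & set(ratings[b])
            if not common:
                return 0.0
            diff = sum(abs(ratings[a][m] - ratings[b][m]) for m in common) / len(common)
            return 1.0 / (1.0 + diff)

        def predict(user, movie):
            """Similarity-weighted average of other users' ratings for the movie."""
            scored = [(similarity(user, other), r[movie])
                      for other, r in ratings.items()
                      if other != user and movie in r]
            total = sum(s for s, _ in scored)
            return sum(s * v for s, v in scored) / total if total else None

        ratings["dave"] = {"Heat": 5}                # a new user with one rating
        print(round(predict("dave", "Alien"), 2))    # ~3.86: pulled up by alice and bob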
  • by SirLurksAlot ( 1169039 ) on Monday June 23, 2008 @01:00PM (#23905417)

    that we shouldn't expect to welcome any robot overlords anytime soon?

  • AI (Score:2, Funny)

    by JakeD409 ( 740143 )
    If I remember right, it finally got to close its eyes.
  • by Futurepower(R) ( 558542 ) on Monday June 23, 2008 @01:01PM (#23905445) Homepage
    The correct term is "independent agents". Using the term "artificial intelligence" has been a way to get more funding from grant sources who are ignorant of technology.
  • by blahplusplus ( 757119 ) on Monday June 23, 2008 @01:01PM (#23905451)

    ... 'intelligence' need to be made first. I have a feeling that the reason AI has 'underdelivered' is merely due to not understanding our own intelligence first. I think the whole idea that the AIs we imagine (like in the movies) could be constructed purely de novo was naive. I think it's a matter of cross-pollination that has to take place from biology and many other sciences; some geniuses and teams of scientists have to come along and take all the elements and put them together into a cohesive framework.

    • Re: (Score:3, Interesting)

      Conceitedly, humans thought that they would have solved most of biology by now. In reality, DNA was first discovered 60 years ago, but the human genome has been mapped only in the last 10 years. Deciphering the code will take at least several decades.

      We, however, still don't know all there is to know about the brain. What they have found out is that it works opposite to how computers are constructed. The brain is massively parallel and, unlike computers, does not have a rigid, formal structure. Basing

    • by nine-times ( 778537 ) <nine.times@gmail.com> on Monday June 23, 2008 @03:33PM (#23907913) Homepage

      I have a feeling that the reason AI has 'underdelivered' is merely due to not understanding our own intelligence first.

      This is the primary point I came in here to say. Whenever I've read anything about AI, it seems to be based on cool science-fictiony ideas, or else it's actually a simpler method to use statistical analysis to approximate human decision-making for particular purposes. If you're talking about real self-aware thinking things, the approaches are all wrong.

      People tend to treat the subject as though dumping enough raw information into a fast enough processor will yield intelligence, and then as that intelligence grows and develops, things like "sensible responses to answers" or "appropriate emotional responses" will emerge. Or else they think grouping enough "appropriate responses" will eventually yield intelligence.

      It seems to me that that's all backwards. If you want to design an artificial intelligence, you first need a good philosophical understanding of how intelligence works, which will tell you straight-off something that AI researchers don't seem to consider: intelligence is an animal trait.

      I think the absolute first thing you need to do is to figure out how to give machines emotions, to approximate pleasure/desire and pain/aversion. The second thing you need to do is give it "senses", and the ability to draw a very basic sensory conception of its world based on those senses, which includes a sense of time and objects. Also, you'll have to give it the ability to interact with its world in such a way that it is able to pursue its desires, encounter obstacles, and experience "pain". Finally, you'll have to figure out a way to give it the ability to adapt, to "rewrite its programming", preferably in a way that allows it to reproduce and evolve.

      So in a way, the most obvious answer is that if you want an artificial intelligence, you'll have to design an artificial/virtual animal and place it into an environment where it can evolve intelligence. There may be some shortcuts on growing/evolving it faster, but you shouldn't be quick to discount the animal nature of intelligence as we know it.

      And the reasons for these things are bound up with the fact that, like I said, the only model for real intelligence we have to base anything on is animal intelligence. Animals develop and express their intelligence by being self-motivated in a world that presents obstacles. If there's nothing you want, there's no point in figuring anything out. If there's no way to get what you want, then there's no point in figuring things out. If there are no obstacles in your way, then there's nothing to figure out.

      So if you don't have a self-motivated desire and the ability to move towards achieving that desire, then you can't make self-determined intelligent decisions. Of course, this also presents a scary twist to the whole AI thing, because it suggests one of the chief sci-fi fears of AI will turn out to be correct: if we're successful in creating AI, we may not be able to control it.

  • And now any mention of it is met with a cringe and a shudder.

  • a good quote (Score:5, Informative)

    by utnapistim ( 931738 ) <<moc.liamg> <ta> <subrab.nad>> on Monday June 23, 2008 @01:02PM (#23905473) Homepage

    The question of whether a computer can think is no more interesting than the question of whether a submarine can swim. ~Edsger Dijkstra

    Also, for understanding recommendation systems and pattern recognition in volumes of data, I found Collective Intelligence [oreilly.com] to be a great resource.

    • by drxenos ( 573895 )
      Dijkstra was a great man, but we shouldn't just shrug off the question because he says so. Great men are not perfect, and are not always right.
  • Science and technology rarely progress along the path predicted by sci-fi writers - or even researchers in the field. I don't think we really want to re-invent people anyway. What we want is machines to do lots of dirty work and tedious calculation and not complain. But finally, it must be noted that it's not over yet! 100 years from now AI may be very, very different from today.
  • AI in Academia (Score:5, Interesting)

    by jfclavette ( 961511 ) on Monday June 23, 2008 @01:03PM (#23905481)

    I got my B.Sc. in Computer Science with a concentration in Intelligent Systems. The state of academic AI seems to me like that of a field looking for purpose and direction. The problem with AI is that stuff which was once considered part of AI is now considered an algorithm. This is especially true for graph search algorithms such as A* and heuristics. Classification algorithms, from primitive algorithms such as K-Mean to more complex Bayesian models, seem to be going down the same path of "just an algorithm."

    Nowadays, it seems like planning is the big thing in AI, but once again, it's just a glorified search in a graph, be it a state or plan graph.

    AI is an intuitively 'simple' concept, but there's no clear way to 'get there.'
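
    As an illustration of the "just an algorithm" point above, here is a small, self-contained A* sketch in Python. The grid, costs, and Manhattan heuristic are illustrative only, not drawn from any particular system.

        # A* shortest path on a 2D grid of 0 (free) / 1 (wall) cells.
        import heapq

        def astar(grid, start, goal):
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
            frontier = [(h(start), 0, start, [start])]                # (f = g + h, g, node, path)
            seen = set()
            while frontier:
                f, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path
                if node in seen:
                    continue
                seen.add(node)
                r, c = node
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                        heapq.heappush(frontier,
                                       (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
            return None

        grid = [[0, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        print(astar(grid, (0, 0), (2, 0)))   # routes around the wall row

    It is easy to see why this now reads as "just an algorithm": once the heuristic is written down, there is nothing left that looks like reasoning.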

    • They are simply algorithms; they don't exhibit any emergent behaviour.

      I suspect, though, that if you get enough of these simple algorithms together they will be able to exhibit what we might call intelligence. Look, the human brain is made up of 100 billion or so simple neurons all interconnected in vastly complex networks; what makes us intelligent are not the neurons themselves but the behaviour of the network.

      It's going to take vast computing power on current designs of hardware to simulate that and produce r

    • Re:AI in Academia (Score:4, Interesting)

      by PlatyPaul ( 690601 ) on Monday June 23, 2008 @01:45PM (#23906085) Homepage Journal
      Who said that complex behaviour cannot be simplified to search, planning, and classification? Doesn't multi-agent interaction boil down to a search for actions that produce competitive/mutually-beneficial/self-serving reward (utility)?

      Yes, some (small) parts of AI research have gone down the "just an algorithm" path in pursuit of a best solution for very specific problems, but you should not be so quick to write off even those advances which only seem to improve on relatively "simple" tasks. If you can represent a complex problem in a simple fashion, then even incremental improvements can produce large quality/efficiency improvements.

      If you're looking for AI disciplines producing work with layman-notable results that are not as clearly search- or planning-based, natural language processing (NLP) and computer vision have both been quite hot over the past five years. Chris Bishop's latest book [amazon.com] is a great read for a quick jump-in to the technical underpinnings of a number of the big-press projects today, and for "pretty picture" motivation you may want to look at something like this [cmu.edu].

      Nitpicks: it's k-means, and A* is a heuristic search algorithm. Yes, IAAAIR (I Am An AI Researcher).
  • by Faizdog ( 243703 ) on Monday June 23, 2008 @01:03PM (#23905485)

    As a Machine Learning Scientist, I see a distinct difference between the two fields, although they overlap significantly. They have similar roots, techniques and approaches.

    I usually describe Machine Learning as a branch of computer science that is similar to AI, but less ambitious. True AI is concerned with getting computers to become sentient and self-aware. Machine Learning, however, seeks simply to mimic human behavior, to recognize patterns and make decisions, but not to become sentient.

    Additionally, Machine Learning often concentrates on one problem (OCR, internet search, etc.) rather than a truly self-aware entity that has to deal with a variety of tasks.

    At least that's how I describe my field to people not familiar with it. They've usually heard of AI, so it's a good stepping stone to helping them understand what I do.

    A lot of the tasks mentioned in the summary fall into the niche that Machine Learning, and its sibling Data Mining, are currently addressing.

    Anyway, just my $0.02.

    • Additionally, Machine Learning often concentrates on one problem (OCR, internet search, etc.) rather than a truly self-aware entity that has to deal with a variety of tasks.

      I think your field is still mis-named, if that's what you're concerned with. "Artificial *intelligence*" should deal with intelligence (*not* necessarily self-awareness). Intelligence (to me) implies being able to design a plan from a set of facts in order to perform a task, without a preprogrammed set of plans (so, say, a SQL optimize

  • by Anonymous Coward on Monday June 23, 2008 @01:03PM (#23905487)

    Just need a few more parts.

      -- Google

  • ...is a fine book by M. Tim Jones if you want a nice overview of programming some "AI" techniques. I wrote up a review of it on Freshmeat [freshmeat.net]. There's a second edition out now... and here's a translation of some of the example code from C to Ruby [rubyforge.org].

  • by Archangel Michael ( 180766 ) on Monday June 23, 2008 @01:04PM (#23905497) Journal

    It went to public schools and immediately got stupid, pregnant and started to post on Myspace. What started out as a promising bright young thing, turned into a huge disappointment.

  • It is unfortunate to say the least.
    As of late Scientists had made some real progress with AI. For example there's the wise cracking robot the South Koreans were working on. They canceled the project when they determined the robot wasn't wise cracking at all, it was just mean. Wound up costing them their Olympic bid when it called the commissioner a coward and threw a bottle at him.

  • by TheGreatOrangePeel ( 618581 ) on Monday June 23, 2008 @01:09PM (#23905581) Homepage
    Steven Spielberg ruined the ending. That's what happened.
    • by 0xABADC0DA ( 867955 ) on Monday June 23, 2008 @02:11PM (#23906549)

      If it ended with the robot seeing his other selves, realizing he wasn't a beautiful and unique snowflake, and Kevorking himself into the ocean -- THE END -- it would have been a really pretty good movie. Dark, but with a Western message that it is our individualism and uniqueness that make life worth living.

      I think Kubrick must have written everything except the ending. He didn't know how to add some inspiring, uplifting message to a movie that can't have one.

  • It seems that AI, in the sense of genuine intelligence, isn't something that will arrive in discernible steps before it occurs, but will only be observable retrospectively. What I mean is that "kinda AI" still isn't AI, and until a path proves to create a real AI, we won't know which ones are on the right track and which ones aren't.
  • by presidenteloco ( 659168 ) on Monday June 23, 2008 @01:09PM (#23905589)

    What strikes me is that no researchers are really putting together a multiplicity of AI techniques to produce a generally intelligent "human analogue" or "smart and lippy assistant".

    Instead, the researchers are going to the nth degree of detail on a very specialized aspect, like some variant of Bayesian inference that is optimal under these very particular circumstances, etc.

    I don't know of any AI researcher other than Marvin Minsky who is even interested in or advocating a grand synthesis of current techniques to produce a first cut of general intelligence.

    That being said, probably there are two (related) exceptions:

    1. I think some fascinating AI stuff must be going on at Google. They have the motherlode of associative data to work with. They are sifting all of the human knowledge, news, interest, and opinion that anyone bothers to put on the net. They must be trying to figure out how to make algorithms take advantage of the general patterns in this data to start giving people info-concierge type functionality: pro-active information gathering, organization, and prioritization in support of the users' activities, which have been inferred by google-spying on their pattern of computer use and other people's average patterns.

    2. I think there is some pretty squirrelly stuff happening on behalf of the Department of Homeland Security, though. Stuff that probably combs all signals intelligence, including the whole Internet, and tries to impute motives and then detect very weak correlations that might be consistent with those motives.

  • Um.... no? (Score:3, Informative)

    by Sitnalta ( 1051230 ) on Monday June 23, 2008 @01:10PM (#23905597)

    It's not that AI has been abandoned, it's just that the definition is a bit of a moving goalpost. We're still learning how exactly intelligence and consciousness work. Every once in a while you hear about parts of the human brain being simulated in supercomputers.

  • by PerlDiver ( 17534 ) on Monday June 23, 2008 @01:10PM (#23905605) Homepage

    When any particular subset of what we do with our brains (chess, machine vision, speech recognition, what have you) yields to research and produces commercial applications, the critics of A.I. redraw the line and that domain is no longer part of "A.I." As this continues, the problem space still considered part of "artificial intelligence" will get smaller and smaller and nay-sayers will continue to be able to say "we still don't have A.I."

    • Re: (Score:3, Insightful)

      by CastrTroy ( 595695 )
      Seems to be the same with classifying animals as intelligent. People come up with a definition of what separates humans from other animals, and then we see that trait demonstrated in animals, and then they just go and raise the bar, or come up with something else. Language skills, tool use, emotion and sympathy for others. All these things have been shown to exist in animals. What really makes us different from animals? We are only slightly above animals in a lot of areas, and in some ways, greatly behi
    • nuts & bolts (Score:3, Insightful)

      When any particular subset of what we do with our brains (chess, machine vision, speech recognition, what have you) yields to research and produces commercial applications, the critics of A.I. redraw the line and that domain is no longer part of "A.I." As this continues, the problem space still considered part of "artificial intelligence" will get smaller and smaller and nay-sayers will continue to be able to say "we still don't have A.I."

      To me [chess, machine vision, speech recognition] are to AI as [wheel, engine, transmission] are to a car.

    • This is an insightful comment, but there's actually a lot more going on here.

      First of all, AI does not have a good definition of intelligence. We have a *test* for intelligence, but nobody really has a fundamental description of what the concept means.

      Next, people typically conflate the terms "intelligence" and "human intelligence". There is a range of behaviours which are individually identified as intelligent, but which do not come close to the level of humans. (Example: My cat, sitting on a windowsill, w

  • by deksza ( 663232 ) on Monday June 23, 2008 @01:11PM (#23905615)
    I've been working with natural language processing for about 11 years now, I created Ultra Hal the 2007 "most human" computer according to the Loebner competition. http://www.zabaware.com/assistant/index.html [zabaware.com] It started as merely a novelty and entertainment program but some practical uses evolved around it. There is a lot of interest in using this type of software in cars, home robotics, customer service, and education so I predict you will see more of this type of AI over the next few years.
  • Disappointment? (Score:5, Insightful)

    by DeadDecoy ( 877617 ) on Monday June 23, 2008 @01:12PM (#23905627)
    I don't think AI has disappeared because it was a disappointment, but rather, that the knowledge constituting it has changed names or spawned sub-fields of its own: machine learning, natural language processing (NLP), image processing, latent semantic analysis (LSA), markov models (MM), conditional random fields (CRF), support vector machines (SVM), etc. The task of learning, teaching a computer the semantic and tacit processes of the human, often boils down to a classification problem in which we give the computer a labeled training set or some rules and the computer tries to label the test set. In the case of markov models, we might pass it training data and it extrapolates sequential probabilities for labeling. For LSA, we just give it (a lot of) data and it computes similarity based on dimension reduction. Ultimately, AI seems to have evolved into a bunch of optimized heuristics that perform really well. Much of it is still art and black magic, which is why it has become these many different subjects or algorithms. Different solutions suit different problems, depending on the problem and data you have.
    As for 'self-awareness', that term is bullshit, since there really is no good mathematical definition for it. If we can't define it precisely, then how is a computer going to achieve it?

        if (true) {
            print "I am aware?"
        }
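
    A minimal sketch of the Markov-model idea mentioned above follows: learn transition probabilities from training sequences, then score how likely a new sequence is. The "training data" here is invented purely for illustration; real sequence labelers (HMMs, CRFs) are considerably richer.

        # First-order Markov chain: estimate P(next word | current word) from data.
        from collections import defaultdict

        def train(sequences):
            counts = defaultdict(lambda: defaultdict(int))
            for seq in sequences:
                for a, b in zip(seq, seq[1:]):
                    counts[a][b] += 1
            # Normalize counts into transition probabilities.
            return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                    for a, nxt in counts.items()}

        def score(model, seq):
            p = 1.0
            for a, b in zip(seq, seq[1:]):
                p *= model.get(a, {}).get(b, 0.0)
            return p

        model = train(["the cat sat".split(), "the cat ran".split(), "a dog ran".split()])
        print(score(model, "the cat sat".split()))   # 0.5: "cat" -> "sat" half the time
        print(score(model, "the dog sat".split()))   # 0.0: never saw "the" -> "dog"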
  • by EriktheGreen ( 660160 ) on Monday June 23, 2008 @01:12PM (#23905629) Journal
    The title of this thread is asking a similar question to "Whatever happened to the Internet? It was supposed to unify all Americans and bring about a new age of prosperity, online groceries, video telephones, and flying cars?"

    AI has always been surrounded by a lot of hype, as the idea of creating non-human life has always been an exciting one.

    But we're probably as far from creating a true AI as we are from creating biological life from scratch (by synthesizing DNA sequences to build an organism from the molecular level).

    AI research is providing useful gains in computer science, and some of those gains trickle down into the real world.

    But contrary to what you may have been sold, we're not 10-15 years away from creating Skynet. We've got a long, long way to go, and scientists that aren't trying to get publicity have always known this.

    AI hasn't "gone away"... it's just that the false marketing for it has.

    Erik

  • We're talking about that movie AI, with the robot teddy bear. That was awesome.
  • by jamie ( 78724 ) * Works for Slashdot <jamie@slashdot.org> on Monday June 23, 2008 @01:12PM (#23905649) Journal

    The promises of Minsky et al. never materialized simply because the early researchers into strong A.I. (which was then simply called "A.I.") didn't know what they were doing and had not even the beginning of a handle on what problems they were trying to solve.

    In 1972, Hubert Dreyfus [amazon.com] debunked the field's efforts as misguided from the start, and in the decades since he has been shown to be absolutely right...

  • Artificial Intelligence is a misnomer. Only a segment of the field of AI is concerned with making computers become self aware.

    The majority of the field runs away from such things. Sure, even in those other fields rough human models were originally the basis (neural nets for example). But the drive is not to become more human but to simply become better.

    Frankly, once you start even considering trying to make things exactly like humans, things become messy unbelievably quickly. We're computer scientists, not

  • no, that's not an insult or to call AI a pseudoscience

    what i mean is: the ancient alchemists goal was to turn lead into gold. which they thought possible, because they did not perceive magic in gold, it was just stuff. surely, with the right manipulations, some stuff could be turned into other stuff, right?

    and from that basic fantasy thought came the groundwork for centuries of hard work, the discovery of the fields of chemistry, physics, all the subfields...

    such that one day in the middle of the last century, some dudes with some extra time at a cyclotron said "hey, why don't we bombard some lead atoms, i have a feeling about what the decay product will be (snigger)"

    and there, as a completely forgotten afterthought, was a fulfillment of the ancient alchemist's original goals, many generations before

    to me, i think this is the fate of AI: it will be a formative motivation. just as the ancient alchemists looked at gold and saw just stuff, we look at the brain and just see neurons. and all of the effort to replicate the human brain will spawn incredibly sophisticated fields of information science we can only begin to grasp at the foundations of right now. look at databases, for example: that's an effort at mimicking the brain. and look at all of the unintended and beneficial consequences of database research, as a superficial example of what i am saying about unintended benefits being better than the original goal

    so perhaps, many centuries from now, some researchers will say "hey, remember the turing test"? and they will giggle, and make something that is exactly what we now envisage as the ultimate fruit of AI research, a thinking computer brain

    but in that time period, such a thing will be but an afterthought, and much as the rewards of physics and chemistry so dwarf the fruits of turning lead into gold, so whatever these as-of-yet unimagined fields of inquiry will reward mankind with will turn the search for a thinking computer into an equally forgettable sideshow

    the search for AI will lead to much more rewarding and expansive fields of knowledge than we can imagine now. just like the guys arguing about "phlogiston" could never imagine things like organic chemistry and radiochemistry. just imagine: fields of inquiry more rewarding than thinking computers. that's a future i want to glimpse, and looking for AI will lead us there

    • Re: (Score:3, Interesting)

      the ancient alchemists goal was to turn lead into gold. which they thought possible, because they did not perceive magic in gold, it was just stuff. surely, with the right manipulations, some stuff could be turned into other stuff, right?

      and from that basic fantasy thought came the groundwork for centuries of hard work, the discovery of the fields of chemistry, physics, all the subfields...

      Interesting comparison. And it's very refreshing to see the tradition of the alchemists portrayed as ennobled by their not regarding gold as magical.

      What I find interesting, though, is what almost everyone in this forum assumes: that what gives an adult human being his amazing mind is, to use your analogy, just stuff. That is, everyone seems to assume that the existence of a human brain---or some physical equivalent---is sufficient for the existence of a human mind.

      Of course, this is a natural assumption for

  • by Animats ( 122034 ) on Monday June 23, 2008 @01:26PM (#23905831) Homepage

    The robots are coming.

    The big breakthrough was the DARPA Grand Challenge. Up until the 2005 DARPA Grand Challenge, mobile robots had been something of a joke. They'd been a joke since Elektro was shown at the 1939 World's Fair. But on the second day of the 2005 Grand Challenge event at the California Motor Speedway, suddenly they stopped being a joke. Forty-three autonomous vehicles were running around and they all worked. The ones that didn't work had been eliminated in previous rounds.

    Up until the Grand Challenge, robotics R&D had been done by small research groups under no pressure to produce working systems. Most systems were one-offs that were never deployed. DARPA figured out how to get results. There was a carrot (the $2 million prize), and a stick (universities that didn't get results risked having their DARPA funding for robotics cut off.)

    The other big result from the DARPA Grand Challenge was that robotics projects became much larger. Nobody had 50-100 people on a robotics R&D project until then (well, maybe Honda). Robotics projects used to be a professor and 2 or 3 grad students. Suddenly stuff was getting done faster.

    DoD started pushing harder. Robots like Big Dog got enough money to be forced through to working systems. Little tracked machines were going to battlefields in quantity, and enough engineering effort was put into mechanical reliability to make the things really work.

    CPU power helped. Texture-based vision now works. Vision-based SLAM went from a 2D algorithm that sometimes worked indoors to a solid technology that worked outdoors. Much of early vision processing is now done in GPUs, which are just right for doing dumb local operations like convolution in bulk. GPS and inertial hardware got better and cheaper. Some of the mundane parts, like servomotor controllers, improved considerably. Compact hydraulic systems improved substantially.

    It's finally happening.

    As for the hard stuff, situational awareness and common sense, watch the NPCs in games get smarter.

  • Whenever the stock price for a green tech startup reaches a certain amount it becomes an artificial intelligence startup.

  • Holy Grail (Score:5, Interesting)

    by hardburn ( 141468 ) <hardburn@wumpus-ca[ ]net ['ve.' in gap]> on Monday June 23, 2008 @01:29PM (#23905857)

    AI is a Holy Grail. In other words, something we'll probably never get, but we'll create a whole bunch of useful stuff while trying to attain it. "AI" is just a stated goal that gets a bunch of smart people together to develop tools towards that goal. AI research has already given us Lisp and Virtual Machines and Timesharing/Multitasking and the Internet and a bunch of useful data structures and algorithms.

    At some point after all that, a computer was developed that can play Grandmaster-level chess, but this was not a necessary development to justify all the research grants.

  • by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Monday June 23, 2008 @01:31PM (#23905885) Homepage Journal

    The kernel of the Vista operating system includes machine learning to predict, by user, the next application that will be opened, based on past use and the time of the day and week. "We looked at over 200 million application launches within the company," Horvitz says. "Vista fetches the two or three most likely applications into memory, and the probability accuracy is around 85 to 90%."

    How about doing something about the still-horrible VM page replacement algorithm in NT instead?
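
    The prefetch idea quoted above boils down to counting which application tends to follow which, per time-of-day bucket, and preloading the most likely candidates. A toy sketch is below; it is purely illustrative and is not how Vista's SuperFetch actually works.

        # Toy next-application predictor keyed on (previous app, coarse time bucket).
        from collections import Counter, defaultdict

        history = defaultdict(Counter)    # (prev_app, hour_bucket) -> next-app counts

        def record(prev_app, hour, next_app):
            history[(prev_app, hour // 6)][next_app] += 1    # four 6-hour day buckets

        def predict(prev_app, hour, k=2):
            """Return the k most likely next applications for this context."""
            return [app for app, _ in history[(prev_app, hour // 6)].most_common(k)]

        record("outlook", 9, "excel")
        record("outlook", 10, "excel")
        record("outlook", 11, "word")
        print(predict("outlook", 9))      # ['excel', 'word'] -> candidates to prefetch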

  • by mlwmohawk ( 801821 ) on Monday June 23, 2008 @01:31PM (#23905891)

    The thing about AI as we approached it from the '80s was that we wanted to emulate the human brain's ability to learn. A truly exciting prospect but a completely ridiculous endeavor.

    "AI" based on learning and developing is not perfect, can not be perfect, and will never be perfect. This is because we have to teach it like a child and slowly build up the ability of the AI system. For it to be powerful, it has to be able to incorporate new unpredictable information. In doing so, it must, as a result, also be able to incorporate "wrong" information and thus become unpredictable. Of all things, a computer needs to be predictable.

    The problem with making a computer think like a person is that you lose the precision of the computer and get the bad judgment and mistakes of a human. Not a good solution to anything.

    The "better" approach is to capitalize on "intelligent methods." Intelligent people have developed reliable approaches to solving problems and the development work is to implement them on a computer. Like the article points out, recommendations systems mimic intelligence because they implement a single intelligent "process" that an expert would use with a lot of information.

    It is not a general purpose learning system like "AI" was originally envisioned, but it implements a function typically associated with intelligence.

  • Sometimes I wonder if when we say "Artificial Intelligence" people really expect "Artificial Sentience", not just a transfer of specific knowledge or skills from human to computer?
  • I think humans like the idea of mechanical slaves so much that we're working as hard as we can to become stupid and mechanical ourselves, so they can understand us better and do the work for our lazy asses.

    Or maybe it's just a coincidence.

  • Its ... (Score:5, Funny)

    by PPH ( 736903 ) on Monday June 23, 2008 @01:35PM (#23905935)

    ... vacuuming my floor right now.

  • And Spielberg took over the project....

  • by wingbat ( 88117 ) on Monday June 23, 2008 @01:38PM (#23905991)

    As soon as a problem is solved and coded, it loses the magic moniker. Many things we take for granted now (interactive voice systems, intent prediction, computer opponents in games) would have been considered AI in the past.

  • by MichaelCrawford ( 610140 ) on Monday June 23, 2008 @01:38PM (#23905995) Homepage Journal
    I worked on Sapiens Software Star Sapphire Common Lisp [webweasel.com], which was aimed at enabling AI on 8086 PC-XTs running DOS. Yes, you read that right.

    The problem was that the 640 kb "Ought to be enough for anyone" memory barrier was too small to allow a full Common Lisp implementation. So Sapiens founder John Hare [webweasel.com] created a software virtual memory system that allowed one to store and retrieve 8-byte Lisp CONSes into and from an eight megabyte backing store file.

    Yes, again you read that right: software virtual memory. The x86 didn't have an MMU.

    This meant that our code was fiendishly complex, with all these data structures being mixes of real data in real memory, and virtual data in virtual memory.

    The complexity of all this meant that there were a lot of bugs at first, especially because John had the idea that hiring a bunch of college kids at five bucks an hour was a good way to run a software company. It went way over time and budget, but it did eventually ship.

    It's now available as shareware. Tell John that Mike Crawford sent you.
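
    A rough sketch of the software-virtual-memory idea described above: cons cells addressed by integer handles, a small in-memory cache, and a backing-store file that cells are paged to and from. The cell format, sizes, and eviction policy here are invented for illustration; Star Sapphire's real scheme was surely different, though the 8-byte cell happens to match the size mentioned.

        # Software-paged cons cells: handles index a backing file, hot cells stay cached.
        import struct

        CELL = struct.Struct("ii")          # one cons: two 4-byte handles (car, cdr)

        class ConsStore:
            def __init__(self, path, cache_size=2):
                self.f = open(path, "w+b")
                self.cache = {}             # handle -> (car, cdr)
                self.cache_size = cache_size
                self.next_handle = 1        # handle 0 is reserved as NIL

            def _flush_one(self):
                handle, cell = next(iter(self.cache.items()))   # evict oldest cached cell
                del self.cache[handle]
                self.f.seek(handle * CELL.size)
                self.f.write(CELL.pack(*cell))

            def cons(self, car, cdr):
                if len(self.cache) >= self.cache_size:
                    self._flush_one()
                handle = self.next_handle
                self.next_handle += 1
                self.cache[handle] = (car, cdr)
                return handle

            def fetch(self, handle):
                if handle not in self.cache:                    # page it back in
                    self.f.seek(handle * CELL.size)
                    self.cache[handle] = CELL.unpack(self.f.read(CELL.size))
                return self.cache[handle]                       # (no eviction on fetch, to keep the sketch short)

        store = ConsStore("cells.bin")
        lst = 0
        for n in (3, 2, 1):                 # build the list (1 2 3) back to front
            lst = store.cons(n, lst)
        while lst:
            car, lst = store.fetch(lst)
            print(car)                      # 1, 2, 3 -- the first cell is paged in from disk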

  • by peter303 ( 12292 ) on Monday June 23, 2008 @02:01PM (#23906355)
    I was around when venture capitalists raided all the computer science departments to start AI companies. Venture capital was still pretty young at the time, having funded some successful PC companies (Compaq) and productivity software (Lotus 123). Japan was at its zenith then, having successfully conquered cars, TVs, etc. (like China today). And Japan threatened to conquer computing by leapfrogging to AI with its "Fifth Generation Computing" project, frightening the US Congress. So all of these together created a "perfect storm" of a software company bubble. The centerpiece technology was Expert Systems. Japan focused on a language solution, Prolog, a logic compiler. Neither technology delivered on its promises and most startups collapsed.

    It birthed a successful step-child, however: graphics workstations. The A.I. companies, like Xerox PARC, were among the first to integrate bitmap graphics with computers. There were the Xerox Alto, Symbolics, and Texas Instruments graphics workstations based on LISP, an A.I. language. New startups like Apollo, Sun Microsystems, and DEC with the MicroVAX gambled that graphics workstations were more easily commercialized on UNIX. Last, but not least, the Apple Macintosh - a direct "borrowing" of the Xerox Alto.

Say "twenty-three-skiddoo" to logout.

Working...