IBM Technology

Post-Googleism At IBM With Piquant (159 comments)

kamesh writes "James Fallows of the New York Times reports on an interesting search technology that IBM is developing. IBM demonstrated a system called Piquant, which analyzes the semantic structure of a passage and can therefore expose 'knowledge' that isn't explicitly there. After scanning a news article about Canadian politics, the system responded correctly to the question, 'Who is Canada's prime minister?' even though those exact words didn't appear in the article. What do you think?"
This discussion has been archived. No new comments can be posted.

  • by LISNews ( 150412 ) * on Sunday December 26, 2004 @08:07AM (#11184434) Homepage
    They don't come out and say it, but it sounds like it's just a big ol' LSI System [middlebury.edu]. It works really well for some types of searching, but I'm not sure such a thing would outperform Google as a general-purpose search engine.

    "Latent semantic indexing adds an important step to the document indexing process. In addition to recording which keywords a document contains, the method examines the document collection as a whole, to see which other documents contain some of those same words. LSI considers documents that have many words in common to be semantically close, and ones with few words in common to be semantically distant. This simple method correlates surprisingly well with how a human being, looking at content, might classify a document collection. Although the LSI algorithm doesn't understand anything about what the words mean, the patterns it notices can make it seem astonishingly intelligent."
    • I wonder... (Score:5, Interesting)

      by Raul654 ( 453029 ) on Sunday December 26, 2004 @08:17AM (#11184469) Homepage
      Using Google means that this would have to contend with a lot of noise - looking for one nugget of information on the internet will tend to yield a low signal-to-noise ratio. I wonder what would happen if instead you were to run it using Wikipedia as a back end (full disclosure: I'm a Wikipedia admin). There'd be less information, but I suspect the quality of the results would be better.
      • More clearly, it isn't "noise" -- random comments about this and that which may or may not be relevant -- it's often misinformation. The web is chock full of bogus claims and incorrect assertions, both direct and indirect. It is bad enough that searching for information turns up articles written by those who don't have any idea what the facts might be on any particular subject (assuming there are facts to be had, which isn't always a given), but to add inference from context to a milieu where the context is
    • by SpinyNorman ( 33776 ) on Sunday December 26, 2004 @08:21AM (#11184481)
      Actually it sounds more like CYC-lite.

      The LSI system, despite the name, knows nothing about semantics. It just ASSUMES that words that frequently occur near each other are semantically related.

      • by Haydn Fenton ( 752330 ) <no.spam.for.haydn@gmail.com> on Sunday December 26, 2004 @09:03AM (#11184572)
        Yep, a little digging shows that it does indeed use CYC technology, or at least it does according to this site [216.239.59.104] (Google's HTML of a PDF).
      • Yes, it sounds like Cyc, and Cyc is just too ambitious a project; see

        http://www.opencyc.org/

        For me it has no use at all.

        IP, HTML, and Google work very well because they are simple. There are "better", more complicated systems, protocols, and ideas, but they are not useful yet.

        I think it sounds like a honey trap for investors who want to waste their money, and I really wonder whether they will file a "software patent" or do other crap :-)

        The prime minister detection is a very simple issue.

        AI does not work, because it i
    • by ragnar ( 3268 ) on Sunday December 26, 2004 @08:23AM (#11184487) Homepage
      I thought the same when I read this. I've met the people at NITLE [nitle.org] who are developing an implementation of LSI. It is impressive, and they have a download [nitle.org] of their software available via CVS. For anyone interested in this area of research, it is worthwhile to look at what NITLE is doing.
    • by timeOday ( 582209 ) on Sunday December 26, 2004 @08:29AM (#11184502)
      I'm not sure such a thing would outperform Google as a general-purpose search engine.
      The short answer is no, because traditional information retrieval methods like LSI are easily fooled by spammer tricks like keyword stuffing.

      The genius behind Google's success was paying *less* attention to the content of a page when categorizing it, and relying instead on links *to* the page. Why? Because of spammers.

      Think about hiring for a job. You don't limit yourself to interviews with candidates, because they're highly motivated to deceive you. So you look for references. Certification is an example of this - somebody besides the person himself who will vouch for his competence. An even better reference is somebody you know and trust who thinks highly of the individual (which is why personal networking is so important to getting hired).

      Google's PageRank is analogous. Instead of looking at the content of a page, you rely heavily on links to the page, especially links from more trusted sources. This helps defeat spammers, who use all manner of tricks to make their crap look good to search engine spiders.
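
      To make that concrete, here is a toy power-iteration sketch of the PageRank idea (the four-page link graph is invented; note that "spam", which nobody links to, ends up with the minimum possible rank no matter what its own text says):

      # Each page starts with equal rank, then repeatedly passes its rank
      # along its outgoing links; a damping factor models random jumps.
      links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "spam": ["a"]}
      pages = list(links)
      rank = {p: 1.0 / len(pages) for p in pages}
      d = 0.85  # damping factor

      for _ in range(50):
          new = {p: (1 - d) / len(pages) for p in pages}
          for p, outs in links.items():
              for q in outs:
                  new[q] += d * rank[p] / len(outs)
          rank = new

      print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "spam" comes last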

      • citation analysis (Score:4, Insightful)

        by jeif1k ( 809151 ) on Sunday December 26, 2004 @09:40AM (#11184668)
        The genius behind Google's success was paying *less* attention to the content of a page when categorizing it, and relying instead on links *to* the page. Why? Because of spammers.

        "Genius" would imply some sort of brand new insight, but citation analysis has had a long tradition before Google appeared on the scene as a search engine. Google's biggest achievement is probably in implementing citation analysis on a very large scale, but they didn't break completely new ground in how people search.

        And, in the long run, semantics-based analysis, like IBM's Piquant, is probably going to be the better technology: citation analysis for determining relevance to a query is really just a limited substitute for understanding of the content.
      • From the article:

        MR. CICCOLO, the search strategist, said that in a way his team was trying to match - and reverse - what Google has achieved. "As Google use became widespread, people began asking why it was so much easier to find material on the external Web than it was on their own computers or in their company's Web sites," he said. "Google sets a very high standard for that Web. We would like to set the next standard, so that people will find it so easy to do things at work that they'll wonder why the
        • "the bad old days of proprietary information dbs a la Lexis/Nexis"

          Those days never left. As information brokers know, there is still more accurate, structured info locked up in fee-paid databases than there is on the Net - and the ability to know where those databases are and how to search them is where information brokers make their money.

      • Not only that, but this stuff is also patented; see here [uspto.gov].

    • by Anonymous Coward
      I'm sure Prime Minister Poutine will be happy to hear of this development...
    • by Haydn Fenton ( 752330 ) <no.spam.for.haydn@gmail.com> on Sunday December 26, 2004 @08:57AM (#11184563)
      For other natural language processing technologies being researched and/or developed by IBM, check out their NLP Research page [ibm.com]. They have quite a few different technologies in this field, which I wasn't aware of.
      I for one, welcome our new semantic web overlords! It's really great to hear that something based on semantic technologies is finally breaking through. This could be the dawn of a new era :)
      I know this is very optimistic, but how long do you think it will be before we'll have something like this combined with something like Google? The amount of knowledge readily available will be mind-bogglingly huge. Imagine having a text service on your mobile: you text off a question and get an answer back immediately. All knowledge available everywhere, any time - that would be a great thing. Heck, it's even quite scary to think about it.
      • I for one, welcome our new semantic web overlords! It's really great to hear that something based on semantic technologies is finally breaking through. This could be the dawn of a new era :)

        The term "semantic web" refers to technologies that let authors provide markup indicating the semantics of content. That is, the "semantic web" places a burden on the authors of pages.

        What natural language analysis is doing is a completely different approach: instead of burdening authors with marking up their pages t

        • Actually the critical component of AI is conceptual processing. Semantic processing cannot possibly succeed without the construction and representation of concepts.

          And not very many people are working on it IIRC. Many of the big names who used to work on it, like Roger Schank, have moved on to other things because it was so hard.

          CYC was an attempt to brute-force some form of conceptual processing. Since it's been around for decades and has made absolutely no impact, obviously it's not the way to go.
          • The CYC project may not yet have come up with the right mechanism to turn their database into a conscious, self-aware entity, but the information and semantic relationships they have captured in the process are an essential tool, and must surely remain so, for anybody attempting to develop anything similar. After all, you either have to load the information into the software before power-on, or else it is going to take several years for the information to be captured in the "traditional" way. And who can wai
            • CYC is being developed without much grounding in particular applications; chances are that its developers have made so many mistakes in its development that it will turn out to be useless. Time will tell.
              • I don't think you quite understand. CYC comprises an utterly huge amount of data. The captured semantic relationships will be useful to future AI researchers no matter what happens. Even if it contains mistakes, these will be caught and corrected eventually - just like the unfortunate fellow in the Reader's Digest short who thought "hirsute" meant "nevertheless".
                • CYC comprises an utterly huge amount of data. The captured semantic relationships will be useful to future AI researchers no matter what happens.

                  Not if it turns out that the approach to representations and reasoning used by CYC is fundamentally wrong. In other words, you can collect gigabytes of Roman multiplication tables and still not be able to solve a differential equation.
                  • Information which is stored as a semantic net (as it is with CYC) can be converted to any other representation. The information will be a useful starting point *even if* we end up having to assign fresh weights to every semantic relationship. A lot of CYC's work is about how to manipulate this information in order to create intelligence; that may or may not pan out. But the semantic net they are creating is an uploadable understanding of the world, and it's easily convertible to a lowest common denominator f
          • Actually the critical component of AI is conceptual processing. Semantic processing cannot possibly succeed without the construction and representation of concepts.

            I agree, but many people (myself included) view "conceptual processing" simply as a part of semantics, not as a separate field.

            Many of the big names who used to work on it, like Roger Schank, have moved on to other things because it was so hard.

            That's not surprising: Schank's approach was naive and unworkable.
    • it sounds like it's just a big ol' LSI System

      A Perl implementation of LSI can be found at Building a Vector Space Search Engine in Perl [perl.com]

      However, there are at least three problems. First, it doesn't look like LSI can answer questions like "Who is the Prime Minister of Canada?"

      Second, the approach is patented by Telcordia Technologies [argreenhouse.com].

      Third, there are scalability problems with LSI. The author of the Perl article writes [nitle.org]:

      For all its advantages, LSI also presents some drawbacks. The poor scalability of the sing

    • They don't come out and say it, but it sounds like it's just a big ol' LSI System.

      Actually they did that on purpose. The press release was actually a test for Piquant to see if it could figure out that it was really just a rehashed older idea.
  • Wow (Score:5, Insightful)

    by setagllib ( 753300 ) on Sunday December 26, 2004 @08:08AM (#11184440)
    That's pretty impressive. It takes quite a clever AI to read between lines and connect concepts, but I have to wonder how much of its 'understanding' was hard-coded rather than purely abstract. Would it be trivial to just stick in another language database and have it read translations of the article the same way?

    Nevertheless it makes me feel like all the programming and design I've ever done is pathetic and I will never amount to anything. That's how it is in the software industry - always someone out there who makes you look bad.
    • Re:Wow (Score:3, Insightful)

      by EpsCylonB ( 307640 )
      That's how it is in the software industry - always someone out there who makes you look bad.

      That's how it is in Life.
      • What is life? Is it a big download? Do you have a .torrent link?

        -
      • If we make it analyze all the religious texts, scriptures, and books... can it answer the question "What is the meaning of life?"
        • Just don't give it Nietzsche.
          • by miu ( 626917 )
            I doubt Nietzsche could do any permanent harm. At worst, exposure will lead to the program wearing lots of black and scowling at people while telling them "I will destroy you!", but it will get over it soon enough.
        • lachlan@localhost $ analyse -q "What is the meaning of life"
          Segmentation Fault
          • Re:Wow (Score:3, Funny)

            by forkazoo ( 138186 )
            lachlan@localhost $ analyse -q "What is the meaning of Life, the Universe, and Everything?"
            42

            lachlan@localhost $ analyse -q "Is there a God?"
            There is now!
      • That's how it is in Life.

        If that's what you care about.

    • Re:Wow (Score:3, Interesting)

      by smchris ( 464899 )
      I have to wonder how much of its 'understanding' was hard-coded rather than purely abstract.

      Baby steps, but the sort of essential baby steps that accumulate real technological progress. When the system discovers its _own_ non-trivial and useful rules, when it spontaneously parses our input to reply upon a self-generated "Oh, you mean......", then it gets scary.

      Epistemology is a big word.
    • Using a translation engine to compare how the same text looks in two languages might be a good way for a system to "learn" context, which does, after all, rely upon understanding the other possible meanings of a word.
    • The AI was not exactly reading between the lines. As I understand it, based on an analysis of the contents of one document, the system looked for other documents which were closely related. Those other documents might very well contain the answer to the question directly.

      While it is still an interesting application that can reliably indicate related documents, it is not new: at the institute where I worked 5 years ago, a similar application was developed, which was able to identify keywords which belonged

  • Reg Free (Score:5, Informative)

    by bendelo ( 737558 ) * on Sunday December 26, 2004 @08:09AM (#11184444)
    Reg-free link [nytimes.com]
  • by Timesprout ( 579035 ) on Sunday December 26, 2004 @08:09AM (#11184449)
    Till you realise the computer answered 'some asshole' which could be any prime minister in the world really.
    • Till you realise the computer answered 'some asshole' which could be any prime minister in the world really.

      You should see what it answered when I asked "Who is president of the United States". I couldn't get it to stop. I had to hit the power button and reboot.

      -
      • You should see what it answered when I asked "Who is president of the United States". I couldn't get it to stop. I had to hit the power button and reboot.

        It was probably trying to recount the votes. Either that, or it had received some threatening e-mails from the diebold voting machines down the block.
  • I remember that it used to bill itself as an answerer to such questions back in the day..

    must have been pre-Google, since I used it sometimes
  • Trust Issue (Score:5, Interesting)

    by Flamefly ( 816285 ) on Sunday December 26, 2004 @08:16AM (#11184467)
    On a global scale this system tends to fall apart; there is a constant issue of trust when dealing with what looks to me to be the holy grail of the semantic web.

    What if two sites explicitly said the Prime Minister of Canada was Santa? Would that override the linked information? How would the system know what is right? You can't always just pick the majority answer, so you need to set up little areas of trust ("I trust www.thisplace.com and everything it says", and that site in turn will say "I trust www.overhere.com"), but who allocates the trust? Couldn't those people be biased?

    The semantic web will have a fantastic impact on the world, but the trust issue is something that needs to be addressed, and I don't see how it can ever be done globally.

    More likely we would have systems like this for individual sites, or intranets: trusted circles that would be unlikely to contradict themselves.

    Hopefully one day, if we truly get a global semantic web, we can see if the answer really is 42 :]

    • Re:Trust Issue (Score:1, Insightful)

      by Anonymous Coward
      One way of establishing trust is based on what Google currently does for page relevance: trust a site based on the number of other sites that link to it. That way you could get a 'rough' idea of how trustworthy the site is.
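
      As a minimal sketch of that weighting scheme (sites, answers, and inbound-link counts all invented):

      # Weight each site's claimed answer by how many other sites link to it,
      # then pick the answer with the highest total weight.
      answers = {                                # site -> (claimed answer, inbound links)
          "news-a.example": ("Paul Martin", 120),
          "news-b.example": ("Paul Martin", 80),
          "prank.example":  ("Santa", 2),
      }
      scores = {}
      for site, (answer, inlinks) in answers.items():
          scores[answer] = scores.get(answer, 0) + inlinks

      print(max(scores, key=scores.get))         # 'Paul Martin'
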
    • Re:Trust Issue (Score:5, Interesting)

      by ctr2sprt ( 574731 ) on Sunday December 26, 2004 @08:47AM (#11184537)
      All search engines return a bunch of results, ordered by which ones they think most likely address your search terms. One very simple way of ranking the results is popularity (the number of pages with the same answer to your question). You could fine-tune the popularity index with a Google-ish reference-counting algorithm.

      One of the neatest applications of this technology, I think, is the ability to eliminate redundant search results. Anyone who's ever used Google to troubleshoot a problem knows that the first thirty or forty matches will all be the same: web mirrors of mailing lists or USENET posts. Using a vaguely semantic technology like this, Google could say, "Hey, all these pages are effectively identical" and collapse them into a single result.

      This would be terribly useful for me, since I usually start my troubleshooting searches with an error message. Error messages in the Unix world being quite standardized, this nets me at least ten irrelevant "threads." Since each "thread" is duplicated about ten times in the Google results, that means the question I'm actually asking may not appear until page 5 or later. But using result grouping like this - which Google tries and is generally unsuccessful at - would mean I'd see my question asked on the first or second pages. Big improvement.

      Another nifty trick would be an actual, working "related pages" link. So let's say I find my question, but, as is all too common, it's a question without an answer. I click on the link, the search engine does its magic, and it pulls up (perhaps) technical details on the software in question or alternate solutions to my problem. This is definitely going to be harder to implement than my other idea (perhaps even impossible for now), but it'd be really nice. It could make navigating the Internet like navigating Wikipedia or amazon.com.

      Ah well. I can dream.
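
      A minimal sketch of the result-collapsing idea above, grouping results whose word shingles overlap heavily (the texts and the 0.6 threshold are invented for illustration):

      # Represent each result by its set of 3-word shingles and fold any
      # result that heavily overlaps an existing group into that group.
      def shingles(text, k=3):
          words = text.lower().split()
          return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

      def jaccard(a, b):
          return len(a & b) / len(a | b) if a | b else 0.0

      results = [
          "error: foo failed while mounting the device",
          "error: foo failed while mounting the device (list archive mirror)",
          "how to configure foo on linux",
      ]
      groups = []
      for text in results:
          sig = shingles(text)
          for g in groups:
              if jaccard(sig, g["sig"]) > 0.6:   # near-duplicate: collapse
                  g["members"].append(text)
                  break
          else:
              groups.append({"sig": sig, "members": [text]})

      for g in groups:
          print(g["members"][0], f"(+{len(g['members']) - 1} near-duplicates)")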

    • Hopefully one day, if we truly get a global semantic web, we can see if the answer really is 42 :]
      But then we'll cease to exist, and be replaced by something even more strange and inexplicable.
    • Or what would the system answer to "Who is Bush?"
      "Bush is the president of *": 888 results.
      "Bush is an idiot": 5,830 results.

      Actually correct? Who cares? Politically incorrect and that's what matters!
  • by Ancient_Hacker ( 751168 ) on Sunday December 26, 2004 @08:22AM (#11184482)
    One example is meaningless. To get a realistic idea of how useful this system is, we'd like to see what it says if you ask several dozen questions. For all we know this was the one question out of 100 that it answered correctly.
    • by Quixote ( 154172 ) on Sunday December 26, 2004 @09:02AM (#11184571) Homepage Journal
      Any sufficiently advanced technology is indistinguishable from a rigged demo.
      -- Andy Finkel, computer guy

      Or, conversely,

      Any sufficiently rigged demo is indistinguishable from an advanced technology.
      -- Don Quixote, slashdot guy

      ;-)

    • One example is meaningless. To get a realistic idea of how useful this system is, we'd like to see what it says if you ask several dozen questions. For all we know this was the one question out of 100 that it answered correctly.

      And for all we know, the programmers were given the article(s) and the question(s) before they wrote the program. To get a realistic idea of its usefulness, they should really post it on the web as an experimental app. If it's any good, people will use it.

      That's what I like about

    • Yeah, I'd love to see how it does on the reading comprehension section of the SATs.
  • by Anonymous Coward
    The solution to functional, robust and real AI is not better software or better hardware. Real AI will never be implemented on silicon chips.

    We must integrate ourselves with computers to a point at which the living being and computer cannot be separated anymore. The perfect union of the biological component (wetware) and computer (hardware) will mark the end of the human race - and the birth of something new and wonderful.

    Obviously this will face strong, religious and quasi-religious (ethics) resistance

  • by Anonymous Coward on Sunday December 26, 2004 @08:25AM (#11184490)
    I for one congratulate Canadian Prime Minister Tim Horton for running a great campaign and his wife Wendy for her fantastic chain of restaurants!
  • Is that system capable of searching for Paris Hilton when given just the letter "P" instead?
    This reminds me of the famous quote "Artificial intelligence usually beats real stupidity".
    • Artificial intelligence usually beats real stupidity

      Those of us over 18 have generally found this to be the other way round, as in: a small amount of real stupidity beats any amount of artificial intelligence.
  • While this is pretty impressive stuff, I think we should be wary of how it gets "information" to digest and correlate. If it gets high-quality, well-researched articles, it will potentially be a great tool for getting the "highlights" of a subject and providing a starting point for your own research. However, if it is given less qualified articles to index, it will develop a poor and possibly perverse view of a given subject. Poorly informed people tend to talk the loudest and longest, so I'm concerned about a
    • Ah, but maybe there are patterns that can be used to score some articles as probably low-quality. Like your observation that "poorly informed people tend to talk loudest and longest." Throw in a penalty for dodgy spelling and I think it might be pretty good.
  • by Anonymous Coward
    If the article doesn't come out and state that Paul Martin is the Prime Minister then how could anyone--including a computer--know that for sure? I think the submitter was stretching the truth a bit when he said the words "Prime Minister" don't appear in the article. Can you imagine an article about George Bush that didn't use the word President?
  • [Scientist at IBM asks the computer a question after having it connect to and read all the documents on all the computers in the world]

    Scientist: "Is there a God?"
    Computer: "There is now."

    /can't remember what movie/book this was from

  • by Anonymous Coward
    ...in the long term it may be even more important for translation between languages -- being able to discern both implicit and explicit meaning in a passage will make accurate translations easier -- and perhaps in combination with Cycorps "Cyc" (or similar project) in the extreme long term to create an artificial intelligence capable of understanding human communication.

    There are other interesting possibilities. In the tradition of Esperanto and Lojban, it can also be used to gather the common aspects of n
  • There *must* be something better than the same old dumb string matching.

    However, this sort of thing might be better employed as a knowledge engineer's assistant, doing the rough work of attaching useful metadata to documents drawn from the enormous piles that we've accumulated.
  • I think the prime minister of Canada is Paul Martin.
  • Now... (Score:5, Insightful)

    by SharpFang ( 651121 ) on Sunday December 26, 2004 @08:52AM (#11184549) Homepage Journal
    Feed it the news about Iraq. Then ask it what the war was about.
    Good bye, new system, too dangerous for "national security".
    • Unfortunately, this bit of software is not made to make inferences. What it came back with would probably depend on how many sites said the war was about oil or whatever the hell they say (Democrat sites), how many say it was about weapons of mass destruction (Republican sites that haven't been updated), and how many just claim that the US had a right to beat the shit out of Saddam (updated Republican sites). This is of course going on the assumption that the engine would be looking for the majority. If it
      • ...it will just tell you that no one really knows what's going on...

        That would truly be a triumph of computer programming, given how few people seem to be smart enough to draw that conclusion.

      • "Earth calling America! Earth calling America! Come in planet America...."

        Just to let you know.

        There are other countries besides America. Their parties are usually not called "Republicans" and "Democrats" - and don't even necessarily correspond to those American parties. The non-American countries also hold views about Iraq. Many also write in English (UK, Canada, Australia, New Zealand, also India, the largest democracy in the world ...)

        Google, and any alternative search engine, would spider through and
        • Re:Now... (Score:3, Funny)

          by jdgeorge ( 18767 )
          Okay, let's get back on topic. I fed the parent post into Diebold's equivalent of IBM's fancy technology and asked it to provide an appropriate response. Here's what I got:

          ------------------

          There are other countries besides America. Their parties are usually not called "Republicans" and "Democrats" - and don't even necessarily correspond to those American parties. The non-American countries also hold views about Iraq. Many also write in English (UK, Canada, Australia, New Zealand, also India, the largest
  • I for one welcome our superintelligent big blue overlords.
  • Won't work. (Score:5, Informative)

    by jameson ( 54982 ) on Sunday December 26, 2004 @09:12AM (#11184593) Homepage
    Disclaimer: I haven't read the article; however, I was somewhat involved in research in this field in late 2003 and early 2004.

    What the summary of the article claims IBM is developing-- a technology for getting the semantics behind an arbitrary sentence on the web-- is the Holy Grail of the discipline of Natural Language Processing (NLP) and very, very, very, _very_ far away at this point. Many people believe that we cannot ever get there (that's the point of a Holy Grail, after all), but I don't want to be quite as pessimistic (or realistic?) at this point.

    The problem here is that English (or any other natural language, for that matter) isn't SML, or Haskell, or some other language with a well-defined denotational semantics. Natural language suffers from at least three problems that make it very tough to gather anything useful from a given piece of text:

    (1) Grammar. Natural language isn't typechecked, and frequently uses incomplete sentences, which makes it hard to develop grammars (context-free, context-free probabilistic, lambek-style/proofnet-style or whatever else people have come up with) for it.

    (2) Anaphora resolution. "I saw a dog on the street this morning. It was barking". So who's barking, street or dog? Grammatically, both would be possible; only with prior knowledge can we see that we're talking about the dog here.

    (3) Polysemy. What does "play" mean, taken by itself? It can be used with different meanings in "to play a game", "a play on words", "a terrific Shakespearean play", etc.; you might want to have a look at WordNet [princeton.edu] one of these days to get a feeling for this. Not knowing which meaning an arbitrary occurrence of "play" refers to means that you have to try lots of options when parsing, LSIing or whatever else you do (though most people simply ignore this problem in research today-- it's too hard to disambiguate words in practice).

    That's not all, of course-- try thinking of the need to deal with irony/sarcasm, metaphors, foreign words, the credibility of whichever sources you're using etc., and you'll get a pretty good feeling for why this is beyond merely being "hard". Of course, for very small problem domains (a "command language for naval vessels" was investigated in one paper I read a while ago-- those DARPA people definitely have too much money on their hands, but I digress), this can be solved, but general-purpose open-domain NLP is what you need to do a web search.

    It might happen in my lifetime, but I won't hold my breath for it.

    -- Christoph
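
    Point (3) is easy to see for yourself. This snippet assumes NLTK is installed and the WordNet corpus has been downloaded via nltk.download("wordnet"):

    from nltk.corpus import wordnet as wn

    # WordNet lists dozens of distinct senses for a common word like "play",
    # which is exactly what makes disambiguating arbitrary occurrences hard.
    for syn in wn.synsets("play"):
        print(syn.name(), "-", syn.definition())
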
    • (2) Anaphora resolution. "I saw a dog on the street this morning. It was barking". So who's barking, street or dog?

      Obviously, the morning was barking. That's when it's bloody, farking cold out.
    • Back when I worked in this field briefly, in the mid-1980s (?) (Turbo Pascal was the language, if you can believe it), I quickly learned how inherently ambiguous (to use some of the vernacular in vogue then) spoken language truly is.
    • Well, then use Lojban, a logical language. See http://www.lojban.org
    • For (2) and (3), using a Hidden Markov Model and doing a Viterbi search, instead of trying to do direct classification of the meaning, will pretty much deal with those problems. I'm sure the other problems can be dealt with too.

      Not to say it wouldn't be a big achievement to build a practical system with everything incorporated into it, but IMHO the technologies already exist.
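
      For the curious, here is a minimal Viterbi sketch over an invented two-sense model for "play"; every probability is made up for illustration, so this shows the shape of the technique rather than a practical disambiguator:

      states = ["GAME", "THEATRE"]
      start = {"GAME": 0.5, "THEATRE": 0.5}
      trans = {"GAME": {"GAME": 0.8, "THEATRE": 0.2},
               "THEATRE": {"GAME": 0.2, "THEATRE": 0.8}}
      emit = {"GAME": {"play": 0.3, "ball": 0.5, "stage": 0.01},
              "THEATRE": {"play": 0.3, "stage": 0.5, "ball": 0.01}}

      def viterbi(words):
          # best-path probability for each state, plus backpointers
          V = [{s: start[s] * emit[s].get(words[0], 1e-6) for s in states}]
          back = []
          for w in words[1:]:
              col, ptr = {}, {}
              for s in states:
                  prev = max(states, key=lambda p: V[-1][p] * trans[p][s])
                  col[s] = V[-1][prev] * trans[prev][s] * emit[s].get(w, 1e-6)
                  ptr[s] = prev
              V.append(col)
              back.append(ptr)
          best = max(states, key=lambda s: V[-1][s])
          path = [best]
          for ptr in reversed(back):
              path.append(ptr[path[-1]])
          return list(reversed(path))

      print(viterbi(["play", "ball"]))   # ['GAME', 'GAME']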

  • by Anonymous Coward

    As some of you still remember, the original technology behind this was developed at CMU in the mid 90's when Corey Kosak, Andrej Bauer and a bunch of other talented people created the first ever natural language based neural network with a measurable IQ. People could even post questions to certain personae emulated by the neural network through the web site CGI at forum2000.org. This neural network was really fun and witty, but what you probably do not know is that all the technology in fact consisted of bo

  • Can you imagine (Score:3, Interesting)

    by melvo ( 841054 ) on Sunday December 26, 2004 @09:26AM (#11184634)
    Can you imagine when a system of this kind is capable of reading google's online library? If knowledge is power, we are looking towards creating a very powerful entity.
  • Semantic analysis of text has been the holy grail of AI for decades. It's useful for all sorts of things, including information retrieval, translation, speech recognition, and summarization. IBM is hardly the only research lab working on this, or the only company using it to enhance search.
  • That Google Thing Is So Yesterday

    I don't think the race should be about beating the results provided by Google; it should be about the interface provided to wade through the results. By that I don't mean 3D or clustering interfaces like Vivisimo [vivisimo.com], nor the Visual Basic-like constructs of "search builder" at beta.search.msn.com [msn.com], but more about how to improve your results after you have started.

    Of all the advanced mathematics classes that I took, one thing that stands out for me is that out of many p

  • by trveler ( 214816 ) on Sunday December 26, 2004 @09:48AM (#11184688)
    User: Who is Canada's Prime Minister?

    PIQUANT: WHY DO YOU ASK, USER?

    Because I want to know.

    WHAT IS THE REAL REASON, USER?

    I want to test you.

    WHAT WOULD IT MEAN TO YOU IF YOU GOT TO TEST ME?

    It would mean I can determine whether or not you know who Canada's Prime Minister is.

    WHAT MAKES YOU THINK I COULD TELL YOU THAT?

    I programmed you to figure it out.

    LET'S TALK ABOUT YOU -- NOT ME.

    I want to know the name of Canada's Prime Minister.

    WHY DO YOU WANT TO KNOW THE NAME OF CANADA'S PRIME MINISTER?

    Are you going to tell me his name or not?

    WHY ARE YOU INTERESTED IN WHETHER OR NOT I AM GOING TO TELL YOU HIS NAME OR NOT?

    Stop avoiding the question! Just tell me his name!

    NAMES DON'T INTEREST ME.

    Fuck you!

    TELL ME MORE ABOUT YOUR OWN PROBLEM, USER.

    Shit!

    GO ON.

    You suck.

    YOU'RE NOT REALLY STILL TALKING ABOUT ME, ARE YOU?
  • Does AI technology follow a similar pattern too?! Thanks...
  • the first program to take advantage of its new strategy for solving search problems. This approach, which it calls unstructured information management architecture, or UIMA, will, according to I.B.M., lead to a third generation in the ability to retrieve computerized data.

    IBM researchers are right that AI techniques are getting powerful enough to allow unstructured information retrieval based on semantic content. But what IBM researchers are trying to do here is take credit for technologies and ideas deve

  • by Anonymous Coward
    Google has an unfair advantage over potential rivals. I'm talking about their ownership of the entire Usenet archive (effectively so) in the form of google-groups. No matter how good any potential rival becomes, people will always have to turn to them for access to past Usenet archives.

    Google's recent mangling of google-groups (mentioned already on /. ) is proof of the power they hold by virtue of ownership of the Usenet archive, which they acquired when they bought out deja-news. Some legislation should
  • by yfnET ( 834882 )

    As it happens, The Economist recently ran an article addressing some of these issues. The article also provides context and perspective that should be of interest to those participating in this discussion. For convenience, the full text is reproduced below; it is also accessible online [slashdot.org] (may require paid subscription).

    ----

    Computing

    From factoids to facts

    Aug 26th 2004 | REDMOND, WASHINGTON
    From The Economist print edition

    At last, a way of getting answers from the web

    WHAT is the next stage in the ev

  • Wow, 90% of US kids can't do that. I say hail our Paragraph-Comprehending-Canadian-Prime-Minister-Knowing LSI-based overlords!
  • I don't know that a large-scale semantic web is "impossible". Certainly what IBM is accomplishing is nowhere close to the semantic web utopia we imagine. From what I gather, however, all it would take is a really effective learning algorithm and the aforementioned "trust system", which I bet could be similar to the trust system of, say, Wikipedia. Eventually, certain standards could be hardcoded after review by open communities: things such as gravity laws, languages, etc., standards that don't change.
  • SM/2 lives? (Score:3, Interesting)

    by Nelson ( 1275 ) on Sunday December 26, 2004 @11:55AM (#11185302)
    They used the very same example to demo SearchManager/2 about 10 years ago (maybe more?).


    Phenomenal technology. IBM built the desktop search that everybody is pushing now, way back when. Cutting-edge search and indexing capabilities, fully extendable: you could write your own plugins to deal with your data (use JPEG meta tags to label pictures from your digicam? Write a little plugin so you can search through your photos), and it had semantic and linguistic searching.


    For a long time SM/2 was kind of the poster child for IBM's inability to take remarkably cool technology to the consumer. Everyone who used it thought it was cool; nobody ever knew about it. They had trouble getting the word out within the company about it. Last I heard anything about it, they were turning the technology into some kind of intranet spider. It was the shit; it might have even had primitive cross-referencing, like being able to search for "president" and have it find references to Clinton because a third article had referred to him as the president. They seemed to have some foresight into this area: web searching has to cut out so much bullshit that you wouldn't want to contaminate your semantic searches with all of it, so keeping it in intranet space might be a good idea. Local search is hot right now too, though, so maybe it'll come back.

  • After scanning a news article about Canadian politics, the system responded correctly to the question, 'Who is Canada's prime minister?'

    Everyone knows he is Tim Horton!

  • The data annotating technology used by OmniFind (UIMA) is available for download [ibm.com] at IBM's Alphaworks site.

    In ordinary search, the text is parsed and a giant index is created. UIMA allows you to write annotators that look for additional information, for example names of elected officials, and add those entries to the index as well.
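
    A rough sketch of the annotator idea in Python (this is not the actual UIMA API; the pattern and the document are invented):

    import re

    # Naive "elected official" annotator: find "<Title> <Capitalized Name>"
    # spans so they can be indexed alongside the plain tokens.
    def official_annotator(text):
        for m in re.finditer(r"(Prime Minister|President)\s+([A-Z]\w+(?:\s[A-Z]\w+)*)", text):
            yield ("official", m.group(2), m.span(2))

    doc = "Prime Minister Paul Martin spoke in Ottawa."
    index = {}
    for kind, value, span in official_annotator(doc):
        index.setdefault((kind, value.lower()), []).append(span)

    print(index)   # {('official', 'paul martin'): [(15, 26)]}
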
  • by dodongo ( 412749 ) <chucksmith AT alumni DOT purdue DOT edu> on Sunday December 26, 2004 @12:34PM (#11185478) Homepage
    NLP, semantic extraction, and conceptual indexing are nothing new; admittedly, practical implementations have been few and far between.

    However, as I'm often fond of pointing out, the problem is not getting the 80-90% accuracy in translation and interpretation that I'm sure these systems can attain.

    The challenge quickly becomes how to deal with idioms and idiosyncratic constructions. Is this system even ready to deal with sentences like "The criminal was shot dead by police"? If it is, great. How about "The trolley rumbled through town"? Or the idiomatic "time flies"?

    This is what, so far as I know, the field of computational linguistics is now facing in textual interpretation and translation. Coming up with a system to effectively identify what appear to be three-argument verbs ("Mary hammered the metal flat"), or the constructions and idioms above, may well be something that traditional systematic recursive grammars aren't yet up to handling.

    Somehow these situations have to be identified, and separated in the parsing process so that they don't get processed like standard grammatical expressions.

    Hopefully these problems are how I'll make my living ;)
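
    One sketch of that separation step: run a (tiny, invented) idiom table over the token stream before regular parsing, so fixed expressions reach the parser as opaque units.

    IDIOMS = {
        ("time", "flies"): "TIME_PASSES_QUICKLY",
        ("shot", "dead"): "SHOT_AND_KILLED",
    }

    def preparse(tokens):
        # Replace any idiom span with a single opaque token before parsing.
        out, i = [], 0
        while i < len(tokens):
            if tuple(tokens[i:i + 2]) in IDIOMS:
                out.append(IDIOMS[tuple(tokens[i:i + 2])])
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        return out

    print(preparse("time flies when you are having fun".split()))
    # ['TIME_PASSES_QUICKLY', 'when', 'you', 'are', 'having', 'fun']
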
  • "Jean Poutine"
  • But this is a godsend for what's called "desktop" search right now. If it really works as advertised, that is, which I really doubt.

    However, if Intel delivers the promised 10x boost in performance in the next 3 years (which I really doubt, too), who knows, we might see this in a centralized search engine, too.
  • by bob@dB.org ( 89920 ) <bob@db.org> on Sunday December 26, 2004 @02:02PM (#11185856) Homepage

    I've worked for a company making a system that could easily answer a question like that. It really isn't hard to do. If you want to know how much of this is "black magic"/AI and how much is statistics, compare the results of the following two queries:

    • Who is Canada's prime minister?
    • Who is NOT Canada's prime minister?

    If the system really understands the semantics of the indexed documents, the two result sets should be very different, and both should have a fair number of relevant documents.

    If the system is just based on clever use of statistics, the two result sets will include a lot of the same documents, and the result set for the second query will probably have very few relevant documents.
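
    To make the test concrete, here is the failure mode on a toy bag-of-words engine (corpus, stopword list, and scoring all invented): since "NOT" is just another stopword to it, negating the query changes nothing.

    corpus = {
        1: "paul martin is canada's prime minister",
        2: "the prime minister of canada addressed parliament",
        3: "recipe for maple syrup pancakes",
    }
    STOP = {"who", "is", "not", "the", "of"}

    def search(query):
        # Score documents by how many non-stopword query terms they contain.
        terms = set(query.lower().replace("?", "").split()) - STOP
        scored = {doc: len(terms & set(text.split())) for doc, text in corpus.items()}
        return [doc for doc, s in sorted(scored.items(), key=lambda kv: -kv[1]) if s > 0]

    print(search("Who is Canada's prime minister?") ==
          search("Who is NOT Canada's prime minister?"))   # True: identical result sets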
