IBM vs. Content Chaos 216

ps writes "IBM's Almaden Research Center has been featured for their continued work on "Web Fountain", a huge system to turn all the unstructured info on the web into structured data. (Is "pink" the singer or the color?) IEEE reports that the first commercial use will be to track public opinion for companies. " It looks like its feeding ground is primarily the public Internet, but it can be fed private information as well.
This discussion has been archived. No new comments can be posted.


  • by bc90021 ( 43730 ) * <bc90021.bc90021@net> on Monday January 12, 2004 @11:45AM (#7953216) Homepage
    ...doesn't concern whether "Pink" is a colour or a singer, but whether "Paris Hilton" is a hotel in France or an oft downloaded video... ;)
  • by 3lb4rt0 ( 736495 )
    The spinoff is what will be used by the Joe Sixpack net user.
  • All we need... (Score:3, Interesting)

    by TJ_Phazerhacki ( 520002 ) on Monday January 12, 2004 @11:46AM (#7953230) Journal
    There is already altogether too much "stuff out there" for anyone to put any major effort into categorizing it. We should soon reach the point of info overload, and then what? What is the point of cataloging overflow data? Do we really need something like this? Or should we just ship a bunch of programmers wasting their time over to something else, like better spam filters and OSes without gaping security holes?
    • Re:All we need... (Score:1, Flamebait)

      by Frymaster ( 171343 )
      the first commercial use will be to track public opinion for companies.

      here's one to start with:

      microsoft (msft) of redmond washington: you suck!

      now, go log that.

    • Re:All we need... (Score:1, Insightful)

      by geoffspear ( 692508 )
      Oh yes, because there's such an enormous shortage of programmers right now. IBM should lay off all of these programmers so Microsoft will have a pool of available programmers who know nothing about OS security to work on security.

      And once all the game producers, who make a product we definitely don't "need", get rid of all of their programmers, there will be plenty of free people to work on anti-spam technology. Whee!

    • Re:All we need... (Score:5, Insightful)

      by millahtime ( 710421 ) on Monday January 12, 2004 @12:04PM (#7953444) Homepage Journal
      There are many organizations that need better ways to analyze their info. There are databases that are terabytes in size over which detailed searches have to be run. With SQL databases that can take a long time, so any faster way can save a lot of time and money. There is a big need for this technology across many industries.
    • Re:All we need... (Score:5, Insightful)

      by xyzzy ( 10685 ) on Monday January 12, 2004 @12:20PM (#7953619) Homepage
      It's really funny that you mention "spam filters", since that is exactly the content categorization task that you are talking about.

      Automatic categorization of overflowing data is exactly what you need to do when you have too much to think about -- it allows you to triage your attention span, which is the most limited resource you have.
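      The spam-filter analogy maps directly onto statistical text categorization. A minimal sketch, assuming scikit-learn and a four-document toy corpus (purely illustrative -- nothing here reflects what WebFountain actually runs):

          from sklearn.feature_extraction.text import CountVectorizer
          from sklearn.naive_bayes import MultinomialNB

          docs = ["cheap pills buy now", "meeting moved to 3pm",
                  "win a free prize click here", "draft of the quarterly report"]
          labels = ["spam", "ham", "spam", "ham"]

          vectorizer = CountVectorizer()
          X = vectorizer.fit_transform(docs)    # bag-of-words counts
          clf = MultinomialNB().fit(X, labels)  # naive Bayes text classifier

          print(clf.predict(vectorizer.transform(["free pills prize now"])))  # ['spam']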
    • Re:All we need... (Score:3, Interesting)

      by redragon ( 161901 )
      I think the inverse is the case.

      The more chaotic (overloaded, in your terms) the data tends to be, the greater the information contained in that data (think compression). So what they're going after is not "categorizing" the internet, they're going after making some sense out of all of that data. Information overload begins to necessitate an intermediary to help filter out the data that you're interested in.

      The interesting thing becomes what sort of biases are built into a system like this? That is
  • by Urkki ( 668283 ) on Monday January 12, 2004 @11:47AM (#7953236)
    They could certainly use this kind of technique to improve their results...

    Then again, in a way they already use something like this, except they're only really concerned about links, not actual contents of pages...
  • by Rhubarb Crumble ( 581156 ) <r_crumble@hotmail.com> on Monday January 12, 2004 @11:47AM (#7953239) Homepage
    a huge system to turn all the unstructured info on the web into structured data

    In order to do this, they will use a scheme by which each document is referred to by a string including the transfer protocol, the host name, and a file path.

    oh, wait...


    • Some information at different paths might require cross-referencing. Thus, the scheme you propose should be extended so that there would be a way for text documents to contain links to each other.

      However, if you just take a big enough storage system and download all the documents from teh intterweb, you can have a flat directory containing all the documents. Woohoo, progress!

  • by Anonymous Coward on Monday January 12, 2004 @11:47AM (#7953245)
    IEEE reports that the first commercial use will be to track public opinion for companies.

    Word has it the first test case will be SCO. WebFountain: "Outlook not so good"
  • Get this setup (Score:3, Interesting)

    by millahtime ( 710421 ) on Monday January 12, 2004 @11:49AM (#7953274) Homepage Journal
    I wonder how long until IBM sells this setup. If it works well, logistics organizations would love to get their hands on it.
    • I mean by this that most logistics organizations will have proprietary info that they won't let IBM house.
      • Re:Get this setup (Score:5, Informative)

        by orac2 ( 88688 ) on Monday January 12, 2004 @12:01PM (#7953409)
        Although the article didn't have room to go into this point (and I should know, I'm the author), IBM can completely compartmentalize competitors' data, even if hosted in house (IBM already does this in other parts of its business). If companies are still wary, they can host the data themselves and let WebFountain troll it on a need-to-know basis.
        • let WebFountain troll it

          I sincerely hope you meant trawl it. The last thing we need is for IBM to build and sell an automated system for trolling the entire internet!
  • Expensive (Score:4, Interesting)

    by starvingcodeartist ( 739199 ) on Monday January 12, 2004 @11:51AM (#7953289)
    In the article it says they plan on charging between $150,000 and $300,000 a year to use this super-search engine. They think corporate execs will pay for it. Seems really steep to me. BUT, for corporate execs, it's probably not too expensive. They'll just outsource another 10-15 programming jobs to India to pay for it.
    • Re:Expensive (Score:5, Interesting)

      by orac2 ( 88688 ) on Monday January 12, 2004 @11:56AM (#7953349)
      The point is that it's not intended for use as a search engine, but a platform for doing computation intensive data mining and analysis. A search engine can tell you how many mentions of IBM appear on the web, but not how people feel about IBM.
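      To make the distinction concrete, here is a toy sketch of mention counting versus sentiment scoring in Python (the word lists are invented for the example; real opinion mining is far more involved):

          POSITIVE = {"great", "love", "reliable", "excellent"}
          NEGATIVE = {"sucks", "hate", "broken", "awful"}

          def mention_sentiment(texts, target="ibm"):
              mentions, score = 0, 0
              for text in texts:
                  words = text.lower().split()
                  if target in words:
                      mentions += 1                               # what a search engine counts
                      score += sum(w in POSITIVE for w in words)
                      score -= sum(w in NEGATIVE for w in words)  # what it can't tell you
              return mentions, score

          texts = ["IBM is great and reliable", "ibm sucks", "I love my IBM ThinkPad"]
          print(mention_sentiment(texts))  # (3, 2): three mentions, net-positive tone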

      • "A search engine can tell you how many mentions of IBM appear on the web, but not how people feel about IBM."

        I give you googlism.com: http://www.googlism.com/index.htm?ism=ibm&type=2

        Googlism for: ibm

        ibm is even "officially" spineless
        ibm is still the 'king'
        ibm is shipping 2 new powerpc processors
        ibm is bullish on asps and hosted services in
        ibm is offering internship that supports grid
        ibm is my choice
        ibm is outstanding
        ibm is giving peace
        ibm is planning to ship new
        ibm is willing to help
        ibm is announci
  • corporate meddling (Score:3, Insightful)

    by commo1 ( 709770 ) on Monday January 12, 2004 @11:54AM (#7953322)
    One of my main concerns with search databases is the inherent ability for corporations to increase their visibility on the web by manipulating data to their benefit to bring their corporate page up first on the list. I wonder if there is a way for the database to have a scoring system based on the validity of the data: is the information there, or are there just highly developed metatags doing the work? If you do a search for a specific part number for an HP product, what are the chances of getting a) the HP home page, where a further search would be necessary to find any relevant info, or b) the big chains like Staples and Circuit City, who just want to sell you cartridges and have the time and resources to steer you in their direction? How would the system be regulated (kinda like Slashdot mods :P)? Who watches the watchers, and can information validity be electronically implemented? What kind of AI would be necessary?
  • Information wants to be... Fuchsia!

    *shrug*

    e.
  • by ParadoxicalPostulate ( 729766 ) <saapadNO@SPAMgmail.com> on Monday January 12, 2004 @11:54AM (#7953330) Journal
    Are you telling me that there are programmers willing to go through [Insert Ludicrously Large Number Here] files and "annotate" them using XML to fit the new system?

    You would need an enormous workforce to do that.

    And if they don't plan on doing that, what about all the existing information? Is it going to be excluded from the database? Seems like quite a waste to me!

    Damn but I would love to have access to one of these, even if the amount of information available will be minuscule (relatively speaking) for the next few years.
  • Entirely unsuited (Score:4, Insightful)

    by happyfrogcow ( 708359 ) on Monday January 12, 2004 @11:54AM (#7953337)
    From the article, "But many online information sources are entirely unsuited to the XML model--for example, personal Web pages, e-mails, postings to newsgroups, and conversations in chat rooms."

    entirely unsuited? Chrissake. Email, unsuited. Newsgroups, unsuited. Chat rooms, unsuited. If personal home pages are unsuited, then so are corporate home pages, as there is nothing inherently different about the two. All this from an IEEE article... I would have thought them to be more accurate and less misleading. I could put <popularmusic>Pink</popularmusic> in my HTML as easily as Amazon could in theirs.

    HTML is based on the XML model. HTML is used to create personal web pages. How on earth then, could personal web pages be "entirely unsuited to the XML model"?

    • Ummm. No. HTML predates XML.
      • details details...

        HTML (1992?) does predate XML (1996?). My point is that they are both SGML based, and a strict HTML 4.01 document is a valid XML document, unless I have something wrong in my understanding of all of this.

        Further, my point was not a debate on what HTML is or isn't considered to be derived from or a subset of, but that personal web pages are not inherently different from other web pages. To say a company can do something with their data that an individual cannot is misleading.
    • by orac2 ( 88688 ) on Monday January 12, 2004 @12:10PM (#7953525)
      Disclaimer: I'm the author of the article.

      Most people don't and won't tag as they go. (Except for those of us used to writing HTML-enabled comments on /. of course). Also, in order to be able to write <popularmusic>Pink</popularmusic>, and have it make sense, you'd have to be following a DTD.

      As anyone who's been involved in DTD formulation can attest, even for internal documentation, it can be a royal pain in the butt. I don't think the vast majority of on-line rapid content generators (all those bloggers, emailers, chatters) will ever use XML to routinely tag their content manually. The article isn't talking about machine generated or commercial content, like Amazon's, but the day to day stuff that gets put up in the time it takes to write it and click submit, and which is of most interest to market researchers.
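      As a concrete illustration of what "following a DTD" buys you, here is a minimal sketch using Python's lxml (the element names are invented for the example):

          from io import StringIO
          from lxml import etree

          dtd = etree.DTD(StringIO("""
          <!ELEMENT review (popularmusic)>
          <!ELEMENT popularmusic (#PCDATA)>
          """))

          good = etree.XML("<review><popularmusic>Pink</popularmusic></review>")
          bad = etree.XML("<review><color>Pink</color></review>")

          print(dtd.validate(good))  # True  -- conforms to the DTD
          print(dtd.validate(bad))   # False -- <color> was never declared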
      • More to the point, HTML tags for RENDERING, not semantics. To a first order, ALL HTML pages look alike.
      • Is it unreasonable to imagine a web community that advocates the use of some relevant DTD? On the nerdly end of things, if Slashdot had their own DTD or used some other DTD, I might use it. It could add value to the site from a usability perspective as well as economic value for the owners.

        I think that if it was sufficiently easy for a person to know what tag to put around "Pink", and know that it would add something to the usability and understandability (am I making up words?), they might do it.
        • On the nerdly end of things, if slashdot had their own DTD or used some other DTD

          Even back when the web was just composed and read by nerds, people still didn't follow the "rules" -- look at how HTML drifted from its original use of marking up content to being a poor man's page layout language.

          they might do it.

          Sorry, I just can't believe it. Most contributors to the web (i.e. non computer nerds) are hard pressed to remember even a handful of HTML tags, let alone maintain a familiarity with a DTD, ho
  • Impact on Google IPO (Score:3, Interesting)

    by G4from128k ( 686170 ) on Monday January 12, 2004 @12:01PM (#7953412)
    This is the type of technology that could either ensure or derail Google's future (I'm not saying that it will, only that it could). Semantic analysis and clustering of web pages could improve search. I hope Google gets to use/create this type of tech.
  • Echelon? (Score:2, Interesting)

    This project sounds quite interesting -- it could really help projects like Echelon [aclu.org] win the war on terrorism, if it's capable of understanding other languages of course, and could possibly build a whole database of information intercepted from other places. All that chatter, with the codewords they use, could possibly be understood by a football field full of Linux rackmounts, and might foil something.

    Of course, such power could also be horribly misused if it came into the wrong hands.
    • Re:Echelon? (Score:4, Insightful)

      by orac2 ( 88688 ) on Monday January 12, 2004 @12:26PM (#7953672)
      Disclaimer: I'm the author of the article.

      I know, from talking to the WebFountain team that they're very sensitive to privacy concerns. WebFountain obeys robots.txt and doesn't archive material which has vanished from the publicly visible web (if only for reasons of storage capacity!).

      The point is that all the information that feeds into IBM is already publicly available. If someone wanted to go after Green Party members and the Green Party posted its membership roll on a webserver, I think they'd be able to get it, WebFountain or no.

      Of course, I suppose WebFountain could be used to construct a membership list by scanning people's home pages to find out if they say that they're a member, but again this is publicly declared information.

      Bottom line, as always: if you don't want it generally accessible to all, don't put it on a public web server.
      • The point is that all the information that feeds into IBM is already publicly available. ... Of course, I suppose WebFountain could be used to construct a membership list by scanning people's home pages to find out if they say that they're a member, but again this is publicly declared information.

        But that's just it: you can't say "all I did was collect public data, so it can't have privacy concerns." It obviously still has them (unless your collector is useless).

        For instance, I might say on /. that

  • by null etc. ( 524767 ) on Monday January 12, 2004 @12:03PM (#7953441)
    It would be nice if, in parallel to the Internet, another network was developed to hold only semantically organized knowledge. That network would be free of marketing and commercial business, and would ostensibly be the largest repository of organized knowledge on the planet. Think Internet2, based entirely in XML.

    Similar to HTML's current weakness in separating presentation from content, the web today has a weakness in separating content sites from sales sites. Do a search in Google, especially for programming or technical topics, and you're more likely to retrieve 100 links to online stores selling a book on that topic than to find actual content regarding that topic. This inability to separate queries for knowledge versus queries for product sales literature is especially frustrating for scientists and programmers. I think Google is taking a step towards this with Froogle, meaning that if Froogle becomes popular enough, it's possible that Google will strip marketing pages from their search results.

    Even worse is when someone registers a thousand domains (plumbing-supplies-store.com, plumb-superstore-supplies.com, all-plumbing-supplies.com, etc.) and posts the same marketing page content ("Buy my plumbing supplies!") on each domain. A search on Google will then retrieve 100 separate links containing the same identical garbage. You would think that Google could detect this "marketing domain spam" and reduce the relevancy of such search results.

    Anyways, I can't complain, because I can find nearly anything on the web I need, compared to 10 years ago.
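    The "marketing domain spam" case is actually one of the more tractable ones: near-duplicate pages can be flagged by comparing word shingles. A rough sketch in Python (the threshold and sample pages are invented for the example):

        def shingles(text, k=3):
            words = text.lower().split()
            return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        page1 = "Buy my plumbing supplies! Best prices on pipes and fittings."
        page2 = "Buy my plumbing supplies! Best prices on pipes and valves."

        if jaccard(shingles(page1), shingles(page2)) > 0.7:
            print("near-duplicate content: demote all but one domain")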

    • by Tom ( 822 )
      Do a search in Google, especially for programming or technical topics, and you're more likely to retrieve 100 links to online stores selling a book on that topic, than finding actual content regarding that topic.

      (topic) -checkout -buy

      Other things that work well sometimes:
      (topic) site:.org
      (topic) -amazon
      (topic) -site:amazon.com -site:amazon.co.uk

      and posts the same marketing page content ("Buy my plumbing supplies!") on each domain. A search on Google will then retrieve 100 separate links containing the
    • Utilising your own system is a start. On the desktop there's Nat [nat.org] *Ximian* Friedman's Dashboard [nat.org].
  • Researchers in Alabama are working on a system which converts all music on the internet into a single Menudo mp3 file. EIEIO reports the first public use will be to create a single mp3 file that results in trillions of dollars in royalties to the RIAA when traded illegally.
  • i.e. nameprotect (Score:4, Interesting)

    by joeldg ( 518249 ) on Monday January 12, 2004 @12:09PM (#7953507) Homepage
    NameProtect does something similar, except they are looking for people violating copyrights.
    In addition, I think they might be one of the most-banned bots online.

    Anyway, their users are all corporate entities who pay a lot of money to be able to auto-cease-and-desist copyright infringers.

    These same companies will pay IBM to tell them that since their cease and desist spree everyone hates them.

  • by DerOle ( 520081 )
    WebFountain [ibm.com]
  • Like NorthernLight? (Score:5, Informative)

    by dpbsmith ( 263124 ) on Monday January 12, 2004 @12:11PM (#7953527) Homepage
    This sounds very similar to NorthernLight.

    NorthernLight was (it still exists, but apparently is not available to the nonpaying public at all) a search engine that displayed its results automatically sorted into as many as fifteen or twenty categories, automatically generated on the basis of the search. (For some reason, they called these categories "custom search folders.")

    Since it's no longer available to the public I can't give a concrete example. I can't test it to see whether a search on "Pink" creates a couple of folders labelled "Singer" and "Color," for example. But that's exactly the sort of thing it does/did.

    I actually would have used NorthernLight as one of my routine search engines--it worked quite well--had it not been for another major annoyance: in the publicly available version, it always searched both publicly available Web pages and a number of fee-based private databases, so whatever you searched for, the majority of the results were in the fee-based databases, and I would have had to pay money to see what they were. In other words, it was heavy-handed promotion of their paid services and had only limited utility to those who did not wish to buy them.
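    For what it's worth, the "custom search folders" idea is essentially document clustering over the result set. A minimal sketch, assuming scikit-learn and toy snippets (NorthernLight's actual method isn't public):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        snippets = ["Pink releases new single", "Pink tour dates announced",
                    "pink paint color swatches", "shades of pink for your walls"]

        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(snippets)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

        for cluster in range(2):
            members = [s for s, c in zip(snippets, km.labels_) if c == cluster]
            print(cluster, members)  # roughly a "singer" folder and a "color" folder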
    • Vivisimo [vivisimo.com] is doing this kind of sorted search.

      Try it out, works quite often for me - beats Google for many queries, not in actual number of pages found, but in the time it takes me to find out whatever I'm looking for.

  • Gaming Webfountain (Score:4, Interesting)

    by G4from128k ( 686170 ) on Monday January 12, 2004 @12:11PM (#7953534)
    I wonder how long it will take sleazy e-commerce sites and p0rn sites to game WebFountain and turn it into SpamFountain?

    I suspect that this tool (and any like it) must make a core assumption -- that each webpage is about one semantic thing and that the creators are trying to communicate that one thought. In contrast, people who try to boost their page rank have no compunction about misleading people (or algorithms). Clever tagging and misleading verbiage should be able to fool IBM's analyzer into clustering a site where it does not belong (but where the site owner wants it). The result is a page that looks like it is about one thing (some popular search term) while actually being about something else (selling their junk or porn).

    Next will come high-priced consultants who tell you how to make your site place highly on WebFountain (like the ones that currently game Google).
  • IBM's Pink (Score:2, Funny)

    by th77 ( 515478 )
    IBM should know that Pink was the predecessor to Taligent [wikipedia.org] which was the predecessor to absolutely nothing.
  • IEEE reports that the first commercial use will be to track public opinion for companies.


    Can't wait to see what the entry for SCO looks like...

  • "Things such as price or product identification numbers are identified by bracketing them with so-called tags, as in Deluxe Toaster , $19.95 ."

    They're "tags", not "so-called tags".

    Tags! Like those little things they hang on stuff at the store to tell you how much it costs. Tags.

    Of course, he may have been referring to their use in a "software program".

  • by dpbsmith ( 263124 ) on Monday January 12, 2004 @12:21PM (#7953631) Homepage
    As Google has discovered, it's only possible for simple heuristics and algorithms to "understand" the human content on the Web for as long as it doesn't matter.

    As soon as people become aware that Google or WebFountain or whatever is trying to evaluate web content, immediately they will begin trying to reverse-engineer and subvert the algorithms and heuristics that are used.

    And the stakes are much higher for gaming WebFountain than for gaming Google.

    For example, I'd imagine there would be big money for anyone who could convince companies that they know how to make it appear that a particular movie/song/toy/computer was "hot," so that the WebFountain-using Walmarts and Best Buys of the world would stock more of it.

    WebFountain will work well only until it is actually introduced.

    • Disclaimer: I'm the author of the article.

      As soon as people become aware that Google or WebFountain or whatever is trying to evaluate web content, immediately they will begin trying to reverse-engineer and subvert the algorithms and heuristics that are used.

      This could be tricky -- WebFountain uses a kitchen sink approach, with a varying palette of content discriminators and disambiguators. The developers are also savvy enough to downweight link-farm-type approaches. Of course, one could say, conduct a campaign
      • It's important not to underestimate people's ability to game systems, regardless of the thought put into them. The simple algorithm
        • Reconstruct algorithm.
        • Simulate algorithm and play with the inputs until the outputs match what you want.
        • Bring those inputs about.

        is extremely powerful, and note that as a "meta-algorithm" there's absolutely no way to completely shut it down.

        You have only four basic defenses against this:

        1. Keep changing the algorithm (expensive and large changes may not be possible if stabi
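        A bare-bones sketch of steps 2 and 3 of that meta-algorithm, in Python: hill-climb inputs against a stand-in scorer (black_box_score is a placeholder; in practice you'd be probing the real system's visible outputs):

            import random

            def black_box_score(page_words):  # stand-in for the target's ranking
                return page_words.count("buzzword") - 0.5 * page_words.count("spam")

            def game(page_words, vocabulary, steps=1000):
                best, best_score = page_words, black_box_score(page_words)
                for _ in range(steps):
                    candidate = best + [random.choice(vocabulary)]  # perturb input
                    score = black_box_score(candidate)
                    if score > best_score:                          # keep improvements
                        best, best_score = candidate, score
                return best

            print(game(["my", "site"], ["buzzword", "spam", "news"])[:10])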
          • The thing is, it's hard to do the second step of your general algorithm: Simulate algorithm and play with the inputs until the outputs match what you want.

          Determining the outputs and closing the feedback loop is hard -- getting WebFountain output is pretty pricey, compared to search engine results, where you can have a very low-cost feedback loop. This makes reconstructing the algorithms hard, if not impossible. Also remember that the exact set of algorithms varies depending on the problem: because t
          • First, the "hardness" needs to be measured against the value of the benefit obtained from gaming. If it's large, more effort will be thrown at it.

            Second, you seem to have missed the implications of my carefully-chosen word simulate. You don't need to replicate the algorithm, just create something that mostly works in most of the situations that you care about. (Both "mosts" are important.) This is a significantly lower bar than "complete replication", and is one of the reasons it's so hard to combat this;
        • I second this sentiment, that gaming of any system is likely, not merely possible.

          This is because humans can be "gamed" in the real world. That is, one can fabricate a "buzz" about things, not simply by overt measures like commercials, but by plants in social situations. Sony or some other consumer electronics company planted people in Times Square and other highly visible situations to pretend to use some cool new gadget. Then people see it and tell their friends and then eventually, they hope, there

    • You mean kinda like how Google is getting ruined by scumbags who set up thousands of fake sites that just refer everything you've ever searched for directly to Amazon? Google has become almost worthless for product research these days. Sure, it's still "better" than anything going, but the spammers and marketers have filled it with way too much garbage.
  • by Animats ( 122034 ) on Monday January 12, 2004 @12:22PM (#7953634) Homepage
    Search engine spiders need to understand more about sites -- things like this (a rough sketch of such checks follows the list):
    • The site is selling something.
    • The page is composed of multiple unrelated articles or ads, each one of which should be viewed as a separate entity for search purposes.
    • The page is part of a blog.
    • Content on this site duplicates that found on other sites.
    • The site is owned by an organization with a known Dun and Bradstreet number. (If a site is selling something, and its Whois info doesn't match the DNB corporation database, it should be downgraded in search position. This would encourage honest Whois info.)
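    A minimal sketch of how a spider might fold those checks into a ranking adjustment, in Python (rank_adjustment, the page fields, and the D&B set are all invented for illustration; real Whois and D&B lookups are far messier):

        def rank_adjustment(page, whois_org, dnb_orgs):
            """Downgrade suspect commercial sites, per the checks above."""
            adj = 0
            if page["is_selling"] and whois_org not in dnb_orgs:
                adj -= 10  # selling, but Whois matches no known D&B record
            if page["duplicates_other_sites"]:
                adj -= 5   # content duplicated elsewhere, likely a mirror
            return adj

        page = {"is_selling": True, "duplicates_other_sites": False}
        print(rank_adjustment(page, "Acme Plumbing LLC", {"Acme Plumbing Inc"}))  # -10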
    • The site is owned by an organization with a known Dun and Bradstreet number. (If a site is selling something, and its Whois info doesn't match the DNB corporation database, it should be downgraded in search position. This would encourage honest Whois info.)

      This may be a question born of serious ignorance. If so, I'd really appreciate some enlightenment.

      This is also not so theoretical for me, as I am currently privately developing a product that I will eventually be selling online.
  • SCO (Score:5, Funny)

    by Zork the Almighty ( 599344 ) on Monday January 12, 2004 @12:22PM (#7953638) Journal
    IEEE reports that the first commercial use will be to track public opinion for companies.

    Searching "SCO"
    Found "Slashdot"
    ERROR arithmetic underflow.
  • by s4m7 ( 519684 ) on Monday January 12, 2004 @12:25PM (#7953661) Homepage

    Here's how it works:

    Executive Bob, who's paid IBM $150,000 for his enterprise license of WebFountain, enters into his WebFountain search box: "Pink the musician, not the color"

    IBM's powerful software parses this command into "pink music -color" and passes it to google, retrieves the results, removes Google's paid ads and replaces them with IBM's paid ads. The content is then served to Executive Bob, who shouts: "EUREKA" since within the top ten search results he finds "NUDE PICTURES OF RAPPER PINK!"

    IBM then lands a lucrative support contract with Executive Bob to remove all the viruses and spyware from his desktop PC. Rinse and repeat.

  • by AndroidCat ( 229562 ) on Monday January 12, 2004 @12:27PM (#7953677) Homepage
    (Imperial or metric football fields?)
    IBM's breakthrough is called WebFountain--half a football field's worth of rack-mounted processors, routers, and disk drives running a huge menagerie of programs.
    Later:
    It uses a cluster of thirty 2.4-GHz Intel Xeon dual-processor computers running Linux to crawl as much of the general Web as it can find at least once a week.

    To ensure that WebFountain's finger is constantly on the pulse of the Internet, an additional suite of similar computers is dedicated to crawling important but volatile Web sites, such as those hosting blogs, at least once a day. Other machines maintain access to popular non-Web-based sources, such as Usenet (a newsgroup service that predates the Web) and the Internet Relay Chat system, known as IRC. The data is then passed into WebFountain's main cluster of computers, currently composed of 32 server racks connected via gigabit Ethernet. Each rack holds eight Xeon dual-processor computers and is equipped with about 4-5 terabytes of disk storage.

    That's a lot of stuff, but half a football field? Possibly they're including cubicles for the staff or did they just inherit some old Big Iron space that was that large?
  • by Mr_Silver ( 213637 ) on Monday January 12, 2004 @12:34PM (#7953752)
    IEEE reports that the first commercial use will be to track public opinion for companies

    You can do that already with Google:

    A search for "Microsoft is evil" gets you 600,000 pages.

    A search for "Microsoft is good" gets you 3,590,000 pages.

    Therefore Microsoft is more good than evil.

    Err ... that wasn't quite the answer I was expecting.

    (cue sounds of joke falling apart...)

    • The funny thing is: if you search for the above on Usenet, Google will suggest an interesting list of newsgroups.
    • Nope,
      searches for
      microsoft is evil
      and
      microsoft is good
      produce those results.
      BUT
      searches for
      "microsoft is evil"
      and
      "microsoft is good"
      produce a different result:
      2070 and 1020 respectively, showing that:
      1/ Microsoft IS evil.
      2/ Good prevails over evil on the internet.
    • That sounds a whole lot like Google fight [googlefight.com] :)

      This [googlefight.com] wasn't the answer I was hoping for either ;)
  • The head of a research and development department could feed WebFountain all the e-mails, reports, PowerPoint presentations, and so on that her employees produced in the last six months. From this, WebFountain could give her a list of technologies that the department was paying attention to. She could then compare this list to the technologies in her sector that were creating a buzz online. Discrepancies between the two lists would be worth asking her managers about, allowing her to know whether or not the
  • It already exists (Score:3, Interesting)

    by claudebbg ( 547985 ) on Monday January 12, 2004 @12:45PM (#7953871) Homepage
    I've already seen/heard of such a system, basically in the Business Intelligence field.
    In England, a system like Autonomy [autonomy.com] (used by the police at the beginning) can crawl a mass of information with dedicated spiders (not only the web, but also commercial databases, files...). Then it structures all the content into thematics, with links and proximity.
    I personally tested it some years ago, feeding it news websites and asking for articles "close to" a given one. The efficiency was amazing, because it was able to tell the difference between close terms that have really different meanings depending on the context. Usually, search engines get it wrong because they can't use the context.
    I also set up some "agents" for recurrent searches (an agent is basically a search plus some training, letting Autonomy know which found documents are close and which are not), and it was able to propose every day a really good press review with nearly no wrong documents.
    As a complement to Autonomy, I know a BI team that uses some other tools like Pericles [datops.com] to feed the searches with "relevant" content, basically thematics that are "appearing" in the group of documents and are close to some interests.
    Such BI tools can already provide the kind of information cited, like an opinion movement against a company detected in newsgroups or on some websites. And IBM is certainly on track to improve such tools with the techniques of their labs.
    I hope these tools won't be limited to PR articles on the web and/or private use by big corporations, because that could only be another Echelon with all its bad consequences:
    - bad use of public information
    - paranoia fed by false scares
    - public/corporate power against the citizens
    If tools like Echelon could be used by everybody, they would have to leave citizens much more privacy, and the public leaders would have to explain the investments.
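    The "articles close to this one" query described above boils down to vector similarity. A minimal sketch using TF-IDF and cosine similarity (the corpus is a toy; Autonomy's actual technology is proprietary):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = ["police crawl commercial databases for intelligence",
                  "press review of business intelligence tools",
                  "gardening tips for spring"]
        query = "business intelligence press coverage"

        vec = TfidfVectorizer()
        X = vec.fit_transform(corpus)
        sims = cosine_similarity(vec.transform([query]), X)[0]
        print(max(zip(sims, corpus))[1])  # the closest article wins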
  • Sounds like CYC (Score:3, Interesting)

    by Sanity ( 1431 ) * on Monday January 12, 2004 @12:50PM (#7953922) Homepage Journal
    CYC [cyc.com] has been trying to collect all human knowledge for the last few decades and feed it into a knowledge base. They have even open sourced part [opencyc.org] of their database.

    Despite the apparent promise of the project, it is difficult to find actual examples of it doing really cool stuff.

  • semantic web (Score:2, Informative)

    by jonasmit ( 560153 )
    XML simply isn't enough. Structure != Meaning. Meaning must be inserted somewhere by someone. Trying to interpret HTML/natural language to form structured documents is a daunting task. If you want real meaning then the data needs to be described or translated into a meaningful form like RDF [w3.org] (yes, represented by XML) when it is created, so that intelligent agents such as this can *understand* the data. RDF uses triples (think graphs) to describe relationships, making use of URIs: Subject--Predicate--Object.
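    A minimal sketch of one such triple, assuming Python's rdflib (the URIs are made up for the example):

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.Pink, EX.isA, Literal("musician")))  # subject--predicate--object

        for s, p, o in g:  # the graph now holds one machine-readable relationship
            print(s, p, o)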
  • This technology should be made available to social scientists, anthropologists, cultural critics, etc. so that current social trends can be analyzed. Perhaps IBM would be kind enough to provide free access to this system to Universities?

    It is a pity that the WebFountain system is geared toward corporate users. Of course, there must be some ROI... but, still it makes me sad that every new technology seems to be driven by corporate desire for good PR and world domination.

    Interestingly, this article comes

  • Analytic tools can ferret out patterns in, say, a sales receipt database, so that a retail store might see that people tend to buy certain products together and that offering a package deal would help sales. ...
    This urban-legend example of people buying beers and diapers at the same time (hence the sections for beer and diapers should be close by, at least on Saturdays) has been beaten to death and beyond.
    A sentence that originally read "We visited Mount Fuji and took some photos" would become something like "We visited <place>Mount Fuji</place> and took some photos."
    I am not sure what the tags around "Mount Fuji" have added in this example. The only thing I can think of is that these are similar to the "smart tags" of MS Office that pre-populate straightforward relational data like a contact's email or address. Personally, when I need this info I would do a search in Google for "mount fuji latitude", and the first result I get is the one that gives me the latitude and longitude of Mount Fuji. What is the point of pre-feeding this info during the "markup"? And it bears repeating here that rather than complaining about results that you get with one or two keywords, think about adding keywords to narrow and specialize the search. Paris Hilton video is better than just Paris Hilton, which might unnecessarily show you stuff about hotels.
    By the time the annotators have finished annotating a document, it can be up to 10 times longer than the original.
    So, a person was probably talking about a molehill, and the machine markup has changed that into a mountain. How many of the extra tags (even accounting for the verbosity of XML) have really added "meaning" to the document? How much of the "meaning" was intended, and how much has been force-fed by the machine?
    These heavily annotated pages are not intended for human eyes; rather, they provide material that the analytic tools can get their teeth into.
    This is where I think that they are using XML but going away from the XML concept. It was supposed to be human-readable. If the IBM research group started focusing on how to help people make sense of the 1x material and 10x markup, they would be introducing the person at the right time in the analysis process -- introducing a person at the last stage, especially in deriving "meaning", may not be the best strategy. The markups are just "filters" through which, when the material is viewed, a lot of context becomes apparent. What we need to do is to let people start with the filters and then look for the material (top-down), or start with the material and look for filters (bottom-up) -- a more iterative procedure involving both these approaches.

    Google lets you do a keyword search (bottom-up) or via the directories - DMOZ (top-down). Vivisimo and Grokker were recently discussed on slashdot where they were creating dynamic categorizations, i.e. bottom-up. I think it would be better to let people analyze the markup (directory/top-down approach) or analyze the material (keyword/bottom-up) rather than mixing up the two and presenting the "results" to the person.

    E-mails or instant messages can't be labeled in this way without destroying the ease of use that is the hallmark of these ad hoc communications; who would bother to add XML labels to a quick e-mail to a colleague?
    This is the second place where energies should be focused. Where the document is created may mean a lot. It could be that a new file inherits the path of the directory it's created in (hence context), or it could be as simple as: on the top-right of the screen I create personal files, on the bottom-right I create files about sports, on the left-bottom-middle I create files about Java, etc. I think this easily beats the bot-annotators that come after me and add 10 times more markup than the whole of the quick email that I sent to a colleague.
    • I think you are missing the point. The tags are not for people, but for data analysis software. Comparing a search engine to a general analysis platform (which is what WebFountain is) is like comparing apples to oranges. The entire apparatus (WebFountain plus data mining software) is designed to produce high-level reports that talk about data in the aggregate.
      • The entire apparatus (WebFountain plus data mining software) is designed to produce high level reports that talk about data in the aggregate.

        The tags are not for people, but for data analysis software

        My perspective is from the point of view of a businessman trying to use the "data." This data must have some correlation to the reality of the business, and most preferably illustrate some correlation or cause-effect that I could use to predict the future a little more accurately. This is where the theor

  • Trying to intelligently search for information in the universe is an age-old problem. How can my system be smart enough to tell the difference between Pink the singer and pink the color (or colour if you prefer)? Basically, it can't.

    Nothing is smart enough to tell the difference because the content is contextual (hence the name). In a corporation like the one I'm at now (a class A railway) we have hundreds of terabytes of information flowing through our systems on a regular basis. Trying to track it, categori
