
IBM vs. Content Chaos

ps writes "IBM's Almaden Research Center has been featured for its continued work on 'WebFountain,' a huge system to turn all the unstructured info on the web into structured data. (Is 'Pink' the singer or the color?) IEEE reports that the first commercial use will be to track public opinion for companies." It looks like its feeding ground is primarily the public Internet, but it can be fed private information as well.
  • by Urkki ( 668283 ) on Monday January 12, 2004 @12:47PM (#7953236)
    They could certainly use this kind of technique to improve their results...

    Then again, in a way they already use something like this, except they're only really concerned about links, not actual contents of pages...
  • corporate meddling (Score:3, Insightful)

    by commo1 ( 709770 ) on Monday January 12, 2004 @12:54PM (#7953322)
    One of my main concerns with search databases is the inherent ability for corporations to increase their visibility on the web by manipulating data to their benefit to bring their corporate page up first on the list. I wonder if there is a way for the database to have a scoring system based on the validity of the data: is the information there, or are there just highly developed metatags doing the work? If you do a search for a specific part number for an HP product, what are the chances of getting a) the HP home page, where a further search would be necessary to find any relevant info, or b) the big chains like Staples and Circuit City, who just want to sell you cartridges and have the time and resources to steer you in the right direction? How would the system be regulated (kinda like Slashdot mods :P)? Who watches the watchers, and can information validity be electronically implemented? What kind of AI would be necessary?
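    One crude way to "electronically implement" a validity check like that would be to measure how much of a page's metatag vocabulary actually shows up in its visible text. A toy Python sketch (the heuristic, field names, and sample data are all invented for illustration, not anything IBM or the search engines are known to do):

        # Toy "validity" score: what fraction of a page's meta keywords
        # actually appear in its visible body text?

        def keyword_support(meta_keywords, body_text):
            body = body_text.lower()
            hits = sum(1 for kw in meta_keywords if kw.lower() in body)
            return hits / len(meta_keywords) if meta_keywords else 0.0

        page_meta = ["hp", "toner", "cartridge", "c4092a"]
        page_body = "Buy cheap ink and toner for all printer brands."
        print(keyword_support(page_meta, page_body))  # 0.25 -- mostly unsupported metatags

    A page that scores near zero is leaning on metatags rather than content; a real system would obviously need far more than substring matching.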
  • Re:All we need... (Score:1, Insightful)

    by geoffspear ( 692508 ) on Monday January 12, 2004 @12:54PM (#7953327) Homepage
    Oh yes, because there's such an enormous shortage of programmers right now. IBM should lay off all of these programmers so Microsoft will have a pool of available programmers who know nothing about OS security to work on security.

    And once all the game producers, who make a product we definitely don't "need", get rid of all of their programmers, there will be plenty of free people to work on anti-spam technology. Whee!

  • Entirely unsuited (Score:4, Insightful)

    by happyfrogcow ( 708359 ) on Monday January 12, 2004 @12:54PM (#7953337)
    From the article, "But many online information sources are entirely unsuited to the XML model--for example, personal Web pages, e-mails, postings to newsgroups, and conversations in chat rooms."

    entirely unsuited? chrissake. email, unsuited. newsgroups, unsuited. chat rooms, unsuited. If personal home pages are unsuited, then so are corporate home pages, as there is nothing inherently different about the two. All this from an IEEE article... I would have thought them to be more accurate and less misleading. I could put <popularmusic>Pink</popularmusic> in my HTML as easily as Amazon could in theirs.

    HTML is based on the XML model. HTML is used to create personal web pages. How on earth then, could personal web pages be "entirely unsuited to the XML model"?

  • by null etc. ( 524767 ) on Monday January 12, 2004 @01:03PM (#7953441)
    It would be nice if, in parallel to the Internet, another network were developed to hold only semantically organized knowledge. That network would be free of marketing and commercial business, and would ostensibly be the largest repository of organized knowledge on the planet. Think Internet2, based entirely in XML.

    Similar to HTML's current weakness in separating presentation from content, the web today has a weakness in separating content sites from sales sites. Do a search in Google, especially for programming or technical topics, and you're more likely to retrieve 100 links to online stores selling a book on that topic than to find actual content regarding that topic. This inability to separate queries for knowledge from queries for product sales literature is especially frustrating for scientists and programmers. I think Google is taking a step towards this with Froogle, meaning that if Froogle becomes popular enough, it's possible that Google will strip marketing pages from their search results.

    Even worse is when someone registers a thousand domains (plumbing-supplies-store.com, plumb-superstore-supplies.com, all-plumbing-supplies.com, etc.) and posts the same marketing page content ("Buy my plumbing supplies!") on each domain. A search on Google will then retrieve 100 separate links containing the same identical garbage. You would think that Google could detect this "marketing domain spam" and reduce the relevancy of such search results.
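    Detecting that kind of "marketing domain spam" is at least plausible mechanically: near-identical pages can be flagged by comparing word shingles. A rough Python sketch (the sample pages and the choice of k are made up; real engines use far more robust fingerprinting):

        # Near-duplicate detection via word shingles and Jaccard similarity.

        def shingles(text, k=4):
            words = text.lower().split()
            return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 0.0

        page_a = "Buy my plumbing supplies! Best prices on pipes, fittings and valves."
        page_b = "Buy my plumbing supplies! Best prices on pipes, fittings and valves. Call now."
        page_c = "A tutorial on soldering copper pipe joints without burning the house down."

        sa, sb, sc = shingles(page_a), shingles(page_b), shingles(page_c)
        print(jaccard(sa, sb))  # high -- probably the same marketing page on two domains
        print(jaccard(sa, sc))  # low  -- genuinely different content

    Pages whose similarity crosses some threshold could then be collapsed into a single search result instead of 100.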

    Anyways, I can't complain, because I can find nearly anything on the web I need, compared to 10 years ago.

  • Re:All we need... (Score:5, Insightful)

    by millahtime ( 710421 ) on Monday January 12, 2004 @01:04PM (#7953444) Homepage Journal
    There are many organizations that need better ways to analyze their info. There are databases that are terabytes in size and have to support detailed searches. With SQL databases that can take a long time, and anything faster can save a lot of time and money. There is a big need for this technology across many industries.
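    For the text-search part of that, the usual speedup is to trade a LIKE-style table scan for a precomputed inverted index. A toy Python version, just to show the shape of the idea (the sample documents are invented, and this has nothing to do with WebFountain's actual internals):

        from collections import defaultdict

        # Build once; lookups then avoid scanning every row the way a
        # SQL "... WHERE body LIKE '%printer%'" query would.
        documents = {
            1: "quarterly sales report for printer cartridges",
            2: "webfountain turns unstructured text into structured data",
            3: "printer driver troubleshooting guide",
        }

        index = defaultdict(set)
        for doc_id, text in documents.items():
            for word in text.lower().split():
                index[word].add(doc_id)

        print(sorted(index["printer"]))  # [1, 3]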
  • by orac2 ( 88688 ) on Monday January 12, 2004 @01:10PM (#7953525)
    Disclaimer: I'm the author of the article.

    Most people don't and won't tag as they go. (Except for those of us used to writing HTML-enabled comments on /. of course). Also, in order to be able to write <popularmusic>Pink</popularmusic>, and have it make sense, you'd have to be following a DTD.

    As anyone who's been involved in DTD formulation can attest, even for internal documentation, it can be a royal pain in the butt. I don't think the vast majority of on-line rapid content generators (all those bloggers, emailers, chatters) will ever use XML to routinely tag their content manually. The article isn't talking about machine generated or commercial content, like Amazon's, but the day to day stuff that gets put up in the time it takes to write it and click submit, and which is of most interest to market researchers.
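    For anyone who hasn't had the pleasure: "following a DTD" means the tag has to be declared before a validating parser will accept the document. A minimal illustration in Python using lxml (the element name is made up, and lxml is just one of many validators you could use):

        import io
        from lxml import etree

        # A one-element DTD declaring <popularmusic>, then validation of a fragment.
        dtd = etree.DTD(io.StringIO("<!ELEMENT popularmusic (#PCDATA)>"))
        doc = etree.fromstring("<popularmusic>Pink</popularmusic>")
        print(dtd.validate(doc))  # True -- and False for any element the DTD doesn't declare

    Multiply that ceremony by every blogger, emailer, and chatter, and it's clear why routine hand-tagging isn't going to happen.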
  • Re:All we need... (Score:5, Insightful)

    by xyzzy ( 10685 ) on Monday January 12, 2004 @01:20PM (#7953619) Homepage
    It's really funny that you mention "spam filters", since that is exactly the content categorization task that you are talking about.

    Automatic categorization of overflowing data is exactly what you need to do when you have too much to think about -- it allows you to triage your attention span, which is the most limited resource you have.
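    The same naive-Bayes machinery behind spam filters generalizes to any "which bucket does this text belong in" problem. A bare-bones sketch with scikit-learn (the labels and training snippets are invented for illustration; this is not how WebFountain is built):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Tiny training set: the classifier that separates spam from ham can
        # just as well separate the singer Pink from the color pink.
        texts = [
            "pink released a new single and tour dates",
            "the pop singer pink performed live on stage",
            "we painted the nursery a pale pink color",
            "pink and white striped wallpaper for the bedroom",
        ]
        labels = ["music", "music", "decor", "decor"]

        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(texts, labels)
        print(model.predict(["pink's concert tickets go on sale friday"]))  # likely ['music']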
  • by Animats ( 122034 ) on Monday January 12, 2004 @01:22PM (#7953634) Homepage
    Search engine spiders need to understand more about sites. Things like this (a toy scoring sketch follows the list):
    • The site is selling something.
    • The page is composed of multiple unrelated articles or ads, each one of which should be viewed as a separate entity for search purposes.
    • The page is part of a blog.
    • Content on this site duplicates that found on other sites.
    • The site is owned by an organization with a known Dun and Bradstreet number. (If a site is selling something, and its Whois info doesn't match the DNB corporation database, it should be downgraded in search position. This would encourage honest Whois info.)
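    None of these signals needs deep AI; most are cheap checks a crawler could fold into a ranking score. A toy Python illustration of the flavor (every field name, rule, and weight here is invented):

        # Hypothetical page-quality heuristics of the kind listed above.
        # Each check adds a penalty; a crawler would fold this into ranking.

        def penalty_score(page):
            penalty = 0
            if page.get("is_storefront") and not page.get("whois_matches_dnb"):
                penalty += 5   # commercial site whose Whois doesn't match D&B records
            if page.get("duplicates_other_sites"):
                penalty += 3   # same content copied across domains
            if page.get("unrelated_article_count", 0) > 1:
                penalty += 1   # portal-style page; index the pieces, not the blob
            return penalty

        print(penalty_score({"is_storefront": True, "whois_matches_dnb": False,
                             "duplicates_other_sites": True}))  # 8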
  • Re:Echelon? (Score:4, Insightful)

    by orac2 ( 88688 ) on Monday January 12, 2004 @01:26PM (#7953672)
    Disclaimer: I'm the author of the article.

    I know, from talking to the WebFountain team that they're very sensitive to privacy concerns. WebFountain obeys robots.txt and doesn't archive material which has vanished from the publicly visible web (if only for reasons of storage capacity!).

    The point is that all the information that feeds into IBM is already publicly available. If someone wanted to go after Green Party members and the Green Party posted its membership roll on a webserver, I think they'd be able to get it, WebFountain or no.

    Of course, I suppose WebFountain could be used to construct a membership list by scanning people's home pages to find out if they say that they're a member, but again this is publicly declared information.

    Bottom line, as always: if you don't want it generally accessible to all, don't put it on a public web server.
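    For what it's worth, honoring robots.txt is the easy part; Python even ships a parser for it in the standard library (the URL and user-agent below are placeholders):

        from urllib import robotparser

        # Check whether a polite crawler may fetch a given path.
        rp = robotparser.RobotFileParser()
        rp.set_url("https://www.example.com/robots.txt")
        rp.read()  # fetch and parse the file
        print(rp.can_fetch("MyCrawler/1.0", "https://www.example.com/members.html"))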
  • by benja ( 623818 ) on Monday January 12, 2004 @01:42PM (#7953843)
    The head of a research and development department could feed WebFountain all the e-mails, reports, PowerPoint presentations, and so on that her employees produced in the last six months. From this, WebFountain could give her a list of technologies that the department was paying attention to. She could then compare this list to the technologies in her sector that were creating a buzz online. Discrepancies between the two lists would be worth asking her managers about, allowing her to know whether or not the department was ahead of the market or falling dangerously behind.

    This is a potentially very useful money-saver. Currently companies employ hordes of middle-management people who do little else than detect discrepancies between the technologies that their department is focusing on and those that are currently all the buzz. Now we can create an automatic boss that sends out e-mails like, "What's this IP-over-XML thing and why don't we use it and how soon can you have all our critical systems migrated to it?"
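    Satire aside, the "discrepancy" step the article describes boils down to a set difference over extracted technology mentions; the extraction is the hard part WebFountain is selling. A deliberately simple Python sketch (all the data is made up):

        # Technologies mentioned in internal docs vs. technologies with outside buzz.
        internal_focus = {"soap", "j2ee", "corba"}
        external_buzz = {"soap", "j2ee", "rest", "rss"}

        print(external_buzz - internal_focus)   # buzz the department is ignoring: rest, rss
        print(internal_focus - external_buzz)   # things only the department still cares about: corba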

  • Scanalyzer (Score:1, Insightful)

    by Anonymous Coward on Monday January 12, 2004 @01:49PM (#7953911)
    Reminds me of the Scanalyzer service in John Brunner's book "Stand on Zanzibar." The supercomputer Shalmaneser analyzed millions of inputs and tried to make sense of them.

Working...