Semantic Web Under Suspicion

Dr Occult writes "Much of the talk at the 2006 World Wide Web conference has been about the technologies behind the so-called semantic web. The idea is to make the web intelligent by storing data in a form that machines can analyze, instead of leaving the user to sort and analyze the results from search engines. From the article: 'Big business, whose motto has always been time is money, is looking forward to the day when multiple sources of financial information can be cross-referenced to show market patterns almost instantly.' However, concern is also growing that this intelligent web could be misused, threatening privacy and security."
  • Smarter Machines (Score:5, Interesting)

    by jekewa ( 751500 ) on Thursday May 25, 2006 @09:24AM (#15400888) Homepage Journal
    I personally fear the day that a machine or algorithm can determine the purpose of my keyword-based search better than I can. Sure, there's a lot of improvement that can be made to render searches more precise, but in the end it's still my decision what's important and what isn't.

    What I really want to see is the search engine reducing duplicated content to single entries (try Googling for a Java class name and you'll see how many indexed sites carry copies of the API docs), or ordering results by recurrence of the word or phrase, giving the context more value than the popularity of the page.
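
    Something along these lines, as a minimal Python sketch of the idea (the exact-duplicate check and the scoring are deliberate simplifications, and all names here are made up; this is not how any real engine works):

        import hashlib
        from collections import Counter

        def dedupe_and_rank(results, query):
            """results: list of (url, text) pairs; query: the search string."""
            terms = query.lower().split()
            seen_fingerprints = set()
            scored = []
            for url, text in results:
                # Collapse verbatim mirrors (e.g. the hundredth copy of the Java API docs).
                fingerprint = hashlib.sha1(text.lower().encode()).hexdigest()
                if fingerprint in seen_fingerprints:
                    continue
                seen_fingerprints.add(fingerprint)
                # Score by how often the query terms recur, not by page popularity.
                counts = Counter(text.lower().split())
                scored.append((sum(counts[t] for t in terms), url))
            return [url for score, url in sorted(scored, reverse=True)]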

  • Re:All Talk (Score:5, Interesting)

    by $RANDOMLUSER ( 804576 ) on Thursday May 25, 2006 @09:48AM (#15401066)
    I've always thought that the Table of Contents [notredame.ac.jp] for Roget's Thesaurus was one of the greatest works of mankind. I don't think many people realize just how difficult the problem really is, and how long it's going to take.
  • by Peter Mork ( 951443 ) <Peter.Mork@gmail.com> on Thursday May 25, 2006 @11:07AM (#15401777) Homepage
    The semantic web is a step up from XML. In an XML document, a great deal of information is implicitly stored in the structure of the document. A human is (often) able to guess the implied relationship between a parent element and its child, but machines are still poor at guessing. By making the relationship explicit (using RDF), we give a machine a better chance of identifying the nature of the relationship. Of course, you still need standard tags, but it's easier to talk about named relationships than tacit ones. (And my dissertation revolved around building Semantic Web infrastructure in a peer-to-peer setting.)
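
    As a minimal sketch of that difference, using Python's rdflib (a choice of tool for illustration only; every URI and name below is invented): where XML would only nest one element inside another, RDF names the relationship with an explicit predicate.

        from rdflib import Graph, Namespace, Literal

        EX = Namespace("http://example.org/")   # throwaway namespace for the example

        g = Graph()
        # The link between the book and its author is a named predicate,
        # not just an implicit parent/child nesting.
        g.add((EX.book42, EX.hasAuthor, Literal("J. Doe")))
        g.add((EX.book42, EX.publishedIn, Literal(2004)))

        print(g.serialize(format="turtle"))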
  • by Anonymous Coward on Thursday May 25, 2006 @11:53AM (#15402212)
    There are plenty of OWL editing tools, besides which most people won't be writing their own ontologies anyhow.

    There are already lots of inferencing engines, too - Sesame, cwm, etc. It's really not a big deal; the whole point of RDF is that the architecture makes this stuff easy.
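
    As a rough illustration of the kind of query such a store answers, here is a small SPARQL example, again using Python's rdflib purely for brevity (Sesame and cwm expose similar querying through their own interfaces; the data and URIs are invented):

        from rdflib import Graph, Namespace, Literal

        EX = Namespace("http://example.org/")

        g = Graph()
        g.add((EX.acme, EX.tradedOn, EX.nyse))
        g.add((EX.acme, EX.closingPrice, Literal(42.0)))

        # Find every company traded on the (made-up) ex:nyse resource, with its closing price.
        rows = g.query("""
            PREFIX ex: <http://example.org/>
            SELECT ?company ?price
            WHERE { ?company ex:tradedOn ex:nyse ; ex:closingPrice ?price . }
        """)
        for company, price in rows:
            print(company, price)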
  • by Anonymous Coward on Thursday May 25, 2006 @12:15PM (#15402444)
    "There are plenty of OWL editing tools, besides which most people won't be writing their own ontologies anyhow. There are already lots of inferencing engines, too - Sesame, cwm, etc. It's really not a big deal; the whole point of RDF is that the architecture makes this stuff easy."

    CWM sucks big time. Just go ask the semantic web researchers out there how awful it is and how poorly it scales. In fact, Google it and see what results you find.
