Challenging the Ideas Behind the Semantic Web 144

Posted by ScuttleMonkey
from the there-isn't-any-deception-on-the-internet dept.
mytrip writes to tell us that after a recent presentation to the American Association for Artificial Intelligence (AAAI) Tim Berners-Lee was challenged by fellow Google exec Peter Norvig citing some of the many problems behind the Semantic Web. From the article: "'What I get a lot is: "Why are you against the Semantic Web?" I am not against the Semantic Web. But from Google's point of view, there are a few things you need to overcome, incompetence being the first,' Norvig said. Norvig clarified that it was not Berners-Lee or his group that he was referring to as incompetent, but the general user."
  • by CTalkobt (81900) on Wednesday July 19, 2006 @01:49AM (#15741458) Homepage
    is the users.

    Not the ones searching but the ones creating the content.

    There'll be some idiot out there (like there is now) who will code his data in a way that guarantees that he gets the most page views, so that often-searched terms turn up in search indexes and the like.

    It's a loosing proposition unless you come up with filters but then they have their own set of problems.

    • I'm calling the Anti-Neutrality Web Designers of Amerika!

      Demands of inequality such as this should be allowed!

      (btw, the spelling doctor has "loosing" as in "loosing the hownds for the huhnt")
      • I don't think he's "another Anti-Semanticist". He's just saying that the whole semantic Web concept is based on this: that people will classify content properly and in good faith. Let's be fair, what are the chances of it not being abused? And if so, doesn't it mean that the semantic Web is doomed from the start?

        Think of all the things that were fouled by abuse. Email was a very sweet thing until it got perverted by spam. Newsgroups too. If the possibility for abuse exists, it will happen.
    • Could be worse. You could try to make it find something useful within the domain "myspace.com".
    • by CRCulver (715279) <crculver@christopherculver.com> on Wednesday July 19, 2006 @02:11AM (#15741497) Homepage

      ...is the users. Not the ones searching but the ones creating the content.

      Sure, the technical limitations of Joe Public might slow the growth of the Semantic Web on the whole, but what few people realize is that the Semantic Web has already existed for years in in-house or limited-audience networks. Just look at FOAFnaut [foafnaut.org] (an update in a few weeks will return it to full usability) or the very much real-world examples in Geroimenko & Chen's Visualizing the Semantic Web [amazon.com] (Springer, 2005).

      • The problem with users (authors) is valid when we consider individual authors creating data (RDF, HTML, ...) "by hand". TimBL has referred to the Semantic Web as a global database of knowledge (as compared to the current web of text content). The problem of incompetent users goes away, and a higher value of data is achieved, when already existing content and databases are exposed on the Semantic Web. Think of sites like Slashdot, wordpress.com, amazon.com, the NY Times, ...

        Authoring of RDF data is not so different from a
        • Here is a Tutorial on the Semantic Web [w3.org].

          Pay attention to slide #22, which shows how data from different sources can be merged together. This is one of the key differences between XML and RDF - to merge XML data from a number of different schemas, one would need to create an application that processes data in these schemas and generates the merged data (possibly inventing a new schema to represent the merged information).

          In RDF that happens "magically" - in order to merge heterogeneous data you don't need to do *anything*.
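
To make the parent's point concrete, here is a toy sketch in plain Python (no RDF library; all URIs invented): since an RDF graph is just a set of (subject, predicate, object) triples, merging heterogeneous sources is plain set union, with no schema negotiation and no custom merge application.

```python
# Toy sketch in plain Python, no RDF library (URIs invented): an RDF
# graph is just a set of (subject, predicate, object) triples, so
# merging two sources is set union.
site_a = {
    ("#alice", "#name", "Alice"),
    ("#alice", "#knows", "#bob"),
}
site_b = {
    ("#bob", "#name", "Bob"),
    ("#alice", "#knows", "#bob"),  # duplicate statement merges away harmlessly
}

merged = site_a | site_b  # no new schema, no merge application
```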
    • We already have that problem without the Semantic Web. Semantic Web coding is not a fix for that problem, it's a fix for other problems.

      This is like saying "Don't use Open Source software because people will do bad things with it". People will do bad things with or without Open Source software, and with or without the Semantic Web.

      Anyway the article isn't very clear... By "Semantic Web", are we talking about using <div>s and <p>s instead of <table>s and <br>s? Or are we t
  • by Thakandar2 (260848) on Wednesday July 19, 2006 @01:52AM (#15741464)
    "Norvig clarified that it was not Berners-Lee or his group that he was referring to as incompetent, but the general user."

    Here I was, thinking we were arguing over Semantics...
  • Damn (Score:5, Funny)

    by ErikTheRed (162431) on Wednesday July 19, 2006 @01:53AM (#15741465) Homepage
    "...Norvig clarified that it was not Berners-Lee or his group that he was referring to as incompetent, but the general user."
    Because Norvig vs. Berners-Lee going 10 rounds in a cage is something I'd pay to see.
    • Who would win? (/troll)
      • Re:Place your bets (Score:3, Interesting)

        by moyix (412254)
        Depends--if Norvig got Russell (co-author with him of Artificial Intelligence: A Modern Approach) to go in with him for a tag-team kind of thing, they'd probably win. On the other hand, Berners-Lee has the W3C on his side, a notoriously large and heavy organization, which could be hard to topple.

        As a side note, I heard from a friend who was attending that Norvig's opening comment about people always asking him "Why are you against the Semantic Web?" was a response to Berners-Lee's opening, 'People always a
    • Re:Damn (Score:2, Informative)

      by salmon_austin (955310)
      In the U.S. regular cage fights are 3 rounds, with championship fights being 5 rounds. There are no 10 round cage fights that I am aware of anywhere in the world.
    • Do I hear a Googlefight in the making? Why yes, it's Norvig vs. Berners-Lee [googlefight.com].
  • by UR30 (603039) on Wednesday July 19, 2006 @01:56AM (#15741473) Homepage
    The current semantic web seems to offer a technology too fragile to use on the global scale: the complexity of the various classification and ontological schemes, the work needed to provide the metadata, etc. Also, the semantic web seems to offer great opportunities for spammers and other mischief makers. We already have comment and reference spamming, but the semantic web (on the global scale) raises the possibilities enormously.
    • by znu (31198) <znu.public@gmail.com> on Wednesday July 19, 2006 @02:36AM (#15741550)
      The full semantic web scheme really ignores a lot of what the Internet has taught us about what technologies succeed. It's not about grand visions and long specifications, it's about simple stuff that solves real problems of limited scope. Look at RSS, for instance; it's about the simplest thing which could do the job it does.

      I think we'll eventually realize most of the benefits of the semantic web, but it won't be a result of a grand vision imposed from the top down and implemented all at once. It'll probably be through increasing adoption of microformats [microformats.org], which don't try to classify and specify everything, and are implemented entirely using existing web standards.
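
As a sketch of why microformats need nothing beyond existing web standards (the markup below is a hypothetical minimal hCard; Python stdlib only): the metadata rides on ordinary HTML class attributes, so any HTML parser can read it.

```python
from html.parser import HTMLParser

# Sketch (hypothetical markup): microformats piggyback on HTML class
# attributes, so plain HTML tooling can extract the metadata. Here we
# pull the "fn" (formatted name) property out of a minimal hCard.
class HCardNames(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_fn = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        if "fn" in dict(attrs).get("class", "").split():
            self.in_fn = True

    def handle_endtag(self, tag):
        self.in_fn = False

    def handle_data(self, data):
        if self.in_fn and data.strip():
            self.names.append(data.strip())

p = HCardNames()
p.feed('<div class="vcard"><span class="fn">Tim Berners-Lee</span></div>')
```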
      • At some stage, things get complicated, or you are left with a mess. This is the next step, and that step comes with classifying everything. It is not such a grand vision; it just means adding extra information about objects on a page. This can be done in a number of ways, not necessarily that complicated. You just have to describe the page you put up a bit better, putting it into context, so that the computer can better interpret what that page means. It is a good thing.
      • Look at RSS, for instance; it's about the simplest thing which could do the job it does.

        But may I point out, in addition to your comment, that such technologies have fared well as long as the human element is closely involved with them. RSS, social bookmarks, tags, microformats.

        On the other hand, Tim Berners-Lee seems to stress the fact that the semantic Web is all about AI doing content classification for us. So I think it's time we remember the old joke, "artificial intelligence is no match for natural stupidity."

        • by Bogtha (906264) on Wednesday July 19, 2006 @06:52AM (#15742161)

          Tim Berners-Lee seems to stress the fact that the semantic Web is all about AI doing content classification for us.

          I don't think I've seen him stress that in the sense that the users are disassociated from the process. The Semantic Web is all about representing things like tags, microformats, etc., in a generic way.

          For example, if comment moderation was defined in terms of a relationship between a person, a comment, and an opinion, that doesn't mean a computer would be moderating comments, it just means that the same mechanism could be applied across multiple websites, without having to build moderation into the websites themselves. You could mod Dvorak -1, Troll, and everybody who lists you in their FOAF file using a browser that supports it, would see that moderation.

          Just because the focus is on making the software smarter, it doesn't mean that it's about replacing user opinions with computer opinions. In fact, the majority of Semantic Web stuff I've seen has been all about codifying user opinions to make them more accessible to computers, and thus more easily exposable to the end-user in a useful way.
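
The cross-site moderation idea above can be sketched with plain triples (all names invented for illustration): moderations are published anywhere as (moderator, rating, comment) statements, and a browser applies only those made by people your FOAF file lists.

```python
# Hypothetical sketch: moderations as (moderator, rating, comment)
# statements published anywhere on the web; a browser applies only
# those made by people listed in your own FOAF file.
moderations = [
    ("#dvorak_fan", "insightful", "#comment42"),
    ("#alice", "troll", "#comment42"),
]
my_foaf_knows = {"#alice", "#carol"}  # people my FOAF file says I know

applied = [(who, rating, comment)
           for who, rating, comment in moderations
           if who in my_foaf_knows]
```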

          • Fellow moderators, parent is the most Insightful comment I've seen in this thread; the Semantic Web is all about human-provided content represented in a common format, just like Web 1.0 was!

            Someone with points please mod it up!
    • As I pointed out in the previous comment [slashdot.org] authoring data on the semantic web is no more difficult than authoring RSS or XML.

      Yes, figuring out for the first time how to represent your data in RDF (or XML, for that matter) can be difficult. Imagine if everyone were trying to come up with an RSS standard on his own instead of using the RSS export functionality of his content management tool. That's why we need good guidelines on how to publish information on the semantic web. And RDF export functionality (plugins) simil
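
A sketch of what such export functionality might look like (plain Python; the subject URIs are invented, the title predicate borrows Dublin Core): dump existing CMS rows as N-Triples, the simplest line-oriented RDF serialisation.

```python
# Sketch: export rows from an existing CMS database as N-Triples, one
# "subject predicate object ." statement per line. The subject URIs
# are invented; the title predicate borrows Dublin Core.
posts = [
    {"id": 1, "title": "Hello"},
    {"id": 2, "title": "World"},
]

def to_ntriples(rows):
    lines = []
    for row in rows:
        subject = "<http://example.org/post/%d>" % row["id"]
        predicate = "<http://purl.org/dc/elements/1.1/title>"
        lines.append('%s %s "%s" .' % (subject, predicate, row["title"]))
    return "\n".join(lines)

ntriples = to_ntriples(posts)
```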
  • Googlebombing (Score:5, Insightful)

    by QuantumFTL (197300) * <`moc.liamg' `ta' `kciw.nitsuj'> on Wednesday July 19, 2006 @02:11AM (#15741498)
    The biggest problem with the semantic web is spam. If you can trust the tags, it's a beautiful idea. If you can't, it's worse than useless - it's a waste of time. Google has the right idea, automatic extraction of semantics from content. If there's no real content, then (hopefully) that will be reflected in the semantic analysis.

    Me, I estimate we're 5-10 years away from doing anything terribly useful with all of this stuff, but I can definitely envision the day when an internet without semantics seems as distant as an internet without Google.
    • Re:Googlebombing (Score:5, Insightful)

      by Wastl (809) on Wednesday July 19, 2006 @02:28AM (#15741537) Homepage

      The "Semantic Web" is not about search engines, as you and many other posters seem to believe. It is about representing Web content in a structured, formal way that is more easily accessed by machines, going beyond simple presentation. This can be used for searching, but also for many other applications, e.g. integration, exchange, personalisation, ... .

      Spam content on the Semantic Web is in no way different to spam content on the normal Web (well, except that it is formal). This also means that a search engine that is capable of working with Semantic Web data has exactly the same issues with trust as traditional search engines. Except that on the Semantic Web, trust can be expressed formally as well. Similar to the authorities in Google, whose outgoing links make a statement about the trustworthiness of other sites, an "authority" on the Semantic Web can make statements about the trustworthiness of other sites. However, these statements are explicit, and they could also be used to state that another site is *not* trustworthy.

      Google has the right idea, automatic extraction of semantics from content.

      Google does not extract any semantics from content. It merely analyses the linking between websites and connects that with keywords. No semantics here.

      Sebastian

      • Re:Googlebombing (Score:5, Informative)

        by QuantumFTL (197300) * <`moc.liamg' `ta' `kciw.nitsuj'> on Wednesday July 19, 2006 @02:53AM (#15741578)
        Google does not extract any semantics from content. It merely analyses the linking between websites and connects that with keywords. No semantics here.

        I believe you are referring to PageRank, which is one of many algorithms used by Google to determine search relevance. This article [seobook.com] discusses their use of Latent Semantic Indexing [wikipedia.org], which is a somewhat crude but effective form of semantic inference that is widely used in the field of NLP.
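
For the link-analysis side, the PageRank idea fits in a few lines as a toy power iteration (pure Python, three-page web, damping 0.85 as in the original paper; this is an illustration of the technique, not Google's actual implementation).

```python
# Toy PageRank by power iteration (pure Python). Three pages and their
# outgoing links; damping factor 0.85 as in the original paper. This
# illustrates link analysis, not Google's actual implementation.
links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
pages = list(links)
damping = 0.85

rank = {p: 1.0 / len(pages) for p in pages}
for _ in range(50):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outs in links.items():
        for target in outs:
            new[target] += damping * rank[page] / len(outs)
    rank = new
# "a" ends up ranked highest: every other page links to it.
```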
      • I don't get it. Most of us these days are writing webapps that spit out XML and have a CSS style sheet that makes that stuff pretty. So what's left, standardizing how the XML should be structured? Maybe instead of dictating how it shall be, the Semantic Web proponents should go out, look at what XML people are spitting out, and do something useful with it. Then people will see that they too can offer users something useful by making their XML more readable by your tools. Why is it that folks like
          Most of us these days are writing webapps that spit out XML and have a CSS style sheet that makes that stuff pretty.

          XML has no semantics whatsoever. None. It's a way of serialising and unserialising a tree of elements and attributes. It's markup languages that are built on top of XML that contain the semantics. Part of the Semantic Web is finding a good representation for the deeper semantics that are pervasive on the web. Think less about "This bit of text is a paragraph" and more about "Thi

      • I agree. By default Google considers any link to a page an implicit endorsement of the page. Which is a problem: you see stuff like comment spam attempting to increase PageRank by making it appear that Slashdot endorses a certain site when that isn't the case.

        There's an extension to disable this, something like rel="nofollow", that says, essentially, the link should not be considered an endorsement.

        But even more useful would be the possibility to explicitly say what relation you have to some site.
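
The attribute in question is rel="nofollow": a crawler honouring it would skip such links when counting endorsements. A minimal sketch with Python's stdlib HTML parser:

```python
from html.parser import HTMLParser

# Sketch: collect links marked rel="nofollow", which tell a crawler
# the link is not an endorsement (e.g. links in user comments).
class NofollowFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nofollow = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "nofollow" in a.get("rel", "").split():
            self.nofollow.append(a.get("href"))

p = NofollowFinder()
p.feed('<a href="http://spam.example/" rel="nofollow">buy now</a> '
       '<a href="http://friend.example/">my friend</a>')
```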

      • Re:Googlebombing (Score:2, Informative)

        by navarroj (907499)
        > Google has the right idea, automatic extraction of semantics from content.
        >
        > Google does not extract any semantics from content. It merely analyses the linking between
        > websites and connects that with keywords. No semantics here.

        Google does extract semantics from content in a few particular domains: addresses and business info for Google Maps, show times and additional information on movie searches, dates and appointments from Gmail to Google Calendar, ...

        The semantic web has already
      • Similar to the authorities in Google, whose outgoing links make a statement about the trustworthiness of other sites, an "authority" on the Semantic Web can make statements about the trustworthiness of other sites.

        Want to manage a $10 billion company in ten years? Here is your plan...

        Just my two cents, soon to be gazillions...
      • And no semantics anywhere (outside of humans) either, just sets of relationships attached to symbols stored in computers. I think it's important to say that the semantics in OWL (and the same goes for FOL) come from an agreement between and within human communities, framed in natural language (not maths; the meaning of the maths is what we are agreeing on), and are therefore subject to debate.

        As an example some people don't accept constructed proofs as valid. This makes a lot of physics and maths ina
      • It's trying to impose structure on something that is not very structured--human thought. Even the use of the word "semantic" points out the futility of the exercise, as it indicates language and changes in meaning--not structure.

        Semantics is a human discipline--it is focused inward, not outward. Likewise the proper place for semantic technology is in the client, not the content. Building "semantic web sites" makes no sense. Google is absolutely right on this one--Web sites should simply be what they are, an
    • The biggest problem with the semantic web is spam. If you can trust the tags, it's a beautiful idea. If you can't, it's worse than useless - it's a waste of time. Google has the right idea, automatic extraction of semantics from content. If there's no real content, then (hopefully) that will be reflected in the semantic analysis.

      yes, in theory, nobody needs google in a semantic web populated by lawful good users. But the power of google is in its verification, which will translate to the semantic world. u

    • Re:Googlebombing (Score:3, Interesting)

      by radtea (464814)
      Google has the right idea, automatic extraction of semantics from content.

      But content has no semantics.

      Meaning is a verb, and "to mean" is an action of a knowing subject. Communication is an attempt to stimulate the same meanings in multiple subjects--kind of a psychological choreography.

      As such, meaning is not extracted from content, ever. Rather, probable meaning is inferred from content, and the basis of inference is fundamentally psychological. What a given word, symbol, sentence, paragraph or page m
  • by rsidd (6328) on Wednesday July 19, 2006 @02:14AM (#15741502)
    Thanks for the illustration of what Norvig meant. How is "Google Director of Search and AAAI Fellow Peter Norvig" (original article) semantically equivalent to "fellow Google exec" (Slashdot summary)? The latter suggests that Tim Berners-Lee too is a Google exec, which would be news to him.
    • by TrappedByMyself (861094) on Wednesday July 19, 2006 @06:40AM (#15742132)
      Bingo! You've just proven that the incompetence spreads beyond MySpace.
      The problem with the semantic web movement is this: you have the web guys from the W3C, who got famous by building kinda crappy but effective technology (HTTP, HTML, etc...), going goo goo gah gah over PhD ontologists from the AI community. They team up and build these great things that the average person (including the people who think they are really, really smart, like the Slashdot editors) has no chance in hell of using effectively. What'll happen is that eventually there will be useful Semantic content and Intelligent Agents doing great things, but that work will be done by a select few. The unwashed masses will still be the domain of Google.
  • Semantic webs (emphasis on plural) produced by editors such as those at /. or in the consumer-rated style of Digg, Del.icio.us etc might actually work. Trusting authors to do it right is a disaster, as Norvig suggests.
  • by IvyMike (178408) on Wednesday July 19, 2006 @02:26AM (#15741528)
    It's really, really difficult to get people to follow rules. We're lazy, we're incompetent (yes), and some of us are evil. I still don't think I truly understand how RDF is supposed to work exactly, and it doesn't even seem like it will be fun to try.

    On the other hand, it's really easy to release a million monkeys and let them create what they will. It's not so easy to sort through what they end up producing, but Google does a surprisingly good job of this.

    It reminds me of the early days of the Web, when companies like CompuServe and AOL wanted to design and own all content. On the other hand, an internet server with httpd let anybody make a ~/public_html directory and put up whatever they wanted to. The million monkeys won that battle. I think they'll win this one, too.
    • People who want to add extra information to their page can; it doesn't all have to change at once. The people who do add semantic information to their page will be indexed better, or by a different browser which produces only relevant results - this is a huge advantage. Then this will be popular, and more and more people will add the extra information (which, for sure, takes extra time). If people spam the system, or put in incorrect information, they are excluded. It is possible, and it is the next step -
    • It's really, really difficult to get people to follow rules.

      Especially if the rules appear to be an incomprehensible ad-hoc mix of principles taken from a dozen not-quite-fully-baked AI dissertations.

      I still don't think I truly understand how RDF is supposed to work...

      I don't think anyone does.

      I'm not saying that the semantic web is bullshit, but it does trigger my bullshit detector. At least one of them must be broken.

      • I still don't think I truly understand how RDF is supposed to work...

        I don't think anyone does.

        RDF's core idea is simple. Give everything a URI. Express relationships as a set of three URIs, (subject, property, value). So you might have (#me, #friend, #bob) expressing the idea that Bob is a friend of mine. Or you might have (#photo, #contains, #me), expressing the idea that I'm in a photo.

        RDF is little more than a mechanism for expressing relationships. It doesn't give software the abilit
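
The triple idea above can be sketched directly in a few lines of Python (all URIs invented): a store is a list of triples, and a query is a pattern in which None acts as a wildcard.

```python
# A minimal sketch of RDF's data model (URIs invented): facts are
# (subject, predicate, object) triples; a query is a pattern in
# which None acts as a wildcard.
triples = [
    ("#me", "#friend", "#bob"),
    ("#photo", "#contains", "#me"),
    ("#photo", "#contains", "#bob"),
]

def match(pattern, store):
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who is in the photo?
in_photo = match(("#photo", "#contains", None), triples)
```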

          • This is a little bit like saying "Computer science is easy. It's all just ones and zeros."

            The representation isn't the problem. The problem is agreeing what the relationships mean. What does "#friend" mean? Does it mean the same thing to program X as it does to program Y? How can you tell? What do you do when there's a conflict -- who gets to decide what #friend means, and whether this is a global or local definition? These are questions that I've never heard answered in any believable manne

            • The representation isn't the problem. The problem is agreeing what the relationships mean.

              That problem is not the problem that RDF addresses. It just gives you the tools so that you can concentrate on solving that problem instead of worrying about all the crap underneath. It's the same way XML doesn't address semantics: it just gives you tools so you can focus on semantics without worrying about parsing.

            What does "#friend" mean? Does it mean the same thing to program X as it does to program Y? How c

            • I don't think I'm disagreeing with you.

              Like XML, the notation is just a beginning. It's nice if everyone agrees to use the same syntax to express information (even if it's somewhat gnarly, like XML) but that just saves everyone the effort of writing a bunch of boilerplate code. As someone who has been using IDLs and markup languages for decades, XML and/or RDF doesn't excite me much. It's those other problems -- the ones beyond their scope -- that remain unaddressed.

              Writing the URIs is where all the

                • Writing the URIs is where all the pain is. I don't see any difference between this and hashing out any other protocol spec.

                The difference as I see it is simply that the protocol is being specified at a higher level, which means that if you have the right libraries, it's just less work to implement.

                It's those other problems -- the ones beyond their scope -- that remain unaddressed.

                My perspective is that you don't stand a chance of solving the larger problems in a generic way until you solve the s

    • It's really, really difficult to get people to follow rules. We're lazy, we're incompetent (yes), and some of us are evil. I still don't think I truly understand how RDF is supposed to work exactly, and it doesn't even seem like it will be fun to try.

      It's not about following rules. It's about offering some kind of incentive. The major disincentives are that RDF is a confusing, poorly engineered spec and that it probably won't provide them any benefit. You can't call someone lazy or evil for having common se
    • On the other hand, it's really easy to release a million monkeys and let them create what they will ... and that's how the Semantic Web is supposed to work.

      The SemaWeb is all about human-provided content represented in a common format, just like Web 1.0 was! HTML was the format for hyperlinked generic information chunks ("pages"); RDF is the format for hyperlinked metadata-annotated chunks.

      The main difference is that HTML was, at the beginning, a very simple common format (that's not true nowadays, though). Machi
  • by robolemon (575275) <nertzy@gMENCKENmail.com minus author> on Wednesday July 19, 2006 @02:30AM (#15741540) Homepage

    From http://www.7nights.com/asterisk/archive/2004/03/dont-blame-the-users [7nights.com]

    Blaming the users for anything should raise a huge red flag that you've got some usability problems.

    Maybe the Semantic Web should aim to be useful to people rather than require people to be useful to it. There has to be a better way than trying to educate droves of people to a problematic and vulnerable design.

    • Blaming the users for anything should raise a huge red flag that you've got some usability problems.

      Bollocks! The fact that flying an F22 is probably fatal for untrained grandmothers does not mean it has "usability problems" - not every task in life is meant to be done by idiots, and the more effort is put into idiot-proofing software, the less is put into reliability, functionality, and extensibility for the rest of us. Some things are too hard for a segment of the population to do, and ontologically
      • Yes, great example. This invalidates his point completely.

        By definition, blaming the user is wrong. If your grandmother is a user of an F22, then the machine should not stop her from trying to fly it. A computer user should be able to use a computer without getting an infected machine by checking their email or going to a webpage. And this is what has happened. When you buy a computer today, it will come with a virus checker and spyware checker, and a better browser (hopefully) - why, what would you do. Just
        • There's a difference between a tool written for a task which requires prior knowledge, and insecurely written software. A decent email client wouldn't automatically open attachments by default without asking, and a decent web browser wouldn't run code using greater privileges than the current user in any case. I'll grant you that. However, stating that a CAD program is poorly designed because it's difficult for a new user to grasp would be incorrect. Not everyone is trained in CAD/CAM, so the interface of th
      • Bollocks! The fact that flying an F22 is probably fatal for untrained grandmothers does not mean it has "usability problems" - not every task in life is meant to be done by idiots, and the more effort is put into idiot-proofing software, the less is put into reliability, functionality, and extensibility for the rest of us. Some things are too hard for a segment of the population to do, and ontologically tagging complex relationships between data entries may simply be beyond the average user. That's not a bu

      • You know, there are some instances where the general population is right and the professional is wrong, and this is one of them. The normal people know that something that helps organize a democratic medium needs to be democratic too - it needs to be understood by the masses, or it won't take off.

        As long as you stay in an ivory tower and target the semantic web only to you and your peers, you don't grasp what it is really all about. And it will just stay on the ground.
      • The context will tell you if you have usability problems or not.

        If an important group of users is grandmothers, trained or otherwise, and they can't use your product or service (call it F22 or a kettle) then you have got usability problems and you have got to address them.

        Insulting the intelligence of your intended audience is a typical no-no for somebody knowledgeable about the rudiments of usability theory and practice.
        • What if even the users themselves complain about incompetent users? Would you still say that there is no such thing as incompetent users?

          This is not a hypothetical situation: people on my forum complain all the time about idiotic posts on the forum, despite the hundreds of man-hours I have put into organizing the information in easy-to-find ways and redesigning the website.

          (FYI, I'm not talking about the Autopackage website)
          • What if even the users themselves complain about incompetent users? Would you still say that there is no such thing as incompetent users?

            Then some of your users are making the same mistake that you are. Please look past their opinions and think about whether you want more people to download your software. If you get enjoyment out of making fun of some people and calling them incompetent, then you're set. Otherwise, try to be humble and put yourself in these other people's shoes. How would you like it if t

            • Wanting to help people is good and all. In general, I still want to help my users. But after 3 years, things get old quickly when you read the same question for the 84235823th time (despite massive efforts to redesign the website to make the answer to that question easy to find). Take a look at help desks. Have you ever seen a help desk operator who likes his job? I've never seen one, or even heard of one. Take a look at the administrators/moderators of some major forums where non-technical people go to.
    • This is precisely the point. If I had mod points, etc, etc.

      Far too many geeks have far too many ideas about 'socialising' (as in human relationships, not political agendas) the Internet, and their method of doing so is so far removed from normality it's not funny. People don't care about XML DTDs or FOAF; it just needs to work. Otherwise we're just building a big database with such odious standards for data normalisation and invasions of privacy that it's no longer a tool for us, but one against us (and I mean th

    • On my website, there are a few links, among which are:
      - Download
      - Forums
      The Download and Forums links are next to each other, and highly visible (48x48 icons with labels). But people go to the forum to ask where they can download my program! When I ask them why they didn't click on the Download link, they don't give an answer.

      If that isn't user incompetence, then what is it? And yes, this happened for real. In fact, it happens all the time, so it's not just 1 or 2 people.
      • If that isn't user incompetence, then what is it? And yes, this happened for real. In fact, it happens all the time, so it's not just 1 or 2 people.

        It's evidence that you should consider changing your layout.

        You know that they have trouble finding your download link, yet you're stubborn enough not to try to improve your site? That's pretty closed-minded.

        I know that it's hard to think that other people could see things differently than you do. Maybe if you want people to download your software more tha

        • You know that they have trouble finding your download link, yet you're stubborn enough not to try to improve your site? That's pretty closed-minded.

          Why do you think I don't try to improve my site? I do it all the time.

          A few points:
          1. I've already redesigned the website twice, and people still ask at the forum where they can download it.
          2. When I ask those people why they can't find it, they never give an answer! How am I supposed to know what they think when they don't even reply?
          3. I asked a lot of other p

          • First off sorry if my tone has been a bit combative. I'm very passionate about this issue and I'll try to tone it down a bit. My observations come from experience in user interaction design involving actual user interviews and watching people interact with sites. You'd be surprised what happens. Really. People aren't logical. There's a lot of good literature on it too if you're interested.

            1. I've already redesigned the website twice, and people still ask at the forum where they can download it.

            I underst

        • Are you just trolling, or are you really serious?

          The poster said that the links are next to each other. Unless you have seen the site in question, I don't think you are in any position to bash its layout.

          There are people I seriously think shouldn't be on the Internet. Heck, there are people I think shouldn't even own a computer. Besides IT-related issues, there are also people I don't think should be allowed to drive a car, use a credit card, raise children, have dogs, etc.

          An interesting aspect is that many
      • If your forum revolves around your package only, put a link or button in the same place where you put "Send" or "Reply" buttons.
      • If you're referring to the autopackage [autopackage.org] website I think I know why you're getting those questions.

        There's more than a dozen hyperlinks on the main page. None of them say "Download".
        Okay. I'll go to the "Help & Support" section. None of the links there say "Download" either.
        What's left on the main page that seems vaguely relevant? "Packages; various packages"? I don't want various alternative packages, I want autopackage.
        Okay, I'll check the FAQ link on the main page. Do a search for the word "Down

        • "If you're referring to the autopackage [autopackage.org] website I think I know why you're getting those questions."

          No, I'm not referring to the Autopackage website. In fact, Autopackage is not supposed to be downloaded by end users.
      • Not sure if you mean this autopackage page [autopackage.org] or not, but that doesn't have a button that says "downloads"... In fact, it doesn't say downloads anywhere until you click on Packages, then see "Downloads" as the page title. If the question is truly one of the most asked questions, it's not under the "Most Asked Questions" section. It should at least have one of those big buttons on the right... "Download now" etc.

        Again, I don't know if that's the site you're referring to.
    • Take a look at this [sourceforge.net]. It was posted on a support forum. If that isn't an incompetent user then what is it?
  • Web of Trust (Score:5, Interesting)

    by VDM (231643) on Wednesday July 19, 2006 @02:40AM (#15741558) Homepage
    In one of the very first papers [w3.org] mentioning the Semantic Web, a paragraph was devoted to something since lost in the hype around the semantic web: the Web of Trust, which was to be something like a certification of metadata. This perhaps deserves to be regarded again as important for the semantic web and the web in general (although it is not easy to manage).
    By the way, Norvig is not only a Google exec, but also a well-known AI researcher, author of one of the most important books [berkeley.edu] on the subject.
    • Re:Web of Trust (Score:3, Interesting)

      by cardpuncher (713057)
      Indeed. It's noteworthy that a lot of the work being done on "The Semantic Web" is by academics. They come from an environment in which there are peer-review mechanisms and established publishing channels which ensure that "trust" is the norm. Outside that world, information is generally less trustworthy but it may still be relevant. The research challenge is to make use of "trust" where it can be proved to exist but not assume it elsewhere. In commercial terms, though, it may not be worth even trying to do
    • Google are apparently already on the way to TrustRank [slashdot.org]. I can't wait to see how this works out.
  • by tfinniga (555989) on Wednesday July 19, 2006 @02:43AM (#15741563)
    Slightly offtopic. Peter Norvig gave a talk at my university on similar topics, and there was a short Q&A afterwards.

    One of the students asked him what he did for his 20% project. He said that he was usually too busy keeping tabs on what the other employees were doing with their 20% time, so he didn't quite get around to working on his. He told us what he wanted to do, as motivation for himself.

    The basic idea is that when he used to work for NASA, it'd always make him upset when people saw faces in random spots on the moon's terrain, and claimed it was aliens that NASA was covering up, or similar. So, he was planning on taking facial recognition software and running it on all of Google Earth. I think it'd be pretty awesome..
    Any progress yet, Mr. Norvig? I'd love to see the results.. :)
  • by Anonymous Coward
    The jig is up!
  • by Mofaluna (949237) on Wednesday July 19, 2006 @02:59AM (#15741585)
    It's the business users too that are a problem. I'm currently trying to get a project on the rails based on semantic web technology, and I'm confronted with an IT department where some are even struggling with the difference between subtyping and instantiation, let alone more advanced modelling issues... It doesn't help, of course, that most people have never even heard of conceptual modelling languages such as ORM [orm.net] but instead were taught to use UML and ER, where it's the modeller's responsibility to make a distinction between what is conceptual, logical and physical, which of course most never did.

    In regards to the Google issue, I think the idea that you should crawl everything is faulty because you need to be able to trust the source. Most ontologies will simply be restricted to a certain domain and corresponding user group, often in a B2B context. Integrating every man and his dog, the lawnmower and the kitchen sink with some kind of top-level ontology is merely a nice-to-have philosophical issue that I don't expect to be solved in the near future, if only because we haven't seen much advance since Aristotle started toying around with the idea. In other words, at Google they are worried about an issue that's at least a decade away from now, probably even more.
  • Hmph... (Score:5, Funny)

    by Jello B. (950817) <jellobmello.gmail@com> on Wednesday July 19, 2006 @03:06AM (#15741596) Homepage
    That anti-semantic bastard...
  • by AlXtreme (223728) on Wednesday July 19, 2006 @03:07AM (#15741598) Homepage Journal
    The semantic web is, in my eyes, a typical chicken & egg problem. You've got loads of content on one side, yet current search engines work well enough to not worry about representing that content in a structured way in a markup language like OWL. On the other side, you've got embarrassingly few semantic web applications that use structured content. How is a typical web developer going to justify structuring the content on his side if he can't point to an example of how it could improve shareholder value? What would exporting our databases in OWL currently solve?

    True, the web had a similar problem; however, creating a webpage is a lot more interesting than structuring data (you see the results directly, and however terrible they might be, you do see a result). The latter takes a lot more work, and the direct benefit just isn't there.

    Sem-Web-like standards like RSS, XML and SOAP have become mainstream, but primarily because they fill a gap. The adoption of RDF or OWL simply doesn't solve anything. Yet. It would be cool to let agents loose on the semantic web and have them come back with a summary on a certain subject drawn from a multitude of sources, but as long as it's easier to Google, I don't think it would generate any interest outside academia.

    Feel free to prove me wrong though.

  • Even if we are inherently lazy, and even though some people seem to be generally against the idea, it doesn't make any sense to me not to employ this and experiment with it. Norvig is an AI guru, and his ideas on the Semantic Web may be interesting, but Google is not against the idea. Google's GData looks to me like a primitive Semantic Web. Even if only 10% of web masters adopt the system, querying to find a set of results that have been tagged as certain meta-data can come up with some interesting resu
  • by Anonymous Coward on Wednesday July 19, 2006 @04:09AM (#15741733)
    The idea of RDF is applicable to much more than public web content. I've spent the last 7 months researching and developing an RDF-backed system for my company's core products. Everyone should think of the value of RDF beyond the scope of trust, and then it becomes easy to realise methods of simple non-web implementation. We can all spend the next 5 years pondering how we're going to figure out trusted content providers for RDF web data, or we can just start developing apps for sources which understand themselves as trusted (i.e. data input from an individual, employees of a company, and any group where the individual must be accountable for their actions). What's more important than the blind trust of sources is data verification. There are ways to run data input from one user by another user for validation, without doing it in an infringing, demanding way. I'd like to go into detail of exactly what I mean by all this, but I don't want to violate any portion of my NDA or tip off industry competition (I know that sounds retarded, sorry). If RDF does gain popularity, I can say it will be from within the private sector, not the public. Genius implementation may bring RDF to the public sector, but that's not something I would say is guaranteed to happen.

    Current technical obstacles to creating any RDF application: the complexity of its integration into DB-backed systems (popular methods), and instantiated class marshalling within not-so-object-oriented languages. The technical design and implementation of a standards-compliant RDF system has been extremely difficult for me. I don't think it would ever be possible to get RDF data represented nearly as minimally as you could with simple relational tables (although formally it's no more bloated than bloaty XML). RDF also creates many long linked relationships; this tends to create some serious performance issues in querying the data. Lastly, I hate XML, and you can't always correctly export from RDF to XML (capable type to incapable type).
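    The performance point about long linked relationships can be made concrete with a toy example. When every fact is a (subject, predicate, object) row in one table, a question that is "two hops away" already chains three lookups; in SQL this becomes repeated self-joins on the triples table. A minimal sketch in Python (all names and facts here are invented for illustration):

    ```python
    # Minimal in-memory triple store: every fact is one (subject, predicate, object) row.
    triples = [
        ("alice", "worksFor", "acme"),
        ("acme", "locatedIn", "berlin"),
        ("berlin", "partOf", "germany"),
    ]

    def objects(subject, predicate):
        """All objects for a (subject, predicate) pair -- one table scan / 'join' per call."""
        return [o for s, p, o in triples if s == subject and p == predicate]

    # "What country does Alice's employer sit in?" is a two-hop question, but it
    # already chains three lookups; a SQL version needs three self-joins.
    country = [
        c
        for employer in objects("alice", "worksFor")
        for city in objects(employer, "locatedIn")
        for c in objects(city, "partOf")
    ]
    print(country)  # -> ['germany']
    ```

    A relational schema with a dedicated `employees(name, employer, city, country)` table would answer the same question with zero joins, which is the trade-off the poster is describing.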
  • by Anonymous Coward on Wednesday July 19, 2006 @04:20AM (#15741769)
    This reminds me of the famous Semantic knight [xml.org] parody...
  • by Anonymous Coward
    Do not forget that the semantic Web is not a replacement for existing technologies: HTML content will always be there. But what if these little 'metadata' descriptions were added to ALL Web pages? In that case, the pages could be categorized, analysed and searched much more easily, and the algorithms for these operations would be better. In such a scenario, the use of one or another Web search engine would be irrelevant because all of them would have powerful and accurate algorithms. Maybe a
  • Brilliant! Blame the user. No, it's not that you don't have a rational data model (you know, so that those "semantic" tags actually *mean* something) or that you haven't done squat to even suggest a proper UI, it's the user's fault.

    And it *certainly* couldn't be that HTML is a piece of fucking garbage and that trying to kludge semantics into the spec is an effort doomed from the beginning.
    Some well-known researcher called the emperor naked. Maybe they'll believe him more than they did the practitioners who pointed out the Semantic Web's problems long before. Here we see that the fairy tale isn't quite true: a small child is not sufficient, we need a bigshot to notice.

    News at 11...

    Although trust is certainly an issue when it comes to the Semantic Web, the real problem is that its design is not a true abstraction, but nothing more than more metadata. And like the actual textual data in a typical web page, it suffers from all the same problems, save for one: being unstructured (and thus not truly parseable).

    IMHO, the Semantic Web is solving one problem (the lack of structure and descriptive context in textual HTML content) in a very hard way (asking the entire web to implement this
  • pardon my ignorance (Score:3, Interesting)

    by plopez (54068) on Wednesday July 19, 2006 @09:24AM (#15742735) Journal
    But what, exactly, is the definition of the 'Semantic Web'? How is it different from what has been done in the past? Is there any agreement of any sort as to what it means? If yes, please let me know. If not, then how can we achieve this goal if we do not know what it is?

    I am confused, I really do not see too many differences in the web in the last few years. Nothing 'Earth Shattering' anyway.
    • Do you understand the difference between |-separated configuration files and XML configuration files? Both are equivalent in that they provide constants for a program; but XML files can be processed by a generic parser, while "pipe" files need an ad-hoc parser.

      RDF and the layers on top of it (OWL, DAML...) try to achieve the same for other tasks. Instead of having to build separate applications to achieve the same task again and again for every website, you can reuse a generic code by having all the meaning
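      The analogy above can be sketched concretely. The ad-hoc parser only understands one exact layout, while the generic XML parser knows nothing about the application and can be reused everywhere; this is the same reuse RDF aims at for meaning rather than syntax. A rough Python illustration, with made-up config keys:

      ```python
      import xml.etree.ElementTree as ET

      # Ad-hoc parser: only works for this one "key|value" layout.
      pipe_config = "timeout|30\nretries|5"
      settings = dict(line.split("|", 1) for line in pipe_config.splitlines())

      # Generic parser: the same ElementTree code reads *any* well-formed XML,
      # so the tooling can be shared across applications.
      xml_config = "<config><timeout>30</timeout><retries>5</retries></config>"
      root = ET.fromstring(xml_config)
      xml_settings = {child.tag: child.text for child in root}

      print(settings)      # -> {'timeout': '30', 'retries': '5'}
      print(xml_settings)  # -> {'timeout': '30', 'retries': '5'}
      ```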
    The Semantic Web is in the old AI tradition of grand overhyped promises with little results to show for them many years later. AI had managed to move away from this practice, which had led to the funding crisis of the 80s, when people woke up to the fact that AI did not deliver as promised. Here at AAAI there is a sentiment that the semantic web is a step in the wrong direction, and Tim Berners-Lee's talk here was presented as such. Here's the abstract from the program:

    The relationship between AI and the sem

  • ... are still unsolved. The problems of data inconsistency (from bogus or fraudulent data entry) are bad enough, but the semantic web idea has problems even if you assume all the data is valid. There are some theoretical results on inheritance networks (a classic AI predecessor to semantic web representation) from the 1980's and 90's that are rather depressing:
    • Touretzky's dissertation where he shows that if you allow exceptions, it's hard to keep inheritance networks globally consistent
    • Another result I ca
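    The exceptions point can be sketched with a toy network (a hypothetical illustration, not Touretzky's actual formalism). Along a single parent chain, "most specific class wins" is easy to compute; the consistency trouble he describes arrives once a node has several parents whose defaults conflict, because no single walk order is then obviously correct:

    ```python
    # Toy inheritance network with an exception: birds fly, penguins (a kind of
    # bird) do not. Names are illustrative only.
    defaults = {
        "bird": {"flies": True},
        "penguin": {"flies": False},
    }
    parents = {"penguin": "bird", "bird": None}

    def lookup(cls, prop):
        """Walk from the most specific class upward; the first answer wins."""
        while cls is not None:
            if prop in defaults.get(cls, {}):
                return defaults[cls][prop]
            cls = parents[cls]
        return None

    print(lookup("penguin", "flies"))  # -> False: the exception overrides the default
    print(lookup("bird", "flies"))     # -> True

    # With multiple inheritance (e.g. a node whose two parents disagree on a
    # default), "first answer wins" depends on traversal order -- which is the
    # kind of global-consistency problem the parent comment refers to.
    ```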
  • If we want to get users to enter in metadata, we need to do three main things:

    - create editors that automate the syntactical complexities of RDF/OWL, like what blogs have done for HTML.
    - make entering metadata entertaining somehow.
    - make some killer apps that show to regular users the usefulness of the semantic web.

    Then we'll have a semantic web. Problems like spam can just be addressed as we come to them, but Web of Trust is probably a good start.
  • All I would ask of the Semantic Web evangelists is that they go off together and build a network of Semantic Web systems that proves the following:
    1. It works
    2. It is more useful than currently existing practices
    3. It is more cost-effective than currently existing practices
    4. There is a killer application that is not possible using currently existing practices
    If they can do #1 and any one of the other 3, then maybe people will see the value and start adopting it in the real world. Until then Semanti
