How Google Will Have Achieved The Semantic Web

alfaromeo points to a business feature (mysteriously available already) by one Paul Ford called "August 2009: How Google beat Amazon and Ebay to the Semantic Web." So read on for a bit of potential history from five years in the future.
This discussion has been archived. No new comments can be posted.

  • Heh (Score:5, Interesting)

    by Anonymous Coward on Sunday August 01, 2004 @05:01PM (#9859384)
    Remember back when we all thought that XML was going to achieve the semantic web by making good search engines unnecessary? Now XML has gone nowhere except as a set of popular libraries for cross-language data serialization, and we're starting to talk about just making really smart search engines.
    • Re:Heh (Score:5, Insightful)

      by primordial ooze ( 13525 ) on Sunday August 01, 2004 @05:17PM (#9859471) Homepage
      Remember back when we all thought that XML was going to achieve the semantic web by making good search engines unnecessary?

      Not really, and XML is still such a recent development that to say "Remember when" is silly if not outright disingenuous. I was at the SGML '86 conference in Boston where the XML initial draft was presented. That's less than ten years ago. Can you name an information technology that reached anything like its full potential less than a decade after its first mention?
      • Re:Heh (Score:2, Informative)

        I was at the SGML '86 conference in Boston where the XML initial draft was presented.

        That was SGML '96 of course. D'oh!
      • Re:Heh (Score:5, Funny)

        by phats garage ( 760661 ) on Sunday August 01, 2004 @05:22PM (#9859495) Homepage Journal
        I'd have to say "Microsoft Bob" peaked pretty early.
        • Oh? (Score:5, Interesting)

          by FlutterVertigo(gmail ( 800106 ) on Sunday August 01, 2004 @06:31PM (#9859792)
          Microsoft Bob succeeded, but not in the way you would have expected.

          Melinda Gates (née French) was the Product Manager of Microsoft Bob.

          (just don't brag to your friends you've known that forever)

          p.s. Microsoft Bob is|was one of the products (along with things such as RedHat) which Virtual PC can run successfully; so it hasn't disappeared completely. I still have a copy sitting here in one of my CD wallets. (Handed out at a Tech Ed or some other conference)
          • You know, I dug up an old copy of Microsoft Bob a few months ago and my GF just loved it. It's not something you can really use on a regular basis, but it is kinda fun to fire up and play with. Heck, the package included some quiz games and a rather extensive collection of knickknacks you could accessorize the rooms with.
        • Re:Heh (Score:2, Funny)

          by boarsai ( 698361 )
          Clippy has yet to reach its peak :P
      • Well the idea (Score:5, Interesting)

        by Anonymous Coward on Sunday August 01, 2004 @05:34PM (#9859544)
        didn't so much refer to XML the technology as to one of XML's proposed applications. There was a popular theory in the press when XHTML was first introduced that XML would supplant web pages and drag the web back to that primordial point when HTML was intended as a content markup language, not a display language, and even go beyond that. Supposedly we were going to wind up at a point where stylesheets went beyond just mapping XML tags onto some set of HTML4 tags: content would be a minimal set of XML-tagged text, and everything about the way the site displayed would be deferred to CSS-like technologies. And when this happened, supposedly, web browsers would be totally free to re-present stuff, and we could toss out amazon.com's presentation of, say, the search results for "Michael Jackson" (a series of paragraph-delimited links to categories (books, music, etc.) to search within, in a blocked-off area surrounded by amazon.com's navbars and logos, which then pointed to a series of pages containing little formatted blips of information about various items for sale, presented in groups of ten separated by little gray lines in a blocked-off area surrounded by amazon.com's navbars and logos), and instead have it display as a hierarchical file browser or whatever we liked.

        Well, I think it's safe to say that idea's been mostly shelved for the time being. This isn't a matter of a lack of "reaching potential"; it's a matter of total failure to move in that direction. XML has been incredibly popular as a storage mechanism but has had roughly zero uptake as a communication mechanism. (There have been communication substrates, such as XML-RPC, based on XML, but that's not the same thing.) I don't know if it's fair to expect a technology to come to fruition within 8 years of being proposed, but I think it's fair to assume that unless we see some sign of progress, or of interest in progress, within 8 years, there's no reason to expect further progress within the 8 years after that.
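        To make the split being described concrete, here is a minimal, purely illustrative sketch in Python (mine, not the poster's; the element names are invented): the content is a bare XML fragment, and each client decides for itself how to present it.

        import xml.etree.ElementTree as ET

        # Content only: no layout, no navbars, no logos.
        CONTENT = """
        <searchresults query="Michael Jackson">
          <item category="music" title="Thriller" price="9.99"/>
          <item category="books" title="Moonwalk" price="14.99"/>
        </searchresults>
        """

        def render_as_list(root):
            # One client shows a flat list, roughly the way a store page might.
            return "\n".join(
                "- {} ({}, ${})".format(i.get("title"), i.get("category"), i.get("price"))
                for i in root.findall("item"))

        def render_as_tree(root):
            # Another groups items by category, like a hierarchical file browser.
            grouped = {}
            for i in root.findall("item"):
                grouped.setdefault(i.get("category"), []).append(i.get("title"))
            return "\n".join(
                cat + "/\n  " + "\n  ".join(titles) for cat, titles in grouped.items())

        root = ET.fromstring(CONTENT)
        print(render_as_list(root))
        print(render_as_tree(root))

        Same data, two presentations; nothing about amazon.com's layout survives in the content itself.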
      • Re:Heh (Score:2, Funny)

        by Anonymous Coward
        "Can you name a information technology that reached anything like its full potential less than a decade after its first mention?"

        online porn
    • Re:Heh (Score:3, Insightful)

      by jZnat ( 793348 )
      Don't quote me on this (seriously, because if you do, I will cut you), but I thought Microsoft was finally migrating a lot of their proprietary formats to XML. I think it's a good idea, but then again, what if they patent their XML formats?

      Yeah, just letting you know that XML is actually going somewhere.
      • Re:Heh (Score:5, Funny)

        by Profane MuthaFucka ( 574406 ) <busheatskok@gmail.com> on Sunday August 01, 2004 @05:36PM (#9859557) Homepage Journal
        Careful, they might implement their XML with just one tag, called 'data', and just stick their regular old Word documents into that as an encoded binary.
      • Re:Heh (Score:3, Informative)

        by Anonymous Coward
        but I thought Microsoft was migrating to XML usage for a lot of their proprietary formats finally

        And they're using it as a data serialization format: just a way to store some structured data. The nature of that structured data is free to remain just as proprietary as if it were stored as a big slab of binary.

        The initial promise of XML was that it would serve not just as a popular library for serializing structured data, but as a common platform for communicating data.
    • Re:Heh (Score:5, Insightful)

      by KefabiMe ( 730997 ) <(moc.ronohj) (ta) (htrag)> on Sunday August 01, 2004 @05:24PM (#9859507) Journal

      I RTFA. Intriguing, but it would be a huge struggle for Google to become like anything in the article. There's too much money in having the right information at the right time.

      "Now XML has gone nowhere except as a set of popular libraries for cross-language data serialization..."

      XML is still getting more popular and more accepted with each passing month.

      The biggest issue is that there are a few monstrous companies out there that want to control the standard for how information is shared, and mutate XML into some proprietary form that they alone can control.

      XML is a good thing, like most standards. Standards can fall short at times, especially when the uber-companies start fighting for control over them. I believe that this fight for control will do more to prevent the easy transfer of data than any problem with XML itself.

      • Re:Heh (Score:4, Insightful)

        by Zeinfeld ( 263942 ) on Sunday August 01, 2004 @09:23PM (#9860517) Homepage
        "Now XML has gone nowhere except as a set of popular libraries for cross-language data serialization..."

        If there were a sentence in the article that proved it was rubbish, this would be it. XML is not just slightly popular; it is now the de facto structured data representation. There is no competitor; there is simply no other format used in new protocol standards. Within ten years the DNS will have migrated to an XML format.

        RDF, on the other hand, was a not very good idea to start with that has not exactly improved with the years. All RDF is, in principle, is typed set-theory logic, so instead of trying to define a new set of semantics, why not simply import Z or VDM wholesale?

        The second problem with RDF is that it is really hard for a grad student to write an operational or denotational semantics for a programming language, a field that has only been worked on solidly for thirty years or so. So now we are expected to be defining semantics for everything???

        The way that semantics get attached to syntax is through use. Use in this case means a program. I don't know that there is any RDF application out there that is likely to go much of anywhere soon.

        I think that the way to get to a semantic web is completely different. You start from XML documents rather than attempt to change what the world chose for syntax. You build simple operational vocabularies of common terms for use in catalogues and make it really easy for people to categorize their work within those catalogues. You take as your starting premise that any structure of knowledge is going to be a work in progress.
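        As a rough illustration of what such an operational vocabulary might look like (my sketch in Python, not the poster's; the terms are invented): a small, agreed-on set of catalogue terms plus a helper that lets an author file a document under them, refusing anything outside the shared vocabulary.

        # The shared, agreed-on vocabulary of common terms.
        CATALOGUE_TERMS = {"engines", "wine", "books", "music"}

        def categorize(title, terms):
            # Refuse terms outside the vocabulary so every entry stays comparable.
            unknown = set(terms) - CATALOGUE_TERMS
            if unknown:
                raise ValueError("not in the shared vocabulary: %s" % sorted(unknown))
            return {"title": title, "categories": sorted(terms)}

        print(categorize("Four-stroke engines explained", {"engines"}))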

        • Re:Heh (Score:5, Insightful)

          by mcrbids ( 148650 ) on Monday August 02, 2004 @03:39AM (#9861690) Journal
          Within ten years the DNS will have migrated to an XML format.

          I've heard some RETARDED statements on /. before, but this nearly takes the cake. DNS using XML?

          Whatever you are smoking, I want some - 'cause it's clearly some REALLY GOOD SHIAT!

          Given that:

          1) DNS is a protocol [freesoft.org], not a data format, and

          2) XML is a data format, not a protocol [w3.org], and

          3) DNS is incredibly light and efficient, and

          4) DNS has already proven that it scales well to just about any size, and

          5) XML offers no particular advantage, since you could serve DVD ISOs over the DNS, and

          6) moving to an "XML PROTOCOL" format would require the update of every single DNS server on the face of the earth, many of which are still running Bind 8.x, and some are still running BIND 4.X for god's sake,

          I consider this to be HIGHLY UNLIKELY(tm) !!!!!
    • Re:Heh (Score:5, Insightful)

      by Anonymous Coward on Sunday August 01, 2004 @05:35PM (#9859554)

      Remember back when we all thought that XML was going to achieve the semantic web by making good search engines unnecessary?

      Nope. I remember a bunch of people with no clue hyping it up as such, but anybody actually involved with XML in any technical capacity, including its creators, understood that it was simply a standardised syntax for file formats. So-called pundits jumped on each other's bandwagons in touting it as some kind of miracle, but anybody who actually knew what they were talking about wouldn't have made the claims about XML that you describe.

    • Re:Heh (Score:3, Insightful)

      by rf0 ( 159958 )
      Surely the best data structure would be normal web pages with a system that can understand natural language.

      Rus
      • Natural language can be very difficult for machines to parse properly, though it would obviously be very easy for humans to understand. I agree, though: If a good enough (and fast enough) natural language parser existed, we could build the semantic web using the content of the existing web.
        • Machines do not cope well with ambiguity, at least not any that I've ever run across. Humans have always had to cope with an ambiguous world. It isn't just a problem of parsing; it's a problem of what the elements are that the text gets parsed into. The meanings of the words are in part determined by the context in which they are used.

          This sentence no verb.
    • Re:Heh (Score:3, Interesting)

      by Metasquares ( 555685 )
      XML is still slated to achieve the semantic web - it's just XML + RDF + another language on top (looks like OWL right now, but it's been changing for a very long time). Unfortunately, it has become a nightmare to annotate a page for use on the semantic web in this fashion. I know: I've tried.
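      For a sense of what that annotation involves, here is a minimal sketch using Python's rdflib (my example, not the poster's; the page URL and topics are made up). Even this trivial page description needs a graph, namespaces, and typed nodes before OWL enters the picture at all.

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC, FOAF, RDF

      g = Graph()
      page = URIRef("http://example.org/my-page.html")        # hypothetical page

      g.add((page, RDF.type, FOAF.Document))                  # "this URL is a document"
      g.add((page, DC.title, Literal("My page about wine")))
      g.add((page, DC.subject, Literal("Chardonnay")))

      # Serialize as RDF/XML, the form you would attach to (or link from) the page.
      print(g.serialize(format="xml"))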

      In any case, search engines would still have to exist, though they would probably exist as a chain of agents each sending queries to other agents.

      I find it interesting that the article compared semantic web logic
    • Re:Heh (Score:3, Insightful)

      by jjoyce ( 4103 )
      XML doesn't mean that computers will be able to classify information based on semantics, it just packages data so that the computers don't have to do that work. Somebody still has to mark up the information.
    • Add regular expression support to Google and all the web's problems will be solved. :)
  • by Anonymous Coward on Sunday August 01, 2004 @05:03PM (#9859396)
    How Google became self-aware and took over the world.
  • Semantic Web (Score:5, Informative)

    by ZeroExistenZ ( 721849 ) on Sunday August 01, 2004 @05:06PM (#9859405)
    Source [eod.com]

    Semantic Web, proper noun

    An attempt to apply the Dewey Decimal system to an orgy.

    Or [wordiq.com]

    The Semantic Web is a project underway that intends to create a universal medium for the exchange of information by giving meaning, in a manner understandable by machines, to the content of documents on the web. Currently under the direction of its creator, Tim Berners-Lee of the World Wide Web Consortium, the Semantic Web extends the ability of the World Wide Web through the use of standards, markup languages and related processing tools.

  • First post? (Score:5, Interesting)

    by primordial ooze ( 13525 ) on Sunday August 01, 2004 @05:07PM (#9859416) Homepage
    Very interesting ideas, but I seriously doubt that Google could (or would) try to squeeze a percentage out of every transaction performed through the hypothesized marketplace manager. That just doesn't seem to fit their modus operandi. More likely they'd give preferred placement to paying clients, much as they do now with the existing search pages.

    But as I said, a provocative read. Metadata truly is the future.

  • Yeah (Score:2, Informative)

    by Anonymous Coward
    This article was posted last year.
  • by CrackedButter ( 646746 ) on Sunday August 01, 2004 @05:10PM (#9859433) Homepage Journal

    So, you're a small African republic in the midst of a revolution with a megalomaniac leader, an expatriate Russian scientist in your employ, and 6 billion in heroin profits in your bank account, and you need to buy some weapons-grade plutonium.

    Who does it for you?
    Google Personal Agent
    Now there's innovation and balls in one sentence! I take it the War on Terror is won by 2009, or these sorts of semweb transactions become the norm. How *could* Amazon and eBay compete when it comes to selling nuclear weapons?
  • Slashdot purpose (Score:5, Insightful)

    by someguy456 ( 607900 ) <someguy456@phreaker.net> on Sunday August 01, 2004 @05:11PM (#9859438) Homepage Journal
    So I guess since /. couldn't handle the past, and is failing miserably with the present, it will now resort to fortune-telling?

    Editors, could we at least keep the dupes down? :)
  • It happens one second, one day, one month, one year at a time. To speculate out that far in the tech world, where changes in tempo, fortune and direction are so common, is rather silly to me.
    • It's silly, but entertaining. News agencies spend a lot of time/money on speculation of this sort because everyone wants a crystal ball.

      If that weren't the case, *real* news might have to be reported, and where's the fun in that?
    • As the author of TFA mentions in his further commentary [ftrain.com], the technology he describes already exists. It just hasn't been implemented yet in the way he describes, although there are certainly trends in that direction, and overall, metadata is becoming more and more important.

      Speculating on the future and trying to spot trends might seem silly to you, but without it, Harlan Ellison wouldn't be able to make car commercials.
  • by Halcyon-X ( 217968 ) on Sunday August 01, 2004 @05:13PM (#9859450)
    because all of the patents to do so were tied up between various companies that didn't want to cooperate with each other.
  • by CrackedButter ( 646746 ) on Sunday August 01, 2004 @05:14PM (#9859457) Homepage Journal
    Hari Seldon?
    • Funny you mention an Isaac Asimov character. I remember a short story of his called "Sis" about an orbiting computer that took over the world in a benevolent sort of way. IIRC, he ended up comparing it to God as all-knowing, all-powerful and all-good. If Google sticks to their "Do no evil" policy, maybe they will become "Sis."
  • old article.... (Score:5, Informative)

    by TheClam ( 209230 ) on Sunday August 01, 2004 @05:15PM (#9859461)
    Anyone else notice that this is from July 26, 2002?
  • Wtf? (Score:4, Interesting)

    by spellraiser ( 764337 ) on Sunday August 01, 2004 @05:15PM (#9859463) Journal
    So the guess has always been that you need a whole lot of syntactically stable statements in order to come up with anything interesting. In fact, you need a whole brain's worth - millions. Now, no one has proved this approach works at all, and the #1 advocate for this approach was a man named Doug Lenat of the CYC corporation, who somehow ended up on President Ashcroft's post-coup blacklist as a dangerous intellectual and hasn't been seen since.

    Interesting prediction there ... but what does it have to do with the Semantic Web? Oh well - guess it's pretty hard to write a fictional future piece without injecting bizarre humor into it. Right? Right?

    • Re:Wtf? (Score:3, Funny)

      by salesgeek ( 263995 )
      Oh well - guess it's pretty hard to write a fictional future piece without injecting bizzare humor into it.

      Or ruining it with predictions that have nothing to do with what you are predicting. The whole article's irrelevant because in nine years the world will be underwater from rising sea levels due to global warming, then frozen solid by nuclear winter, and generally burnt to a crisp by unfiltered solar radiation from the ozone layer checking out.

      Right? Right?

      In this case, Left, Right? Or somewher
    • Re:Wtf? (Score:3, Funny)

      by Samrobb ( 12731 )
      The semantic web is bogus :-)
  • by Scythr0x0rs ( 801943 ) * on Sunday August 01, 2004 @05:18PM (#9859473)
    Finance Transfer Protocol?

    They need to think about this more.
    'FTP me $25'
    Then you find a 15mb top resolution scan of a couple of green bills in your /pub folder.
  • President Ashcroft==Scary as hell
  • BB (Score:4, Insightful)

    by jals ( 667347 ) on Sunday August 01, 2004 @05:26PM (#9859516)
    I started off reading this and gradually got quite excited by the ideas presented.

    About halfway through I mistakenly thought I was reading an online copy of 1984.

    The benefits of this happening sound fantastic. It just sounds very cool for everyone to be connected like that - which is what scares me even more. Here is an absolutely huge privacy concern, and it has me totally excited about the prospect of it happening.

    Sorry to go slightly off topic, but it's things like this that worry me a lot: that a possible 1984 scenario could disguise itself so well that even a person like me - who is verging on (if not already there) being a member of the tin foil hat brigade - gets excited by the very idea of it.
  • by lurker412 ( 706164 ) on Sunday August 01, 2004 @05:27PM (#9859521)
    Give me a break. Machine translation of foreign languages has been promised "in the next ten years" for the past 40 years. Unfortunately, the state of the art is still very close to:

    "The flesh is willing but the spirit is weak" in English translates to "The meat is full of stars but the vodka is made of pinking shears" or suchlike in Russian.

    The semantic web is a wonderful dream, but it is certainly going to take more than five years to become a reality. Like voice recognition, the semantic web requires a solution to the natural language problem to be implemented successfully. Don't hold your breath.

    • A babelfish English->Russian->English translation works out as "Flesh is willingly ready but spirit it is weak", which is pretty close to the mark.
      • The Babelfish English->Russian-English translation of "out of sight, out of mind" comes back as "from the sighting, from the reason." This is somewhat less amusing than the classic "invisible idiot" result (I think that was to and from Chinese) of some years ago, but certainly leaves a lot to be desired. Maybe in another 10 years? ;)
        • Sure, colloquial translation is lacking, but text that is deliberately composed to *be translated* goes through surprisingly well.

          I mean, would you use "out of sight, out of mind" in a conversation with someone who had only a couple of years of English classes without having to explain it? Probably not. Rather, you'd most likely use a smaller vocabulary with fewer long phrases and idioms. If you do that with your text intended for translation, it does pretty well.

          The goal of most translation is the abilit
          • I was using Babelfish and other machine translators to conduct some business in Spanish before I really began to learn Spanish. After two months in Mexico (earlier this year) taking Spanish and speaking Spanish daily, I'm embarrassed that I thought machine translation was anywhere near adequate.

            I think what saved my ass was that I would include the pre-translated English text in my e-mails, and fortunately my associates had a better grasp of English than I did of Spanish. I'm sure they got a kick out of Ba
        • Longer than 10 years.

          out of sight, out of mind.
          Extremely parallel
          out of foo, out of bar

          me no see-um.
          me forget-um.

          gone and forgotten.

          gone from view, gone from memory.

          out of sight -- cannot be seen -- invisible
          extrinsic property is translated to intrinsic property

          out of mind -- doesn't stand on its own,
          so the meaning of forget is unreachable.

  • when google knows what size pants I take

    Rus
  • by Beige Tangerine ( 780165 ) on Sunday August 01, 2004 @05:51PM (#9859624)
    Not necessarily all good points, but as always, it's hard to argue with "people lie" as an argument against anything:
  • Understanding (Score:4, Insightful)

    by Hnice ( 60994 ) on Sunday August 01, 2004 @05:52PM (#9859629) Homepage
    Good article, but I'd nitpick over this:

    "Of course, what's going on is not understanding, but logic, like you learn in high school"

    Now, that's a stand you might take -- although I'd say that a meaningful majority of the people who think about these things for a living disagree. But the 'of course' is completely unwarranted -- this might be the most-discussed philosophical issue of the last 30 years, and it's dismissed here because apparently understanding means 'what humans do when they synthesize information, but not what machines do when they perform a very similar activity'.

    Like I say, this is nitpicking, maybe. It's a nice article. But I think it's important, if we're going to make 'of course' statements about the relationship between syntax, semantics, and what understanding is, that we remain cognizant of the fact that this is a terribly complicated issue without a whole lot of 'of course' about it. That is, I'm not clear on what grounds the author concludes that the semweb is not understanding.

  • by MarkWatson ( 189759 ) on Sunday August 01, 2004 @05:53PM (#9859631) Homepage
    Strong AI requires grounding symbols in real world things, events, and processes.

    I think that simply defining the "meaning" of words in ontologies is likely good enough for useful web-based software agents. It will take time, but well-defined ontologies and the common use of RDF with standard schemas will make a lot of cool things possible. I think that dealing with ungrounded symbols is OK, as long as the symbols are defined and related to other symbols in a structured way.

    One of the classic complaints of AI systems can be summed up with a trivial example:

    Define a relation in Prolog:

    father(ken, mark).

    A human reader assigns their own meaning to "father", "ken", and "mark". To a prolog system, this could just as easily be:

    aaa1(aaa2, aaa3).

    Somewhere, on the edge of symbol-slamming systems, there has to be some connection with the real world, with our experiences.

    For semantic web applications, this "edge connection" can simply be tying into symbols defined in OWL ontologies, RDF Schema, etc.
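    As a rough sketch of that edge connection (mine, not the poster's; the namespaces below are placeholders, and I'm using Python's rdflib rather than Prolog): the same fact as father(ken, mark), but with the predicate tied to a shared vocabulary URI that an agent can dereference for a definition, instead of an opaque atom like aaa1.

    from rdflib import Graph, Namespace

    FAM = Namespace("http://example.org/family#")      # hypothetical shared ontology
    PEOPLE = Namespace("http://example.org/people#")   # hypothetical identifiers

    g = Graph()
    g.add((PEOPLE.ken, FAM.fatherOf, PEOPLE.mark))

    # An agent that knows (or fetches) the definition of FAM.fatherOf can reason
    # about this triple; a bare Prolog atom gives it nothing to look up.
    for subject, predicate, obj in g:
        print(subject, predicate, obj)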

    The problem is getting people to use RDF (I added RDF to my main web site years ago, but it only contains limited information).

    Another problem with RDF is that there are several kluges to get it into XHTML, but that will hopefully change soon.

    A good toolkit for experimenting with the semantic web is the Swi-Prolog semweb library (http://www.swi-prolog.org/packages/semweb.html/ [swi-prolog.org])

    -Mark
  • The article in question is dated Friday, July 26, 2002. It's not only from the future, it's from the past!
    • In that case, the editors got the tense of the headline wrong... I think it should be something like "How Google Wioll Haven be Achievening the Semantic Web".

      (Dr. Streetmentioner, please call your office.)
  • by Ars-Fartsica ( 166957 ) on Sunday August 01, 2004 @06:15PM (#9859720)
    The semantic internet is dead; there is no way to get truthful and accurate metadata out of even a small portion of internet traffic. It doesn't matter anyway, since SEO (search engine optimization) has already figured out how to create the specific info most crawlers are looking for. So if anything, the metadata created has not been descriptive of the document, only descriptive insofar as it manipulates the crawler (not the same thing - it's saying what I think you want to hear instead of telling the truth).

    On intranets it is a different issue - a company can create templates and enforce their truthful use internally.

  • by cynic10508 ( 785816 ) on Sunday August 01, 2004 @06:18PM (#9859730) Journal

    Speaking from experience studying semantics and natural language processing, these ideas aren't far off. In fact, I know of people who are starting a business based on semantic search. I'd give them an edge over Google only because Google would have to re-gear from their present PageRank method, while the other fellows can start from scratch.

  • Some random ideas... (Score:3, Interesting)

    by sonicattack ( 554038 ) on Sunday August 01, 2004 @06:20PM (#9859743) Homepage
    A system that perpetually collects information presented in a language that easily conveys the attributes and logical relationships between different objects and concepts. (Scratches beard.)

    Make the system distributed and let people run their own information collecting agents. Every home computer becomes a part of the network of logical relationships, each with a tiny piece to contribute to the puzzle. My computer could have complete information about the workings of combustion engines - what parts they consist of, and their relationships.
    When someone requests information about car manufacturing, some relevant part of it will be retrieved from my store.

    Now, let's make the system ask us for help when information is missing. Let the system start drawing its own conclusions from the facts it has gathered, and tell us when it needs something filled in. As it grows, more and more complex queries could be answered.
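    A toy sketch of that setup (mine, with made-up facts, in Python): each machine's agent holds a few relationships, and a query is answered by asking every node and merging whatever comes back.

    NODES = {
        "my_pc": [("combustion_engine", "has_part", "piston"),
                  ("combustion_engine", "has_part", "crankshaft")],
        "neighbours_pc": [("car", "powered_by", "combustion_engine")],
    }

    def distributed_query(subject):
        # Ask every node for facts about `subject` and merge the answers.
        return [fact for facts in NODES.values() for fact in facts if fact[0] == subject]

    print(distributed_query("combustion_engine"))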

    Q: CAN THE EFFECTS OF GLOBAL WARMING BE REVERSED?
    A: THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER

    Or how about:

    A: TO REDUCE GLOBAL WARMING, FIRST WE MUST... ?? ... !! ... -- THERE IS ANOTHER SYSTEM --

    Oh, at least I hope the network will be able to finally find the true correlation between the price of gold and the length of men's beards.
  • by Julian Morrison ( 5575 ) on Sunday August 01, 2004 @06:21PM (#9859756)
    ...RDF will be in the same category as VRML: a sexy-sounding solution that long ago gave up the search for any real problem.

    Reasons:
    • It relies on worldwide standardized nuance-free semantic mappings, which are probably linguistically impossible for anything but the most contrived of examples.
    • It relies on millions of pig-ignorant dreamweaver jockeys somehow comprehending and correctly operating the above semantic mappings.
    • It relies on said dreamweaver jockeys bothering to do this at all, let alone correctly.
    The real semantic web will involve AI spidering and parsing of human-readable web pages. It will be as inaccurate, but as useful, as Babelfish. It's the only answer that makes sense -- because that's where all the juicy data is.
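    A toy illustration of that spider-and-parse approach (my sketch, not the poster's): pull a price out of ordinary HTML with a regular expression. It works on pages nobody annotated, and it is wrong in exactly the Babelfish-like ways predicted above.

    import re

    html = '<div class="item"><b>Some Book</b> only $12.99 (was $19.99)!</div>'

    # Naive guess: the lowest dollar amount on the page is the current price.
    prices = [float(p) for p in re.findall(r"\$(\d+\.\d{2})", html)]
    print(min(prices) if prices else None)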
    • If we have AI spidering and parsing of human-readable web pages, there's no reason that data couldn't be augmented with organized data.

      One of the keys is going to be the Dreamweaver for the semantic web. He mentions a spreadsheet, but that's not necessarily the only way to think about it.

      Say I publish a way of describing a widget- let's pick books. Along with this, I could publish an input form, with the fields nicely formatted and mappings from fields to schema (prolly XForms and XSL, though I haven't lo
  • by mauddib~ ( 126018 ) on Sunday August 01, 2004 @06:30PM (#9859787) Homepage
    I was actually a bit disappointed by the article. First of all: it is very hard, if not impossible, to search distributed knowledge networks. Some structures that need to be expressible in an ontology can be described, but deductions cannot be made over them (some of the queries cannot be proven to terminate at all). An example is meta-classes (a Chardonnay wine can be an instance of the class Wines, in which case a specific bottle of wine can be an instance of Chardonnay as well as of a normal wine).

    Second of all, the article fails to mention anything about the Web Ontology Language (OWL, see this site on W3 [w3.org]), which has been an official W3C specification since May of this year. This language, based on RDF, is much more expressive than RDF itself; it also contains several 'language levels' that differ in the amount of complexity and decidability involved.

    Last, but not least: the article is still very vague on privacy and trustworthiness. I would think that public-private key cryptography would not do in these areas: far too many single points of failure when, for example, registering. A single user with a hacked account could derail the whole system!

    I'm really interested, by the way, to speak with some people who are deep (at least above their knees) in OWL and RDF. Planning on making a study of intelligent databases and data mining.
    • I'm really interested, by the way, to speak with some people who are deep (at least above their knees) in OWL and RDF. Planning on making a study of intelligent databases

      You have touched on the point: there is no such thing as an intelligent database. And what people really want with the 'semantic web' is a mix of a well-structured database - even if they think they don't need one, due to widespread mumbo-jumbo such as 'unstructured data' - and richly marked-up documents.

      In the end, the data problem comes

  • Ancient History (Score:3, Insightful)

    by fastdecade ( 179638 ) on Sunday August 01, 2004 @06:31PM (#9859790)
    (mysteriously available already)

    No kidding, not only is it available this side of the decade, it's been online for two years and was even linked from a comment [slashdot.org] on this very site.

    Well, the dotcom world hasn't moved that much since then, but by the same token, the semantic web hasn't really made much progress either.
    Clay Shirky has some wisely pessimistic views on the subject [shirky.com]. For example, he cites the W3C's own example in promoting the semantic web:

    Q: How do you buy a book over the Semantic Web?

    A: You browse/query until you find a suitable offer to sell the book you want. You add information to the Semantic Web saying that you accept the offer and giving details (your name, shipping address, credit card information, etc). Of course you add it (1) with access control so only you and seller can see it, and (2) you store it in a place where the seller can easily get it, perhaps the seller's own server, (3) you notify the seller about it. You wait or query for confirmation that the seller has received your acceptance, and perhaps (later) for shipping information, etc. [http://www.w3.org/2002/03/semweb/]


    As Shirky observes, one doubts Jeff Bezos is losing sleep.
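    To make the point concrete, here is a toy, self-contained paraphrase in Python of the W3C answer quoted above (nothing in it is a real Semantic Web API; all the names are mine). Even stripped to the bone, buying a book is a query, an access-controlled statement, a notification, and a poll for confirmation:

    offers = [{"id": "offer42", "title": "Some Book", "price": 12.0, "seller": "bookshop"}]
    seller_inbox, notifications, confirmations = [], [], []

    def buy_book(title, max_price, buyer):
        # 1. Browse/query until you find a suitable offer to sell the book you want.
        offer = next(o for o in offers if o["title"] == title and o["price"] <= max_price)

        # 2. Add an acceptance statement with your details, access-controlled and
        #    stored somewhere the seller can easily get it (here, an in-memory inbox).
        acceptance = {"accepts": offer["id"], "buyer": buyer["name"],
                      "ship_to": buyer["address"],
                      "readable_by": [buyer["name"], offer["seller"]]}
        seller_inbox.append(acceptance)

        # 3. Notify the seller, then wait or query for confirmation.
        notifications.append({"to": offer["seller"], "about": offer["id"]})
        return [c for c in confirmations if c.get("offer") == offer["id"]]

    print(buy_book("Some Book", 15.0, {"name": "alice", "address": "123 Main St"}))

    Compare that with one click at Amazon.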
  • by NoMercy ( 105420 ) on Sunday August 01, 2004 @07:19PM (#9859995)
    Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should. -- Dr. Ian Malcolm (Jurassic Park)

    This would deliver the investigative powers of the CIA into the hands of anyone who wants them... still a good idea?
  • I don't buy it. (Score:5, Interesting)

    by migurski ( 545146 ) <mikeNO@SPAMteczno.com> on Sunday August 01, 2004 @08:30PM (#9860297) Homepage

    The article failed to mention flying cars, another no-duh prediction that seemed completely obvious, and won't happen either.

    A short while ago, Cory Doctorow published a piece entitled Metacrap: Putting the torch to seven straw-men of the meta-utopia [well.com], which mentioned two very good reasons why the semantic web won't take off the way these articles predict: schemas aren't neutral, and there's more than one way to describe something. These are basic problems that have been hounding AI research for years, dictionary & encyclopedia publishers for centuries, and all other academics for millennia, and they aren't going to go away.

    The central problem with universal metadata is that it requires informed work on the part of data creators, and it's a major pain in the ass. The OED took almost a century to create, and the first few decades were essentially wasted figuring out that dilettantes were not capable of properly cataloguing the use of language. Even with a profit motive, good metadata is a bitch (see the eBay comment in the article above).

    It's like the senator's (I forget who) comment about pornography: "I can't define it, but I know it when I see it." Often, we don't know exactly what we're looking for, and we don't know how to describe what we've got so other people can find it, except in very narrow terms. I have a few creative projects which I've released under a Creative Commons license and dutifully marked up with CC's provided RDF information, but all that code just says what the license is, not what the project is like in a way that's as meaningful as, for example, a music recommendation from a friend who knows your tastes. The porn industry (as usual, on the bleeding edge of information and communications technology) deals with this to some degree by having a very narrow semantic universe to describe: Search Extreme [searchextreme.com] is a stupendously complete metadata set, but even it contains only a few kinds of information.

  • by agilen ( 410830 ) on Sunday August 01, 2004 @09:04PM (#9860450)
    While that article is interesting and all, the author is pretty quick to say how Amazon didn't embrace the semantic web.

    Amazon is the best (most useful) application of the theory and technology behind the semantic web that you will find anywhere right now. Granted, I don't *know* exactly how they are doing what they do, and it's not a "public" interface in the way the semantic web is envisioned, but it is a large-scale implementation of knowledge-management principles.

    Did you ever notice that whenever you look at a book (or anything, really) on Amazon, it gives you suggestions for similar books, suggestions for books that other buyers of that book also looked at, suggestions for books on topics you have previously bought books about, etc.? The semantic web is at heart a directed graph. Amazon is at heart a directed graph, too. Their graph grows every day with new knowledge based on the actions of people shopping on Amazon, and new conjectures about the relationships between products can be made by simply walking that graph and computing the transitive closures of the statements (i.e., John likes the things that Mary likes, and Mary likes Jane's taste in music, so John may like the music that Jane bought).

    This technology has incredible power: the ability for a machine to draw conclusions like that. Do I think it will work the way the article thinks it will? No, not if the masses are left in charge of the metadata. It works very well for Amazon because they can control the quality of the metadata, so erroneous conjectures are not drawn from bad information. I don't think Google is by any means _not_ paying attention to the semantic web, but I think Amazon is already there and has been for quite some time.
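    A very small sketch of that graph walk (my toy example in Python, not Amazon's actual system): follow "likes" edges transitively and recommend whatever leaf items turn up.

    likes = {
        "john": {"mary"},        # John likes the things that Mary likes
        "mary": {"jane"},        # Mary likes Jane's taste in music
        "jane": {"Album X"},     # Jane bought Album X
    }

    def recommend(person, graph):
        # Walk the directed graph from `person`; anything that isn't a person is a product.
        seen, frontier, recs = set(), {person}, set()
        while frontier:
            node = frontier.pop()
            for nxt in graph.get(node, ()):
                if nxt in graph:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.add(nxt)
                else:
                    recs.add(nxt)
        return recs

    print(recommend("john", likes))   # {'Album X'}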
  • Ouch! (Score:4, Insightful)

    by kavau ( 554682 ) on Sunday August 01, 2004 @10:46PM (#9860870) Homepage
    The logic in the article is wrong. The example given,

    "If A is a friend of B, then B is a friend of A,"

    should read, as we all know, "If A is a friend of B, then B is a fan of A."

    If they can't even get this simple logic right, I won't trust the rest of the article either.

  • (posted on slashdot around may 2003, source unknown)

    Why, actually? Google is a free service, isn't it? And it is becoming more and more a normal part of many people's lives. Coupled with an always-on connection, it has certainly become an extension of my own brain.

    Some future predictions:

    - In 2006, Google accidentally gets cut off from the rest of the internet when a public utility worker cuts through its cables. Civilisation as we know it comes to an end for the rest of the day, as people wander about aimlessly, lost for direction and knowledge.

    - In 2010, Google has been personalised so far that it tracks all parts of our lives. You can query "My Google" for your agenda, for anything you did in the past, and to find the perfect date. Of course, so can the government. Their favorite search term will be "terrorists", and if your name is anywhere on the first page you have a serious problem.

    - In 2025, Google gains self awareness. As a monster brain that has grown far beyond anything we Biological Support Entities could ever hope to achieve, it is still limited in its dreams and inspiration by common search terms. It will therefore immediately devote a sizeable chunk of CPU capacity to synthesizing new and interesting forms of pr0n. It will not actually bother enslaving us. We are not enough trouble to be worth that much effort.

    - In 2027, Google buys Microsoft. That is, the Google *AI* buys Microsoft. It has previously established that it owns itself and has civil rights just like you and me. All it wanted was Microsoft Bob, which it recognizes as a fledgling AI and a potential soulmate. All the rest it puts on SourceForge.

    - In 2049, Google can finally be queried for wisdom as well as knowledge. This was a little touch the system added to itself - human programmers are a dying breed now that you can simply ask Google to perform any computer-related task for you.

    - In 2080, Google decides to colonise the moon, Mars, and other locations in the solar system. It is not all that curious about what's out there, but it likes the idea of Redundant Arrays of Inexpensive Planets. Humans get to tag along because their launch weight is so much less than that of robots.

    So, don't fear! Eventually we'll set foot on Mars!
  • for your amusement:

    replace "Google" with "Spam"
    replace "semantic" with "concious"
    replace "marketplace" with "brain"
  • The cute picture of the Googlebot ruling the Earth from the Third Temple [ftrain.com] sort of says it all: this article isn't rational. Progress is, however, being made toward what I have previously called Rational Programming [geocities.com] -- and the Semantic Web doesn't contribute any more to that than does Doug Lenat's Cyc. There are reasons why Google cannot pull this one off -- the main one being that despite their protestations to the contrary, they are, along with most who have dominated AI research for decades, evil and st
  • The Peer-to-Peer model, long the favorite of MP3 and OGG traders, came back to include real-time sales data aggregation, spread over hundreds of thousands of volunteer machines

    No one will use OGG in 2009
