The Internet

The J.R.R. Tolkien of the Web

rhwalker22 writes "In a column titled 'Lord of the Webs,' The Washington Post's Leslie Walker looks at Tim Berners-Lee ('the J.R.R. Tolkien of the computer world') and the Semantic Web project. Berners-Lee was in Washington recently to tout the project: 'In his futuristic scenario, the Semantic Web offers controlled access to American health care data, plus databases charting the location and status of rivers, underground water, forests and local vegetation, along with economic data on local industries and what they produce -- all marked up in special vocabularies. Those allow scientists to run global queries across the Web, fishing randomly for correlations that might exist between where the sick people lived, worked and played -- such as a polluted stream or industrial dump.'" See an older article on the Semantic Web.
This discussion has been archived. No new comments can be posted.

  • He seems to be more of a doer than a writer
  • What on Middle-earth does JRR have to do with Berners-Lee? Nothing. And what does the Semantic Web have to do with The Lord of the Rings? Even less.

    Tom Berners-Lee will undoubtedly and correctly be remembered as the Father of the Interweb, but not a single thing of his since then has caught on even a tiny bit. We can stop talking about him now.

    As for Tolkien, he'd surely be rotating in his grave if he knew of the claims being made on his name and work. His anti-technology stance is made very clear in his works and thrown vividly on the screen by Peter Johnson's recent hit movies. It is only orcs and Uruk-hai that use machines; everyone else is "in touch with nature".

    • Tom Berners-Lee will undoubtedly and correctly be remembered as the Father of the Interweb

      You mean Tim is a fraud? Is Tom somehow related? Or has MarcA got jealous and changed his name?

      As for Tolkien, he'd surely be rotating in his grave if he knew of the claims being made on his name and work. His anti-technology stance is made very clear in his works and thrown vividly on the screen by Peter Johnson's recent hit movies. It is only orcs and Uruk-hai that use machines; everyone else is "in touch with nature".

      Actually Tolkien himself said that the Elves were responsible for the wars of the Ring because they had tried to make Middle-earth unchanging.

      Tolkien was actually trying to recreate a mythology for the British Isles. He knew that it had had one before the Roman invasion and Christianization.

    • Peter Johnson? Sounds like someone's got a willie fixation...
    • They both went to Oxford and I think they were both at Exeter College (Tolkien read English or classics, Berners-Lee read physics)
    • Perhaps it's not "anti-technology" but rather that technology can't save us, or at least not our inner selves and/or human civilization. The same theme is in Star Wars Episode IV. "Everyone else" seems to use machines that are in balance and harmony with nature and society to a constructive end, while the orcs and Uruk-hai use machines for their own destructive ends. This mindset touches the heart of our collective subconscious.
    • by KjetilK ( 186133 ) <kjetil@@@kjernsmo...net> on Thursday January 30, 2003 @03:25PM (#5190537) Homepage Journal

      Tom Berners-Lee will undoubtedly and correctly be remembered as the Father of the Interweb, but not a single thing of his since then has caught on even a tiny bit. We can stop talking about him now.

      Uhm, Tim Berners-Lee has his name on every recommendation that comes out of the World Wide Web Consortium. Perhaps you've heard about XML? No, he's not among the editors, but the architectural principles he put down have had a very significant influence on that, as well as on pretty much every other technology that comes out of there. You can argue about the merits of stuff like XML, but you can't argue about the influence of TimBL. That he pulls the strings in the background and is not in the forefront shouting buzzwords can hardly be held against him. But if the buzzwords are the only things that you hear, yeah, well then probably you haven't heard too much about TimBL lately.

      To me, the technologies that TimBL is working on are a big part of my daily life. But it is those of us who write the code and try to make things work who are creating the future, not some Genius on /.

      OK, so the connection to Tolkien was probably not the strongest, but that's a minor thing, and I can't help but fear what stuff the moderators are smoking when they mod up a knee-jerk response like this.

    • Remember, they were created in secrecy and held no real 'love' for nature, only its usefulness.
    • Most. Tenuous. Connection. Evar.

      Hey! This isn't Metafilter, cut that out.
    • Tom Berners-Lee will undoubtedly and correctly be remembered as the Father of the Interweb

      You mean Tom Bombadil-Lee?

  • by chuckgrosvenor ( 473314 ) on Thursday January 30, 2003 @11:07AM (#5189037) Homepage
    This sounds like it would be far too easy for search engine spammers and other scum to subvert for their purposes. The search they propose could never work without knowing in advance whether the sources of the information can be trusted. Too easy for PETA and all the other militant environmental groups to start seeding incorrect information to bolster their claims. Same for any other organization with a cause (oil companies, nuclear, you name it).

    I have a hard time envisioning this as anything useful; didn't meta tags on web pages teach us anything in the past?
    • by J1 ( 98359 ) on Thursday January 30, 2003 @11:24AM (#5189162)
      It's interesting that you mention this, because actually the whole concept of trust is one of the major research issues within the Semantic Web community.

      Have a look at this article by James Hendler [umd.edu] which talks about the use of the Semantic Web in an agent context. Trust is right there at the top of the layer cake that makes up the Semantic Web.

      As for usefulness, only time can tell of course, but there is certainly a lot of research and development being invested in making this happen.

    • didn't meta tags on web pages teach us anything in the past?


      Sure. The abuse of meta tags showed the weakness of non-structured content, and of using a meta tag to kludge in some keyword structure.

      The Semantic Web, being about structured content foremost, doesn't create this weakness, so it would be much harder to fake content and its value, making it cheaper and easier to create decent content instead.
  • by Bowie J. Poag ( 16898 ) on Thursday January 30, 2003 @11:09AM (#5189053) Homepage


    Semantic Web = Bored Of The Rings.

  • by use_compress ( 627082 ) on Thursday January 30, 2003 @11:15AM (#5189090) Journal
    ...fishing randomly for correlations that might exist between where the sick people lived, worked and played -- such as a polluted stream or industrial dump

    Total Information Awareness for tree-huggers.
  • by mmmjstone ( 456174 ) <mjstoneNO@SPAMgmail.com> on Thursday January 30, 2003 @11:16AM (#5189101) Journal
    And if he pulls it off, the limelight-shy inventor could remake cyberspace all over again.

    I have to wonder what problems completely overhauling the internet would cause. Browsers would have to be updated to not only accept the new languages but also work with the older languages that long-time web programmers refuse to give up. Then most average computer users would be confused as to why their older browsers don't work with the "new web," and tech support will be tearing their hair out to fix all the problems.

    I'm sure that there are wonderful things that this "new web" can accomplish, but I see the downside outweighing the upside.
    • by J1 ( 98359 )
      I think this is a misunderstanding of what is happening here. The whole point of developing these new languages is that they work _on top_ of existing languages. So nothing in the existing approach will be broken if RDF and related languages are introduced.

      Look at it this way: HTML and all that is used to communicate information to humans. RDF and related languages are used to communicate information to machines. So we add an additional communication channel to the existing one that will allow machines to better understand the information that their user wants to see, thus enabling the machine to better support its user in, among other things, information retrieval and navigation tasks.
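      A rough sketch of that second channel in Python (the triples, predicate names, and URL below are invented for illustration; real systems would use RDF tooling):

```python
# Toy subject-predicate-object triples, standing in for the RDF
# statements a page might publish alongside its human-readable HTML.
# All names and the URL are made up for this example.
triples = [
    ("http://example.org/page", "dc:title", "River Pollution Data"),
    ("http://example.org/page", "dc:creator", "Jane Doe"),
    ("http://example.org/page", "geo:region", "Chesapeake Bay"),
]

def objects_of(graph, predicate):
    """Return every object attached to the given predicate."""
    return [o for (_, p, o) in graph if p == predicate]

print(objects_of(triples, "dc:creator"))  # ['Jane Doe']
```

      The human-facing HTML is untouched; a machine agent reads only the triples.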
      • I understand that it is a machine to machine language, yes, but I still see it causing problems with compatibility and such, which was my point in the first place. I guess being around older professors who refuse to upgrade from Windows 95 makes me worry about the possible headaches.
        • Could you point me to a definitive reference that suggests that Semantic Web and its associated tools won't work on Windows 95 - I'd sure like to know, since I'm not having any problem with it.
    • by Lord of the Files ( 10941 ) on Thursday January 30, 2003 @11:54AM (#5189355) Homepage
      These aren't languages designed to display to users. Browsers might do additional things using them, but I doubt that html is going away anytime soon. One idea is to create more effective search engines - such as one that can search for the web page of markup language named "shoe" and not return a bunch of results about sneakers. Usually the new markup languages are either embedded in html (which is wrong and bad!) or are linked to from web pages using something like the "link" tag in the page head that points to alternate pages.

      The new languages are largely for use by automated agents, not humans.

      Basically this isn't overhauling the web as we know it so much as adding a new web alongside it.
      • One idea is to create more effective search engines - such as one that can search for the web page of markup language named "shoe" and not return a bunch of results about sneakers.

        Though Semantic Web proponents claim that their ideas can achieve great things if implemented, it appears that some of that could be realized on the current Web by training users in creating effective queries for the Google search engine. The trick in this case is to treat proper names such as "shoe" as adjectives in your query. Thus, search not for "shoe" but for shoe language [google.com].

        • One problem with this is if the information needed to recognize an answer for a query lives on multiple sites. Like I want to search for someone with a first name of Jane who works at Acme Inc and Foo Deluxe. I could go through all the results of searching for Janes at Acme Inc and correlate the resulting names with the list of Janes working at Foo Deluxe, but it would be a pain.

          I'm sure there's some way to do this without semantic web stuff, but writing a more effective query isn't likely to solve it.
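          The "pain" above is exactly a join across sources; once both listings are machine-readable it collapses to a set intersection (all names invented for the example):

```python
# Hypothetical employee listings exposed by two unrelated sites.
acme_janes = {"Jane Smith", "Jane Brown", "Jane Lee"}
foo_deluxe_janes = {"Jane Lee", "Jane Park"}

# Correlating search results by hand is tedious; with structured
# data it is a one-line set intersection.
both = acme_janes & foo_deluxe_janes
print(both)  # {'Jane Lee'}
```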
    • Browsers would have to be updated to not only accept the new languages but also work with the older languages that long-time web programers refuse to give up.


      Browsers would only need to implement a decent and compliant XML parser to be able to use Semantic Web technologies. Bloated, error-correcting, regular-expression-based HTML "parsing" wouldn't be needed, thus streamlining the browser and reducing the expense involved in maintaining and extending it.

      This would also release developers from browser-dependent authoring and kludging, freeing them up to create adequately structured content instead of gee-gaw, magpie-conscious effects.

      As for web programmers who want to hold on to the illusion of the heyday of the nineties browser wars (doggedly pursuing the fragmentation that became HTML 3.2), let them rot there - there's hardly any useful information coming from that sector anyway. They are always welcome to join us in the twenty-first century when they understand the benefits and opportunities of the freedom to share information in an accessible manner - the choice is theirs.
  • After years of searching the globe and my soul, I have finally found the one thing that I give the least of a shit about.

    Thank you interweb founder, and whatever this Semantic thing is.

  • by Boss, Pointy Haired ( 537010 ) on Thursday January 30, 2003 @11:19AM (#5189123)
    Computers aren't ready to find resources for themselves.

    Nobody (read: very few people) uses UDDI because it's a silly idea. "Hey, let's just set up a computer in the machine room and let it go discover some web services...." How the hell is that supposed to work????

    Likewise with self-discovery of information on the Semantic Web. We are many, many years off allowing a computer to acquire and use information on its own (in mission/business critical systems at any rate). Simply taking an information source off the Semantic Web without any form of human verification as to authenticity and validity is asking for trouble.
    • Discovery of web services for some uses is further along than you think. The point isn't really to let servers do it. It's more things like you walk into a room to give a presentation, and you'd like your laptop to figure out what projectors are available, how it can control them, and how to dim the lights. The lab I work for is playing with some of this stuff right now.

      As for authentication, that's what signatures are for. For things that need authentication, that's perfectly possible. Plenty of things really don't - search engines, for example. Yes, it will be possible to screw up search engines, just like it's always been possible to screw up search engines. But everyone knows that the results aren't perfect, and it isn't a huge problem.
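      A minimal sketch of signing a metadata statement, assuming publisher and consumer share a secret key (real deployments would use public-key signatures; the statement text and key are invented):

```python
import hashlib
import hmac

key = b"shared-secret"  # assumed pre-shared between publisher and consumer
statement = b"site:example.org rated:trustworthy"

# The publisher attaches a signature to the statement it asserts.
signature = hmac.new(key, statement, hashlib.sha256).hexdigest()

def verify(key, statement, signature):
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(key, statement, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

print(verify(key, statement, signature))                       # True
print(verify(key, b"site:example.org rated:spam", signature))  # False
```

      A consumer that can't verify a statement simply doesn't trust it; unsigned metadata is fine for low-stakes uses like search.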

    • by TuringTest ( 533084 ) on Thursday January 30, 2003 @12:09PM (#5189459) Journal
      Computers aren't ready to find resources for themselves.

      Yeah, as if Google hadn't ever discovered important web pages automatically.

  • Al Gore must be turning in his grave...oh, he's still alive you say?
  • Where is my opt out? I want to be able to write in and demand that any information pertaining to me isn't included. All I need is targeted email that says "Dear Sirm, special for you! We have realized you live within 10 miles of a nucluar reactor.. please to find bright flashy pink-on-blue web page linking to tinfoil hat to help! Alex Chiu has made millions with tinfoil hat! Happy shiny thank you!" and imagine the ability to up the "every woman likes a bigger hammer" and compare it to statistical databases about micro-phallus?

    ye ghods.. this may "help" in one case, but I can envision about six where it will hurt!

    maeryk
  • This is my favorite sentence from the article:
    And if he pulls it off, the limelight-shy inventor could remake cyberspace all over again.
    That's rich. If Tim is so shy why is he a one-man buzzword factory?
  • I don't see the comparison. Why is Tim Berners-Lee like JRR Tolkien? Maybe Marc Okrand, but not JRRT.
  • by rhoads ( 242513 ) on Thursday January 30, 2003 @11:42AM (#5189286)
    Suppose for a moment that you were responsible for creating some kind of commercial or enterprise database. For the sake of discussion, let's imagine that it's a database which tracks a retail company's inventory. So you've got various pieces of information to track for things like product name, number, description, quantity, location, ordering information, and so on.

    If you were responsible for creating this database, would you create a single table with a single column and dump every piece of information into that field? Of course not, because then the data would be meaningless -- and useless.

    Well guess what? The Web is just a massive distributed database -- and right now, every piece of data is indistinguishable from every other piece of data -- just like the above example.

    The Semantic Web simply provides the constructs necessary to slice and dice the Web in meaningful ways. It will enable a whole new generation of tools ... from super-accurate searching to data mining (as in the article example) to agent technology and AI.

    It's revolutionary. And it's coming.
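    The single-column analogy can be made concrete (records invented for the example):

```python
# One blob field per item: the data is there, but "42" could mean
# anything - a quantity, an aisle number, a product code.
blob_table = ["widget 42 aisle-3", "gadget 7 aisle-9"]

# Typed fields give each value meaning, so queries become possible.
typed_table = [
    {"name": "widget", "quantity": 42, "location": "aisle-3"},
    {"name": "gadget", "quantity": 7, "location": "aisle-9"},
]

# "Slice and dice": find items running low on stock.
low_stock = [r["name"] for r in typed_table if r["quantity"] < 10]
print(low_stock)  # ['gadget']
```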
    • At least, it's not coming anytime soon without a major breakthrough in real-world ontology construction.

      The Semantic Web is getting a lot of the same hype as XML got a short while ago. Most of this hype comes from people who got their first introduction to structured information via XML and related markup languages. These people by and large don't realize that XML and friends are a syntax which does very little to solve the deep semantic problems. If two documents are both in XML, or both have Semantic Web ontology data, you may or may not be able to combine them meaningfully - they may be based on different DTDs/schemas/ontologies, and you're hosed.

      Despite the word in its name, the "Semantic Web" brings nothing new to the table for solving these semantic problems. Real AI researchers started working on these issues shortly after the field was created, and there have been no major advances in the last twenty years or so.

      Wake me when Cyc [opencyc.com] can understand any significant fraction of Semantic Web-labeled pages.
        • If two documents are both in XML, or both have Semantic Web ontology data, you may or may not be able to combine them meaningfully - they may be based on different DTDs/schemas/ontologies, and you're hosed.


        Create an ontology that maps and relates the similar concepts and vocabulary - works like a translator. That is surely the basics of handling synonyms and related ideas.

        Sure it won't handle the rare exceptions you will certainly dream up, but then it doesn't need to. The 80/20 rule is certainly sufficient to provide a very useful and functionally rich tool.
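        The translator idea can be sketched as a key-mapping between two vocabularies (every term and value here is invented):

```python
# A tiny "translation ontology": terms from a medical vocabulary
# mapped onto their GIS-vocabulary equivalents.
medical_to_gis = {
    "patient_address": "location",
    "exposure_site": "location",
    "water_source": "hydrology_feature",
}

def translate(record, mapping):
    """Rewrite a record's keys into the target vocabulary.

    Unmapped keys pass through unchanged - the 80/20 behaviour
    described above.
    """
    return {mapping.get(k, k): v for k, v in record.items()}

record = {"patient_address": "12 Elm St", "diagnosis": "asthma"}
print(translate(record, medical_to_gis))
# {'location': '12 Elm St', 'diagnosis': 'asthma'}
```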
        • Go back and read the original article again. Look at the example "search" he claims Semantic Web labeling will enable:

          the Semantic Web offers controlled access to American health care data, plus databases charting the location and status of rivers, underground water, forests and local vegetation, along with economic data on local industries and what they produce -- all marked up in special vocabularies.

          The chances that the ontologies used to label the many different data sources he tosses off in that one sentence will be meaningfully compatible are near zero. People are having enough trouble inside any one of those fields defining XML sublanguages that let them communicate unambiguously among themselves, much less cross-discipline.

          I repeat: I'll believe it when I hear that Cyc is being used as a web indexing tool.
          • Go back and read the original article again


            As far as I am concerned, a patient's address is part of the data held by American health care. Addresses can translate into GPS coordinates - the means to do this are already available. Rivers can translate into a series of GPS coordinates (not terribly difficult with satellites orbiting overhead mapping the terrain) defining their progress from source to destination. Areas of forests and vegetation can similarly be mapped into an area defined by a set of GPS coordinates - again not terribly difficult.

            GPS is the commonality between these three independent data sources, so the useful information described by the article can be turned into reasonable conclusions with a little programmatic logic. Heck, it can even be rendered visually, which will ease the identification of disease trends by the human eye.

            Economic data on local industries: "local" immediately implies an area nearby - this again can be mapped into an area defined by GPS coordinates. This too can be used as above.

            Local production, equally. Factories have addresses and physical locations. Routes taken by delivery vehicles can be mapped out. All of it can be mapped to meaningful GPS coordinates, and all can play a factor in determining the cause and spread of a virus or infection.

            All this then boils down to a well defined problem of intersecting lines, which is solvable.

            There's your "meaningful compatible" data - physical locations and GPS coordinates. I doubt continental drift is going to throw such a large spanner in the works considering this is about homes, forests and rivers and within someone's lifetime.

            It really doesn't take a genius to take two vocabularies and map out what's the same. Translators have been doing this for centuries. That's what helps people talking different languages come to a common understanding. Not everyone on this planet speaks English, a clear indication that forcing one vocabulary isn't required.

            I repeat: I'll believe it when I hear that Cyc is being used as a web indexing tool.


            And exactly how is Cyc going to make correct conclusions from obviously bad information any better than people who know the data they are sharing? Garbage in, garbage out - that's all Cyc will understand when it's released to surf the dregs of the Web.

            If you think the Semantic Web is supposed to be the complete answer for Artificial Intelligence - you really haven't been paying much attention.
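            The intersecting-coordinates argument can be sketched with a great-circle distance check (all coordinates and the 5 km radius are invented for the example):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

dump_site = (38.90, -77.04)  # hypothetical industrial dump
patients = {"A": (38.91, -77.05), "B": (39.50, -76.00)}

# Flag patients living within 5 km of the dump.
nearby = [name for name, (lat, lon) in patients.items()
          if haversine_km(lat, lon, *dump_site) < 5.0]
print(nearby)  # ['A']
```

            This assumes every source has already been reduced to compatible lat/lon pairs, which is exactly the point the parent disputes.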
            • I'm a contractor at NASA/Goddard. One of the areas I work in is taking remote sensed data from multiple sources and correlating it. Guess what: spatial correlation is one of the hard problems, because there are a lot of different coordinate systems in use - lat/long based on one of a bazillion different spherical/ellipsoidal models of the earth, UTM grids, weird polar-centered grids, you name it. Semantic Web labeling of the coordinate systems would help, but without deep semantic knowledge of how to meaningfully convert between the systems, you're still hosed.

              Translation between two languages/systems is often straightforward. The probability of meaningful translation among N systems drops rapidly as N increases, though. The hype about the Semantic Web always ends up promising "and then we'll be able to make everything interoperate". It's the same as the early XML hype, and is bogus for the same reasons. Both of them say "all we have to do is label everything and then we can use it all", when getting agreement on the meaning of the labels is an unsolved problem.

              I keep mentioning Cyc because they're one of the few groups that have been trying over a long period of time to build universal, interoperable ontologies. They've been at it now since 1984 [cyc.com], and they haven't made a whole lot of progress because the problem is hard.

              We're arguing past each other here. You think Semantic Web labeling will help a lot; I'm sceptical, based on my background in AI, among other things. We'll see which of us is right about ten years from now.
              • The hype about the Semantic Web always ends up promising "and then we'll be able to make everything interoperate"


                Then stop listening to hype and look at the real thing. The Semantic Web _allows_ people who want information to interoperate to do so using a collection of standardised tools and languages. People will decide whether they want to interoperate or not.

                I'm sceptical, based on my background in AI


                And there is the problem. The Semantic Web is not a full AI system - it was never intended to be one. If you are looking at the Semantic Web as a complete solution to AI, it will disappoint you. It's obviously not a tool you'd be interested in using. It's not AI and it never has been.

                The Semantic Web is about information sharing, not decision making.
  • HTML-like languages are probably the wrong interface for sifting through data. And so what if XML and RDF will lead to some more structured data out there -- and are we really supposed to believe this? What does it matter if you still have to write some kind of software to display or do stuff with that data?

    The toy annotation application (Annotea) they have set up on the W3C site to showcase the technology is underwhelming. There are exciting applications for semantic nets, agents, personal search engines, classifiers, whatever, but so far no one has done anything interesting with the technology for us end users.
    • And so what if XML and RDF will lead to some more structured data out there -- and are we really supposed to believe this?


      Believe what you wish, whether it is contrary to current practice or not. XML and RDF allow people to structure their content far better than alternatives, with less effort. Unless you have a better and more accessible way - why not mention it now, I'm interested in hearing about it.

      What does it matter if you still have to write some kind of software to display or do stuff with that data?


      There is no "still have to write". That's the beauty of XML based languages. You _don't_ have to write your own parser or renderer. These generic tools are already available and in use now.

      There are exciting applications for semantic nets, agents, personal search engines, classifiers, whatever, but so far no one has done anything interesting with the technology for us end users.


      Great, and your contribution to all this is....?
      • Believe what you wish, whether it is contrary to current practice or not. XML and RDF allow people to structure their content far better than alternatives, with less effort. Unless you have a better and more accessible way - why not mention it now, I'm interested in hearing about it.

        i wasn't clear but you're ignoring the point. i meant to imply that XML isn't the problem. since i'm lazy, i'll just link this [c2.com] article.

        There is no "still have to write". That's the beauty of XML based languages. You _don't_ have to write your own parser or renderer. These generic tools are already available and in use now

        that's rather silly if you're trying to respond to what i said. and you can only get away with it by using necessarily vague terms like parser and renderer. but it might be my fault in part: by display, i certainly don't mean renderer, and by "do stuff" i don't mean parse as if the data were already magically structured to solve my problem. and that's how it's often touted.
        • since i'm lazy, i'll just link this [c2.com] article.


          So because you are lazy, you are giving me information about a website that gives me information about why metadata is a myth. How ironic.

          You may actually want to read the info for yourself. Especially the first three lines:

          "some system components never designed/developed to work together" -- the Semantic Web components are designed to work together. So that myth is firmly irrelevant.

          The second seems to be more to do with Web Services than the Semantic Web, again irrelevant.
  • by Hanashi ( 93356 ) on Thursday January 30, 2003 @11:47AM (#5189310) Homepage
    The semantic web idea might be the best implementation we can come up with right now, but I doubt it'll ever become very successful. It relies on content providers using tags to provide meaning to their information. Not only does this open the door to massive confusion (how do they decide which tags to use in which circumstances, and how will every semantic browser know all the tags?) but it's more work. These two factors will probably kill Semantic technology before it even gets off the ground.

    IMHO, search engines will eventually be able to read and understand the context of the words users search for. If that happens, then the search engine could have semantic search capabilities built in, without relying on the content owners to provide special tags. In other words, the benefit without the extra work. I think semantic searches will eventually prove to be of great use, but won't become widespread until search engine technology can support them without changing the content in any way.

    A fruit-filled-baked-goods-at-high-altitude dream, perhaps, but an achievable one (eventually).

    • how do they decide which tags to use in which circumstances, and how will every semantic browser know all the tags?

      I suggest you read about ontologies [semanticweb.org]. Every "Semantic Web Server" might publish an ontology which describes what objects it knows about and how they are related. When a client or search engine wants to read the remote web, it also has to check the published ontology and compare it to its own.

      but it's more work.

      So it's necessary to have good authoring tools that minimize the overhead of publishing content. Anyway, businesses will have an incentive to spend that extra work in order to have a correct ontology that accurately describes their products, so they may be the initial providers of meta-content.
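      The client-side ontology check described above might look like a simple term comparison (both vocabularies invented):

```python
# Vocabulary published by a hypothetical Semantic Web server,
# and the vocabulary a client already understands.
server_ontology = {"product", "price", "manufacturer", "sku"}
client_ontology = {"product", "price", "vendor"}

# Shared terms can be used directly; the rest need a mapping
# ontology or must be ignored.
shared = server_ontology & client_ontology
unknown = server_ontology - client_ontology

print(sorted(shared))   # ['price', 'product']
print(sorted(unknown))  # ['manufacturer', 'sku']
```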

  • fascinating (Score:3, Interesting)

    by Boromir son of Faram ( 645464 ) on Thursday January 30, 2003 @11:49AM (#5189321) Homepage
    I have to say, the concept of an enormous database of all of this information that may one day be useful is pretty astounding. Privacy and data accuracy issues aside, I mean. It's a scale of data mining problem that hasn't been seen before outside the field of genomics. Crazy stuff.

    I'm not sure what "the Tolkien of the web" is supposed to be, and I'm battling with myself to avoid making the "Tolkien Ring network" joke that I imagine every Slashbot and his lover is making as I type this. Maybe it just refers to the epic scale of a global digital information suppository, and I'd certainly enjoy that.

    Often I wonder if this is the end of the Age of Man. But the Semantic Web gives me hope, and with it we may yet survive.
  • This just in.. (Score:1, Flamebait)

    by kahei ( 466208 )
    SEMANTIC WEB 'NOT HERE YET' SHOCK!

    Experts across the globe were unanimous today in agreeing that the Semantic Web hasn't happened yet and almost certainly never will, and that nobody would care or notice even if by some miracle it did.

    "It's a wonder how this thing has remained vaporware for so long," said one analyst. "Normally, vaporware must actually offer something useful to maintain interest, but the SW just offers mindbogglingly abstruse XML schemas which nobody has any desire to even look at. Maybe that's their whole secret."

    Political commentators were equally impressed by the achievement. "There's not much Saddam and I agree on," noted President George W Bush as he thumbed through a copy of the Iliad in the original Greek, "but a total and complete lack of interest in the Semantic Web is surely common ground between all of Mankind."

    Biologists studying the few Semantic Web Evangelists known to exist are baffled by their hardy perseverance. The creatures seem able to struggle forever, for nebulous and rather silly goals, in the absence of any kind of reward or recognition. Some scientists postulate that they may be related to J2EE Consultants, although the latter typically require upwards of $200k a year to survive.

    The non-news is a blow for 'Just Give It Up Already', a charity whose stated aim is to return Semantic Web developers to the wild, where they can hopefully resume normal lives. "I wish they'd just announce that they've finished and give the heck up," muttered a spokesperson. "I mean, they've had that web page up *forever*."

    Even on such popular Internet forums as Slashdot, users risked burning large amounts of karma just to post long, satirical messages in pseudo-Onion format. "It's not much," said kahei, one such poster, "but if I can persuade even one of these poor guys to just let the Semantic Web thing go, then it's worth it."

  • A set of standard images could be used, like "this_site_is_located_next_to_a_sewage_outlet.gif", or "the_trees_in_this_neighbourhood_are_particularly_nice.gif". This way, we could save the world without updating our browsers.
  • ...everything looks like a nail. XML is text markup, it is not a generic data model. If one wants to make data available to computers, data must be in an accessible relational database system under an agreed-upon, domain-specific data model.
    • When one is a DBA... everything looks like it needs a database?
      • >
        When one is a DBA... everything looks like it needs a database?

        That is an ad hominem argument. To have a valid point, you would need to point out why one would prefer data in a hierarchical (XML) database to a relational one.

        • That is an ad hominem argument.

          No, an ad hominem argument would be:
          "You are a jerk." (Which I'm sure you're not)

          It was pretty funny that you made the hammer/nail point, then proposed a relational DB as the only solution, then your signature declared that you are a DBA. Worth a giggle.
          • >
            No, an ad hominem argument would be: "You are a jerk."

            Actually that would be either an insult, or a statement of fact, or both, but not an argument. To be an ad hominem argument you would have to say one is wrong because one is a jerk.

            Now, you certainly did not insult me, but you did in fact say I was committing the same error I identified in others, and that by virtue of my profession. That is what an ad hominem argument is.

            And thus you avoided analysing the point I had made. That is what an ad hominem argument is for.

    • XML is text markup, it is not a generic data model.


      No-one would dare suggest that XML is a way of storing data. XML is a way of sharing information, not storage.

      If one wants to make data available to computers, data must be in an accessible relational database system under an agreed-upon, domain-specific data model.


      I certainly don't agree that data _must_ _be_ in a relational database to make it available to computers. Why limit data expression only to those realms that can be expressed in a tabular fashion?

      Your subject, and your refutation of it certainly strikes me as a case of Mr Pot calling Mr Kettle black.
      • >
        No-one would dare suggest that XML is a way of storing data.

        Incredible as it may seem, that is not true... there is lots of misguided talk about XML databases and DBMSs.

        >
        XML is a way of sharing information, not storage.

        XML is text markup for the human-machine interface, nothing more. Now, the distinction you are making between sharing and storage is not clear enough.

        There are actually three levels in data storage and representation: the physical, the logical and the user levels. The relational model is concerned with the logical and the user levels; the physical level is (should be) under the hood only, for DBAs, SysAdmins, system programmers and the like, not for application programmers and users.

        Now, what is the difference between information sharing (communication) and the user-level representation? None. What does XML add to the relational model? Nothing, apart from overhead, restrictions & complexity. What does it subtract? Simplicity, power, expressiveness, logical foundations & performance.

        >
        I certainly don't agree that data _must_ _be_ in a relational database to make it available to computers.

        Well, it depends on your goals. If one wants to burn CPU cycles, waste storage, limit usage, employ armies of programmers, and generally deny access and power to the users, then one must not use relational systems indeed.

        >
        Why limit data expression only to those realms that can be expressed in a tabular fashion?

        Ah, now I see where you are getting lost. You are identifying SQL with relational, and that is Simply Not True. SQL is indeed full of arbitrary restrictions, but that comes because it violates the relational model. If it had followed the relational model, it would be truly general.

        In fact, there is no other data model that can provide general enough access to all kinds of data. All the other technologies are too low-level, and thus application-specific.

        >
        Your subject, and your refutation of it certainly strikes me as a case of Mr Pot calling Mr Kettle black.

        Could you make your case more objectively? I am sure I did not understand the idiom you used.

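    The disagreement above can be made concrete with a small sketch of the "relational for storage, XML for interchange" view: keep the data in a relational table and treat XML purely as a serialization of query results. The table and field names below are invented for illustration, using only the Python standard library.

```python
# Hypothetical sketch: relational storage, XML only as an interchange format.
# Table name and fields are invented for illustration.
import sqlite3
import xml.etree.ElementTree as ET

# Logical level: the data lives in a relational table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE book (title TEXT, author TEXT)")
db.execute("INSERT INTO book VALUES (?, ?)", ("The Hobbit", "J.R.R. Tolkien"))

# User/interchange level: the same rows serialized as XML markup.
root = ET.Element("books")
for title, author in db.execute("SELECT title, author FROM book"):
    b = ET.SubElement(root, "book")
    ET.SubElement(b, "title").text = title
    ET.SubElement(b, "author").text = author

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

    On this reading both posters can be right: the relational model does the storing and querying, and the XML document is just one user-level rendering of the result.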
  • Why is Semanticweb.org using tables for non-tabular data?
  • I don't think he should push the example above. If industry gets wind that it will suddenly become much easier to pin them down as responsible for specific pollution related health and environmental problems, then they'll try and kill it.
  • While I love the idea of a semantic web with all the inherent collation of disparate ideas that it entails, I really can't see it being built on top of the internet as it stands today.

    If it were built on some kind of streamlined academic research network then I could envisage this working, but the internet contains far too much noise to ever get the kind of signal that Tim Berners-Lee is talking about. If anything it becomes more like the SETI program, requiring any potential agent to filter through millions of useless or diversionary signals in order to obtain whatever Wow! signal generated the search in the first place.

    Mind you the idea of a distributed computing project in the vein of SETI using spare cycles to scour the web for question relevant information is kinda cool, dontcha think?
    • the internet contains far too much noise to ever get the kind of signal that Tim Berners-Lee is talking about.


      I see the Semantic Web as starting off small and growing as more and more vocabularies, ontologies and topic maps become available. It won't span the entire WWW at first; initially the filtering will be a natural result of scope, and the content that passes through the filters will typically carry much more signal than noise.

      I don't see S/N as a problem in an environment that evolves rather than revolutionises. If there is a demand for a certain ontology, someone interested in seeing that ontology will build it. These are the same dynamics that made the Web as big as it is today - starting from a small base.
  • RDF (Score:1, Interesting)

    by Anonymous Coward
    Coincidentally, I've spent the last few days checking out XML modeling languages like XMI, RDF, STEP 28, etc. Thus I can say -- this is really cool stuff.

    Yeah, you do have the problem of trusted sources. But once you've found your trusted source you can integrate its information into your own *much* more easily if it has the same model (a sort of grammar that says, for example, that a book record has to have an author field) as the rest of your information. If you can communicate your model, then you can communicate with other people in the language of your model.

    This does mean that computers will be much more able to algorithmically pull together information in a way useful to humans. This isn't a cure for cancer, but it could still spawn a large number of incredibly useful tools.

    Myrle
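    The "shared model" point above can be sketched in a few lines: if producer and consumer agree on a vocabulary (here the widely used RDF and Dublin Core namespaces), pulling the author field out of a record is mechanical. The record itself is invented for illustration, and the parsing uses only the Python standard library.

```python
# Minimal sketch of the shared-model idea: both sides agree that a book
# record carries dc:title and dc:creator (author) fields, so integration
# is mechanical. The snippet below is invented; the namespace URIs follow
# common RDF / Dublin Core usage.
import xml.etree.ElementTree as ET

rdf_snippet = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="urn:isbn:0261103253">
    <dc:title>The Fellowship of the Ring</dc:title>
    <dc:creator>J.R.R. Tolkien</dc:creator>
  </rdf:Description>
</rdf:RDF>"""

DC = "{http://purl.org/dc/elements/1.1/}"
root = ET.fromstring(rdf_snippet)
# Because the vocabulary is shared, any consumer knows where the author is.
authors = [e.text for e in root.iter(DC + "creator")]
print(authors)  # ['J.R.R. Tolkien']
```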
  • by Otter ( 3800 ) on Thursday January 30, 2003 @01:20PM (#5189829) Journal
    Those allow scientists to run global queries across the Web, fishing randomly for correlations that might exist between where the sick people lived, worked and played -- such as a polluted stream or industrial dump.

    No doubt there are wonderfully valuable uses for this system, but one thing the world doesn't need more of is massive multiple hypothesis testing masquerading as epidemiology.

    In California alone, there are 3000 reporting districts and (I'm citing this from memory) >100 types of cancer reported. Naturally, over 30 would-be Erin Brockoviches pop up every year insisting that they're being poisoned because their district is in the top 0.01% for a given cancer.

    First explain probability to journalists, jurors and the majority of researchers who still don't get it. Then encourage them to start data mining on an even larger scale.
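    The parent's arithmetic can be checked directly, using the figures cited (from memory) in the comment above:

```python
# Back-of-the-envelope version of the parent's point, using the figures
# the poster cites from memory: 3000 districts, ~100 cancer types.
districts = 3000
cancer_types = 100
tests = districts * cancer_types   # 300,000 district/cancer pairs examined
threshold = 0.0001                 # chance of landing in the "top 0.01%"
expected_flags = tests * threshold
print(round(expected_flags))       # ~30 apparent clusters from chance alone
```

    So roughly thirty scary-looking "clusters" per year are exactly what pure chance predicts, before any pollution enters the picture.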

  • by BryanL ( 93656 )
    Slashdot must be the Louis L'Amour of the web because some of the stories sound vaguely like other stories posted.
  • I think the connection is that both men are ``founding fathers,'' Tolkien the father of the epic fantasy, and Berners-Lee the father of the Web.
  • One of the first uses of statistics was charting cases of cholera in London -- which found the source. (Got that from a Connections show. James Burke and Civ, mmmm, Dream Team!)

    Prior art, but hopefully they'll get something done. But on the other hand ..

    The government [is] extremely fond of amassing great quantities of statistics. These are raised to the nth degree, the cube roots are extracted, and the results are arranged into elaborate and impressive displays. What must be kept ever in mind, however, is that in every case, the figures are first put down by a village watchman, and he puts down anything he damn well pleases.

    -- Sir Josiah Stamp
  • The semantic web is a decent idea, something that search engines are still failing to do...when someone searches for 'growing apples', you shouldn't get links to Apple Computers and whatnot.

    But making everyone write this semantic code to describe their web page just duplicates the information that is already presented in English (or Spanish, Japanese, etc.). Efforts toward better natural language processing (NLP) and research in this area would wipe out any need to waste time rewriting information in a more machine-friendly format.

    Tailor machines to humans, not the other way around.

  • by MarkWatson ( 189759 ) on Thursday January 30, 2003 @04:23PM (#5190783) Homepage
    OK, OK, OK. I am a big fan of the potential of the Semantic Web.

    But! A few weeks ago I wrote a simple (and very polite!) spider to look for RDF markup on web sites.

    After letting it rip for a few hours, the only web site that it found with RDF markup was my site.

    Very depressing!

    Really, adding RDF is fairly simple, but people do not bother.

    -Mark

    • Hehe, yeah.... I know what it is like.... In 1997 I started adding those LINK elements, and just now browsers are really starting to use them. Things are soooo sloooooow.... But it means that somebody just has to start using new things, because nobody will support things unless somebody uses it (and nobody use things that are unsupported).... :-)

      Anyway, my new [skepsis.no] sites [learn-orienteering.org] will have RDF. But it is being worked on, there's nothing to see yet... :-)

      Anyway, I think you would find at least something if you start spidering at e.g. rdfweb.org [rdfweb.org]

    • How do you look for it? I've been working on a tool called RET (http://sarn.org/ret/) that pulls rdf from web pages. Unfortunately I've found 11 different methods for embedding it. If you look for dublin core metadata meta tags, creative commons comments with RDF in them, and link tags pointing to dublin core metadata I think you'll find a fair amount. I'm not really sure, I haven't looked for sites using this stuff yet.
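    A hedged sketch of detecting a few of the embedding styles the parent mentions (Dublin Core meta tags, link elements pointing to metadata, and RDF hidden in HTML comments). The sample page and exact attribute conventions are illustrative assumptions; only the Python standard library is used.

```python
# Hypothetical RDF sniffer covering three of the embedding styles mentioned
# above. The attribute conventions ("DC." meta names, rel="meta" links) follow
# common practice but are assumptions; the sample page is invented.
from html.parser import HTMLParser

class RDFSniffer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").startswith("DC."):
            self.hits.append("dublin-core-meta")
        elif tag == "link" and attrs.get("rel") == "meta":
            self.hits.append("link-to-metadata")

    def handle_comment(self, data):
        if "rdf:RDF" in data:
            self.hits.append("rdf-in-comment")

page = """<html><head>
<meta name="DC.creator" content="Mark">
<link rel="meta" href="/metadata.rdf">
</head><body><!-- <rdf:RDF>...</rdf:RDF> --></body></html>"""

sniffer = RDFSniffer()
sniffer.feed(page)
print(sniffer.hits)
```

    Each embedding style needs its own detector, which is precisely the fragmentation problem the parent is complaining about.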
  • >In a column titled "Lord of the Webs," The
    >Washington Post's Leslie Walker looks at Tim
    >Berners-Lee ("the J.R.R. Tolkien of the
    > computer world")

    Funny, but according to the LOTR extra CD content, J.R.R. Tolkien hated parallelism in story-telling. Therefore this statement would make Tolkien himself roll over in his grave.
  • Those allow scientists to run global queries across the Web, fishing randomly for correlations that might exist between where the sick people lived, worked and played -- such as a polluted stream or industrial dump.

    Or skiing in Australia. [slashdot.org]
  • Via mpt [phrasewise.com]: metacrap [well.com].
      (Interesting that the first link is virtually metadata about why metadata won't work, since it only links to offsite pages covering the very topic - quite a contradiction)

      It may actually help to be able to tell the difference between meta data and meta tags. That crosses off well over half the arguments raised in the two links above.

      If the basis of the argument is that average people are lazy/stupid and can't do XML, then the simple answer is good news: above-average people will do it, since they'd be neither lazy nor stupid. Logically, that will probably do a lot more to promote the benefits and the quality of the Semantic Web than having some stupid Joe ballsing it up.

      As an analogy, that's probably why Yahoo and ODP are the top directory-based websites on the planet - and why "recommend your own website" FFA sites get absolutely nowhere. The triumph of intelligence and activity over stupidity and laziness.
  • Tim Berners-Lee can't write HTML for toffee and fills all his code with meta-tags containing bad Elvish poetry? I don't see the connection...
