The Internet

Searching the 'Deep Web' 193

abysmilliard writes "Salon is running a story on next-generation web crawling technologies, specifically Yahoo's new paid "Content Acquisition Program." The article alleges that current search services like Google manage to access less than 1% of the web, and that the new services will be able to trawl the "deep web," or the 90-odd percent of web databases, forms and content that we don't see. Will access to this new level of specific information change how we deal with companies, governments and private institutions?"
  • by Trigun ( 685027 ) <evil AT evilempire DOT ath DOT cx> on Tuesday March 09, 2004 @08:51AM (#8509015)
    With the part of the web we can see being pretty much total crap, I'd really hate to see the other 90%!
    • by Zone-MR ( 631588 ) <slashdotNO@SPAMzone-mr.net> on Tuesday March 09, 2004 @09:27AM (#8509341) Homepage
      It could actually be useful content.

      Let me give you an example. I run a forum. The main index page doesn't contain much information, just an overview of the latest posts and a brief introduction.

      The rest of the content is what people submit. Here is the problem: the pages are generated dynamically, so they end up having URLs like http://domain/index.php?act=showpost&postid=1244

      Google sees index.php as one page, and does not attempt to submit any data via get/post. This means that effectively the most valuable content is missed.

      Of course, making it crawl /?yada=yada links has problems, namely the possibility of getting stuck in an infinite loop where data and links are tracked using sessions, and an infinite number of URLs could potentially yield valid, though very similar, results.
      • http://domain/index.php?act=showpost&postid=1244

        Google sees index.php as one page, and does not attempt to submit any data via get/post.


        Hmm... I see plenty of pages in Google that have URLs with GET parameters, so there must be some way of getting it to crawl them. Or am I misunderstanding what you're saying? Maybe the key here is to provide an alternate route to those pages without doing anything fancy (drop-down menus, radio buttons, etc.). Just generate another page that contains a regular link to all your pages.
        • Hmm... I see plenty of pages in Google that have URLs with GET parameters, so there must be some way of getting it to crawl them. Or am I misunderstanding what you're saying? Maybe the key here is to provide an alternate route to those pages without doing anything fancy (drop-down menus, radio buttons, etc.). Just generate another page that contains a regular link to all your pages. You could hide that page from your regular users by, say, linking it to a 1x1 pixel transparent GIF. A robot will fin
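          For what it's worth, a minimal sketch of that "plain archive page" idea in PHP follows. The table and column names (posts, postid, title) and the database credentials are invented for illustration; the point is simply that ordinary <a href> links are all a spider needs to reach every post.

          <?php
          // archive.php -- a crawler-friendly index page for a forum.
          // Assumes a hypothetical MySQL table `posts` with columns `postid` and
          // `title`; all names and credentials here are made up.
          $db = mysql_connect('localhost', 'forum_user', 'secret');
          mysql_select_db('forum', $db);

          $result = mysql_query('SELECT postid, title FROM posts ORDER BY postid');

          echo "<html><body><h1>All posts</h1><ul>\n";
          while ($row = mysql_fetch_assoc($result)) {
              // A plain link per post is enough for any spider to follow.
              $id    = (int) $row['postid'];
              $title = htmlspecialchars($row['title']);
              echo "<li><a href=\"index.php?act=showpost&amp;postid=$id\">$title</a></li>\n";
          }
          echo "</ul></body></html>\n";
          ?>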

          • print("Some page... [slashdot.org]"); ... and looping google?


            As I understand it, looping is in fact a big problem for robots. There are a number of ways of getting around it. A brute-force method would be to just limit the search tree depth to, say, 20 levels or so (I pulled that number out of my butt, of course, so it would need some tuning based on how many levels you're likely to see on a real site).

            It wouldn't surprise me to learn that more sophisticated robots (e.g., Google) actually do fairly sophisticated cont
            • Google is so fast, it can probably (almost?) search its own database to see if it has seen the link already. If the load is too much, then restrict the search to a fraction, such as only once per 25 links in a search branch, or once per second, or maybe just random inspections. Then the robot will loop for a bit, but that's it.

              If Google is smart, then they'll have robots close to as many servers as possible, preferably at least a 1U box colocated at every hosting provider of any significance, so that crawling
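            To make the loop-avoidance ideas in this sub-thread concrete, here is a rough PHP sketch: a "seen" set keyed on normalized URLs plus a hard depth cap. It only follows absolute links, its URL normalization is deliberately naive, and the session-parameter names it strips (PHPSESSID, sid) are just examples, so treat it as an illustration rather than a working spider.

            <?php
            // Remember normalized URLs we've fetched and cap the link depth, so the
            // crawl terminates even on session-id loops and calendar-style pages.
            function normalize_url($url) {
                $url = preg_replace('/#.*$/', '', $url);                           // drop fragments
                $url = preg_replace('/([?&])(PHPSESSID|sid)=[^&]*/i', '$1', $url); // drop session ids
                return rtrim($url, '?&');
            }

            function crawl($url, &$seen, $depth = 0, $max_depth = 20) {
                $key = normalize_url($url);
                if ($depth > $max_depth || isset($seen[$key])) {
                    return;                            // already visited, or too deep
                }
                $seen[$key] = true;

                $html = @file_get_contents($url);      // needs allow_url_fopen
                if ($html === false) {
                    return;
                }
                preg_match_all('/href="([^"]+)"/i', $html, $matches); // naive link extraction
                foreach ($matches[1] as $link) {
                    crawl($link, $seen, $depth + 1, $max_depth);
                }
            }

            $seen = array();
            crawl('http://domain/index.php', $seen);
            echo count($seen) . " distinct pages fetched\n";
            ?>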
          • Yeah, I can see that Google sometimes lists pages with GET parameters in its index. It doesn't want to do it for a lot of pages though, and I haven't figured out why. There seems to be nothing different in the HTML.

            One word: backlinks. Pages, even with request parameters, that get linked to from lots of popular (high-pagerank) sites get indexed.

      • I've noticed some ?parameter=value URLs from Google, usually for Slashdot, so I'm guessing they enable that for certain sites by hand.

        However, you can modify Apache and/or PHP to use URL/URI names for dynamic pages. You could remap your example query to http://domain/showpost/1244/ and the engines will probably index it. I'm not sure why more message board software doesn't do this. (Okay, probably because it requires httpd & server-side processing coordination.)
      • I agree that the search engines do not index dynamically generated pages very well. This page on my site http://www.dealsites.net/index.php?module=MyHeadlines&func=view&myh=menu&gid=22&pid=2&eid=504&tid=300&context= [dealsites.net] hasn't seemed to attract any of the search engines yet. I'm not sure why; the data changes hourly and I have a direct link to that page on my site.

        However, when search engines do start doing deep crawls, especially if they do POSTs and GETs, then the bandwidt
  • Deep Web? (Score:5, Insightful)

    by Traicovn ( 226034 ) on Tuesday March 09, 2004 @08:52AM (#8509022) Homepage
    I bet you this new 'Deep Web' search technology would be something that does not observe robots.txt...
    • Re:Deep Web? (Score:3, Insightful)

      by Anonymous Coward
      Good. If you leave things publicly accessible on an open web server, that's your own damned fault. Let the engines crawl where they please.
      • Re:Deep Web? (Score:2, Insightful)

        by AndroidCat ( 229562 )
        # go away. No, really - this means you!
        User-agent: *
        Disallow: /

        And if they don't listen, feed them a huge maze of generated links that eventually lead to goatse or something. Or just block their crawler at the router and they can search their intranet.

      • Re:Deep Web? (Score:3, Informative)

        by JDevers ( 83155 )
        If I'm not mistaken, the original reason for robots.txt was to prevent endless loops from confusing spiders, not to "cover" some information that would otherwise be easily accessible. Of course, others use it for other things now...
        • Re:Deep Web? (Score:2, Insightful)

          by Anonymous Coward
          Well, I know that we use robots.txt to cover some directories that are publicly accessible and whose data we want people to be able to get at, yet that data is pretty useless unless you are visiting it from our link. We do signal processing, and our data tables and raw log files would be completely useless in a web search and could really skew the results.
      • If you have a public mail server, you deserve any spam you get...
    • Re:Deep Web? (Score:2, Interesting)

      by Anonymous Coward
      User-agent: *
      Disallow: /s3kr3t/

      trawler: "Hey cool, thx for the tip I never would have thought to try /s3kr3t/"
    • Doesn't observe it? It probably relies on it - tells you where the good stuff is!
    • The Deep Web, aka crapflooding submission forms
  • ignore robots.txt (Score:1, Informative)

    by Anonymous Coward
    These new deep-web crawlers try to ignore the robot access control files. They try to determine intelligently whether they're in some type of infinite looping situation, but basically this is how they work.
  • Damn ... (Score:2, Funny)

    by Anonymous Coward
    I remember browsing the WWW directory in '93 and being able to scroll through all the sites on my VAX session at university. Are you telling me I am one of the few people who actually ever reached the end of the internet?
  • by stienman ( 51024 ) <adavis&ubasics,com> on Tuesday March 09, 2004 @08:54AM (#8509055) Homepage Journal
    Will access to this new level of specific information change how we deal with companies, governments and private institutions?"

    Yeah. It means I'll be able to use someone else's credit card for more of my transactions, since finding credit cards, SSNs and other...uh...'deep web' stuff will be so much more accessible.

    -Adam
    • by dsanfte ( 443781 ) on Tuesday March 09, 2004 @09:17AM (#8509236) Journal
      I wish you luck using that credit card number without the appropriate expiration date. The FUD spreaders rarely mention the fact that exp dates are almost never stored with the numbers themselves.
      • The FUD spreaders rarely mention the fact that exp dates are almost never stored with the numbers themselves.

        If by "almost never" you mean "usually", I'd be inclined to agree with you.

        We're talking about application designers that are foolish enough to store credit card numbers in a publicly accessible location to begin with. Do you really think any of them have given thought to deliberately obfuscating the data model enough to store expiration dates somewhere other than right next to the CC numbers and
    • So are you implying that your credit card information is currently available on web pages, with no password protection, and the only thing stopping hackers is that it isn't listed in a search engine?
  • Deep Web? (Score:2, Funny)

    by dingo ( 91227 )
    Why do I get the feeling that you will get a lot more search results for Linda Lovelace when searching the "Deep Web"?
  • by Anonymous Coward
    If you don't want it indexed and looked at, don't put it on the web in the first place.
  • Deep web? (Score:5, Funny)

    by hookedup ( 630460 ) on Tuesday March 09, 2004 @08:55AM (#8509061)
    Doesn't crap sink? Not sure I want to know what the other 90-odd percent is. After tubgirl, goatse, etc., what else could possibly be next?
  • deep web? (Score:5, Funny)

    by rjelks ( 635588 ) on Tuesday March 09, 2004 @08:55AM (#8509065) Homepage
    Is it just me, or does this sound like we're gonna get more pr0n when we search?

    -
  • No... (Score:1, Interesting)

    by Anonymous Coward
    but it will get us 90% more useless results. The regular search spam on Google is bad enough (it's getting to the level of bad results AltaVista had before Google took over the throne) without this extra noise...
  • so maybe that's why google never tells me anything about servicing this teletype machine...

    it's amazing to think how much more information we'd have access to if google (or another search engine) could search 90% of what's out there. i mean, just at 1% we already say, "google knows all"
  • by robslimo ( 587196 ) on Tuesday March 09, 2004 @08:56AM (#8509072) Homepage Journal
    ...but I don't want to see the guts of a web form. If I understand correctly, they're talking about crawling into databases, actually parsing a Microsoft Access file, for instance. I see that as having dubious merit, and potentially pissing off web site owners. Web site designers go to a lot of trouble to provide the interface they want you to see to their data. This would just sidestep the interface and dump you into the data.

    At the very least, it might require an overhaul of or extension to the robots exclusion specification to keep spiders out of your data.
    • There is plenty of very good information out there that isn't indexed. For example, I found a lot about the top level finances of my company, including compensation of the president and vice presidents, that was made a matter of public record when they filed the information as part of an IPO. However, unless I had found the IPO on the SEC website because I found a financial site that let me search for IPOs, I would have never known that the information was available to the public. No search engine would fin
  • by oneiros27 ( 46144 ) on Tuesday March 09, 2004 @08:57AM (#8509082) Homepage
    Of course, it's nice to know that the content's there, but how many children are now going to be able to bypass the disclaimer pages on porn sites because of deep linking?

    I couldn't care less about Ticketmaster whining about deep linking, but there's probably some stuff out there that may cause problems if it isn't reached through its intended point of entry.

    I'm afraid that this is going to give people more reason to go back to using frames, and 'detecting' if their content has been hijacked, and writing more bad code that causes multiple windows to pop up all over the place, and/or crash browsers.
    • Of course, it's nice to know that the content's there, but how many children are now going to be able to bypass the disclaimer pages on porn sites because of deep linking? ... because so many teenage children will be deterred by the disclaimer. "Oh, damn, I'm not 18 so I can't see her titties".

      Also there is nothing stopping the sites from checking the referrer to display the disclaimer on first EXTERNAL entry. Also, search engines at present are hardly intelligent enough to automatically avoi
      • by oneiros27 ( 46144 ) on Tuesday March 09, 2004 @10:11AM (#8509834) Homepage
        It's rather stupid, but it has to do with legal practices.

        If you have no warnings, then someone can claim that you forced your content on them, and they didn't know what they were getting into, and it was offensive.

        Putting up warnings which inform the user that they shouldn't enter your site if it's illegal for them to do so shifts part of the burden of responsibility to them, and away from you.

        So, if you're sued for having distributed offensive material, you can claim that you provided warnings, and that the person chose to disregard them. [Sort of like putting up 'wet floor' signs -- if someone gets hurt, they made an active decision to ignore the sign]
    • > but how many children are now going to be able to
      > bypass the disclaimer pages on porn sites because
      > of deep linking?

      How many children want to read a disclaimer page anyway? Or agree that they are not old enough to do something?
    • by CAIMLAS ( 41445 ) on Tuesday March 09, 2004 @01:07PM (#8511381)
      but how many children are now going to be able to bypass the disclaimer pages on porn sites because of deep linking?

      Hello, 1996 is calling; they want their paranoia back!

      Goodness, you aren't serious, are you? Have you used a search engine in the last couple years? Have you not ever looked for porn yourself? Just hop over to images.google.com and enter the name of a porn star - bam, shitloads of smut. Not only that, but search google.com for a porn star's name (many of which you could easily find by searching for 'famous porn stars', I'm sure) and you'll find gallery after gallery of porn, open and free.

      There is no such thing as protecting your kids from porn on the internet anymore. If you don't want to have them looking at porn, don't let them online or police their actions.
  • PHP? (Score:5, Interesting)

    by TGK ( 262438 ) on Tuesday March 09, 2004 @08:57AM (#8509086) Homepage Journal
    Since I moved my site over to a PHP-based system, nothing beyond my index page gets a second look from Google. As web content moves away from static pages to more dynamic solutions (particularly XML), a more sophisticated crawler is needed, one that can read over this bewildering maelstrom of data and extract from it meaning and content.

    While I find it highly unlikely that this system will do well with large databases (or even databases at all for that matter) it is a step in the right direction. Google will probably have their version up on labs inside a month.

    • Since I moved my site over to a PHP-based system, nothing beyond my index page gets a second look from Google

      Perhaps you are doing something wrong? All the dynamic PHP sites I know of are fully indexed by Google.
      • Re:PHP? (Score:2, Insightful)

        by andygrace ( 564210 )
        Well, the front pages might be, with a few top stories, but the real problem lies in getting at all the information that is stored in SQL databases ...

        There are reams of stuff in there that a search engine can't see. XML could be used to deep-search these entire databases, rather than just the stuff that's pulled into the UI by the PHP code.

        • Re:PHP? (Score:5, Informative)

          by Xner ( 96363 ) on Tuesday March 09, 2004 @09:17AM (#8509243) Homepage
          I'm not exactly sure what you mean. If it is accessible by clicking on links, most search engines should be able to index it. If you want to be extra-friendly you can use $PATH_INFO to make dynamic pages look more like static ones, e.g.:

          http://site.com/blah/prog.php/stat/1
          instead of
          http://site.com/blah/prog.php?stat=1

          I use it all the time and it works really well.
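          For anyone who hasn't tried it, a minimal sketch of what prog.php might do with that extra path. It assumes the server actually passes the trailing segments through as PATH_INFO (which depends on the Apache/PHP configuration); the parameter-pairing scheme is just one way to do it.

          <?php
          // prog.php -- the $PATH_INFO trick described above.
          // A request for http://site.com/blah/prog.php/stat/1 arrives with
          // $_SERVER['PATH_INFO'] set to "/stat/1"; turn that back into the
          // parameters the script expects, so spiders see static-looking URLs.
          $params = array();
          if (!empty($_SERVER['PATH_INFO'])) {
              $parts = explode('/', trim($_SERVER['PATH_INFO'], '/'));
              for ($i = 0; $i + 1 < count($parts); $i += 2) {
                  $params[$parts[$i]] = $parts[$i + 1];   // /stat/1 -> array('stat' => '1')
              }
          } else {
              $params = $_GET;                            // fall back to the ?stat=1 form
          }

          $stat = isset($params['stat']) ? (int) $params['stat'] : 0;
          echo "Showing stat $stat\n";
          ?>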

    • Re:PHP? (Score:5, Interesting)

      by DeadSea ( 69598 ) * on Tuesday March 09, 2004 @09:14AM (#8509218) Homepage Journal
      Keep in mind that googlebot comes in two flavors, freshbot, and deepbot.

      Freshbot is meant to update the google cache for pages that change frequently. Freshbot may pull pages as much as every couple hours for really popular pages that change frequently.

      Deepbot goes out once every month or two and follows links. The higher your pagerank, the deeper into your site it will go. If you want more of your site to get crawled here are some tips:

      1. Make your pages *look* static (end in .html)
      2. Avoid CGI parameters except for handling form data (no ? in url)
      3. Put all pages in the document root, or in very shallow subdirectories. Google crawls less and less as the directories get deeper.

      It is likely that deepbot just hasn't run since you updated your site, so freshbot is just pulling your front page occasionally.

      BTW: I noticed you have a link to my cheat sheet on your links page. Thanks! :-)

        1. Make your pages *look* static (end in .html)

        Another way of looking static is to use a, say, "index.cgi" within a subdirectory, and then only link to the subdirectory name. For example, a typical month's archive at my site kisrael.com has a URL like http://kisrael.com/arch/2004/03/ even though it's all dynamically generated. (I wasn't smart enough and/or didn't have enough access to my rented webserver to pull off that trick where that URL ends up going to, say, arch/index.cgi and /2004/03/ get interpret

      • > 1. Make your pages *look* static

        I have not run across a lot of pages that actually need to be dynamically generated. Shopping carts and account settings need it, but if you make everything dynamic, like most misguided web developers do these days, you simply succeed at slowing your site down to a crawl and evoking a long stream of curses from people like me, who still think that broadband access is not worth $60 a month.
    • As web content moves away from static pages to more dynamic solutions (particularly XML), a more sophisticated crawler is needed, one that can read over this bewildering maelstrom of data and extract from it meaning and content.

      It's all in how you build your pages.

      For PriorArtDatabase.com [priorartdatabase.com] there is only a handful of actual 'pages' ... everything is actually pulled from source XML files. But the URLs are created in such a way that it appears to be separate pages to a search engine. I've seen the googlebot

    • Since I moved my site over to a PHP-based system, nothing beyond my index page gets a second look from Google.

      Have you considered using mod_rewrite or a similar solution to convert your complex URLs with query string parameters aplenty into something that looks like a vanilla filepath?

      For example, using mod_rewrite the URL of the page I'm typing this on

      http://slashdot.org/comments.pl?sid=99804&op=Reply&threshold=3&commentsort=0&mode=flat&pid=8509086

      could be rewritten to look like
  • From the article (Score:5, Insightful)

    by sczimme ( 603413 ) on Tuesday March 09, 2004 @08:58AM (#8509089)

    Those of us who place our faith in the Googlebot may be surprised to learn that the big search engines crawl less than 1 percent of the known Web. Beneath the surface layer of company sites, blogs and porn lies another, hidden Web. The "deep Web" is the great lode of databases, flight schedules, library catalogs, classified ads, patent filings, genetic research data and another 90-odd terabytes of data that never find their way onto a typical search results page.

    There is a reason for this: a Google search should turn up pointers to the items in the so-called "deep web" (*gag*). To use one of the examples above: if I am looking for information on patents, the search terms I use should point me to the US Patent and Trademark Office [uspto.gov]. It shouldn't have to point me to all 12 bajillion patent filings.

    Besides, what makes anyone think this is going to fly after all the hubbub over "deep-linking"?
    • Right.
      But if you are interested in a specific subject..
      Let's say you have a technical problem.
      Chances are somewhere on the planet someone submitted the same problem on a web-based forum.

      Now you want google to give you THAT specific message.
      You don't want google to tell you "hmmm... I guess the solution must be in one of those zillions of forums here, here, and here".
  • Spiders? (Score:4, Interesting)

    by Vo0k ( 760020 ) on Tuesday March 09, 2004 @08:58AM (#8509095) Journal
    ...and I wonder about something different.
    Has anyone tried this yet? Change your user agent string to one matching the googlebot and crawl the web. I'm pretty sure many "registration only" websites would magically open themselves, but I wonder about other differences too :)
    • Re:Spiders? (Score:3, Interesting)

      by MyHair ( 589485 )
      Good question. I haven't tried it yet, but I've run into several sites that Google indexes but the site refuses me entry until I register (which I don't). Some of them are clever enough to put Javascript (or something) in to prevent you from looking at Google's cache of that page. Yeah, I could get around that, but usually by then I figure I don't care what that site has to say.
    • Re:Spiders? (Score:3, Informative)


      I can't speak for everyone, but here we check not only a spider's User Agent string, but also whether the request is coming from Google's IP range or elsewhere. So your results may not be so great.

      Then again, I've defeated many registration (er, pr0n) gateways by just setting a Referer header identical to the URL I'm requesting, so some defenses are better than others...
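      A rough PHP sketch of that kind of server-side check follows. The reverse-then-forward DNS confirmation is just one way it could be done, not a claim about how any particular site actually does it.

      <?php
      // Don't trust the User-Agent string alone: confirm the client resolves
      // back to a Google hostname before treating it as the real Googlebot.
      function looks_like_googlebot() {
          $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
          if (stristr($ua, 'Googlebot') === false) {
              return false;                        // doesn't even claim to be Googlebot
          }
          $ip   = $_SERVER['REMOTE_ADDR'];
          $host = gethostbyaddr($ip);              // reverse lookup
          if (!preg_match('/\.google(bot)?\.com$/i', $host)) {
              return false;                        // reverse DNS isn't a Google host
          }
          return gethostbyname($host) === $ip;     // forward lookup must match
      }

      if (looks_like_googlebot()) {
          // serve the full article to the crawler
      } else {
          // show the registration wall to everyone else
      }
      ?>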

    • Change your user agent string to one matching the googlebot.... I'm pretty sure many "registration only" websites would magically open themselves...

      Indeed. I do exactly this to access the Insiders Only content on IGN [ign.com]. (You'll also need to disable javascript.) I'd feel bad about it, but these pricks clearly intend to deceive. I find links to interesting content through Google, but the link leads somewhere else. I don't mind paid content (I pay for two online magazines), but attempting to mislead both G

  • I'd happily pay Google a monthly fee to gain access to extensive databases of information that take money to acquire and maintain... as long as this fee was reasonable. The current Google searches should stay as is, but if people want access to do a time-consuming search on every single slashdot message ever posted, for example, the advertising would not pay for this effort. However, I wouldn't pay Yahoo! for this in a million years. Premium google searches might include pages not ranked high enough to be
  • Privacy and Crap (Score:3, Interesting)

    by jackb_guppy ( 204733 ) on Tuesday March 09, 2004 @09:00AM (#8509110)
    Going after the other 90% does not mean that new things will come to the top. Oh, there may be a few cool items like "Who really shot JFK" or the launch codes for a Trident.

    But in reality the other 90% would most likely be best left un-found. Who really wants to know that their parents were not married in the manner they always claimed?

    Just as in archaeology, you will find a nice vase or two... but the rest is rubble.

    You understand that digging a garbage dump is the best place to find things in archaeology, because that is where everything people cleaned out of their houses ended up. That is what the other 90% is... a dump of information.
  • Google (Score:3, Insightful)

    by nycsubway ( 79012 ) on Tuesday March 09, 2004 @09:02AM (#8509121) Homepage
    Generally, google finds the pages that the authors want to be searched. That's why you submit your site to google. Even if you don't submit your site to google, if it's on a domain that google searches and there is a link to it, it'll be found.

    With google storing more than 4 billion web pages, I'd hate to see what kind of crap the other 99% is.

    Perhaps they count each iteration of a dynamic page as a separate page? Even so, google's news page does a great job searching in real time for pages that change dynamically.

  • Top 4 (Score:5, Informative)

    by UncleBiggims ( 526644 ) on Tuesday March 09, 2004 @09:02AM (#8509123)
    About.com lists the top 4 places to search the deep web. Anybody use any of those sites? Are they any good? Just wondering why this is getting to be news if sites like these already exist.

    Are you Corn Fed? [ebay.com]
  • 1 percent? (Score:5, Insightful)

    by zonix ( 592337 ) on Tuesday March 09, 2004 @09:02AM (#8509125) Journal
    The article alleges that current search services like Google manage to access less than 1% of the web [...]

    1 percent, and I still don't have a problem feeling lucky almost every time I do a search on google.

    z
  • Relevancy (Score:4, Insightful)

    by Traicovn ( 226034 ) on Tuesday March 09, 2004 @09:02AM (#8509126) Homepage
    Judging by the problems with relevancy that often occur in current search engines (I think of meta keywords, which are now completely useless to many search engines, and of google-bombing), why would a customer pay to add more data to the search engine? The idea, of course, is 'because they'll be more relevant, and because they have more information they will come up more often'. However, if search engines start searching more and more of this 'deep web', how badly will relevancy be affected? The more data that is in there, the more chances there are of relevancy being broken, and if the weighting favors these 'featured' searches, then relevancy may be even more broken. Sure, these companies will have more traffic directed to them, but will it merely be useless traffic from frustrated users searching for something else?

    I run a search engine for an educational institution, and I will admit, Google misses a significant number of our documents; on the other hand, some of those documents are scripts that when queried will create a (virtually) infinite amount of data (calendar scripts, etc). How deep do we really need to go, though? Do we really need to include calendar entries for the year 2452?

    I'm also confused: is this search service 'pay by the searcher' or 'pay by the content provider'? It seems to be content provider to me.
  • by PingKing ( 758573 ) on Tuesday March 09, 2004 @09:07AM (#8509169)
    One limitation of Google is the fact that a site that bases its navigation on a drop-down menu or submission form (i.e. choose a section from the list and click Go) cannot be spidered by Google.

    Personally, I find this infuriating. A site I once worked on was available in numerous languages, which could be chosen from a drop-down list box. The upshot of this is that Google has only cached the site in English, meaning users of the other languages do not get my site returned when they search in Google.

    We need an open-source alternative that can address these problems, as well as get rid of the security concerns and mysterious methods Google uses to rank sites.
    • by Stiletto ( 12066 ) on Tuesday March 09, 2004 @09:33AM (#8509401)

      Solution: Web designers, stop trying to be so clever.

      If you want your site to be spiderable, don't hide it behind javascript and flash!
    • I considered this very thing when designing my webpage, where the menus are javascript-drawn.

      My solution: load the links normally inside a <div id=...>, but after the page loads and the JS menus are drawn, replace the contents of the DIV using the innerHTML property. Consequently, web spiders are able to crawl down to my sub-pages despite not having JS (not that any engines *have* crawled them, mine's just a small personal site hosted on my university account, please don't /. it!), but anyone vis
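      In the same spirit, a small PHP sketch of giving spiders plain links alongside a drop-down. The language list and URL scheme are invented; a <noscript> block (shown here) or a <div> that client-side script later replaces, as described above, both work as the container.

      <?php
      // Invented language list for illustration.
      $languages = array('en' => 'English', 'fr' => 'Francais', 'de' => 'Deutsch');

      // The drop-down that human visitors use.
      echo "<select name=\"lang\" onchange=\"location='/index.php?lang='+this.value\">\n";
      foreach ($languages as $code => $name) {
          echo "  <option value=\"$code\">$name</option>\n";
      }
      echo "</select>\n";

      // Ordinary anchors with the same destinations -- the only thing a
      // crawler can actually follow.
      echo "<noscript><p>\n";
      foreach ($languages as $code => $name) {
          echo "  <a href=\"/index.php?lang=$code\">$name</a>\n";
      }
      echo "</p></noscript>\n";
      ?>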
  • Article (Score:3, Informative)

    by Anonymous Coward on Tuesday March 09, 2004 @09:09AM (#8509187)
    When Yahoo announced its Content Acquisition Program on March 2, press coverage zeroed in on its controversial paid inclusion program, whereby customers can pony up in exchange for enhanced search coverage and a vaunted "trusted feed" status. But lost amid the inevitable search-wars storyline was another, more intriguing development: the unlocking of the deep Web.

    Those of us who place our faith in the Googlebot may be surprised to learn that the big search engines crawl less than 1 percent of the known Web. Beneath the surface layer of company sites, blogs and porn lies another, hidden Web. The "deep Web" is the great lode of databases, flight schedules, library catalogs, classified ads, patent filings, genetic research data and another 90-odd terabytes of data that never find their way onto a typical search results page.

    Today, the deep Web remains invisible except when we engage in a focused transaction: searching a catalog, booking a flight, looking for a job. That's about to change. In addition to Yahoo, outfits like Google and IBM, along with a raft of startups, are developing new approaches for trawling the deep Web. And while their solutions differ, they are all pursuing the same goal: to expand the reach of search engines into our cultural, economic and civic lives.

    As new search spiders penetrate the thickets of corporate databases, government documents and scholarly research databanks, they will not only help users retrieve better search results but also siphon transactions away from the organizations that traditionally mediate access to that data. As organizations commingle more of their data with the deep Web search engines, they are entering into a complex bargain, one they may not fully understand.

    Case in point: In 1999, the CIA issued a revised edition of "The Chemical and Biological Warfare Threat," a report by Steven Hatfill (the bio-weapons specialist who became briefly embroiled in the 2001 anthrax scare). It's a public document, but you won't find it on Google. To find a copy, you need to know your way around the U.S. Government Printing Office catalog database.

    The world's largest publisher, the U.S. federal government generates millions of documents every year: laws, economic forecasts, crop reports, press releases and milk pricing regulations. The government does maintain an ostensible government-wide search portal at FirstGov -- but it performs no better than Google at locating the Hatfill report. Other government branches maintain thousands of other publicly accessible search engines, from the Library of Congress catalog to the U.S. Federal Fish Finder.

    "The U.S. Government Printing Office has the mandate of making the documents of the democracy available to everyone for free," says Tim Bray, CTO of Antarctica Systems. "But the poor guys have no control over the upstream data flow that lands in their laps." The result: a sprawling pastiche of databases, unevenly tagged, independently owned and operated, with none of it searchable in a single authoritative place.

    If deep Web search engines can penetrate the sprawling mass of government output, they will give the electorate a powerful lens into the public record. And in a world where we can Google our Match.com dates, why shouldn't we expect that kind of visibility into our government?

    When former Treasury Secretary Paul O'Neill gave reporter Ron Suskind 19,000 unclassified government files as background for the recently published "Price of Loyalty," Suskind decided to conduct "an experiment in transparency," scanning in some of the documents and posting them to his Web site. If it weren't for the work of Suskind (or at least his intern), Yahoo Search would never find Alan Greenspan's scathing 2002 comments about corporate-governance reform.

    The CIA and Dick Cheney notwithstanding, there is no secret government conspiracy to hide public documents from view; it's largely a matter of bureaucratic inertia. Federal information technology organizations may not solve that proble
  • Bad kitty! (Score:4, Interesting)

    by Underholdning ( 758194 ) on Tuesday March 09, 2004 @09:17AM (#8509235) Homepage Journal
    There's a perfectly good reason why a webcrawler doesn't (and shouldn't) crawl the backend databases. I have customers with items and prices in their database. They update that on a daily basis. I have customers that provide directory solutions. We update that information on a daily basis. Now, imagine the turmoil that will arise when people find outdated items using their favorite search engine, which crawls the database once in a blue moon. Nuff said. Bad idea.
    • Re:Bad kitty! (Score:3, Informative)

      by cowscows ( 103644 )
      Exactly. The article mentions things like flight schedules and classified ads. Those sorts of rapidly and constantly changing information sources need a completely different system to effectively search them. Fortunately, those have already been invented. Orbitz, CheapTickets and Expedia are a few of the many that handle flight schedules. Any website for a local newspaper probably does a decent job with classified ads.

      If I want to find cheap airline tickets, I put "airline tickets" into google, and it'll give me
  • by Alomex ( 148003 ) on Tuesday March 09, 2004 @09:17AM (#8509245) Homepage
    The article alleges that current search services like Google manage to access less than 1% of the web.

    There's a useless statistic if you ask me.

    I just wrote a cgi script that, when the URL "http://bogus.com/nnnnn" is requested, returns a page with the text "nnnnn", where nnnnn is any number up to 1000 digits long. So there, I just added 10^1000 pages to the "deep web" of which google indexes none! (gasp).

    So there, Google now indexes less than 0.001% of the deep web.
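    For the curious, the whole thought experiment fits in a few lines of PHP. The script name, hostname and URL layout are invented, and it assumes the number arrives via PATH_INFO:

    <?php
    // bogus.php -- one script, an effectively unbounded number of "pages".
    // http://bogus.example/bogus.php/31337 just echoes 31337 back.
    $n = isset($_SERVER['PATH_INFO']) ? trim($_SERVER['PATH_INFO'], '/') : '';
    if (!preg_match('/^[0-9]{1,1000}$/', $n)) {
        $n = '0';
    }
    echo "<html><body><p>$n</p></body></html>\n";
    // Every distinct number is a distinct URL, which is exactly why counting
    // "unindexed pages" is a meaningless exercise.
    ?>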

  • by andygrace ( 564210 ) on Tuesday March 09, 2004 @09:19AM (#8509260)
    I don't think most posters understand the issue - most websites are now run out of content management systems, and search engines just trawl the web storing current pages. This is fine for a static internet, but with pages changing on a minute-by-minute basis - for example a news site that pulls out the latest headlines - all you're going to have indexed in Google is what's on the page today.

    Now say I was looking for info from a few weeks ago - Google is not necessarily the best way of finding this info. It's all still sitting there in the database, but it's not on the site's front page. archive.org may have a copy of it, but it would be much better to have google.com talk XML via a standard method to the news site's content management system, and have ALL the data there for a search.
    • it would be much better to have google.com talk XML via a standard method to the news site's content management system, and have ALL the data there for a search.

      Then what would be the user's motivation to come to the news site, and spend any time there? They could just go to Google and leech all the same content for free.

  • Funny (Score:5, Interesting)

    by BenBenBen ( 249969 ) on Tuesday March 09, 2004 @09:21AM (#8509270)
    Google's always been [google.com] good [google.com] enough for me.
  • the internet is only 90 terabytes?

    that is what Salon says, and I think that is bull, given my favorite porn site offers 20 gigs of raunchy action.


  • So instead of 5,234,169 search results returned, we will see 45,961,384 results?

    Yippee!!!!!

  • The article alleges that current search services like Google manage to access less than 1% of the web

    Surely that should be 10%, given the 90% statistic mentioned later on?
  • by saddino ( 183491 ) on Tuesday March 09, 2004 @09:53AM (#8509616)
    99% of the "deep web" probably looks like this [kentlaw.edu]. Indexable? Sure. Necessary? No.
    • Oh, you can do better than that. Consider this site [siroker.com]. How deep is it? As deep as you want it to be. Useful? Less so.

      I remember one that actually did sentence fragments but I can't find it in Google. (Probably because the search terms I'm using are flooded with other relevant hits.)
  • How?? (Score:3, Interesting)

    by Haydn Fenton ( 752330 ) <no.spam.for.haydn@gmail.com> on Tuesday March 09, 2004 @10:01AM (#8509709)
    I think I have a pretty good understanding of how Google works.

    People submit their site, google goes to their site and visits every link it can find on the main page, then every link it finds on those other pages etc. So that pretty much the whole site is included.

    This obviously means pages which are not linked do not get included in Google's search, so I'm not surprised at the fact that less than 1% is ever crawled.

    So how does this new method of crawling work? How can it possibly know what files are on the server if they are not linked in any way. The only way I can think of is a brute-force type method, which seems extremely stupid to me, since that would consume so much of the search engine's resources.

    This also brings me onto the next point. Like a few people have mentioned, there are certain pages on the web which append strings onto the end or the beginning of the URL, for example yourname.ismyfriend.com or www.somegamesite.com/attack.php?player=bob&attacks=5 - so how many variations would the crawler decide were enough before moving on to the next link?

    Also, since most of the internet is porn, and this new-found technology will reveal another 90 percent or so of the internet, are we suddenly going to be showered with explicit sites?
    • Re:How?? (Score:5, Interesting)

      by MImeKillEr ( 445828 ) on Tuesday March 09, 2004 @10:41AM (#8510037) Homepage Journal
      People submit their site, google goes to their site and visits every link it can find on the main page, then every link it finds on those other pages etc. So that pretty much the whole site is included.

      Google doesn't just search pages submitted - I've got an Apache webserver running at home, doling out pages for family photos and stats for a local UT2K3 server. I hadn't set up a robots.txt to stop search engines from crawling it (didn't think I needed to), and one day I entered my URL in google, only to find it.

      I've never submitted the URL to google.

      Should we assume that Google's already crawled a majority of the sites out there?

      BTW, Yahoo has no record of my site in their database.
      • > I've never submitted the URL to google.

        Google has a submission page, but it doesn't really do much. The way it works is that a page gets indexed if and only if an inbound link is found in Google's current index.

        That means... yes, there are a number of pages that are not indexed in Google, simply because no page anywhere links to them.
    • As you've said, web spiders typically work by following links from one page to another.

      But "a href" is not the only way to get from page to page on the Web. There are also form submits, DHTML, and a hundred varieties of Javascript tricks and techniques.

      Deep-linking would presumably try to simulate human interaction well enough to take advantage of these more complex methods. For closed-ended systems, e.g. selecting one option from a pull-down menu, deep-linking will probably work well, but for more open-e
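      As a rough illustration of the closed-ended case, here is a PHP sketch that pulls the option values out of one drop-down and fetches a result page per value. The URLs, the field names and the assumption that the form submits via GET to results.php are all invented, and real deep-web crawlers would need far more robust parsing and politeness.

      <?php
      // Fetch a page with a <select>, enumerate its options, and request the
      // result page for each one -- the simplest possible "form surfacing".
      $page = @file_get_contents('http://example.com/search.html');   // invented URL
      if ($page === false) {
          die("couldn't fetch the form page\n");
      }

      // Very naive parsing: the select's name and its option values.
      preg_match('/<select[^>]*name="([^"]+)"/i', $page, $sel);
      preg_match_all('/<option[^>]*value="([^"]*)"/i', $page, $opts);

      if (!empty($sel[1])) {
          foreach ($opts[1] as $value) {
              $url    = 'http://example.com/results.php?' . urlencode($sel[1]) . '=' . urlencode($value);
              $result = @file_get_contents($url);
              echo "$url -> " . ($result === false ? 'failed' : strlen($result) . " bytes") . "\n";
          }
      }
      ?>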
  • another form of DOS (Score:2, Interesting)

    by ramar ( 575960 )
    If the boys with fat pipes start indexing "deeper" into sites, I think we're going to see a lot of sites going offline until they've been refactored to handle this sort of thing.

    The frontend webservers that serve the static pages are fine (they're already being spidered now), but the dynamic content, largely dependent on databases and such, very likely wasn't built to handle this sort of load. Once the new engines get their hooks into these pieces, they're going to be in trouble.
  • So Google and Yahoo want to suck all the data out of my database, eliminate the middle man (me and my crazy web page interface to my data), and serve the world my data, denying me the ability to interact with my own customer base?

    I just don't think that is going to fly.

  • ...results in more porn I'm all for it. You can never have too much porn.

    Max
  • On a related note... (Score:5, Interesting)

    by cr0sh ( 43134 ) on Tuesday March 09, 2004 @01:05PM (#8511352) Homepage
    What about the "invisible web"?

    The so-called invisible web is indirectly related to the "deep web", with the exception that most of it isn't connected at all to the main web. Slashdot has had some articles regarding these hidden segments of the web - but has any progress been made on finding these "lost networks"?

    Current theory on networks explains how and why these networks form and separate from the main web of connections, mainly due to loss of one of the tenuous threads from a supernode to the outlier nodes. When this loss occurs (an intermediary site goes offline, or popularity wanes, or a large meganode dies or stagnates), the network fragments - and getting back to the pages/sites within is nearly impossible, unless you already have a link to the inside, or a friend provides it to you.

    Now, it is a good thing that this phenomenon exists - it seems to exist in all robust, evolving networks - whether those networks be electronically connected, socially connected (i.e., Friendster, Orkut, or plain-ole social groupings), or bio/chemo connected (i.e., the brain, the body, etc).

    Even so, I wonder at all the information out there which I *can't* access, because it isn't indexed in some way. Sometimes you come across fragments and echoes in other archives (news, mail, irc) that lead to these far-off and displaced "locations" - but it is rare, and tedious to do unless you are looking for very needful information.

    So I ask again, has anything been done to further the "searching" within/for the "invisible web"?

    • That's an interesting question, but ultimately, I can't see there being anything interesting in these invisible sections.

      It will only take one link to reconnect a separate section. While this may not be much for many networks, with search engines that walk the entire network it's then going to re-enter the indices. At that point, it's connected by more than one link, and thus a bit more robust.

      So, these invisible sections will only contain things that no one links to - which is a pretty good definiti
  • Why does everyone assume the top 10% of results on Google must be all the best information? Some people even say that in the same breath as they complain about Google spam. Ridiculous!

    The fact is there is TONS of great independently published stuff that will never be found through Google because the author doesn't take the time to play the SEO game and advertise their page all over the web. Google's algorithm is far from the final word in relevancy algorithms. The evolution will continue until we have sea
  • google should start a 'google development' search engine. normal google would still be available, but the googledev would have the same initial database, but use different algorithms and procedures with which it would classify material, thus yielding different results for the same searches... 'cutting edge' google. or it could even have its own search crawler, for that matter. that way they can start finding new ways to combat spammers.
  • I've had a lot of actual experience with this. I've been researching a bunch of stuff on the history of Quebec city, and been using the Internet for most of it. Using Google and a few other search engines, I'll find a lot of information but most of it is second-hand, urban legend and, often, completely wrong. Not that I don't expect that, but I'd also expect to find good sources listed; they do exist.

    For example: try finding a biography on 'Louis Hebert' on the net. You'll find a few pages, some of them go
  • A couple of years ago, I went to the H2k2 [h2k2.net] conference here in New York City. I saw a fascinating talk there where I first heard the term "deep web" and some of its ramifications for national security. National security was very much on our minds at the time being only roughly a mile and a half from what we call "Ground Zero" (never liked that term).

    The guy giving the speech claimed that he was a retired FBI agent and seemed to have a great deal of insight into the inner workings of national intelligence. As
