Building a Bigger Search Engine

skreuzer writes "Wired is running a story about a distributed web crawler called Grub. People who choose to download and run the client will assist in building the Web's largest, most accurate database of URLs. This database will be used to improve existing search engines' results by increasing the frequency at which sites are crawled and indexed. Conceivably, Grub's distributed network could enable state information to be gathered on every document on the Internet, each and every day."
  • by Blaine Hilton ( 626259 ) on Saturday April 19, 2003 @10:17PM (#5766968) Homepage
    I started to use Grub, but then questions started cropping up. First, this furthers a commercial organization. It is not research like SETI or Folding@Home; it is doing the dirty work of a large commercial search engine. There is not even any potential reward such as with

    Also, the Grub engine crawls everything, including adult content and other questionable content. They have a setting to turn that off, but it does not actually block it. With the current international legal questions around accessing illegal websites, this could have major consequences for the average user.

    So for the time being I have stopped using the Grub client until some serious questions are answered. It's an interesting concept, and if it were being used in more of an academic setting it could be interesting. However, I believe that search engines like Google are doing pretty well on their own.

    Go calculate [] something

  • by dtolton ( 162216 ) on Saturday April 19, 2003 @10:17PM (#5766973) Homepage
    LookSmart hopes to tap the altruistic nature of many Internet users.

    That unfortunately seems like a naively optimistic hope. While the
    vast majority of people may be altruistic, it only takes a few
    unscrupulous individuals to completely undermine a fair result.

    It's interesting that this idea is an extension of Google's model in
    many ways. Essentially, Google is able to index so much of the
    internet by having 50,000+ servers. I don't think that's what makes
    Google such a useful search tool, though; rather, I think it's accuracy and
    relevancy. If my search results started getting polluted with bogus
    hits, I would stop using it almost immediately.

    Unfortunately, by letting people run the client on their machine and
    having it send the results back to the server, I think spoofed
    results are inevitable. I don't think it will be possible to
    safeguard the results either; it will be interesting to see how well
    this project survives *when* people start spoofing results. It's
    been a problem for SETI@home, and it's something that undermined some
    people's faith in that project as a whole. If the spoofed results are
    more widespread and have a larger impact, as they would in a system
    like this, it may ultimately prove fatal to the project.

    One factor that has been absolutely critical to Google's success has
    been their ability to remain resistant to spoofing attempts. It's
    still a question mark how well grub will perform in that context.

    • Looksmart (Score:3, Interesting)

      by Ark42 ( 522144 )
      Isn't Looksmart/Sprinks a big pay-per-listing deal? The looksmart logo in the upper right corner was enough to make me just close that page right away without any second thought.
    • Altruistic? (Score:5, Funny)

      by sulli ( 195030 ) on Saturday April 19, 2003 @11:44PM (#5767302) Journal
      That's the dumbest thing I've heard in ages. Why should I help out a for-profit company for free?

      (Oh, I can't remember. Have I MetaModerated Recently?)

      • Re:Altruistic? (Score:4, Insightful)

        by eversunsoft ( 651496 ) on Sunday April 20, 2003 @02:36AM (#5767769) Homepage
        Well, because web searching has, to this day, been a free service. If the index is built from donated crawling, it would be in ethically very bad taste to act against that trend.

        Of course, I am the first one to question this trend. Has anyone else considered the possibility that one day we'll wake up and notice that Google is charging for access to its basic searching services?

        I for one, would probably pay. I have become so dependent on it. What price? That's a good question...

      • Well why not? Is it better that your resources sit there idle helping nobody at all to do anything?
      • by R0 ( 40549 ) on Sunday April 20, 2003 @06:19AM (#5768144)
        Notice []
        The main executable has been renamed to "grubclient" out of respect for the GNU Grub bootloader, whose executable is named "grub". They were out first, so we decided to pick another name. If you have a catchy suggestion for a new name, please let us know.

        I nominate "parasite".
    • by Nickilo ( 636747 ) on Sunday April 20, 2003 @02:42AM (#5767781)
      "The General's Dilemma" would solve this problem. The story goes something like this: the general needs to get urgent information to one of his officers; however, he suspects saboteurs are present among his messengers. In order to ensure the information gets through accurately, he sends the same message with several men. The officer on the other end collects all the messages and goes with the majority. (And, presumably, kills the others.)
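      The majority rule in that story can be sketched in code. Here is a hypothetical verifier (the names are illustrative, not anything Grub actually ships) that accepts a crawl report only when a strict majority of independent clients agree:

```python
from collections import Counter

def majority_report(reports):
    """Accept the value a strict majority of independent reports agree on.

    `reports` might be content checksums for one URL, each sent by a
    different client; returns None on no majority (e.g. a tie), which a
    server could treat as "re-crawl this URL itself".
    """
    if not reports:
        return None
    value, count = Counter(reports).most_common(1)[0]
    # Strict majority, so a single spoofed report can never win.
    return value if count * 2 > len(reports) else None

# Three honest clients and one spoofer:
print(majority_report(["abc123", "abc123", "abc123", "spoof"]))  # abc123
# One honest client, one spoofer: no winner, the server must check itself.
print(majority_report(["abc123", "spoof"]))  # None
```

      The cost, as with the general's messengers, is that every page must be crawled several times over before any one result is trusted.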
  • by ( 114827 ) <> on Saturday April 19, 2003 @10:20PM (#5766978) Homepage
    So Grub goes out, uses bandwidth, and then returns some results to the home base. It's really distributed bandwidth more than distributed computation.

    I bet one of the big successes of projects like Folding@Home is that many people run the clients on work boxes, knowing that there's little actual overhead incurred on their work. How different that is for a URL sucker.

    I wonder what broadband ISPs think of Grub.

  • Haiku :-) (Score:5, Funny)

    by Ignorant Aardvark ( 632408 ) <cydeweys&gmail,com> on Saturday April 19, 2003 @10:20PM (#5766979) Homepage Journal
    Grub searches the web
    Sniffing out all the good porn
    Not just bootloader

    I love being a Slashdot subscriber - it gives me fifteen minutes to figure out a good joke before anyone has a chance to post!

    Seriously though, shouldn't they change the name? "GRUB" is already a bootloader. They should change the name ... and I have a suggestion. Has anyone written a program called "E-Coli" yet? No? I can just imagine my mom ...

    "Agh! You have E-Coli on your computer!"
    • by Anonymous Coward
      How about 'SARS'? Four letters, indicates something that spreads quickly...
    • by Anonymous Coward on Saturday April 19, 2003 @11:01PM (#5767141)
      Seriously though, shouldn't they change the name? "GRUB" is already a bootloader. They should change the name ...
      I'm wondering if the Grub bootloader developers will throw a tantrum and flood the Grub crawler developers' e-mail addresses, claiming that this will confuse people and harm the bootloader project.

      Hee hee.
    • by Unoriginal Nick ( 620805 ) on Saturday April 19, 2003 @11:09PM (#5767176)
      Seriously though, shouldn't they change the name? "GRUB" is already a bootloader. They should change the name ...

      How about Firebird? I'm sure that won't cause any problems :-)

  • Business Plan? (Score:2, Insightful)

    by Anonymous Coward
    What are sensible business plans for this type of endeavour?

    Should we expect to see many commercial efforts focussed on providing similar "crawl" or "index" capabilities, but each honed to a specific niche market? A scientific crawler? A retail links database?

    One could argue that similar efforts targeting music resources have resorted to less automated techniques, i.e. human-driven sharing.

  • by bergeron76 ( 176351 ) on Saturday April 19, 2003 @10:22PM (#5766990)
    until someone figures out a way to compromise their local client's results and "escalate" their fave URLs.

    It still sounds like a really cool idea though.

    • Grub's clients don't come up with a ranking for each website they crawl; rather, they check to see if the website has changed since the last time it was crawled. For any website that has changed, the client notifies the server. The search engine asks the server which sites in its index need to be updated, and the server gleefully replies.

      Clients artificially increasing their ranking isn't an issue, since the client has nothing to do with a site's ranking.
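      A minimal sketch of that change check, offered as a guess at the mechanism (the MD5 detail comes from another comment's speculation, and the function names here are invented):

```python
import hashlib

def page_changed(content: bytes, last_known_hash: str) -> bool:
    """True if the fetched page no longer matches the server's stored hash."""
    return hashlib.md5(content).hexdigest() != last_known_hash

def report_changes(assignments, fetch):
    """The server hands out (url, last_hash) pairs; the client fetches each
    URL and reports back only those whose content hash has changed."""
    return [url for url, last in assignments if page_changed(fetch(url), last)]

# Toy run with a canned "fetch" instead of real HTTP:
pages = {"http://example.com/a": b"old", "http://example.com/b": b"new"}
old_hash = hashlib.md5(b"old").hexdigest()
work = [("http://example.com/a", old_hash), ("http://example.com/b", old_hash)]
print(report_changes(work, pages.get))  # ['http://example.com/b']
```

      Under this scheme the client never ranks anything; it only answers "changed or not", which is consistent with the parent's point.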
  • by stock ( 129999 )
    Grub is the GRand Unified Bootloader, a GNU project, so the name is already taken.

    Hmm, search engine, eh? Why don't you call it Grab?


    • What's the deal with names lately? Who cares!

      I don't see Phoenix being used for BIOS and a browser as a problem, I don't see Firebird being used for a database and a browser as a problem, and I don't see grub the bootloader and grub the web spider conflicting. They're entirely different products, and there are only so many words out there. Here [] is one of a million examples of a name that is taken by tons of different companies.

  • by carl67lp ( 465321 ) on Saturday April 19, 2003 @10:23PM (#5766998) Journal
    1. Tech-savvy people will install this.
    2. Tech-savvy people tend to be loners.
    3. Loners most often search for porn.

    C1. Tech-savvy people search for porn.

    4. Items searched for most often reach the top of the list.
    5. Porn is searched for often by tech-savvy people.

    C2. Porn will be easier to find with this new search engine.

    Count me in!
  • This is going to challenge Google's search, which will entice them to cut loose some of those really cool Google Labs concepts. Froogle, Google News, and all of the other cool things they are working on are great services and are going to be the focus of innovation over at Google.

    Also, LookSmart needs to develop and release an API for this system. You can only use the Google API for 2,000 searches per day. If they allowed unlimited usage, it would get a lot of developer backing.

    • You can always use the Google API for more than 2,000 searches per day if you pay licensing fees for it. That's just Google ensuring that it can remain a viable company. Little text-box advertisements just don't cut it in this day and age where blatant pop-ups and colorful banner ads don't even have much turn-around. That's not the point though.

      The point is that I wouldn't look anytime soon for LookSmart to allow unlimited usage of this API. It's too large of a project for them to just let people use it.
      • Little text-box advertisements just don't cut it in this day and age where blatant pop-ups and colorful banner ads don't even have much turn-around.

        This I dispute, sir. Targeted keywords on Google, where my clickthrough ratio has averaged 1.3-1.5%, are a goldmine for my site and money very well spent (averaging $500 a month on those ads, paying $0.05 in 97% of all cases).

        I've been a Google advertiser since Feb. '02, consider their program extremely lucrative, and I guess they like me 'cause I got a picture

  • grub has been crawling my site for weeks if not months now. How is this news? Because someone at Wired wrote about it? Geesh.
  • Grub (Score:3, Funny)

    by squiggleslash ( 241428 ) on Saturday April 19, 2003 @10:27PM (#5767019) Homepage Journal
    Ok, so how are they going to store this giant search engine in the boot sector of an ordinary hard drive?

    Oh wait, you mean it's not related to GRUB, the Linux/etc boot loader. *slaps forehead* But I guess this solves everything - we can call Phoenix "Grub" too, and just treat it as the generic name to call everything we're having problems thinking up a name for...

  • Firewalls? (Score:5, Insightful)

    by adam_megacz ( 155700 ) <adam@ m e g a c> on Saturday April 19, 2003 @10:28PM (#5767021)
    So if I choose to run this client, how do I know that it won't accidentally index content that is only accessible from behind my firewall?
    • Re:Firewalls? (Score:4, Informative)

      by friedegg ( 96310 ) <bryan.wrestlingdb@com> on Saturday April 19, 2003 @10:40PM (#5767057) Homepage
      You can always put an entry in your robots.txt to block it.

      Actually, the robots.txt issue is one they're still working on. Right now it doesn't check the file very often, which upsets some webmasters.

      They're open to suggestions, so maybe you could suggest a list of blacklisted IPs/hostnames. I suggested they look into supporting gzip-compressed web pages, and they said they'd look into it.
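      In the meantime, webmasters who want grub out entirely can use a plain robots.txt rule, assuming the crawler honors the `grub-client` user-agent token it sends (a point other comments in this thread dispute):

```
User-agent: grub-client
Disallow: /
```

      Given the reports elsewhere in the thread that the file is only re-checked occasionally, the rule may take a while to bite.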
    • Re:Firewalls? (Score:2, Interesting)

      by GigsVT ( 208848 )
      If you knowingly run a program that openly spies on every page you go to, you get what you deserve.
  • Google Toolbar (Score:5, Interesting)

    by petree ( 16551 ) on Saturday April 19, 2003 @10:30PM (#5767035) Journal
    Couldn't Google [] do this anyway with the Google Toolbar []? With the advanced-features version, it tracks every page you visit. If they offered some incentive to install the toolbar, Google could just beat them at this game. I actually use the Google Toolbar already by choice (it makes my web searching more productive) every day; all they have to do is get lots of people using it, and wouldn't that work just as well or better?
    • Re:Google Toolbar (Score:5, Interesting)

      by Kelerain ( 577551 ) <> on Saturday April 19, 2003 @10:54PM (#5767112)
      This tracking is actually how a lot of important information leaks out. Security through obscurity has always been a poor man's system, and this busts it wide open. I won't post them here, but there are several interesting searches you can do that give personal results for things that REALLY have NO place on a publicly accessible page. On a more positive note, Google already uses distributed computing through their googlebar []. They donate the cycles to various worthy causes like Folding@Home (currently their only beneficiary), but it is conceivable that if they came up with some secure and useful search-related thing to do with the cycles, they could put it to use almost instantaneously. I don't think there are significant benefits (plenty of discussion elsewhere here) for them to want to use it, however.
    • If they offered some incentive to install the toolbar, google could just beat them at this game.

      Does being a kick-ass tool (for those unfortunate enough to be using Internet Explorer) count as incentive?
    • Grub appears to have more cross-browser and cross-platform support (the Google Toolbar only runs on Internet Explorer 5 for now). Grub runs on Linux and Windows, and since it isn't a browser plugin, it doesn't require you to have a certain browser.

  • ...rather a crawl with a distributed component.

    They use the screensaver grub clients to check if a web page has been modified since the last time it was crawled (by the centralized crawl done by LookSmart). They probably use some smart MD5 checksum of the pages and send that with the URLs to be crawled to the clients. If the checksum of what the grub client crawled doesn't match, the centralized crawl is instructed to re-fetch that URL.

    They go this route because the If-Modified-Since HTTP 1.1 request header isn't honored reliably by every server.
    • Not the greatest way of doing this. On one of the sites I maintain, the date shows up at the top of the page. The other content changes very infrequently in most cases (a few pages hit a news & events database, but that's about it). But the new date would be enough to change the checksum (unless they're allowing for it somehow).

      Grub hits us quite often. I've seen the same URL hit multiple times in one day by different hosts. It's ignoring the "revisit-after" meta tag (7 days), but then, so are most of the other crawlers.
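      One way a crawler could "allow for it", sketched purely as a guess (nothing in the article says Grub does this): mask obviously volatile fragments, such as printed dates, before checksumming, so cosmetic changes don't force a re-crawl.

```python
import hashlib
import re

# Illustrative pattern only: matches dates like "April 20, 2003".
DATE_RE = re.compile(
    r"(January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},\s+\d{4}")

def stable_checksum(html: str) -> str:
    """Checksum a page with volatile date strings masked out."""
    masked = DATE_RE.sub("<date>", html)
    return hashlib.md5(masked.encode("utf-8")).hexdigest()

a = stable_checksum("<p>April 19, 2003</p><p>Real content</p>")
b = stable_checksum("<p>April 20, 2003</p><p>Real content</p>")
print(a == b)  # True: only the date differs, so no re-crawl is triggered
```

      The hard part, of course, is deciding which fragments count as "volatile" across millions of differently templated sites.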
  • It's kind of funny and a bit ironic that search engines are generally used to search information from a central repository, while Grub uses a distributed network to index pages. It's almost like having a distributed Google cache (one that's updated more frequently). Perhaps a better idea would be to invent a crawling daemon that runs on each server, with a standard protocol that reports the relevance of search terms to a central server (hey, it's DNS for search terms!!). Too bad it would be heavily abused (mostly
  • by eidechse ( 472174 ) on Saturday April 19, 2003 @10:42PM (#5767066)
    ...those pigeons can't be beat.
  • My Take on Grub (Score:2, Informative)

    by Anonymous Coward
    LookSmart is only using Grub to save on their bandwidth. Essentially, Grub just compresses web pages before sending them to LookSmart's indexer, thus reducing the bandwidth they have to pay for by a factor of 5 or so. The same thing could be accomplished through a proxy which compresses web pages. Eventually, once the HTTP standard for requesting compressed web pages is better supported by web servers, Grub will not be necessary.
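    The factor-of-5 figure is the poster's estimate, but the idea is easy to sanity-check with gzip (the input here is a toy, far more repetitive than real HTML, so the exact ratio means little):

```python
import gzip

# A contrived, highly repetitive page body.
html = b"<html><body>" + b"<p>Some repeated page content.</p>" * 200 + b"</body></html>"
packed = gzip.compress(html)
print(f"{len(html)} bytes raw, {len(packed)} bytes gzipped")
```

    Real pages compress less than this contrived input, but HTML's tag redundancy does typically yield a several-fold saving, which is the whole premise of shipping compressed pages to the indexer.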
  • by One Louder ( 595430 ) on Saturday April 19, 2003 @10:51PM (#5767100)
    So...let's say my instance of Grub crawls over a repository of .mp3s and supplies that information to the combined index.

    What's the difference between my machine indexing them and the university students recently being hauled into court for indexing open shares? Why would I not be held liable for contributory copyright infringement?

    No thanks.

  • by anagama ( 611277 ) <> on Saturday April 19, 2003 @10:55PM (#5767116) Homepage
    From the readme in the Linux version - no idea what the other readmes might say. However, it appears that they are sensitive to the fact that the bootloader Grub pre-existed their program. They are requesting catchy names. Here is an excerpt:

    The main executable has been renamed to "grubclient" out of respect for the GNU Grub bootloader, whose executable is named "grub". They were out first, so we decided to pick another name. If you have a catchy suggestion for a new name, please let us know.
  • by Call Me Black Cloud ( 616282 ) on Saturday April 19, 2003 @10:57PM (#5767126)
    I prefer [] to Grub. There the cycles are going to cancer or smallpox research. Currently [] over 2 million machines are participating.

    Altruism has its place, but since I'm more likely to die of cancer than of not having the complete www indexed I think I'll be selfish and work towards a cure for something that may affect me.
  • I wonder if Google has already seen this coming (I've seen that grub fellow in my logs a number of times and sort of wondered about it), and is going to use their own distributed search engine [] once they get the bugs hammered out...
  • *Another* bunch of spiders chewing up my bandwidth, ignoring my robots.txt files, and bringing my server(s) to their knees.

    Joy of freaking joys.
    • Flood Control (Score:2, Interesting)

      by SmartGamer ( 631767 )
      According to the Grub FAQ, it respects robots.txt although not the META tags. Although it takes a week or two for it to listen to the robots.txt, it does eventually...

      The sheer volume of this project concerns me, however. The very fact that it got Slashdotted may cause it to be a bit heavier than expected!

      It sounds like a good use of spare bandwidth, but if it's going to wind up a superscanner, it's going to send a hell of a lot of requests.

      I tried it and deleted it just as quickly: it's not very good at bein
    • I've got hits from grub from 57 different addresses in the last month. So there's certainly no coordination among the clients. It's a WASTE of web server bandwidth. I also don't appreciate bots that claim they will come back to the robots.txt file later, after crawling through denied pages and wasting even more bandwidth.
  • by digitect ( 217483 ) <> on Saturday April 19, 2003 @11:02PM (#5767151)

    I expected some way to search... this looks more like a project to index the web rather than make the results available for public use via a web interface. Did it strike anyone else as odd that there was no search form on the home page?!

    It seems like a good concept, but the information collected needs to be accessible without installing the client. I'm not game to install distributed computing apps without some freely available benefit. The "for the good of the world" motivation went out the window for me about a day after my first SETI@home experience. (But now BitTorrent [], there was appreciable benefit. I had Red Hat 9 ISOs within 8 hours of their initial release!)

  • by jafac ( 1449 )
    just another extension of the 1998 zeitgeist:
    It's all about eyeballs.


    Show me the profits.
  • they're going to sneak in file sharing support with a kazaa plugin.
  • by Sancho ( 17056 ) on Saturday April 19, 2003 @11:44PM (#5767301) Homepage the web gets larger and more cluttered.

    I've already discovered this with comic books turned into movies. Finding synopses of the comic book [] X-Men is nigh impossible. Finding synopses of the movies [] is much, much easier. Damn near every site online about X-Men, Spiderman, The Hulk, Batman, etc. deals with the movies, and sifting through the cruft is not easy. And that's just comic books. Other topics can be just as hard to find, and this doesn't even touch upon fake search results that only turn up porn or, worse, a blank page (happens frequently).

    Searching for MORE stuff isn't going to help. Searching better is the key. Google goes a long way towards this, but even it has the same problems of finding too much crud.
  • The architects of the GRand Unified Bootloader posted to the mozillazine forums today, flaming the choice of the name "grub" for this new system and calling for spamming of all grub-related discussion boards in retaliation.

    Or not. What a difference maturity makes.

  • by oaf357 ( 661305 ) on Saturday April 19, 2003 @11:52PM (#5767330) Homepage Journal
    Yeah. If you help Grub, Grub gives your web site a preferential listing. Building the biggest search engine, sure. Building good search results, not so sure.
    • by Anonymous Coward
      It doesn't give you a preference in listings, simply a preference in crawling. You offer some work to guarantee your site has fresh indexing. It's not much different from the search engines that sell frequent crawling as an extra. A fresh non-relevant listing won't help you much more than an older listing.
  • Why not a proxy with a component that is a node of a distributed search engine?

    Something like a Squid cache that also acts as a client of such a network would be more useful, at least for common users (the ones that don't yet have a proxy cache would gain a lot in internet navigation, and it would not use extra bandwidth, just what they already downloaded). For the "search" engine, it would give another approach to ranked results, giving more results for the sites that are more accessed,

  • by bcrowell ( 177657 ) on Sunday April 20, 2003 @12:11AM (#5767373) Homepage
    I have a FreeBSD server that wastes the vast majority of its CPU cycles (and most of its bandwidth, too). So what is a good distributed computing project to donate those cycles to? I'd like to find something that
    1. makes me feel warm and fuzzy about my altruism
    2. can run in the background on a Unix box
    3. is open-source (so I don't have to run someone's closed-source app on my box and trust their security through obscurity [])
    Well, #1 rules out Grub, #2 rules out Folding@Home, and #3 rules out both SETI@Home and Folding@Home.

    So what worthy causes are out there?

  • DDoS (Score:4, Interesting)

    by karlm ( 158591 ) on Sunday April 20, 2003 @12:14AM (#5767382) Homepage
    So the idea is to DDoS the entire web? :-)

    If this thing gets too popular without proper throttling, they could cause real havoc.

    • A DDoS is only effective because it's a whole bunch of messages sent all at once to one target (in the 100,000,000 range for a full-scale attack), so as to always cover all the positions.

      The database of "check-me"s is randomized rather evenly. Even if this takes off, I don't see how it could really do serious damage to any but the truly dinky servers: the hits will not come in all at once and flood the whole connection. While it very well could end up a constant stream, it's unlikely to be the massive stream that make
  • Legalities? (Score:4, Interesting)

    by cheshiremackat ( 618044 ) on Sunday April 20, 2003 @12:17AM (#5767390)
    Alright, I have 3 major problems with this...

    1) How different is this from the Princeton kiddies' system? I don't know about you, but I don't want a 95-billion-dollar bill arriving in the mail...

    2) What if your local cache contains a few links to kiddie porn? Not your fault, right? The software does its own thing; you cannot control it. BUT what will the FBI think? The FBI, Scotland Yard, and the RCMP are currently heavily investigating kiddie porn cases (good work, IMHO), but what if you're the unlucky sap who gets stuck with a few sketchy URLs? Or worse yet, what if Grub keeps a cache of the website like Google does? Then what?

    3) What about material that is legal locally, but illegal somewhere else... e.g. Nazi stuff in Germany, Falun Gong in China, etc. The last thing I want is to be refused a travel visa because my PC has an illegal cache...

    Good idea in principle, but with sketchy content on the web, I don't think I will be the one keeping track of it all. If there is a way to filter out the questionable stuff then maybe, but since the purpose is to be as inclusive as possible, it seems incompatible.

  • by oren ( 78897 ) on Sunday April 20, 2003 @03:22AM (#5767840)
    It is too easy to send corrupted information into the database. They have *no choice* but to trust the clients. Sure, they could run spot checks on the results, but those would be very partial, and it would be easy enough to fake responses for them as well.

    So the more popular it gets, the more incentive people will have to promote their sites by feeding it fake index information. If it magically got to be very popular, within weeks search results would become meaningless and it would drop back into obscurity. The more likely result is that it will never become popular in the first place.

    Besides, who wants to donate his CPU and bandwidth resources for a commercial company, anyway?

  • Normally, most search engines' spidering methods are designed to be pretty nice to servers, such as only requesting pages from a given host once every 30 seconds or so.

    However, I've seen times when the methods of some of the search engine spiders were foiled by such simple things as having a large number of virtual hosts on a machine. Combine that with a number of front-end machines all connected to the same database server, and things can get really nasty.

    In one particularly bad incident, several fairly big-nam
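    The per-host politeness delay described above might look like the sketch below (illustrative only; as the parent notes, keying on hostname alone is exactly what virtual hosting defeats, so a real crawler would key on the resolved IP as well):

```python
import time
from urllib.parse import urlparse

class PoliteScheduler:
    """Allow at most one request per `delay` seconds to each host."""

    def __init__(self, delay=30.0, clock=time.monotonic):
        self.delay = delay
        self.clock = clock      # injectable, so the logic is testable
        self.last_hit = {}      # host -> time of last permitted request

    def ready(self, url):
        """True if `url`'s host hasn't been hit in the last `delay` seconds."""
        host = urlparse(url).hostname
        now = self.clock()
        last = self.last_hit.get(host)
        if last is not None and now - last < self.delay:
            return False        # too soon; the crawler should pick another host
        self.last_hit[host] = now
        return True
```

    With many virtual hosts sharing one database server, every hostname gets its own 30-second budget, and the shared backend still takes the sum of all of them, which is the failure mode described above.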
  • 1. Design a search engine
    2. Let everyone else fill it
    3. Profit

    The second step is finally found!!! YAY
  • I'm sure grub will indeed build a larger database than most other search engines, since grub (or grub-client, or whatever it's calling itself) has never, not even once, bothered to look at a robots.txt file on any web site I've ever administered. This is what webmasters call a misbehaved robot, and it is not something to be looked at with respect.
    • by Anonymous Coward
      Here it is on mine requesting it:
       - - [18/Mar/2003:17:25:30 -0700] "GET /robots.txt HTTP/1.1" 200 222 "-" "Mozilla/4.0 (compatible; grub-client-1.07; Crawl your own stuff with"
       - - [19/Mar/2003:19:41:05 -0700] "GET /robots.txt HTTP/1.1" 200 222 "-" "Mozilla/4.0 (compatible; grub-client-1.07; Crawl your own stuff with"
       - - [30/Mar/2003:22:10:41 -0700] "GET /robots.txt HTTP/1.1" 200 222 "-" "Mozilla/4.0 (compatible; grub-client-1.07; Crawl your own stuff with"
  • The common point made by these "distributed" software authors is that there are "wasted" CPU cycles in your computer that you could donate to a project for free.
    However, that is not true at all! CPU cycles are not wasted. When the CPU has nothing to do, it sleeps, at least in a modern operating system (i.e., roughly everything after Windows 95).

    By "donating your wasted CPU cycles" you will actually increase the power consumption of your computer. This will be very noticeable on a laptop, but when you watch
  • $IP - - [05/Apr/2002:12:27:55 +0200] "GET /methoden/hanf/robots.txt HTTP/1.0" 404 218 "-" "Mozilla/4.0 (compatible; grub-client-0.3.0; Crawl your own stuff with"
    So, this was last year.... Is this a dupe?
