The Internet

P2P Web searches (80 comments)

prostoalex writes "Researchers at UCLA are looking for easier ways to implement Web searches by using peer-to-peer techniques to decrease the workload. 'Queries need to be passed along only a few links rather than flooded throughout the network, which keeps search-related traffic low,' reports Technology Research News."
This discussion has been archived. No new comments can be posted.

  • I'm sick of all this hype about p2p. It's a good technology, but it's not like we have to use it for everything. The old ways of doing things still work.
    • True, at some point networks and machines will be so congested with various p2p protocols that everyone will jump back to centralized servers.
      • by Bert690 ( 540293 ) on Sunday September 12, 2004 @09:09PM (#10231371)
        True, at some point networks and machines will be so congested with various p2p protocols that everyone will jump back to centralized servers.

        If you'd take some time to actually read the article, you'd see that the story is about research that addresses congestion problems with existing p2p methods.

        Besides, much if not most traffic from p2p networks is from file downloads, not query routing. Moving files to a centralized server isn't going to reduce that traffic at all. In fact, the bottlenecks that result can make congestion even worse.

        Moving files to central servers only seems to help congestion because central servers with anything interesting to download tend to be shut down quickly.

    • The old ways of doing things still work.

      Which is why I still prefer walking everywhere, using chalk and slate for taking notes, and refuse to use a zipper...

      In other words, wtf?!?!
      • by Anonymous Coward
        Which is why I still prefer walking everywhere, using chalk and slate for taking notes, and refuse to use a zipper...

        And why I like using my RPN calculator to change the TV station...
      • . . .and refuse to use a zipper...

        Zippers are obsolescent, you insensitive Luddite.

        KFG
      • ?

        the point is that new technologies are adopted when they improve on an existing method. we already have super-fast, super robust, complete search technologies that are not p2p ... so what problem are they trying to solve? an academic exercise? well, that's okay ... but let's call it what it is.

        google is already so fast that i would not notice if it were any faster. the best a p2p search technology could achieve would be equivalent speed, plus the consumption of my bandwidth.
    • by Anonymous Coward
      No, they won't. You need tons of server hardware to cope with the bandwidth of anything even remotely popular. Thus free services tend to be spoiled with ads and whatnot.

      The magic of p2p is that you can build the same out of 'thin air'. There are no expensive server rooms and gigabit lines but just a bunch of nodes that are slightly more complicated than simple clients. You use it, you provide it. Fair game and you get exactly the kind of service you want without strings attached. At least theoretically -
    • by cbiltcliffe ( 186293 ) on Sunday September 12, 2004 @11:06PM (#10232066) Homepage Journal
      I'm sick of all this hype about p2p. It's a good technology, but it's not like we have to use it for everything. The old ways of doing things still work.

      You're right, but consider this:

      The entertainment industry is trying very hard to convince the US government that all P2P can be used for is copyright infringement, so it should be banned completely.
      Any non-infringing use obviously proves them wrong, no matter how out there it is.

      Right now, I think we need as many off-the-wall uses as possible for P2P, even if it's not the most efficient way to accomplish the task.
      Calling mass attention to these uses wouldn't hurt, either.
    • You're right, P2P for search will require a lot of effort and the results aren't reliable.

      I think proper caching is enough for web search; as long as you have a farm with tons of memory holding everything, search results can be fairly fast.
  • If it's P2P... (Score:5, Insightful)

    by thebudgie ( 810919 ) on Sunday September 12, 2004 @07:24PM (#10230645)
    The searching load on servers might be reduced, I suppose. But in my experience with P2P, searches are long and slow. How would this help, exactly?
  • I foresee.. (Score:5, Interesting)

    by Gentlewhisper ( 759800 ) on Sunday September 12, 2004 @07:24PM (#10230650)
    Maybe in the future Google will implement a small server in our "Gmail notifier" application, and each time we search for something on Google, it will cache some of the results; should anyone close by ask for the same thing, it can just forward the old results to them.

    Save the server load on the main google server!

    **Plus maybe some smart guy will figure out how to trade mp3s over the GoOgLe-P2p network! :D
    • Re:I foresee.. (Score:5, Insightful)

      by LostCluster ( 625375 ) * on Sunday September 12, 2004 @07:43PM (#10230818)
      Save the server load on the main google server!

      Error 404: No such main server found.

      Google is such a distributed computing network that when a single computer in a cluster fails, they've discovered that it'd cost them more to go to the broken node and repair it than the value of the computing resources they've lost. Google just lets such failed computers sit useless, and waits until there are enough downed computers to justify sending in the repair people.

      Besides, using P2P services to respond to your Google query would mean that your query would end up in the hands of a dreaded "untrusted third party", and I don't think anybody here wants all of their searches available to their next-door neighbor.
      • "Google just lets such failed computers sit useless, and waits until there are enough downed computers to justify sending in the repair people."

        Wow, rather than letting it sit there useless and depreciating, I'd rather they find some cheap and efficient way to just sell that machine (cheaply!) outright, and then order a new replacement to go back into that empty pigeonhole.
      • Re:I foresee.. (Score:3, Informative)

        by Phleg ( 523632 )

        Google is such a distributed computing network that when a single computer in a cluster fails, they've discovered that it'd cost them more to go to the broken node and repair it than the value of the computing resources they've lost.

        This is nothing more than a myth. They continually have job postings looking for Data Center Technicians, whose entire job is to crawl through their massive cluster and repair downed nodes. I should know; I interviewed for the position just a month or two ago.

        • They fix the downed nodes eventually... but one downed node alone is not worth sending anybody after. They wait until there's a collection of downed nodes to send the tech after them...
      • I don't think anybody here wants all of their searches available to their next door neighbor

        One of the coolest things I've seen was a little ticker that WebCrawler used to run that was just a constant stream of random search queries other people had made. You could click on any of them as they scrolled by and it would bring up the results.

        Totally anonymous, very addictive. Sad to see it gone.

      • Re:I foresee.. (Score:3, Interesting)

        by mrogers ( 85392 )
        Actually, I'd rather have my next door neighbour know what I was searching for (and vice versa) than have any single person know what *everyone* was searching for. Power corrupts.
      • I don't think anybody here wants all of their searches available to their next door neighbor.

        I'm more worried about my next door neighbour being able to serve up the search results!

    • saw this at http://en.wikipedia.org/wiki/Wikipedia:External_search_engines#Google_Appliance.3F [wikipedia.org]

      Maybe this is obvious and has been discussed before, but have we considered using an appropriate Google Search Appliance [4] (http://www.google.com/appliance/products.html). This is actual hardware that would need to be purchased, would sit in the racks of our servers, and could be set up to index the entire Wikipedia every day. I don't know how expensive this solution is or whether "we" can afford it, but it

  • by rasafras ( 637995 ) <tamas.pha@jhu@edu> on Sunday September 12, 2004 @07:25PM (#10230655) Homepage
    Google still works.

    Results 1 - 10 of about 6,290,000 for p2p [definition]. (0.19 seconds)
    • by LostCluster ( 625375 ) * on Sunday September 12, 2004 @07:36PM (#10230759)
      Not to mention, Google is often better at searching a given website than the search utility a site tries to provide on its own. TechTV host Leo Laporte used to frequently search Google with the "site:techtv.com" marker included to find deeply-hidden articles on the site, because it'd be easier to search that way than using TechTV's own search boxes.

      Google's even encouraging this behavior by linking their free websearch feature with their AdSense service, and giving publishers a share of the AdWords revenue when a search that came from their site results in an ad click.
      • Correctamundo! In fact the google search is a very efficient way of searching sites. Wikipedia uses this to great advantage if your keyword search fails. A big advantage is that frequent googlers have a good sense of how to word the query for maximum valid results.

        I am just about to put a 50,000 message mailing list archive online and the search facility will be Google, which is far better than any of the other solutions I've investigated.
        • A big advantage is that frequent googlers have a good sense of how to word the query for maximum valid results.

          I agree - many times I see a search box on a website with no "advanced search" link, and you never know how it'll work. Usually you find that (unlike Google) it'll match any word and not all of them, so you get lots of really irrelevant material. You don't know what boolean operators it supports, etc., etc.

          Another quite simple advantage of using a Google search on your website is that it's a cons
    • Google leaves us with no real need for something better for searching the web.
      But we need a Free search engine, so we don't depend on any big corporation to run our lives, and P2P is the way to overcome the huge cost of running a single system to serve the whole Internet.
    • Which reminds me of an interesting long-term monitoring idea: track Google responses for the same query over a long time, and monitor the response time (e.g. 0.19 seconds in the above example). Is anyone doing this?
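
      A minimal sketch of that kind of long-term tracking, assuming plain wall-clock timing of the HTTP round trip is good enough (parsing the "(0.19 seconds)" figure Google prints would mean scraping the results page, which isn't something to rely on):

          # Log the round-trip time of a fixed query at regular intervals
          # (e.g. from cron); this measures total latency, not Google's
          # self-reported search time.
          import time
          import urllib.request

          QUERY_URL = "https://www.google.com/search?q=p2p"  # keep the query fixed

          def timed_fetch(url):
              start = time.time()
              req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
              with urllib.request.urlopen(req, timeout=10) as resp:
                  resp.read()
              return time.time() - start

          if __name__ == "__main__":
              elapsed = timed_fetch(QUERY_URL)
              with open("google_latency.log", "a") as log:
                  log.write("%d\t%.3f\n" % (int(time.time()), elapsed))
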
  • by Magila ( 138485 ) on Sunday September 12, 2004 @07:30PM (#10230703) Homepage
    From a quick read of the article, it sounds like what they've done is implement a slightly more sophisticated/less deterministic version of the ultrapeer/hub system already in use by Gnutella/G2. Basically, queries are routed such that they are guaranteed to reach a "highly-connected node", which is the equivalent of an ultrapeer/hub node. The main difference is that the folks at UCLA have come up with a novel method of picking ultrapeers, but the end result isn't much different.
    • by shadowmatter ( 734276 ) on Sunday September 12, 2004 @11:22PM (#10232138)
      Not quite... Note: I'm about to karma whore here.

      About a year ago, right before starting my senior year at UCLA, I was offered an opportunity to work on this P2P project. At the time it was called "Gnucla," and was being developed by the UCLA EE department's Complex Networks Group. I turned it down, because I had already committed to working on a p2p system in the CS department. But since in all honesty their research was more novel than ours (and my friend was in their group), I subscribed to their mailing list and kept informed on what they were doing.

      What they've done isn't to find a novel way of picking ultrapeers. Let's review what motivated ultrapeers -- in the beginning, there was Gnutella. Gnutella was a power-law based network. What this meant is that there was no real "topology" to it, unlike peer-to-peer networks that were emerging and based on Distributed Hash Tables (such as Chord [mit.edu], Pastry [microsoft.com], Kademlia [psu.edu] [on which Coral [nyu.edu] is based]). It had nice properties: a low diameter, and it was very resilient to the attacks common on p2p networks. (Loads of peers dropping simultaneously could not partition the network, unlike, say, in Pastry -- unless they were high-degree nodes.) But the big problem was that to search the network, you had to flood it. And that generated so much traffic that the network eventually tore itself apart under its own load.

      So someone thought that maybe if only a few, select, high-capacity nodes participated in the power-law network, it wouldn't tear itself apart because they could handle the load. These would become the ultrapeers. The nodes that couldn't handle the demands of a flooding, power-law network would connect to ultrapeers and let the ultrapeers take note of their shared files, and handle search requests for them. Thus, when a peer searches, no peer connected to an ultrapeer ever sees the search unless they have the file being searched for, because the searching happens at a level above them. Between low-capacity nodes and ultrapeers, it's much like a client-server model. Between ultrapeers, it's still a power-law network.

      But the ultrapeer network has problems of its own, so this group sought to find a way to search a power-law based network, such as Gnutella, without flooding. They exploited the fact that, in a power-law network, select nodes have very high degree connectivity. If you take a random walk on a power-law based network (meaning, starting from your own PC, randomly jump to a node connected to you, randomly jump to a node connected to that node, etc.) you'll end up at, or pass through, a node with very high connectivity. Thus, such nodes are a natural rendezvous point for clients wishing to share files and clients wishing to download files. Perhaps, in this sense, they are an "ultrapeer," but we haven't separated the network into two different architectures like before. The network is still entirely power-law based, and retains all its wonderful properties.

      But that's not the entire story, just the gist of it. There are other neat tricks to it... Trust me, this is really good stuff we're talking about here. They recently won Best Paper Award at the 2004 IEEE International Conference on Peer-to-Peer Computing [google.com]. (See paper here [femto.org].)

      "Brunet," as they call it, is designed to be a framework for any peer-to-peer application that could exploit the percolation search outlined above. Google-like searching is just one possible approach (and perhaps a little unrealistic...). Right now I can tell you that they have a chat program in the works, and it is working well. The framework should be released when it's ready.

      Please don't flood me with questions -- remember, I'm not actually in their research group :)

      - sm
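
      The key property the comment above leans on -- that a short random walk on a power-law network almost certainly passes through a high-degree hub -- is easy to see in a toy simulation. A minimal sketch, using a Barabasi-Albert graph as a stand-in for the overlay (the sizes and walk length are arbitrary choices, and this is not the group's actual code):

          # A short random walk on a scale-free graph tends to hit hub nodes,
          # which is the rendezvous property percolation search exploits.
          import random
          import networkx as nx

          g = nx.barabasi_albert_graph(n=10000, m=3, seed=42)  # toy power-law overlay

          def random_walk(graph, start, steps):
              node = start
              visited = [node]
              for _ in range(steps):
                  node = random.choice(list(graph.neighbors(node)))
                  visited.append(node)
              return visited

          walk = random_walk(g, start=random.choice(list(g.nodes())), steps=20)
          print("max degree seen on the walk:", max(g.degree(n) for n in walk))
          print("average degree in the graph:", 2 * g.number_of_edges() / g.number_of_nodes())
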
  • by Man of E ( 531031 ) <i.have@no.email.com> on Sunday September 12, 2004 @07:32PM (#10230723)
    P2P searching? The Ask Slashdot section does P2P searching already (in a less fancy-schmancy way), more so than some would like :-)

    Q: What is $search_term and how does it work?
    A: A simple google search shows that $search_term is $blahblah and you use it like $this (repeated a hundred times)

    Add another hundred replies about how the poster should search before submitting, and how AskSlashdot is degenerating into AskPeopleToGoogleForYou, and there you have it. P2P searching in all its glory.

  • islands of users (Score:3, Informative)

    by bodrell ( 665409 ) on Sunday September 12, 2004 @07:33PM (#10230739) Journal
    That wouldn't solve the problem of local areas of users that are disconnected from everyone but themselves. I know this is an issue with other p2p apps. You can only connect to someone who's in your area, and sometimes that just isn't good enough. I know China is in many respects isolated from the rest of the internet.
    • If they are disconnected from everyone else, how would any kind of search reach them/everybody else?
      • If they are disconnected from everyone else, how would any kind of search reach them/everybody else?

        Exactly. They wouldn't.

        Do note that I was talking about areas of users not being able to connect to anyone else. P2P is not the same as addressing things by explicit IP address or URL like on the web. For example, it would be a lot harder for me to get to slashdot by only clicking on links than by typing the address into my browser's bar.

  • A group of researchers from UCLA have been hired by Google Corporation with enticing salaries and stock options.
  • by shodson ( 179450 ) on Sunday September 12, 2004 @07:40PM (#10230801) Homepage
    Infrasearch [openp2p.com] was working on this, until Sun [sun.com] paid $8M for the company, then had them work on something else [jxta.org], and then Gene Kan committed suicide [wired.com]. Be careful what you work on.
    • I'm glad somebody mentioned Infrasearch, they were pioneering the field of peer-to-peer search way back in 2000. Gene Kan and co. were some of the first to realize that peer-to-peer networks could be used for something other than evading the authorities.
      The brilliant aspect of Infrasearch (later JXTASearch [jxta.org]) is that unlike most peer-to-peer search implementations, it doesn't just act like a metasearch engine, broadcasting or propagating a query to a bunch of specialized indexing nodes and then aggregating
  • Huh? (Score:3, Insightful)

    by Ars-Fartsica ( 166957 ) on Sunday September 12, 2004 @07:49PM (#10230868)
    Aren't searches sent to, and derived by, single search engine domains?

    Google, Yahoo etc. of course crawl the web at large, but even if you want to throw a peer network at crawling, aren't you sacrificing freshness?

    What I can see is a DNS-like system for propagating metadata into the interior of the network, and maybe a caching mechanism as a result... not sure if this is what they mean.

  • by i_want_you_to_throw_ ( 559379 ) on Sunday September 12, 2004 @08:01PM (#10230946) Journal
    Feel free to shoot this full of holes as needed....

    Every website has DNS servers, so what if the same company that ran the DNS servers indexed the pages of the sites it hosted? Daily?

    Wouldn't that then provide a complete index of the web?

    Start a search and somehow get the results back through that distributed method. Haven't figured that out yet...... but if you can...
    PROFIT!!!!!
    • So, under this theory... everybody indexes their own content? Implying, everybody would provide legitimate "indexes" and not simply provide whatever is most likely to bring in search engine visitors? "Look, here's my index! My site has a MILLION pages of free porn warez!!" Indexing needs to be done by a third party, that's just the way it is.
    • This, or something akin to this, was already tried years ago with Harvest and its SOIF records (I think that was the name). The idea was to index locally, while being a part of a larger index network. Obviously, it never worked.

      There is a mailing list for people involved with writing and running web crawlers (aka spiders or robots), and several years ago there was a lot of talk about making crawling and indexing more efficient by enhancing the 'robot exclusion protocol' (i.e. robots.txt) by creating
  • by Bert690 ( 540293 ) on Sunday September 12, 2004 @08:55PM (#10231289)
    This is some pretty cool research, but it really has pretty much nothing to do with the web.

    It's an article describing a new p2p query routing method. Nothing more. There are already a lot of such algorithms out there. This one seems to exhibit some nice completeness properties that hold in idealized scale-free networks. But I'm not convinced such a theoretical property would hold in the real world. While p2p networks tend to be roughly scale-free, the "roughly" and "tend to be" qualifiers are what make such theoretical properties unlikely to hold in practice.

    Nice to see they plan to release some software based on the technique though.

    • I agree - I haven't finished reading the paper yet, but it seems like each node needs to know the percolation threshold of the network. How is that information calculated and disseminated? Or do the nodes adapt the topology locally to create a network with a known percolation threshold?
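
      For reference, the textbook estimate of the percolation threshold for a random graph with a given degree distribution is the Molloy-Reed style ratio p_c = <k> / (<k^2> - <k>), which a node could in principle estimate from sampled neighbour degrees. Whether the paper computes it this way is an assumption on my part; a minimal sketch:

          # Estimate p_c = <k> / (<k^2> - <k>) from a sample of node degrees
          # (configuration-model formula; a heavy-tailed sample gives a much
          # lower threshold than a homogeneous one).
          def percolation_threshold(degrees):
              n = len(degrees)
              k_mean = sum(degrees) / n
              k2_mean = sum(d * d for d in degrees) / n
              return k_mean / (k2_mean - k_mean)

          print(percolation_threshold([2, 3, 2, 4, 3, 2, 50, 2, 3, 2]))  # ~0.03
          print(percolation_threshold([3] * 10))                         # 0.5
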
    • Right.

      It's important to get the scaling right. Many of the P2P networks out there have algorithms that scale very badly. There's way too much unnecessary P2P traffic. The earliest P2P algorithms were horribly inefficient. There's been some progress, but not enough. Kids should be able to find the latest pirated Britney Spears video in about 2 hops, without blithering all over the planet looking for it. There's probably a copy on the local cable LAN segment, after all, and that's where it should come fr

  • by microbrewer ( 774971 ) on Sunday September 12, 2004 @09:03PM (#10231335) Homepage
    A peer-to-peer program, Ants P2P, has just implemented a distributed search engine. Ants P2P is based on ant-routing algorithms, so it needed a solution for finding files on its network, and it found one that works. The network also has an HTTP tunneling feature, and its developer, Roberto Rossi, is creating a search solution based on similar methods to search web pages published on the network.

    Ants P2P is designed to protect the identity of its users by using a series of middle-men nodes to transfer files from the source to destination. As additional security, transfers are Point to Point secured and EndPoint to EndPoint secured.

    1. Distributed search Engine - Each node performs periodic random queries over the network and keeps an indexed table of the results it gets. When you do a query you will get files with or without sources. If you get files simply indexed (without a source), you can schedule the download. As soon as Ants finds a valid source, it will begin the download. This will also solve the problem of unprocessed queries. This way you will get almost all the files in the network that match your query with a single search.

    http://sourceforge.net/projects/antsp2p/ [sourceforge.net]
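
    The "indexed table of results" described in point 1 boils down to each node keeping a local keyword-to-(file, source) cache plus a queue of downloads scheduled before any source is known. A rough sketch of that bookkeeping -- the names and structure here are invented for illustration, not taken from Ants P2P's code:

        # Per-node bookkeeping for a distributed search cache: results of
        # periodic random probes are indexed locally, later searches are
        # answered from the index, and source-less hits wait in a queue.
        from collections import defaultdict

        class NodeIndex:
            def __init__(self):
                self.index = defaultdict(dict)  # keyword -> {file_hash: source or None}
                self.pending = set()            # downloads scheduled without a source

            def record_result(self, keyword, file_hash, source=None):
                # Prefer an entry with a known source over a bare index entry.
                if source is not None or file_hash not in self.index[keyword]:
                    self.index[keyword][file_hash] = source

            def search(self, keyword):
                return dict(self.index.get(keyword, {}))

            def schedule(self, file_hash):
                self.pending.add(file_hash)

            def source_found(self, file_hash, source):
                # A later probe learned a valid source; a real client would
                # start the transfer here.
                if file_hash in self.pending:
                    self.pending.discard(file_hash)
                    return source
                return None
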
  • by Turn-X Alphonse ( 789240 ) on Sunday September 12, 2004 @10:03PM (#10231651) Journal
    I'm so sick of companies wanting to push off their crap onto us. If I want something from them they should offer it to me on terms I find acceptable.

    In this case, a couple of text links which may interest me (Google reference: check).

    I don't want to have to share my bandwidth with 50 other people so they can do the same. If you want to run a service, website or game server, you should pay for it. Don't start passing off the bandwidth bill onto us users.

    Either get used to the heat (price) or get out of the kitchen (market).
  • Mmm, buzzwords. (Score:3, Insightful)

    by trawg ( 308495 ) on Sunday September 12, 2004 @11:56PM (#10232293) Homepage
    Step 1) Find established technology which is working more or less happily as-is
    Step 2) Add the word 'p2p' in front of it.
    Step 3) ???
    Step 4) Profit

    I assume Step 3) is now as simple as "show the name of the new product with 'p2p' in the subject and explain how it's NOT related to pirating movies or music" (to increase investor confidence that they're not going to get taken to town by the RIAA/MPAA), then it's just sit back and watch the fat investment/grant dollars roll in!
  • I wonder if I was alone in thinking about something like this [filehash.com] when reading the title? :-)
  • They can always link to google [google.com]'s very own cache. :-)

    Well, actually they might be on to something, as I said in a comment on a post some months ago (why can't I peruse all my comments, sans subscription?). I also noted that a p2p encrypted backup technology would be a good idea, which was then taken up and written about [pbs.org]

    I said it'll be peer-to-peer everything (in this case, p2p RAID, for redundancy, not performance), using certs.
    • An article about research which showed that random network crawlers gave increased performance on P2P networks... perhaps this means that better performance could be achieved if a Skynet-esque ("self-aware", i.e. third-party knowledgeable) layer of the network existed to facilitate each node's searching.

      Sorry, I hope that makes sense in context.
  • "The proxy contains an index-sharing p2p-based algorithm which creates a global distributed search engine. This spawns a world-wide global search index. The current release is a minimum implementation of this concept and shall prove it's functionality."
    --http://www.anomic.de/AnomicHTTPProxy/index.html

    "If the index-sharing someday works fine, maybe the browser producer like Opera or Konqueror would like to use the p2p-se to index the browser's cache and therefore provide each user with an open-source, free
  • Just keep comin' round.

    Harvest [sourceforge.net]

    BugBear
  • we all know gnutella had a stinking algorithm for searching files.
    basically it was a big, fat broadcast of all queries to all hosts, regardless of whether it mattered to that host or not. only very few clients could cope with the linearly growing bandwidth requirement. the others just "missed" the queries and so the net fragmented.

    there were a lot of people who knew this.

    one of the first "academic" solutions that came up (at least to my knowledge) was p-grid (http://www.p-grid.org/), which uses extremely intere
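
    For contrast with the percolation-style search above, the original Gnutella behaviour described in the parent comment is simple to sketch: every query is re-broadcast to all neighbours until its hop count (TTL) runs out, so traffic grows with the size of the reachable neighbourhood rather than with the number of results. A toy illustration (the graph and TTL are made up for the example):

        # Toy Gnutella-style query flooding: each node forwards the query to
        # every neighbour until the TTL runs out, so the number of messages
        # grows with the size of the reachable neighbourhood.
        def flood(graph, start, ttl):
            seen = {start}
            frontier = [start]
            messages = 0
            while frontier and ttl > 0:
                next_frontier = []
                for node in frontier:
                    for neighbour in graph[node]:
                        messages += 1  # one query message per edge crossed
                        if neighbour not in seen:
                            seen.add(neighbour)
                            next_frontier.append(neighbour)
                frontier = next_frontier
                ttl -= 1
            return messages, len(seen)

        graph = {  # small example topology as an adjacency dict
            0: [1, 2, 3], 1: [0, 4], 2: [0, 4, 5],
            3: [0, 5], 4: [1, 2], 5: [2, 3],
        }
        print(flood(graph, start=0, ttl=3))  # (messages sent, nodes reached)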
