The Internet

Gnutella2? 271

Anenga writes "A Windows (and somewhat WINE compatible) Gnutella client, Shareaza, has released a public preview of its next version, which includes a re-designed Gnutella protocol they call "Gnutella2". Gnutella2 (or "G2") dumps the Gnutella broadcast model and uses a new global searching method with UDP connections. It also features compression to limit hub-to-hub (G2 Ultrapeers) bandwidth, Tiger Tree Hashing, etc. Shareaza has released a small description of the revised protocol here, but plans to release a full spec to the GDF after the release of v1.7 Final. Gnutella2, which is really a revised Gnutella protocol, will also be free and open for anyone to use in their clients. Shareaza and G2 may give Gnutella - an open and free P2P protocol which has been struggling to keep up with the times against Kazaa, eDonkey and other P2P spin-offs - the stability and power it needs to attract the closed and commercial FastTrack Network users if and when that network folds."
This discussion has been archived. No new comments can be posted.

Gnutella2?

Comments Filter:
  • It's pretty fast... (Score:4, Interesting)

    by Anonymous Coward on Wednesday November 06, 2002 @10:03AM (#4607195)
    I've tried the beta release and G2 hubs operate faster than the G1 hubs. I was able to get faster and larger searches. If only the other clients included support for G2 in the future. Better not be Coke II!
    • by iofire ( 521067 ) on Wednesday November 06, 2002 @10:35AM (#4607405)
      Did anyone else notice that on the beta download page (visit the "next version" link at the top of the page) that there is a button to download it via gnutella? It's nice to see someone make use of this as a way to download software.
      • "It's nice to see someone make use of this as a way to download software."

        Kazaa does that. Their mini-installer logged into the P2P network and pulled the files down from one of the peers. When I started Kazaa again, the first thing that happened was people started downloading it from me.

        I thought it was kinda cool. Far less bandwidth use on their part.
        • The installer does that, and they also offer this capability as a service. It's called CloudBurst, or something like that, and lets people pay to upload their content to the Kazaa nodes, where it's stored in a special shared folder, and it's downloadable with a specially written Kazaa client.
  • by pc_plod ( 577081 ) on Wednesday November 06, 2002 @10:05AM (#4607218)
    Anything that keeps a variety of standards out there in the P2P world is a positive move, stopping record companies finding a way to stop the whole movement by blocking a single protocol (a la Napster). The more the better.
    • by Nerant ( 71826 )
      While you have a point, I must point out Napster wasn't strictly a peer-to-peer network system. Napster had a centralised set of servers, and was only peer to peer in the sense that it utilised the bandwidth of those sharing to upload those files to you and vice versa. As far as I know, these centralised servers are in fact what led to the litigation against Napster.
      True peer to peer networks like Gnutella have no real centralised points: the process of discovering new nodes does not require a centralised server or servers, unlike Napster.
      • Of course, but pressure on client distribution can be equally damaging, as can the adding of corrupt files to the network and other such underhand tricks. Having a central server to shut down made Napster an easy target, but real P2P systems are vulnerable as well, especially if one standard emerges.
    • I just hope the whole "gnutella2" thing doesn't end up being vaporous. More interesting than yet another client are enhancements to the protocol, but the gnutella2 web site is 'opening soon', and 'Full specifications will be available soon', yet there is already this Shareaza client out there?

      I've been a user and supporter of the Gnutella Network since the beginning ( back when it didn't work that well :) ), and I've seen enough clients come and go to know that it takes a well documented protocol/specification to see network growth and improvement.

      Until we have the specs, it's just hype.

  • by jfrumkin ( 97854 ) on Wednesday November 06, 2002 @10:05AM (#4607219) Homepage
    I've been working with the JXTA [jxta.org] project for a bit now, and they seem to be taking a very nice approach to designing a p2p network that is implementation independent (can be implemented on different platforms, devices, etc.). Besides gnutella (and g2), and JXTA, are there other open P2P networks out there? And if there are, what's the best project?
    • by iofire ( 521067 ) on Wednesday November 06, 2002 @10:29AM (#4607362)
      I'm surprised that no one has mentioned it, but giFT is a very nice open protocol modeled after the FastTrack network. (Originally it used the actual FastTrack network, but now they use an open protocol called OpenFT.)
      Check it out at http://gift.sourceforge.net [sourceforge.net]
      The ncurses based frontend giFTcurs is very nice, but there also are graphical and even web-based frontends to it.
      I use it under linux and have been very happy with it.
      • by Anonymous Coward
        I've had nothing but problems with giFT. Finally started booting back into Windows and using Kazaa.
        • I've had nothing but problems with giFT. Finally started booting back into Windows and using Kazaa.

          Curious. I've had very few problems with the CVS version of giFT. In fact, I've found it to be the best-working p2p app on Linux since Napster. Doesn't eat all of the bandwidth for nothing, and downloads are *really* fast compared to *any* Gnutella client. I've actually downloaded things with this thing! =)

          The problems have mostly been like "okay, today it didn't work, tomorrow it'll work again"; I think the biggest outage in my case was recently when the new interface protocol was introduced, and all clients only started working again a day or two later.

    • are there other open P2P networks out there?

      Yes, the ALPINE Network [cubicmetercrystal.com] uses a UDP based social discovery mechanism to implement fast, effective searches with minimal bandwidth and dual NAT support.

      Some of the features include:

      - High concurrent connection support (over 10,000).
      - Adaptive configuration for enhanced accuracy and quality of responses.
      - True peer to peer network. No hierarchy, no central servers.
      - Low communication overhead (small UDP packets, no forwarding).
      - Module support to allow extensions to query and transport operations.

      You can read an overview of how alpine works here [cubicmetercrystal.com]. There is also a frequently asked questions [cubicmetercrystal.com] and plenty of developer information [cubicmetercrystal.com].

      Enjoy!
  • by Bowie J. Poag ( 16898 ) on Wednesday November 06, 2002 @10:06AM (#4607228) Homepage


    Anyone here find it just a wee bit ironic that a post about BMG and their so-called "copy protection" (*chuckle*) is followed immediately by a rather technical article on a new, faster, better, low-density P2P client?

    Hell, they haven't even managed to shut the _first_ version down!

    Cheers,
  • by CatWrangler ( 622292 ) on Wednesday November 06, 2002 @10:09AM (#4607243) Journal
    It would be funny if people tested out this network by trading only BMG files at first. Have to beta test and all though, I suppose.
  • by Anonymous Coward on Wednesday November 06, 2002 @10:12AM (#4607268)
    Is the Gnutella Web Caching System [zero-g.net]. It allows clients to find other gnutella peers without any sort of central gnutella server.
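    The caching idea is simple: a client asks an ordinary web server for a list of known peers and bootstraps from there. Here's a minimal sketch of the client side, assuming a v1-style cache that answers a `?hostfile=1` request with newline-separated "ip:port" entries (`parse_hostfile` is a hypothetical helper name, not part of any real client):

    ```python
    # Sketch of GWebCache-style bootstrapping: turn a plain-text hostfile
    # response into (ip, port) peer addresses a client could try connecting to.
    def parse_hostfile(body: str) -> list[tuple[str, int]]:
        """Parse newline-separated "ip:port" lines, skipping junk."""
        peers = []
        for line in body.splitlines():
            line = line.strip()
            if not line or ":" not in line:
                continue  # blank or malformed entry
            ip, _, port = line.rpartition(":")
            if port.isdigit():
                peers.append((ip, int(port)))
        return peers

    # Example response body a cache might return:
    sample = "68.45.12.3:6346\n10.0.0.9:6348\n"
    print(parse_hostfile(sample))  # [('68.45.12.3', 6346), ('10.0.0.9', 6348)]
    ```

    Because the cache is just an HTTP resource, it can live on any web host and there is no single Gnutella server to shut down.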
  • Crossing fingers (Score:3, Informative)

    by ceranta ( 86805 ) on Wednesday November 06, 2002 @10:13AM (#4607275)

    Let's hope that this gnew version of gnutella will be better and more scalable than the previous one.

    Points from the gnutella2.com site:

    Level One: A New Protocol

    Gnutella2 introduces a flexible new protocol to support current and future P2P technologies. Packets are compact binary trees of named data items, which allow multi-vendor information nesting and augmentation, selective digital signing and other exciting features. Existing data structures can be modified and improved without disrupting deployed software, and advanced topics such as UNICODE support are handled in a uniform manner.
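    The "packets are trees of named data items" idea can be sketched like this. The element names and layout below are illustrative only, not the actual G2 wire format; the point is that a parser walks named children and skips names it doesn't recognise, so vendors can nest extensions without breaking deployed software:

    ```python
    # Toy model of a tree-structured packet of named data items.
    from dataclasses import dataclass, field

    @dataclass
    class Packet:
        name: str                       # short element name, e.g. "Q2" for a query
        payload: bytes = b""            # this node's own data
        children: list["Packet"] = field(default_factory=list)

    def find(pkt: Packet, name: str):
        """Return the first child with a given name, ignoring unknown siblings --
        this is what lets extra vendor elements coexist with old parsers."""
        return next((c for c in pkt.children if c.name == name), None)

    query = Packet("Q2", children=[
        Packet("DN", b"linux iso"),         # descriptive name being searched for
        Packet("XV", b"vendor-extension"),  # an element an old client simply skips
    ])
    print(find(query, "DN").payload)  # b'linux iso'
    ```

    An old client asking only for the elements it knows gets the same answer whether or not new elements are present, which is the "augmentation without disruption" property the text describes.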

    Level Two: A New Data Transport Architecture

    Gnutella2 provides two interdependent data transport mechanisms: reliable compressed TCP streams, and an unreliable and semi-reliable UDP transport provider. The combination of these two systems allows higher level G2 constructs to take maximum advantage of network conditions to deliver data packets quickly and efficiently, with or without assured delivery, within bandwidth requirements and without unnecessary overhead.

    Level Three: A New Set of Base Services

    Gnutella2 takes full advantage of the first two levels to deliver an exciting new set of distributed peer-to-peer services. Controlled global object searching is implemented using an iterative walker approach, with selective out of band response delivery and translation. Combined with an abstract component interest/response query model, this system goes beyond what is available in any other P2P platform. The Gnutella Addressing System (GAS) provides the ability to reach arbitrary nodes based on a known identifier, regardless of their connection method.

    Level Four: A New Implementation Standard

    One of the problems facing the legacy Gnutella network was the varying level of support for critical network features in different clients. The Gnutella2 Standard requires clients to implement the first two levels completely, as well as the dual transport providers with some form of intelligent bandwidth control, 1-bit universal QHT, simple search response, basic metadata (at minimum), simple query language, link compression, root tigertree as the primary URN, HTTP/1.1, partial transfer and sharing. If able to operate as a hub, the full set of generic routing rules must be supported. Support for G1 is recommended but not required.
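    The "root tigertree as the primary URN" item rests on a Merkle-style tree hash: hash fixed-size leaves of the file, then pair hashes upward until a single root remains that identifies the whole file (and lets individual chunks be verified). A minimal sketch of the idea, with sha256 standing in for the Tiger digest (which Python's hashlib lacks) and omitting THEX details such as distinct leaf/node prefixes:

    ```python
    # Merkle-style tree hash: the root digest identifies the whole file.
    import hashlib

    def tree_root(data: bytes, leaf_size: int = 1024) -> str:
        # Hash each fixed-size leaf of the file.
        level = [hashlib.sha256(data[i:i + leaf_size]).digest()
                 for i in range(0, len(data), leaf_size)] or [hashlib.sha256(b"").digest()]
        # Pair hashes up the tree; an odd hash out is promoted unchanged.
        while len(level) > 1:
            nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
                   for i in range(0, len(level) - 1, 2)]
            if len(level) % 2:
                nxt.append(level[-1])
            level = nxt
        return level[0].hex()

    # The same bytes always yield the same root, so the root can serve as a URN
    # for locating identical files across unrelated hosts.
    print(tree_root(b"x" * 4096))
    ```

    Unlike a flat file hash, the tree structure also lets a downloader verify each chunk against the tree as it arrives, which matters for the partial transfer and sharing requirement above.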

    CLICK ME! [webcruiser.org]

    • Re:Crossing fingers (Score:2, Informative)

      by Adam Fisk ( 536262 )
      Almost all of these protocols are existing standards that have been developed by the rest of the Gnutella community. In fact, not one of the primary protocols has actually been developed by Shareaza -- they're basically just repackaging the existing collection of newer protocols and calling them "Gnutella 2."

      The only new additions proposed here are the binary tree structure for packets and the node addressing system. Otherwise, all of these protocols have been developed by other members of the Gnutella community. "Gnutella 2" is a marketing gimmick aimed at doing things like getting on Slashdot.
      That said, it's not all bad. The perception of Gnutella should change, as the network is continuing to develop rapidly, with powerful protocols including the Hash/URN Gnutella Extension (HUGE), the Gnutella UDP Extension for Scalable Searches (GUESS), and the Ultrapeer proposal.

      Perhaps the more important issue at hand, however, is whether or not Gnutella will remain an open, interoperable protocol, or whether it will disintegrate into proprietary schemes. As yet, none of the new parts of Gnutella 2 have been posted in public specifications. This is really a first for Gnutella -- the Gnutella community works because standards are published publicly and go through a review process among all Gnutella developers. Gnutella 2 may bode ill for the future of Gnutella as an open network, but I really hope not. I hope that Mike (Shareaza) will quickly publish any new specifications he has to alleviate the fears of myself and everyone else in the Gnutella world!

      • Re:Crossing fingers (Score:5, Informative)

        by 0x0d0a ( 568518 ) on Wednesday November 06, 2002 @12:13PM (#4608281) Journal
        Yup. Raphael Manfredi (of gtk-gnutella fame) and the Limewire team (also major GDF developers) get no credit, and these "Sharezilla" wankers get a Slashdot link.

        Well, *here* is credit where credit's due:

        GTK-gnutella [sourceforge.net]

        LimeWire [limewire.com]

        Gnutella started out as an "interesting project". It is now one of the most heavily developed and analyzed projects -- somewhat less centralized than the Freenet project, but with far more skill (and a greater variety of clients) behind it than, say, FastTrack and the much-lauded Kazaa.
  • Kazaa vs. eDonkey (Score:5, Insightful)

    by T-Kir ( 597145 ) on Wednesday November 06, 2002 @10:17AM (#4607294) Homepage

    Ever since I've been using Broadband (Optimum Online yeah baby!), eDonkey has won me over vs. Kazaa(lite).

    Although eDonkey needs a little more work than Kazaa to operate, the file hashing/segmented downloads/no leeching are far better than Kazaa, plus the amount of file corruption I get using Kazaa is way too high (especially with very large files). I've also started trying Overnet [overnet.com], but still have loads of downloads I'm clearing through the Donkey (yes, I have tried using the donkey downloads for Overnet, but only half register in the download tab).

    I've tried using Gnutella/Gnucleus on numerous occasions, but given up due to not being able to do anything with it compared to the other P2P programs... I just hope Gnutella2 will become a viable option for me to use.

    • Re:Kazaa vs. eDonkey (Score:5, Informative)

      by Arker ( 91948 ) on Wednesday November 06, 2002 @10:39AM (#4607442) Homepage

      If you have a *nix box (even an Apple, if it's OS X) you can use mldonkey [nongnu.org], which is a very nice client. You can operate it remotely from another box, it uses both the eDonkey and Overnet protocols simultaneously, it's partially open source (there is a key component kept secret for security reasons; the one flaw in these protocols is that they unfortunately require trusted clients), and it really gives you the best of both eDonkey and Overnet, as well as supporting the move to Overnet, since anything you're downloading from eDonkey or sharing out will also be shared to Overnet.

      • Re:Kazaa vs. eDonkey (Score:2, Informative)

        by bluehell ( 20672 )
        it's partially open source (there is a key component kept secret for security reasons; the one flaw in these protocols is that they unfortunately require trusted clients)

        That's not true anymore. Since emule [sf.net], another edonkey client, released its source code, the mldonkey author decided to open-source the remaining code.
        • Re:Kazaa vs. eDonkey (Score:2, Interesting)

          by Arker ( 91948 )

          Wow, that's a big change. I wonder where that leaves the security issue, though. That's of course always been coming, but now I guess it's upon us... the network relies on the fact that even those who try to be leechers can't avoid sharing the parts they've already downloaded while waiting for the rest... if the complete source is out, it will be much easier for someone to put together a full leecher client... and if that becomes very popular, the whole network will become untenable. :(

          I never thought security through obscurity was a viable philosophy longterm, but it's better than nothing. What now? Have any of the developers addressed this that you know of?

    • Re:Kazaa vs. eDonkey (Score:4, Informative)

      by Jugalator ( 259273 ) on Wednesday November 06, 2002 @10:46AM (#4607502) Journal
      Same progression for me - i.e. Kazaa to eDonkey to Overnet. The biggest advantage of eDonkey over Kazaa IMHO is the "MD5 URLs" or whatever you should call them, where clicking on a URL adds the download to the eDonkey queue by using the MD5-style checksum. So you're 100% sure it's not a fake file. You can also be 100% sure that it's not a partial file.
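      The reason such hash links rule out fakes can be sketched as follows. The `hash://name|size|digest` format here is invented purely for illustration (real eDonkey links use their own MD4-based syntax), but the principle is the same: the link carries the expected size and digest, so the client rejects any download whose bytes don't match.

      ```python
      # Verify downloaded bytes against the size and digest carried in a link.
      import hashlib

      def verify(link: str, data: bytes) -> bool:
          """True only if the data matches the link's declared size and digest."""
          _, name, size, digest = link.split("|")
          return len(data) == int(size) and hashlib.md5(data).hexdigest() == digest

      payload = b"some shared file"
      link = "hash://|file.bin|%d|%s" % (len(payload), hashlib.md5(payload).hexdigest())
      print(verify(link, payload))        # True: a genuine copy passes
      print(verify(link, b"fake bytes"))  # False: a fake or partial file fails
      ```

      This is why a hash link from a trusted web page is stronger than a filename search: the name can lie, but the digest cannot.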

      I guess the downside with eDonkey is that it requires up to date server lists, although that's a minor hassle really. And that's where the serverless Overnet comes in, which owns Kazaa any time except for the occasional music searches perhaps.
      • For Kazaa, etc. you can try out sig2dat [tripod.com], which works similarly to the MD5 checksums. You click on a sig2dat link and the program will generate a .dat file with a checksum in it, which Kazaa will use to search for the file.

        It's not perfect though, since you have to shut down and restart the client to get it to see the generated .dat files. It's nice when using sites such as Fast Track Movies [fasttrackmovies.com].
    • by 0x0d0a ( 568518 ) on Wednesday November 06, 2002 @01:30PM (#4609065) Journal
      no leeching is far better than Kazaa

      I seriously doubt that. Any current "no leeching" mechanisms I've seen are severely flawed and rely on trusted remote code.

      People who whine and bitch that people are bypassing them are ignoring the fact that the design is fundamentally wrong. You cannot trust code on another computer. Period. It *will* be broken.

      It is possible to build a trust web (where you have metered trust, instead of just a binary "trusted" or "not trusted" a la PGP). Have each user generate a public/private key pair. Have each person maintain a list of trusted users. These users are identified by their public keys. "Trust values" are assigned to each user in the list-holding user's trust list. The scale is arbitrary -- maybe "100" means trust a lot and "1" means trust a little, and "0" means no trust. Trust is generally positive (more on that later).

      When you want to determine the "absolute trust" of a user, you run out and download the trust lists of all the users in your own trust list (this spans only two hops out on the web of trust... you could go further, though I think this is sufficient). A person A can grant absolute trust to person B as follows: (points of trust A gives B in A's local trust list) / (total points of trust A gives out in A's local trust list) * (points of trust A has in our local trust list).
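      A minimal sketch of this computation, under one stated assumption: the comment defines the per-intermediary term, and summing those contributions over every trusted intermediary A is my reading of how the terms combine (the names `absolute_trust`, `my_list`, `their_lists` are hypothetical):

      ```python
      # Two-hop web-of-trust: my trust in `target` via each intermediary A is
      # (A's points for target / A's total points handed out) * (my points for A),
      # summed over every A in my own trust list.
      def absolute_trust(my_list: dict[str, float],
                         their_lists: dict[str, dict[str, float]],
                         target: str) -> float:
          total = 0.0
          for a, my_points_for_a in my_list.items():
              a_list = their_lists.get(a, {})   # A's downloaded trust list
              handed_out = sum(a_list.values())
              if handed_out and target in a_list:
                  total += a_list[target] / handed_out * my_points_for_a
          return total

      me = {"alice": 100, "bob": 10}
      lists = {"alice": {"carol": 50, "dave": 50}, "bob": {"carol": 100}}
      print(absolute_trust(me, lists, "carol"))  # 50/100*100 + 100/100*10 = 60.0
      ```

      Note the dilution property the comment relies on: someone I barely trust can only ever pass along a small fraction of their own points, unlike PGP's all-or-nothing trust bit.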

      Then, attackers like the RIAA will be excluded from the network of trust, having low or no trust values, as they hand out corrupted files.

      Trust lists can be redownloaded whenever. Cache 'em for weeks if you want.

      Clients could automatically add a point of trust per data unit downloaded successfully from a remote client... then, if it's a bad download, the local user could strip all trust away.

      Trust could be used for ranking priorities to let people download from you, determining which copy of a file is "authentic" and which is bogus, etc.

      Other possibilities: the reason we don't allow negative trust or blacklists -- only whitelists -- is because it's usually fairly easy to regenerate a new IP, and this results in bloating attacks against users maintaining blacklists. If a user can present something that "costs" them something to obtain, like a VeriSign cert or other "expensive" (i.e. can't regenerate on your computer easily) proof of identity (doesn't have to be your RL name -- could be a signed cert endorsing a 'nym from Zero Knowledge), then automatically give them a certain number of points of trust (client configurable). Why? Because it's much less likely that they're running out and buying a new Verisign cert for each attack. They're opening themselves up to blacklisting.

      You could purge year-old entries from your local trust list to stay up to date...oh, there's tons of possible tweaks.

      The trust network simply sits on top of another P2P network. It does not require that users not download from users with zero trust -- it simply provides some extremely useful information which is essential to implementing strong antileech/anti network attack protections, or what have you. It is also very difficult to attack. PGP is much more vulnerable, since you just need one stupid person in your web of trust to okay someone, their binary trust bit flips to 1, and they're in your web. If you don't trust someone much, and they give someone else a little tiny bit of trust...that person is only very slightly trusted.

      Drawbacks:
      My analysis of this approach has found only two drawbacks. First, there is some disk and memory overhead to store cached trust information locally. Gnutella clients already store IPs for much of the network, so it shouldn't be prohibitive, though -- we don't have to handle the whole network, just *trusted* users.
      The second one is that letting people download your trust list -- crucial to the functioning of the system -- can leak some information. It means that you "trust" some user on the network. If that user provides nothing but, say, child porn, anyone on the trust network has circumstantial evidence that you have downloaded child porn. Of course, you could have granted the person trust for any number of other reasons, but it is a small amount of information leakage, and worth mentioning.

      I welcome comments.
    • Okay, on your (and others) advice, I just gave it a try. It doesn't seem to have very many files. "Babylon" (to find Babylon 5 episodes) turned up 0 results. "Terminator" only showed 1 result, and that was supposedly a screener copy of T3: Rise of the Machines.

      Kazaa-lite, on the other hand, has enabled me to watch virtually all of Babylon 5 (I've just got the last half season to go). What am I missing, how is eDonkey better?

      • OK, first off there is a little knack to searching on eDonkey. You first have to make sure that your firewall will accept connections through ports 4662-4663 (and forwarding to the machine running the donkey)... most of this info is on the eDonkey site [edonkey2000.com].

        When it is up and running, you can do a search when you are connected to a server (a good idea is to get an updated serverlist, one of the places I go to is The Donkey Network [no-ip.org]). If there aren't any of the files there, then click the 'Extend Search' button that pops up to the left of the search button... to do more searches, click the button then press and hold down the enter key for less than a second, do more short bursts to let any server search results get through.

        A lot of the files will be dependent on what people are sharing, and the more blue the colour, the more people have the same file. A great place I've recently found that lists certain Sci-Fi files is Varelse's Sharepool [xmission.com], and another site for other links is ShareReactor [sharereactor.com].

        A lot of the server work (updating lists, etc) has been automated in Overnet [overnet.com], but I haven't been using it at all yet. As I said in the first post, it takes a little more work to learn eDonkey, but I've found the quality of files that are being shared far superior to the FastTrack network (esp. for very large files). There are times that I can't find stuff on the Donkey network, so Kazaa still comes in handy.

  • Just wondered... (Score:4, Interesting)

    by Anonymous Coward on Wednesday November 06, 2002 @10:31AM (#4607371)
    Does anyone actually use P2P networks for legal uses?!?!?!?!?!? e.g. not mp3/porn..

    If so, can you list what you use it for?

    • mp3 is perfectly legal. many bands share their music via p2p networks. I also download some drivers & game demos from p2p, as it's often a faster way to get them.
    • by roryh ( 141204 )

      MP3s are not inherently illegal. I download classical music for which there are no copyright issues that I'm aware of.
    • by Jugalator ( 259273 ) on Wednesday November 06, 2002 @10:51AM (#4607549) Journal
      Does anyone actually use P2P networks for legal uses?!?!?!?!?!? e.g. not mp3/porn..

      *thinking hard*

      Downloading .nfo's and .cue's?

      AFAIK, those aren't illegal. :-)
    • Re:Just wondered... (Score:4, Interesting)

      by Anenga ( 529854 ) on Wednesday November 06, 2002 @11:39AM (#4607971)
      Now that Shareaza has global searches (and natively hashes in SHA/MD4/MD5/TTH) we can post up the hash of Linux distros and begin downloading from the Linux distro site.

      People can then download off that person using partial file sharing, and so on. It will save the main site a hell of a lot of bandwidth and you'll be downloading the distro swarming from 10+ people rather than one slow FTP site.
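      The reason swarming beats one slow FTP mirror can be sketched in a few lines: split the file into chunks and spread them across every peer that has the matching hash. (`assign_chunks` is a hypothetical helper; real clients do smarter things like rarest-first selection, retries, and per-chunk hash verification.)

      ```python
      # Naive swarming plan: assign file chunks round-robin across peers so the
      # download runs in parallel instead of serially from one source.
      def assign_chunks(n_chunks: int, peers: list[str]) -> dict[str, list[int]]:
          plan: dict[str, list[int]] = {p: [] for p in peers}
          for i in range(n_chunks):
              plan[peers[i % len(peers)]].append(i)
          return plan

      print(assign_chunks(5, ["peer-a", "peer-b"]))
      # {'peer-a': [0, 2, 4], 'peer-b': [1, 3]}
      ```

      With 10+ peers each serving a slice, the bottleneck moves from the origin site's upload pipe to the downloader's own connection.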
    • I've downloaded a lot of MP3s of music that I already have on cassette. Since making a personal-use copy of works you own is legal, I believe that activity is covered.

      I'm sharing out a lot of MP3s I got from mp3.com and Amazon free downloads.

      There are also a lot of movie trailers and video samples from porn sites out there.
    • yes, lots of legit uses. here are some:

      -downloading tv shows (futurama, the simpsons, south park, etc.), which is no different than recording the shows on my vcr, except it has the advantage of being more easily organized and having no commercials.

      -downloading copyrighted works that i already own on another media such as cassette tape or vhs that i want in digital format.

      -i'm sharing a lot of public domain works that i've downloaded from project gutenberg (plato, aristotle, descartes, thoreau, etc).

      -downloading/sharing star office 5.2, which sun no longer offers for free on their web site.

      there's probably other uses that don't come to mind at the moment...

  • by smd4985 ( 203677 ) on Wednesday November 06, 2002 @10:33AM (#4607384) Homepage
    I'm an engineer at Lime Wire LLC so I can debunk much of this submission. Shareaza's Gnutella2 isn't so much the second iteration of Gnutella - instead, think of it as an improved Gnutella. In fact, the improvements were actually proposed by Lime Wire LLC (consult the GDF and look for messages about 'GUESS'). The GUESS protocol is a UDP based protocol we developed to allow for Gnutella network crawls/walks. We introduced it for public comment on the GDF *before* releasing it because we understand that Gnutella, as an open protocol, needs support from all Gnutella developers. I'm not sure what exactly Shareaza has implemented (because they HAVE NOT released the specs yet), but it sounds a lot like GUESS.

    So this isn't so much Gnutella2 as an improved Gnutella. Perhaps one day it will evolve into Gnutella2 more formally, but at the moment this talk of Gnutella2 is premature.
    • by Adam Fisk ( 536262 ) on Wednesday November 06, 2002 @10:44AM (#4607484)
      Seconding Susheel's comments, "Gnutella 2" appears to be primarily a marketing gimmick. Gnutella 2 is really just a collection of protocols, most of which have been in use on Gnutella for some time. The one apparently new protocol is a version of the Gnutella UDP Extension for Scalable Searches (GUESS) open standard, which was proposed by LimeWire some time ago, as Susheel mentioned, and which is in experimental stages. That said, perhaps "Gnutella 2" makes some sense as a name, as the computing community seems to be out of touch with how rapidly developments are happening on Gnutella. The collection of protocols used on Gnutella today makes it a vastly different network than what people typically think of as Gnutella. If Gnutella 2 changes that perception, then it's great. Just keep in mind that "Gnutella 2" has little to nothing to do with Shareaza -- they primarily contributed the name. The new protocols in use on Gnutella are the result of countless hours of work from many Gnutella developers around the world.
    • I'm an engineer at Lime Wire LLC...

      So when are you guys going to remove all that crapware & stealware from the LimeWire client?
      • So when are you guys going to remove all that crapware & stealware from the LimeWire client?

        When a good amount of people are willing to pay decent money for a client?
      • Well, I assume if you're posting on /. you are familiar with CVS? Go to this page [limewire.org] and grab a copy of the source. All you need to compile is the Java SDK [sun.com] and the Ant build tool. [apache.org]
    • Coming from a company who installs spyware? I don't use p2p filesharing apps, but I've had to clean several machines of users who do. Although I fully understand the desire to rob people blind through their own stupidity (I'm in technical support, for crying out loud), I just can't see how anything that comes from you guys (or any other company willing to put that sort of trash on a person's machine) could be truthful.
    • Judging by your comment and your associate's, it seems that you fellows feel as if your toes are being stepped on a little :-) Honestly though, Shareaza came out less than a year ago and is outpacing Limewire and Gnucleus (which was my previous favorite client), and it's written by one guy. Oh, and, NO SPYWARE.

      At first I saw that you worked for LimeWire, and felt a small amount of respect - then I remember the bullshit hoops I had to go through to clean my system of the utter crap it installed through my system directories and the registry.

      As for calling something like Gnutella2 premature, um, no. The standards of the web were written down by the W3C, just as the Gnutella standards are written by the GDF. But if Shareaza comes out with something radically different and is accepted by the majority of users, it becomes the standard much in the same way that IE (unfortunately) did in the browser war. Now the W3C is playing catchup - and maybe the GDF is as well.

      • by Adam Fisk ( 536262 ) on Wednesday November 06, 2002 @11:17AM (#4607752)
        I think you're right -- our toes were stepped on a little =) -- ours and everyone else's in the Gnutella world who put in most of the work to come up with "Gnutella 2," many of them open source programmers who donated their time to the effort.

        Mike has done a hell of a job on his client and is a very nice guy, but he simply is not the originator of the vast majority of the standards being branded as "Gnutella 2."

        The key word in your last paragraph is "unfortunately." Yes, it was unfortunate that IE created its own standards and bypassed the W3C. Are you truly advocating proprietary standards over open standards? Am I misinterpreting you?

        • This guy is a troll. He's pushing IE, closed source, proprietary standards, and the domination of MS over the W3C and standards committees. He's saying that product "foo" is better than your product to try to sting you a bit, and then bashing the GDF, which *developed* the stuff he's lauding.

          Don't bite.
      • At first I saw you worked for LimeWire, and felt a small amount of respect - then I remember the bullshit hoops I had to go through to clean my system of the utter crap it installed through my system directories and the registry.

        I hate to say it, but I'm starting to get a pretty good chuckle every time I see some poor Windows user griping about the amount of pain they go through to get "good downloadz". I hear whining about "pop up" or "pop under" ads. I hear complaining about "spyware". I hear complaining about "mandatory sharing" in P2P apps. I hear people frantic that newer P2P apps can "fake" shares (like on Direct Connect) because of piss-poorly designed architectures involving trusted remote code.

        It's all really funny to those of us who have been using open source P2P clients and Mozilla on Linux. *We* haven't seen a single one of these problems, and *we* aren't suffering.

        But, you know what? I encourage pop-ups. And intrusive advertising, spyware, and everything else. Why? It doesn't affect me in the least, and it means that *you* are subsidizing the good life for me. Each pop-up you see funds another good, clean pop-up free page for me.

        Of course, someday you people are going to catch on. You're going to use Mozilla, use Linux. You're going to use better P2P clients. But until that day, the rest of us are going to enjoy the good life.

        Until then, thanks for everything!
    • Since you work for LimeWire, you'll know that Mike (the developer of Shareaza) was contacted by you with your GUESS proposal as he was working on his own similar proposal (which is now used in Shareaza). I know those e-mail logs are lurking somewhere at LimeWire LLC. Perhaps your accusations that Mike "stole" or "took" the GUESS protocol are a little too "immature"?

      Come now. I think we're all quite tired of the poor attitude the GDF has shown towards Mike. There is no rule in the GDF against further development of the Gnutella protocol. I can of course sympathize with why you're upset (Mike hasn't released the specs yet, or consulted the GDF beforehand), but there's no denying the actual facts: I get better results on G2 than G1. Less bandwidth, more fruitful results, etc.

      GUESS will be included in Shareaza soon enough as other clients start using it (in Shareaza's G1 capability). If you dislike the G2 design, that's fine. But we also hope that you could put aside your personal matters and actually embrace the network as really a better path to take than currently working with the mess which is G1.
      • Sharezilla is more than welcome to use their "G2" protocol. They can communicate with the other "Sharezilla" users out there. Woohoo!
      • We also don't mean to imply that Mike "took" GUESS in any way. I fully understand that he was working on a separate protocol when GUESS was being developed -- I understand because I'm the one who had those conversations with him.

        The point is that GUESS is a public specification for searching on Gnutella. Hopefully, whatever Mike is doing will soon be public as well. Then again, if it's not GUESS but is very similar to GUESS, then it'll create a standardization nightmare that we've worked very hard to avoid -- the type of incompatibility that wastes everyone's time. We need to have an open network that evolves and innovates rapidly. Perhaps Gnutella 2 will be a positive part of that, but it's not off to a great start.

  • gee (Score:2, Funny)

    by Sacarino ( 619753 )
    I didn't think I was getting my fair share of "britney wet-t avi" hits with Gnutella; good thing it's reworked now so I'm not limited to local traffic.
  • try giFT (Score:2, Insightful)

    by Anonymous Coward
    giFT [sourceforge.net] started out as an open source FastTrack client, but is now an independent P2P-network with a similar network structure to FastTrack. It seems to work quite reliably (giFTcurs [sourceforge.net] is a wonderful interface), and the shared files are high quality compared to other networks - lots of ogg encoded music, for example.
  • UDP and firewalls (Score:5, Interesting)

    by elliotj ( 519297 ) <slashdot.elliotjohnson@com> on Wednesday November 06, 2002 @10:40AM (#4607456) Homepage
    I wonder how this client will perform for people behind firewalls? Many firewalls are setup to deny UDP traffic because most Internet activity is TCP and having UDP open has been unnecessary up to this point.

    I wonder if this will halt the spread of Gnutella2? With P2P, it's all about getting as many people online as possible.
    • Re:UDP and firewalls (Score:5, Interesting)

      by Adam Fisk ( 536262 ) on Wednesday November 06, 2002 @10:53AM (#4607564)
      Firewalls block most incoming UDP traffic in the same way that they block incoming TCP traffic -- there's really no difference. Incoming traffic is generally denied except for specific ports.
      So, with both UDP and TCP, only outgoing data will not be blocked as a general rule. With TCP, this poses less of a challenge because once you've established a connection, data can be passed both ways. With UDP, you cannot establish a connection in the same way. That said, most firewalls will allow incoming UDP from a specific endpoint if you've sent outgoing data to that endpoint "recently." In this way, a quasi-connection can be established.
      All that aside, though, the short answer is that non-firewalled hosts, and specifically "Ultrapeers" on Gnutella, act as proxies for firewalled hosts, allowing firewalled hosts to behave on the network almost exactly like hosts without firewalls.
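For the curious, the "quasi-connection" trick can be sketched with plain sockets. This is an illustrative sketch, not code from Shareaza or any other client; on localhost there is no firewall in the way, but the same exchange works through a typical stateful firewall because the first outgoing datagram is what opens the return path.

```python
import socket

# Two UDP endpoints: "peer" plays the non-firewalled host (e.g. an
# Ultrapeer), "inside" plays the firewalled host.
peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer.bind(("127.0.0.1", 0))
peer.settimeout(2.0)

inside = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
inside.bind(("127.0.0.1", 0))
inside.settimeout(2.0)

# 1. The firewalled peer sends first -- with a stateful firewall, this
#    outgoing datagram is what creates the temporary "allow replies
#    from this endpoint" state.
inside.sendto(b"QUERY", peer.getsockname())

# 2. The outside peer replies to the source address of that datagram;
#    the reply gets back in because we sent to that endpoint "recently."
data, addr = peer.recvfrom(1024)
peer.sendto(b"RESULT", addr)

reply, _ = inside.recvfrom(1024)
print(reply)   # b'RESULT'
```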
  • by fault0 ( 514452 ) on Wednesday November 06, 2002 @10:43AM (#4607475) Homepage Journal
    is OpenFT from the giFT project [sourceforge.net].. as people may recall, giFT was originally an open implementation of parts of the FastTrack protocol, used by Kazaa, et al. This was a year ago, and KaZaA was not at all happy about it, so they updated the protocol a few times to break giFT (see KaZaA version 1.33).

    So, some of giFT's developers decided to abandon FastTrack and make their own protocol, OpenFT. giFT went from "giFT is not FastTrack" to "giFT: Internet File Transfer". This protocol, primarily written by jasta of gnapster fame, has been in development for the last ~8 months. A publicly released version of giFT with OpenFT is not available yet, but right now the CVS version [sourceforge.net] works quite well.. in some ways even better than FastTrack does.

    There are also some great advantages to giFT. First of all, it enforces a separation between the client and the network code. giFT is a daemon that handles most of the interaction with the outside world. There are also a multitude of giFT frontends, which are very easy to write, as no network code has to be created. giFT is also modular.. you can put in bridges or even full support for other protocols and networks.
  • UI (Score:2, Informative)

    by Mr_Silver ( 213637 )
    It's a small thing, but my biggest complaint about these p2p programs is that the user interface just sucks.

    Sure, it's usable, but it's horrific. Kazaa's is awful, eDonkey's just blows and WinMX, urgh, don't get me started.

    Admittedly, I never really investigated Gnutella after trying the original Nullsoft version. The UI was OK, if a little plain, but the time it took to hook up to a bunch of stable nodes, the slow download times and the frequency of dropped downloads just put me off.

    So really, all I'm asking is that whilst you're concentrating on making an excellent protocol, please don't employ a 7 year old with a crayon to do the UI. Hell, I'd happily help out on an OSS project, however I can't use VC++ to save my life and most people wouldn't like submissions as Visual Basic frm's; I'll probably end up standing on the sidelines shouting but having no-one listen.

    There are a few examples of technically inferior applications that do better than others simply because their UI is clean, consistent and works. Let's have that, please!

    • Whaa? I actually think Shareaza has the best UI out of all Gnutella clients.

      Though, there is that one Searching Status I begged Mike not to put in, which was put in anyways =)

      Your suggestions are welcome on the forum [shareaza.com].
    • Re:UI (Score:3, Funny)

      by 0x0d0a ( 568518 )
      It's a shame that all the Windows P2P clients use things that look like their UI designer is an ex-web designer.

      UNIX P2P clients don't suffer from this problem.
    • Re:UI (Score:2, Interesting)

      by Pinky ( 738 )
      I get a half dozen complaints about the GUI in my p2p program, Myster, every few months. I have asked people to do mock-ups of what they would like to see in a GUI and have not yet received any. Personally, my ideal p2p GUI is in Myster (well, with some more intelligent window behavior). Many simple windows for a small learning curve and the ability to do many things at once. Or not.. :-)

      OSS, no spyware, Unicode everywhere (lots of Japanese stuff)

      www.mysternetworks.com
  • by evilviper ( 135110 ) on Wednesday November 06, 2002 @11:14AM (#4607735) Journal
    The (solvable) problems with Gnutella:

    Bandwidth Usage (for searches)

    Search results. You only get about 4-7 hops. Assuming 4 hops & 4 non-redundant connections per node, that means you are only searching about 256 nodes. Being able to search everyone would make Gnutella far more useful for less-common files.

    FIFO queuing. You may have been requesting a file for the past 24 hours, but someone who just requested a file may get lucky and take what should have been your spot.

    Messages. We need messages to tell people that slow nodes downloading from our node get disconnected, that you are 2nd in the queue, etc.

    Upload settings. Each node should be disconnected after a set period of time to prevent slow nodes from causing bottlenecks, or RIAA employees from abusing the limited open slots.

    Bandwidth Min/Max for Uploads/Downloads. A limit on the min/max speed for each file downloaded/uploaded, and a min/max for the TOTAL of all downloads/uploads.

    Dynamic determination of REAL IP (if behind NAT with dynamic globally valid IP).

    Solution to the 'PUSH' fiasco. Is there a way that 2 firewalled nodes can connect to a third (non-firewalled) party to open the connection, then transfer data directly? I don't think so, but it's worth including here.

    Any more?
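On the bandwidth min/max point above: one common way to implement that kind of cap is a token bucket. Here's an illustrative sketch (not code from any actual Gnutella client; the rate and burst numbers are made up for the example):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill continuously at
    `rate` bytes/sec, up to a `burst` ceiling; sending spends tokens."""

    def __init__(self, rate_bytes_per_sec, burst):
        self.rate = rate_bytes_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Return True if nbytes may be sent now, else False (wait and retry)."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Usage: cap an upload at 10 KB/s with an 8 KB burst allowance.
bucket = TokenBucket(10_240, 8_192)
print(bucket.consume(4_096))   # True  -- within the initial burst
print(bucket.consume(4_096))   # True
print(bucket.consume(4_096))   # False -- bucket drained, must wait
```

A client would keep one bucket per transfer plus one global bucket for the TOTAL cap, and only send a chunk when both allow it.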
    • Hmm, I've left a few out...

      Metadata. So that you can find mp3s/oggs via the value in their id3 tags. e.g. Search for Artist:Megadeth

      Searching by (sha1) hash. Some have it, ALL need it. Slackware could put their latest ISO on Gnutella and distribute the (sha1) hash on their website. You then find the file on many nodes, and can download from any or all of them. Instant, easy, free file mirroring.

      And while I'm here... Long live Napshare!
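The hash idea takes only a few lines to sketch: the urn:sha1 form used in magnet links is just the Base32 encoding of the raw SHA-1 digest. (The file contents below are made up for illustration.)

```python
import base64
import hashlib

# Pretend these bytes are the ISO being mirrored.
data = b"pretend these are the bytes of slackware.iso"

digest = hashlib.sha1(data).digest()               # 20 raw bytes
urn = "urn:sha1:" + base64.b32encode(digest).decode("ascii")
print(urn)   # urn:sha1: followed by 32 Base32 characters

# Any node can verify a downloaded copy by recomputing the hash,
# which is what makes downloading from many untrusted mirrors safe.
assert hashlib.sha1(data).digest() == digest
```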
    • Solution to the 'PUSH' fiasco. Is there a way that 2 firewalled nodes can connect to a third (non-firewalled) party to open the connection, then tranfer data directly? I don't think so, but worth including here.

      Not possible. The only way it could work is if the third party acted as a proxy for all of the data, because neither side can initiate the TCP transfer. The PUSH idea is actually a pretty neat solution to getting around firewalls, but I never liked the way Gnutella used it; you rarely got a successful push.

      It won't work if both clients are firewalled, and this is in keeping with the point of the firewall, i.e. preventing incoming connections.

      My solution is to run p2p on a sacrificial PC that only has limited access to my network, i.e. read-only access to SAMBA shares. I do this anyway because it's a public PC in my living room, and with lots of random people around from time to time, it's a good idea to protect my data. My firewall forwards the p2p ports to this host, so I can basically access all of the nodes on the network. Should it ever get "rooted", my exposure is not quite as bad as it would be for a trusted machine.

      Running p2p behind a firewall severely limits the number of people you can access. I see this as a good thing, because it means fewer people are fighting over the resources that I personally can use. ;-)

      • Not possible. The only way it could work is if the third party connection acted as a proxy for all of the data, because neither side can initiate the TCP transfer.

        I've seen an article somewhere describing a system that exploits the little-known TCP feature of simultaneously opening a connection from both sides to allow two machines behind NATs to talk TCP to each other. It's very tricky and starting the connection requires the assistance of a third party that is capable of sending packets with spoofed IP addresses. After the connection is open, though, the two participants can talk directly without additional help.

        It may not be practical, but it is possible.
    • by chrohrs ( 302592 ) on Wednesday November 06, 2002 @12:05PM (#4608205) Homepage
      What may not be clear to many Slashdot readers is that the Gnutella protocol has been steadily improving over the last few months. Let me correct the previous poster on a few points:

      Search results. You only get about 4-7 hops. Assuming 4 hops & 4 non-redundant connections per node, that means you are only searching about 256 nodes.

      Your math is way off here. Try 7 hops with 6 connections, plus an extra factor of 100 or so from ultrapeers. That said, we are always looking for ways to improve searching. Ultrapeers [limewire.com] were one step along that path.
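For anyone checking the arithmetic in both posts, the search horizon can be sketched like this. It's a best-case model: it assumes a perfect tree with no overlapping connections (real networks have lots of overlap) and that a query is never forwarded back the way it came, so each host forwards to (connections - 1) neighbors. Ultrapeers multiply the effective reach well beyond these numbers, since each one answers for many leaves.

```python
def horizon(hops, connections):
    """Hosts reached in an idealized Gnutella broadcast search."""
    total = 0
    frontier = connections       # hosts reached at hop 1
    for _ in range(hops):
        total += frontier
        frontier *= connections - 1   # each forwards to all but the sender
    return total

print(horizon(4, 4))   # 160    -- the 4-hop / 4-connection estimate
print(horizon(7, 6))   # 117186 -- 7 hops with 6 connections
```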

      FIFO queuing. You may have been requesting a file for the past 24 hours, but someone that just requested a file may get lucky, and take what should have been your spot.

      Many clients (e.g., LimeWire, BearShare, Shareaza, Gtk-Gnutella) have supported this for some time now. They all interoperate too.

      Bandwith Min/Max for Uploads/Downloads. A limit on the min/max speed for each file download/uploaded, and a min/max for the TOTAL of all downloads/uploads.

      All decent clients have features like this. But note that this is an implementation issue, not a protocol issue.

      Search by hash

      This has been supported for many months, thanks to Gordon Mohr's HUGE proposal.

      Metadata

      LimeWire has had XML-based metadata for over a year. I believe Shareaza uses the same scheme.

      As these examples show, the GDF has been quite successful at driving innovation on the Gnutella network. But caution is sometimes in order; it can be hard to predict the result of thousands of clients running a new protocol. It would be good for Shareaza to submit its new extensions for peer review before rolling out thousands of clients. It is easy to build a client that gets more search results; it is harder to do that without hurting the entire network.

      Christopher Rohrs
      LimeWire
  • by TerryAtWork ( 598364 ) <research@aceretail.com> on Wednesday November 06, 2002 @11:20AM (#4607774)
    The whole point of a p2p network is not to share files but to not get caught sharing files.

    Last February I got a bigfoot letter from my ISP, Rogers, who had been contacted by the Canadian equivalent of the RIAA, whatever it's called.

    I was sharing tons of stuff, 8,000 mp3s, on DalNet and they wanted me to stop. What bothers me is they never contacted ME, they went straight to my ISP and tried to get me kicked off the Internet.

    The letter from Rogers said you're in violation etc, stop now etc, or else etc.

    So, I stopped.

    This close call ruined my career on Dalnet where I had built quite a rep, and trashed my source of free music.

    And not popular music either - ancient stuff you can't get anymore, like Robert Crumb and his Cheap Suit Serenaders. Buy THAT at your local CD shop...

    Since then the point has been made moot by the fact that my cable modem has been capped at about a FIFTH of its previous speed. (I am investigating DSL.)

    However - the crux of the whole matter is this - the record companies hired people to go on the Internet and score music for them. Then these people, who (and this is crucial) have the IP of the music source, use that IP to run the source down and then use legal means to try to get that person kicked off the Internet.

    IT DOESN'T MATTER HOW FANCY YOUR PROTOCOL IS OR HOW GOOD YOUR CRYPTOGRAPHY IS. IF THEY CAN GET YOUR IP, YOU'RE SCREWED.

    I have NEVER seen a p2p system address this issue.
  • Gnucleus & GnucDNA (Score:5, Informative)

    by DeadBugs ( 546475 ) on Wednesday November 06, 2002 @11:59AM (#4608148) Homepage
    Gnucleus [gnucleus.com] has been a solid Gnutella client for me.

    They are also working on GnucDNA [gnucleus.net] a component for building your own P2P applications.
  • Download Mirror (Score:3, Informative)

    by nstrom ( 152310 ) on Wednesday November 06, 2002 @02:25PM (#4609650)
    Download link http://download.shareaza.com:8825/Shareaza1701.exe [shareaza.com] seems impossibly slow -- I'm getting 276 bytes per sec on my DSL connection. For anyone who wants to check out the 1.7 prerelease, here's a mirror:

    http://nstrom.chaosnet.org/Shareaza1701.exe [chaosnet.org]
    • I'm sharing 1701 on Gnutella, as are many people. That's the fastest way to download it if you already have a client that understands magnet links:
      magnet:?xt=urn:sha1:QQB67YHOQV5BSLCFS7JYV62QAPLWCFRB&dn=Shareaza1701.exe

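For reference, a magnet link is just a URI whose query string carries the hash (xt, "exact topic") and a display name (dn), so any client can pick it apart with ordinary URI parsing. A quick sketch using the Shareaza1701 link as the example:

```python
from urllib.parse import parse_qs, urlparse

uri = ("magnet:?xt=urn:sha1:QQB67YHOQV5BSLCFS7JYV62QAPLWCFRB"
       "&dn=Shareaza1701.exe")

# urlparse splits off the query string; parse_qs turns it into a dict
# of parameter -> list of values.
params = parse_qs(urlparse(uri).query)
print(params["xt"][0])   # urn:sha1:QQB67YHOQV5BSLCFS7JYV62QAPLWCFRB
print(params["dn"][0])   # Shareaza1701.exe
```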
  • So does this mean I get more pr0n, or less pr0n?
  • UDP and DDoS (Score:2, Interesting)

    by sshore ( 50665 )

    I haven't seen anyone mention the potential of abusing the UDP search extension as a massive DDoS reflector. Simply send a query for something very common, with a faked source address on the packet, to as many Ultrapeers as possible. (I'm assuming that Shareaza implemented the GUESS extension, as many people have suggested.)

    The documentation for GUESS is not reassuring:

    In the past, a principal objection to using UDP has been that it allows anyone to easily execute a DDoS attack on any target machine. This concern has been based on the assumption that queries would require an extension listing the IP address and UDP port to reply to, however. In this proposal, this extension is not required, as responses are always sent directly back to the node that sent them, rendering such an attack impossible.

    This totally ignores the fact that the only way to determine which node sent the packet is to use the source address on the UDP packet! Am I missing something here? Am I misreading the documentation?
