The Internet

Gnutella at One Year 89

transient writes: "Gnutella's first birthday passed quietly about 10 days ago. An OpenP2P article reflects on the Gnutella network as a transient extension of the Web, since Gnutella peers use HTTP for file transfer and are essentially Web servers. Seems the network keeps evolving; there's some discussion of the new BearShare Defender and more info on the recent Gnutella virus. If Gnutella peers are Web servers, wouldn't that make Gnutella users who share files equivalent to Web site publishers, with the same responsibilities?"
  • by Anonymous Coward
    Infoanarchy.org is a weblog for the discussion of information freedom and p2p apps that help create that freedom. If that interests you, then don't forget to visit [infoanarchy.org].
  • by Anonymous Coward
    If I am the owner of the copyright, or I have permission from the owner, then it isn't illegal to share those kinds of files. I can share GPL'd code using Gnutella and I won't be doing anything illegal. I know most people don't own the copyright to the files they share, but here we run into the same problem Napster had: the problem is not the protocol or the programs that use it, the problem is the people who use these programs to do illegal things. Nobody says anything about HTTP or FTP, but I have downloaded copyrighted songs using those protocols (when I say HTTP, I mean I downloaded the song from a web page). But companies don't want the web to disappear. They use it for their own purposes. Alfonso.
  • If easy to program, easy to implement multicast were available, gnutella would've used it and not been nearly as poor in the scalability department.

    Multicast is pretty easy to program, not much harder than UDP. Or at least the system interface is almost exactly the same (you have to manually set the TTL; that's about the only difference I remember). A small socket sketch follows at the end of this comment.

    Getting a multicast feed is harder, but not really harder than NNTP: you find someone who has one and request a tunnel (unless your ISP magically gives you multicast, which is quite rare).

    Mind you, this was the state of affairs about 8 years ago, when I did the multicast news software [usenix.org] in 1993~1994. Well, you also frequently needed kernel patches then, but I don't think that is needed in modern unix-like systems.

    It is quite hard to do something with multicast that doesn't suffer congestion problems. It is like doing normal UDP work, where the protocol doesn't help you with packet loss or congestion, except it is far harder to get replies from all receivers (in fact, if you want to scale forever, you can't ever accept any replies from anyone). It's a big old pain, but people do UDP-based systems, and they could do multicast ones as well with more work.
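
    To make the "not much harder than UDP" point concrete, here is a minimal sketch of IPv4 multicast send/receive in Python (not from the poster; the group address, port, and TTL are arbitrary examples, and it assumes multicast routing actually reaches the receiver):

      # Minimal IPv4 multicast send/receive sketch.  239.1.2.3 and 5007
      # are arbitrary example values.
      import socket, struct

      GROUP, PORT = "239.1.2.3", 5007

      def send(msg: bytes, ttl: int = 16):
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          # The one extra step vs. plain UDP: set the multicast TTL by hand.
          s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
          s.sendto(msg, (GROUP, PORT))

      def receive():
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          s.bind(("", PORT))
          # Join the group; the rest is ordinary UDP receive code.
          mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                             socket.inet_aton("0.0.0.0"))
          s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
          return s.recvfrom(1500)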

  • I want multicast that just works regardless of whether an ISP supports it or not, and if the ISP wants to reduce bandwidth usage on their network, they implement it.

    Well, you can get a tunnel from anyone, not just your ISP. It is in the ISP's best interest to be the one to provide it, unless they charge you per bandwidth used.

    Last I checked, UUNET gave free multicast to any leased-line customers. That was quite a while ago though; it may have changed. I also know they can do multicast to dial-ups (anyone using Ascend boxes there should be able to do it), but I don't know what the deal on that is.

  • Well, what would you call something that serves files via HTTP other than an HTTP server? What we call web servers (such as Apache, etc.) serve files via HTTP. That Apache and company sometimes do other things, like parsing includes or doing content negotiation, whereas Gnutella does distributed searching instead, reflects a difference in purpose rather than a difference in nature.

    I guess my reply boils down to: what sets it apart from other HTTP-based file server applications such that you feel it is incorrect to call it an HTTP server?

    -j

  • I'm still working on my Gnutella proxy/server, and I can state for a fact that Gnutella isn't anything like a web server. Gnutella's routing, query, and message protocol uses packets, for Christ's sake (a rough header sketch follows below). In fact, the HTTP it uses to *transmit* files isn't even correct by protocol standards. The only way you can use a web browser with Gnutella is to use a special proxy.

    My proxy is basically a "dedicated" Gnutella server + message passing for Linux clients (gtk_gnutella) that have no server component. If anyone wants to chat about Gnutella protocol specs and extensions, please email me or post here. I'm very interested in the new clients' extensions to the protocol! =)
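
    For readers who haven't seen the wire format, here is a rough sketch of parsing the 23-byte Gnutella v0.4 descriptor header. It is written from memory of the public spec rather than taken from the poster's proxy, so verify the field layout and type codes against the spec before relying on them:

      # Parse a Gnutella v0.4 descriptor header (23 bytes), from memory of
      # the public spec -- field values are an assumption, check the spec.
      import struct

      DESCRIPTORS = {0x00: "Ping", 0x01: "Pong", 0x40: "Push",
                     0x80: "Query", 0x81: "QueryHit"}

      def parse_header(buf: bytes):
          if len(buf) < 23:
              raise ValueError("need 23 bytes of header")
          guid = buf[:16]                    # descriptor ID used for routing replies
          kind, ttl, hops = buf[16], buf[17], buf[18]
          (payload_len,) = struct.unpack("<I", buf[19:23])   # little-endian length
          return {"guid": guid.hex(),
                  "type": DESCRIPTORS.get(kind, hex(kind)),
                  "ttl": ttl, "hops": hops, "payload_len": payload_len}
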
  • I am working on a project that may satisfy a number of the intended features you mention.

    For example...

    As a straight peer-to-peer network grows, it becomes saturated with traffic. Requests are sent, propagated, and choke the entire network of peer-to-peer clients, usually at the lowest bandwidth level.

    I saw this first hand when using a modified Gnutella client to monitor the types and number of queries occurring on the network. The vast majority was crap or outright malicious, and it brought my 1.5 Mbps downstream DSL line to a crawl.

    But it is possible to have a fully decentralized network that is bandwidth friendly. I am working on it now.

    If you try to run this through an established client-server system, lawyers descend like flocks of carrion birds.

    Another important aspect of this network is that searching and actual transfer are decoupled. When you find some hits for your query, you are returned a list of locators for that resource. These may be simple HTTP style, or they may be Freenet SHA-1 hash keys. Which means that you can find the content you seek in an open, decentralized network, and then obtain it (if it is sensitive data in your country, etc.) in a secure, anonymous manner via Freenet. (A small sketch of this decoupling follows at the end of this comment.)

    And finally, the most important aspect of this network is that it is adaptive to your preferences. A very large problem with Gnutella and other peer based networks is spam and irrelevant results. With this network you continually add peers who respond with relevant, quality information, and drop other peers who provide no value.

    At any rate, if you are interested, you can read more about this project. It is called the ALPINE Network [cubicmetercrystal.com] and the main page is at http://cubicmetercrystal.com/alpine/ [cubicmetercrystal.com]
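
    To illustrate the search/transfer decoupling described above, here is a purely hypothetical sketch; none of the names or formats come from ALPINE itself. It only shows a query hit carrying locators of different schemes and the client choosing a transport per locator:

      # Hypothetical illustration of "search decoupled from transfer".
      from dataclasses import dataclass

      @dataclass
      class Locator:
          scheme: str        # e.g. "http" or "freenet"
          address: str       # URL for http, SHA-1 content key for freenet

      @dataclass
      class QueryHit:
          title: str
          locators: list

      def pick_locator(hit: QueryHit, prefer: str = "freenet"):
          # Prefer the anonymous transport when available, fall back to HTTP.
          ordered = sorted(hit.locators, key=lambda l: l.scheme != prefer)
          for loc in ordered:
              print(f"would fetch {hit.title!r} via {loc.scheme}: {loc.address}")
              return loc
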
  • The biggest problem with multicast over the internet is that it is not supported.

    If it was supported, then the biggest problem would be congestion avoidance.

    The congestion avoidance algorithms built into TCP are the only saving grace for the internet backbone as it exists today. With any kind of widely deployed multicast, this becomes very critical to implement and work efficiently.

    There has been some progress in this area, but it is a very difficult problem. The IETF has a working group on multicast congestion control. Its work is available here:

    http://www.ietf.org/internet-drafts/draft-ietf-rmt-bb-lcc-00.txt [ietf.org]
  • Another great site is Peertal at http://www.peertal.com/ [peertal.com] for all sorts of news about peer-to-peer projects.

    Ben Housten has a good page with ideas and links at http://www.exocortex.org/p2p/index.html [exocortex.org]

    The Peer to peer working group has their site at http://www.peer-to-peerwg.org/ [peer-to-peerwg.org]

    You may also want to check out the O'Reilly OpenP2P page at http://www.oreillynet.com/p2p/ [oreillynet.com]

    And of course, I need to shamelessly plug my open source decentralized searching network, the ALPINE Network [cubicmetercrystal.com]

  • Because dropping packets does not ease congestion. If you are sending a flood of packets that is continually overloading a given router, the TCP connections will starve, and the majority of the UDP multicast packets will be dropped.

    This would be a horrible scenario!

    There is some good information on TCP friendliness and congestion avoidance algorithms here: http://www.psc.edu/networking/tcp_friendly.html

    This really is incredibly important. Anything that starves TCP and introduces congestion at a wide level in the internet is going to wreak havoc.
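
    For a rough sense of what "TCP friendly" means in numbers, the commonly cited Mathis et al. bound is rate <= (MSS / RTT) * C / sqrt(p), with C roughly sqrt(3/2). The figures below are illustrative assumptions, not measurements:

      # Back-of-the-envelope TCP-friendly rate (Mathis et al. formula).
      from math import sqrt

      def tcp_friendly_rate(mss_bytes=1460, rtt_s=0.2, loss=0.01, c=sqrt(1.5)):
          return (mss_bytes / rtt_s) * c / sqrt(loss)   # bytes per second

      rate = tcp_friendly_rate()
      print(f"~{rate * 8 / 1000:.0f} kbit/s for 1460-byte MSS, 200 ms RTT, 1% loss")
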
  • OTOH, the bandwidth usage of gnutella searches vs. the total bandwidth available is a very small ratio.

    This is probably true. Most gnutella clients would be on smaller DSL or modem links. These would have a hard time overwhelming bandwidth.

    In most cases the problem occurs between different ISP level routers or at the client's link itself.

    If the traffic on the multicast channel was consistently greater than 56k, the modem clients' TCP connections would starve, which would not be good from an end-user perspective.

    If the traffic was such that a small ISP was using most of their bandwidth (probably outgoing for the multicast to destinations) then all clients of that ISP would be having problems as well.

    It really does get pretty tricky quickly. However, there is good progress being made in this area, and perhaps with IPv6 we will begin to see multicast working on a larger scale (I hope so!).

    The project I am working on uses unicast to multiple destinations in a way that acts very similar to multicast. However, I had to build some very elaborate mechanisms into the protocols to keep congestion and TCP starvation from occurring, as well as to allow varied-bandwidth links to communicate without the fast ones overwhelming the slower ones.

    This is the ALPINE Network [cubicmetercrystal.com] and the more extensive information about congestion avoidance is here [cubicmetercrystal.com]
  • Yes, it isn't too hard to program. It simply has a reputation for being esoteric. Getting a multicast feed is more work than most people are willing to go through. I remember trying to get one about 2 years ago and not being able to, because the people who could give me one wanted to charge me enormous sums of money (in the $100s to $1000s/mo range).

    I want multicast that just works regardless of whether an ISP supports it or not, and if the ISP wants to reduce bandwidth usage on their network, they implement it.

  • If it looks like a duck, and quacks like a duck, it's a duck. Gnutella's goal is to get a search packet to be seen by every node connected to the network. That sounds a lot like multicast to me.

    At layer 2 and 3, multicast could be implemented by flooding every node on the network with your multicast packet, just like Gnutella does. So the flood goes over a bunch of TCP links instead of a bunch of point-to-point WAN links and broadcast Ethernet links; what's the essential difference here?

    Her idea is a sort of automatic tunneling system that leverages IP routing to build the multicast tree out of multicast aware routers. There don't actually have to be any multicast aware routers for it to work. They just make the tree more efficient.

    I thought of an idea for fixing Gnutella a while ago, which largely involved Gnutella nodes forming up into their own multicast trees where the multicast packets traveled over TCP links instead of point-to-point WAN links. When I read that part of Internetworking, I was so struck by the similarity of our ideas that I made a point to talk to her during IETF 50. Hers is a lot better than mine, because implementing it at the IP layer leverages existing IP routing to avoid duplication of packets on any given link.

  • Why not just have routers drop packets like they do right now for TCP? Nobody ever claimed that multicast had to be reliable.

  • You haven't thought through the problem very well. Right now, links involved in a gnutella network often see every single search packet many times, along with all the associated TCP ack packets. How is this reducing the burden on routers?

    Gnutella wants every single node that's connected to see every search request. By any definition I can think of, that's anysource multicast. I don't care what you think of the efficiency of multicast, any layer 3 multicast scheme is going to be more efficient than gnutella currently is by virtue of the fact that physical network topology can be taken into account at layer 3.

    Why don't you go read the chapter I was referring to before posting again? Better yet, please explain to me how what gnutella does isn't multicast, and how what gnutella does is better for any segment of the network than a good multicast implementation would be?

  • Hmmm... Yes, you're correct. With single-source multicast, this has a possibility of an easy solution. With any-source multicast it's a lot harder.

    OTOH, the bandwidth usage of gnutella searches vs. the total bandwidth available is a very small ratio. For this particular application, I don't think it'll be terribly important, but it's a good thing to think about.

  • Strangely enough, it was UUNET that I asked and I was quoted a pretty hefty price. I asked my ISP (visi.com) first, and they said they had dropped it due to lack of interest and wouldn't pick it back up again just for me. :-) So, I was kind of stuck.

    I also think this is too much to go through for multicast to work. It should just work without having to call someone to get a tunnel, and without having to look up a tunnel on some website.

  • If easy to program, easy to implement multicast were available, gnutella would've used it and not been nearly as poor in the scalability department. Gnutella is basically a layer 5 implementation of anysource multicast that uses flooding to get its job done.

    If anybody is interested, I talked to Radia Perlman at IETF 50 last week, and we would like to try to form a working group around making an RFC out of the simple multicasting protocol she describes in the last chapter of her book 'Internetworking'.

  • I had a similar idea when I was driving around town earlier; I probably should have been paying attention to the road... bah! I was trying to figure out how such a system would be implemented, and there are several hurdles. When the system is small is when it would be at its weakest, as the large (high bandwidth, high uptime) nodes would have to keep track of quite a few small nodes. As time went on and the technology got more popular, the load would be distributed quite a bit more. But where to store all the data? In RAM? No, too much of a burden on the large node. In a database? I guess you could require the user to install MySQL or something like that to use the service, but that is kind of bulky. (A toy sketch of such an index follows at the end of this comment.)

    As far as keeping traffic down, you only let large nodes talk to large nodes, passing along search requests if they don't have matches for the material being requested. I figure you do a first-pass search that returns info on the large node right above a small node if a match is found, and allow the small node to re-request that the search be run past that large node if it isn't happy with the results. Then have the large node pass the request on to other large nodes it knows of.

    I dunno, there are quite a few things that need to be figured out for that to work, but I believe that it is a much better model. If anybody is interested in throwing ideas back and forth, please feel free to e-mail me! I would love to discuss the idea and maybe do some implementing.
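
    The storage question above can at least be prototyped in RAM; here is a hypothetical toy inverted index for a "large node". Every name here is made up for illustration, and a real node would need eviction, persistence, and limits:

      # Hypothetical in-memory keyword index mapping words to the small
      # nodes that share matching files.
      from collections import defaultdict

      class LargeNodeIndex:
          def __init__(self):
              self.index = defaultdict(set)        # keyword -> {(host, port), ...}

          def register(self, peer, filenames):
              for name in filenames:
                  for word in name.lower().split():
                      self.index[word].add(peer)

          def unregister(self, peer):
              for peers in self.index.values():
                  peers.discard(peer)

          def search(self, query):
              # Peers that match every word present in the index.
              hits = [self.index[w] for w in query.lower().split() if w in self.index]
              return set.intersection(*hits) if hits else set()
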
  • Yeah, but only MusicCity doesn't suck. At this moment, the biggest server after MusicCity listed on napigator.com has 240,607 files, compared to each MusicCity server's 3,960,000+ files, plus the fact that MusicCity servers are linked. So all they have to do is take down MC and OpenNap dies.
  • I have used it and several clones. It sucks. I have never been able to download one file with it. Maybe I am just too impatient for it... even searching takes forever. Do you have to get shot to know it hurts?
  • by Lotek ( 29809 ) <Vitriolic@NOSpaM.gmail.com> on Saturday March 24, 2001 @08:45PM (#341978)
    Okay, I am tired, and I am not a programmer, but I am trying to work out an idea that has been kicking around in my mind: a self-organizing system that handles request traffic by promoting systems that are both willing to serve and bandwidth-blessed into a net of servers that do for a Gnutella-like P2P network what the central servers did for Napster.

    As a straight peer-to-peer network grows, it becomes saturated with traffic. Requests are sent, propagated, and choke the entire network of peer-to-peer clients, usually at the lowest bandwidth level. Since there is no central coordinating system to handle the search requests, you eventually get a network that is ass slow and unable to perform to expected levels. If you try to run this through an established client-server system, lawyers descend like flocks of carrion birds. So it seems to me the fix is a hybrid network of servers that are promoted up from a pool of high-bandwidth connections, organized like resistance cells. These client machines would only talk to an upper-level system, transferring a list of songs on the system to its cell leader. This cell leader would be part of a higher-level cell, and would send data about what was in its cell to a higher-level server. Eventually, you hit the top level, where you would have a ring of systems on very high-bandwidth connections.

    Search requests would hop to the top level servers, who would talk to each other and fire back the answer. Then the two (client) machines would start swapping data. These top level machines would be updated from below with fresh data, updating their search pool dynamically.

    As clients come online, they would find a server, report what they have in their swap folder, and start sharing data. Requests for searches would only go to the highest-bandwidth systems, and then only those that are willing to serve in this capacity. If you come online with a nice fast machine with a fat network pipe, you can become part of the search network. (A rough sketch of such a promotion rule follows at the end of this comment.)

    Obviously, there would need to be some method of pointing clients to servers, especially if the servers were to dynamically drop on and off the network. I envision that once the software determines that you qualify to be a server, and you check that you do want to participate, it would set you up as a backup server for a functioning system. When that system drops from the network, your machine would find another comparable system and set it up as a backup.

    Any thoughts on this? Is it already being done? Should I stop smoking the crack? I know that this would be a nontrivial problem to set up, but it seems that it would remain rather uncentralized and chaotic, but not be as prone to choking as gnutella is.
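
    As a purely hypothetical sketch of the promotion idea above (the thresholds and ranking are made up, not taken from any real client):

      # Hypothetical promotion rule: volunteer as a search server only with
      # enough upstream bandwidth and uptime; otherwise stay a plain leaf.
      def eligible_for_server(kbps_up, uptime_hours, willing):
          return willing and kbps_up >= 256 and uptime_hours >= 12

      def pick_backup(candidates):
          # candidates: list of (peer, kbps_up, uptime_hours) tuples
          ranked = sorted(candidates, key=lambda c: (c[1], c[2]), reverse=True)
          return ranked[0][0] if ranked else None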

  • I provide other people's content, but most of it is not illegally provided. It's free software, shareware, Project Gutenberg files (my contribution was to rename them from their cryptic 8-letters-and-digits names to something one can find, like Defoe - Robinson Crusoe.txt) and tons of scientific papers (again, renaming them is crucial). Best thing is, I've seen 'my' files being shared by the folks who downloaded them. So, at least there, the system works.

    OTOH, sharing large files currently is not a good idea. Modem users will try to leech them and I'm not up that long. That might be solved by clients that require a minimum bandwidth for certain files. Also, I want to have "upload slots" available for the small files I share. Typically, all of the upload slots I provide are filled with uploads of larger files so that nobody will be able to get through to the 10 KB files.

    There should also be an automatic ban of people who hammer me with requests.
  • Did you read the entire article? This is basically what BearShare is trying to do in its latest incarnation, now in alpha. BearShare 3.0 can be set as a Peer (Gnutella classic), as a Client (à la Napster client, where no incoming requests are accepted), or as a Server, where it can be used as a hub for clients' file contents (à la Napster server). Servers don't have central repositories (à la Napigator) but discover themselves using a specific BearShare protocol, itself based on the Gnutella protocol. To encourage server proliferation, users with high bandwidth are defaulted to Server.

    I see this clearly as a very interesting development, which will lead to greater scalability. It's a kind of combination of a Napster and a Gnutella network; I think this is the way to go.
  • If I had a mod point I would mod you into hell so you could be raped by the devil.
  • edonkey Http://www.eDonkey2000.org

    Connect to a server, search from the server (anyone can run a server); files are downloaded from multiple people at once, and download locations are searched for across servers. Servers share the IPs of other known servers.

    It works pretty damn well.
    FunOne
  • The only reason Gnutella seems to get coverage here is because there's "GNU" in the title.

    And the only reason Napster gets any coverage here is because there's "Nap" in the title... no wait, that's stupid, it's because there's "ster" in the title.
  • Hey, it's so cool that you can look in the Slashborg Thesaurus under "Gnutella" and find "scalability problems". That way you don't actually even have to use the program to complain about it!

    Remember, if 100 Anonymous Cowards all say something, it MUST be true!
    --
    Obfuscated e-mail addresses won't stop sadistic 12-year-old ACs.
  • The problem I see is not whether the protocol itself can scale; we are seeing numerous "tweaks" that will allow this (Clip2's Reflector [clip2.com] and Bearshare.net's [bearshare.net] forthcoming 3.0.0 "Defender" release). What I see as the problem is the splintering and added features being incorporated by the different Gnutella clients: Gnotella [nerdherd.net] has added "improved bitrate scanning", BearShare [bearshare.net] and LimeWire [limewire.com] have firewall detection, as well as other "extraneous" features that add information to the Gnutella packets. How long will it be before these clients cause sufficient incompatibility that separate, client-specific networks arise? What we really need is an agreement between the different developers to pass on these extra packets, or to agree on a central "feature set". I am not advocating that we do away with the myriad Gnutella clients; I think their variety and different personalities are great. I just don't want to see the community splinter through incompatibility issues.

    -OctaneZ
    (What I would really like to see is a native application similar to Clip2's Reflector for both Win32 and Linux that serves as a "network server" only, using low CPU and large numbers of connections, for people who believe in the Gnutella idea and are graced with high-speed connections.)
  • The last time I checked, the Gnutella development team was waiting to release its source code until "Version 1.0" was ready. I know that the protocol spec is open, and that a number of clones are available that work just fine, but I always thought it was odd that the original did not actually appear to be Open Source. Are there any restrictions on using the term "Gnu" in a product name without making the source available?
  • I was initially really intrigued by the start of the article, which points out that the web in its infancy was essentially a p2p system, even if the http protocol wasn't meant for it... almost everyone ran both servers and clients and shared content. But then I thought... the real reason why that kind of environment didn't continue isn't so much that the masses started connecting via transient means. It's that not that many people really have compelling content of their own to share. Just look at what most people use current p2p apps for: to redistribute other people's content. With only a few real content providers, there's no inherent reason why one-to-many is worse than many-to-many; in fact, there are many reasons why one-to-many is better (assured quality of content, for one... anyone else tired of downloading songs on napster or gnutella, only to find out later that they're incomplete?) The only reason why everyone is turning to p2p is because it's currently the easiest & best way to steal apps/music/miscellaneous content produced by others. If the music companies had any clue, they'd run their own servers serving digital copies of every song ever produced for a reasonable fee, and then we'd see the days of many-to-many return to the grave.
  • The programmers of the system.
  • What, and OpenNap doesn't have scalability problems? The number of non-cross-indexed servers that musiccity is forced to distribute their user load over suggests otherwise... =)

    Gnutella still allows for using something like reflector to increase the index centralisation for performance reasons while remaining fundamentally decentralised. Or did you mean scalability of something other than accessible peer count or index size?
  • "If Gnutella peers are Web servers, wouldn't that make Gnutella users who share files equivalent to Web site publishers, with the same responsibilities?"

    To what responsibilities are you referring? I don't see how the protocol in use would change anything.
  • "[...] they're next [...]"

    Who is the "they"? Gnutella (-compatible-software) users? Judging their actions, the RIAA and MPAA don't seem willing to go after end users.
  • "dies" - how do you mean? What software were you using? Did you try more than one reflector? (Did you even try a reflector?)
  • This is pretty much what I was thinking of implementing a while ago, but this would require pretty huge servers at the top and I think I've found a better way.

    Here's a paper [egroups.com] I found from the decentralization mailing list; it's a hash table distributed to each node in the network. It's reliable, fast enough, and scalable. I've thought of a few additions to it which would allow storing the content inside the hash too, and how to search for data. The original authors may have better ideas, but at least I know it could be made to work :)
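
    For readers unfamiliar with the idea, here is a toy consistent-hashing lookup that only illustrates what "a hash table distributed to each node" means; it is not the algorithm from the linked paper:

      # Toy consistent-hashing ring: each key is owned by the first node
      # whose hash is at or after the key's hash, wrapping around.
      import hashlib
      from bisect import bisect_right

      def h(key: str) -> int:
          return int(hashlib.sha1(key.encode()).hexdigest(), 16)

      class Ring:
          def __init__(self, nodes):
              self.points = sorted((h(n), n) for n in nodes)

          def owner(self, key: str) -> str:
              i = bisect_right(self.points, (h(key), chr(0x10FFFF)))
              return self.points[i % len(self.points)][1]

      ring = Ring(["node-a:6346", "node-b:6346", "node-c:6346"])
      print(ring.owner("robinson crusoe.txt"))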

  • "Nobody's gone after gnutella because there's nobody to go after. There is no central server to shut down like there was with Napster."

    Wanna bet? Check out the article on the Internet Anti-Piracy System (aka Media Tracker) [theregister.co.uk]... also, check out the screenshots here. [7amnews.com]

    "The software, developed by the International Federation of the Phonographic Industry (IFPI), mimics all the commonly used and less-well-known file-sharing clients used to share music. The software can also be used to keep an eye on IRC chatrooms and newsgroups, according to New Zealand Web site 7amnews.com, which has obtained what it claims are screenshots of Media Tracker in operation."

    and

    "Media Tracker builds up a list of tracks, the networks they're being shared on - Napster, Freenet, Gnutella etc. - the sharer's IP address and the name of their host or ISP. The date and time the song at which a given song was shared is also recorded. All this is held in a database that can be used to cross-check individuals' sharing patterns and to locate ISPs with a high percentage of sharers among their subscribers. "

    Looks like they may be gearing up to go after individual traders, or at least their ISPs.

  • by Rogain ( 91755 )
    Everything is now illegal.
  • by rograndom ( 112079 ) on Saturday March 24, 2001 @07:21PM (#341996) Homepage

    Yeah, that seems about how long I've been waiting for this file to download.

    :)

  • That's how ALL cases are won or lost; it's still determined by the judgement of our 'peers.'

    So if a peer is making a negative judgment about you, just disconnect them. Problem solved ...

  • why I can never download Crouching Tiger, Hidden Dragon successfully.

    The copyright gods look down on infringers. Simply wait 95 years, and you'll be able to download it for free from your local film preservation society.

    Oh wait, you're a human. I keep forgetting that mortal humans don't live more than 85 years in most cases, which makes the copyright term quite pointless [8m.com] for implementing the "for limited Times" language of the U.S. Constitution.

  • I remember the old alpha days, where nothing would work and hardly anything would download, yup glad those days are gone.

    This is easily the most insightful comment I've seen on Slashdot in the past 2 years.

    --

  • If you don't see them, they can't eat you...
  • If you are interested in continuing these discussions after Slashdot archives this one, try the Usenet newsgroups alt.internet.p2p and alt.gnutella. Note that many ISPs only add new alt.* groups on user request so if they are not available write to the news administrator at your ISP (address news or usenet) or contact the support people (address support) and ask that the newsgroups be added.
  • A lot of people think that gnutella isn't scalable enough. But from what I have seen of it, gnutella is less efficient than Napster because it takes up about 500 bytes to 1 KB a second to link hosts and pass requests.

    On an analog modem, that is an annoying loss of bandwidth, but on DSL or cable or an Ethernet dorm room, that would be a trivial amount of bandwidth. As broadband starts to spread, I am sure gnutella will be more and more practical.

  • My approach: first you need to find the IP address of a server, so you connect to a database that holds the list of servers; when downloading a gnutella clone you get an up-to-date list of servers. Connect to those servers, and they all agree and elect servers in a tree structure based on ping times or hierarchical IP-address order, which should reflect geographical location. These servers (which are the same as clients, as everyone is both a server and a client in a peer-to-peer environment) replicate data (as an enterprise would) to keep state (send file lists), with redundancy, and elect servers (somewhat as NetBIOS does) based on uptime; the scheme must be recoverable, just like dynamic routing protocols such as OSPF, and must have a short convergence time.

    mb
  • I like the subversion that Napigator [napigator.com] offers since it still uses the Napster program for its main interface. Windoze users, take note! (If you haven't already!). It rocks.

    For Linux users, I've had great success with Gnapster [faradic.net] which uses many of the same Musiccity [musiccity.com] servers. Free music for everyone!
    ------------
  • If there's anyone the RIAA would go after, it's the people who make the clients. They (the RIAA) could claim that the individual programmers involved in the making of Gnutella clients are acting as a vehicle for piracy. The MPAA can jump on board too, because it also allows you to trade movies.

    Individual users may be protected with the webserver loophole, but as Gnutella gains popularity along with ease of use (anyone ever use LimeWire?), the lawsuits will pop up.
    --
  • I got bearshare, installed it, searched for something. What do I get?
    3000 matches.
    99% was shit. Took me like 10 min to scan thru it and find what I wanted.

    I'd have to say that about 20% was some preteen shit. Different folks, different strokes, but I don't wanna look at the result
    "lolita preeteen fucked by daddy.jpeg/mpeg/???"
    Especially when I was looking for a live concert recording of the grateful dead.
    I didn't find it by the way :(

    Same shit with "regular" porno. I'm a guy, so porno's good in my books, but it does get in the way of searches.

    So - wanna kill open "decentralized" networks - just inundate the search results with shit; hell, you can just use file names like
    "Buy brittney spears new album fuck tit blowjob preeteen.mp3"
    and get some free advertising while you're at it.

    Or, just write a gnutella client that seeks and downloads "illicit" material to /dev/null, tying up the queues for everybody else. Probably would work on IRC and Hotline as well.

    Anyways, it's 2am, I'm going to sleep. Dunno why I'm helping the fuckin' pigs, but perhaps this post will encourage others to "fix" the small problems in p2p decentralized programs, like the ability to ban IPs, filter out shit, etc. (a tiny sketch of that kind of filter follows at the end of this comment).

    Shouts.


    I have a shotgun, a shovel and 30 acres behind the barn.
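
    A tiny sketch of the client-side fixes asked for above (keyword filter plus host ban list); the word list and data structures are placeholders, not taken from any real client:

      # Drop results whose filenames hit a keyword blocklist, and ignore
      # results from banned hosts.
      BAD_WORDS = {"preteen", "lolita"}
      BANNED_HOSTS = set()

      def keep_result(filename, host):
          if host in BANNED_HOSTS:
              return False
          words = set(filename.lower().replace(".", " ").split())
          return not (words & BAD_WORDS)

      def ban(host):
          BANNED_HOSTS.add(host)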

  • That would be like holding someone responsible for what they do.
    ---
  • Nobody's gone after gnutella because there's nobody to go after. There is no central server to shut down like there was with Napster.
    ---
  • We won't rest until everyone is on a p2p network or filesharing connection that is shared by no one. (Let's see: 1 million users --- 1 million and 1 places to share files)....
  • How is this going to work with IPv4? Mcast routing on the Internet is basically confined to the MBone and unicast tunnels from ISP to ISP.

    Also, the flaw with IPv4 multicasting, if used extensively, is that your PC will join 16 multicast groups (which could equal a lot of traffic), as the multicast MAC address is taken from the last 23 bits of the layer 3 mcast address.

    If Radia's idea is based on layer 2/3 multicasting, I feel it's a little bit screwed until we get IPv6 in place. Then again, I've never read "Internetworking", so it could be a totally different system?

    But anyway, regardless of the future plans, you're also misguided in your comment. Gnutella uses no form of multicasting at the session layer or otherwise. Multicasting, when a protocol uses IP at the network layer, by definition must be performed at layer 2/3, or else it's unicast, period. Gnutella floods requests but uses a peer-based system and packet duplication to each peer to perform flooding; unique packets to each unique destination, even at layer 5. Think about what a system like Gnutella is called: peer to peer.

    So your comment sounds good but technically doesn't make any sense. Congratulations to the moderators once again.

  • "Somebody should really send the guys at Nullsoft a Llama-shaped cake -- chocolate of course!

    ...and you can send them a reality check too, cuz gnutella gets slower and slower with every overrated slashdot post about it.


    --

  • that you can't really block Gnutella. I know our campus tried once upon a time. I think we can all agree that SOME illegal filesharing occurs regardless of what we want. You can't really hit a Gnutella user, since I don't think you can directly access a user from a web browser, thus not making them a web server (correct me if I'm wrong here, please, I've never used it..), and the format itself would seem to actually be a harder target for the RIAA to go after for complete shutdown. It also avoids the situation that Napster put themselves into.

    Curiously, why hasn't anyone gone after gnutella?? I'm betting they're next, unfortunately.
  • >Gnutella is a lousy protocol.
    >It has some serious scalability problems.

    Yep. That's what the naysayers were spouting a year ago, when there were 1000 hosts on the network. Now, there are over 20,000 hosts on the network (according to Limewire Hostcount [limewire.com]), and guess what? The network hasn't collapsed.

    There will be a saturation point, no doubt. But we haven't hit it yet.
  • >That's why Napster is losing; the RIAA's lawyers portrayed it as
    >being illegal

    Napster was meant for sharing MP3s, and MP3s only. On Gnutella, you can share anything you want. I think that's a start at proving that p2p (or at least Gnutella) has more good potential than bad.

    Shaun
  • It's the protocol, not the client, which is open source. The original Gnutella client source will never be released, thanks to our friends at AOL-TW and the various record companies they're in bed with. The protocol specs are public and that's proven to be enough.

    Several Gnutella clones, though, *are* open-source. Check here for a list of the clones whose source code is available to the public:

    http://www.gnutelladev.com/source/ [gnutelladev.com]

    Shaun
  • >As clients come online, they would find a server, report what they
    >have in their swap folder, and start sharing data. requests
    >for searches would only go to the highest bandwith systems,
    >and then only those that are willing to serve in this capacity.
    >If you come online with a nice fast machine, with a fat network
    >pipe, you can become part of the search network.

    I can't say much other than this is already being worked on to some extent. It's basically a tiering model, where clients will connect to other clients who have opted to function as "servers" or "superclients," and those servers bear a bit more of the network burden. Those of us with fatter pipes will be able to contribute to the routing, while the folks on dialup can search for files without worrying about their Gnutella client trying to route tens of MBs of traffic per hour.

    There are some seriously smart people out there who are making sure this becomes a reality. It could be a few weeks, it could be a few months; but you'll see this in the future.

    Shaun
  • >How long will it be before these clients cause sufficient
    >incompatibility that separate, client-specific networks arise?
    >What we really need is an agreement between the different
    >developers to pass on these extra packets, or agree on a
    >central "feature set".

    I agree with you 100% on this issue - in fact, "developer fragmentation" was the #1 problem I listed in a September 2000 overview of problems facing Gnutella [shat.net]. (Note: This article is rather outdated, especially in that I no longer use the original Nullsoft client!! BearShare, LimeWire, ToadNode, etc. were not released when I wrote this.)

    Luckily, we've come a long way since last September, or at least that's how it appears to me. BearShare and LimeWire, the two most aggressively developed Windows clients, seem to be fully interoperable. They send extra information - host uptime, for example - within the Gnutella packets. But they appear to integrate without any problem. How this extra data affects some of the older clients, I'm not sure; but the leaders in the Gnutella front seem to be "cooperative competitors."

    Shaun
  • >Many ISPs write into their TOS that you aren't allowed to
    >run servers because they are afraid of the content providers,
    >don't want to provide the bandwidth anyway, and want to charge
    >much higher fees to supposedly commercial servers.

    And, I dare say, many ISPs don't give a flying fuck about this particular TOS entry unless you're running a server that's a) taking up inordinate amounts of bandwidth or b) serving illegal material. Even at that, they still won't care about b) unless someone reports it.

    A cursory glance at my BearShare hosts at any given moment shows mostly cable/DSL users. Most of those providers forbid running servers, but most of them have no real way to tell, unless you're congesting the network. I was a bit surprised that BearShare's latest version sets the default max number of simultaneous uploads to 10 (I keep mine set at 2) but for the most part, unless you're a total dumbass, running Gnutella isn't going to pop up any bandwidth-sucking red flags at your provider's NOC.

    One of Gnutella's strong points - unlike a lot of standard protocols - is that you can dynamically change your listen port to whatever you want, and the changes are effective immediately to the rest of the network. If your ISP blocks/monitors 6346, you can change it to something else. If your ISP blocks that, you can change it again; and for the really paranoid, you could write a dirty VB bot or something to change your listen port every hour (a toy sketch of that follows at the end of this comment). Of course, you *could* do the same for FTP/HTTP/etc. servers, but it would make it more difficult for your visitors to find you.

    Server-ban or no, most if not all ISPs have no reliable way to detect or block Gnutella traffic. I think that's quite an advantage.

    As for bandwidth being on the rise, you have to consider that file sizes are increasing as well. 5 years ago, the end user surely couldn't download at 200+K/sec, but 5 years ago, the end user wasn't sharing 250MB pornos with the rest of the world, either. The pipes *are* getting fatter, but so are the files being sent across them.

    Shaun
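
    As a toy illustration of the port-rotation trick mentioned above (not BearShare's actual behavior; a real client would also advertise the new port to its peers):

      # Bind a fresh random high port; call again, e.g. once an hour,
      # and hand connections off to the new listener.
      import random, socket

      def open_listener(port=None):
          port = port or random.randint(10000, 60000)
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          s.bind(("", port))
          s.listen(5)
          return s, port

      listener, port = open_listener()
      print("listening on", port)
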
  • The only reason MC is so big is because they set up about 30 servers. Almost all inexperienced users would instinctively connect to something like that, without looking at the other options. If they get shut down, probably DJNap or Opennap would become just as popular. It doesn't take too much to attract a bunch of users onto a server. I used to run one, and after just an hour, I would have a few hundred users, and that would sometimes grow, until someone would pull out my ethernet cable or something like that :)

    Unfortunately, I no longer run it because the RIAA sent my ISP that infamous letter. 4 days without my 1.5 mbps DSL was more than I could stand. I just can't get over the fact that they had convinced my ISP that I was actually running an mp3 server, providing my own mp3 files to download!
    -mdek.net [mdek.net]
  • I'm inclined to agree with those that say the only reason Gnutella receives so much coverage here is because it's got "GNU" in the title.

    Talking to people at school, everyone who has EVER tried gnutella (including myself) absolutely hates it. It's hard to use....files RARELY actually download...it's slow, etc.

    What's the big deal? Why not post articles on winmx or imesh or other file sharing programs like these that are actually popular and work, and are in the process of replacing napster?

    scott
  • considering i don't use them yet, not a whole lot. how much of a kickback are people like you getting from pimping a terrible product like gnutella.

    thanks for playing

    scott
  • same here, I never found anything useful on gnutella. I've got a 10Mbps line so I was keeping something like 50-100 connections going, but it still couldn't find anything decent.

  • I got bearshare, installed it, searched for something. What do I get? 3000 matches. 99% was shit. Took me like 10 min to scan thru it and find what I wanted.

    You have to be precise in your searches, or you will get crap. It took you 10 whole minutes?? You could have cooked ten pizza pops in that time!!

    I'd have to say that about 20% was some preteen shit. Different folks, different strokes, but I don't wanna look at the result "lolita preeteen fucked by daddy.jpeg/mpeg/???"

    I think LimeWire allows you to filter adult crap.

    Especially when I was looking for a live concert recording of the grateful dead.

    I can't see how you're getting porn while searching for the Grateful Dead....

    I didn't find it by the way :(

    That's a shame. :(


  • I had a dream Gary Gnu was covered in Nutello. Now that's Gnutz!
  • There was a study which found that a huge amount of the files shared were hosted on a small number of Gnutella nodes. The RIAA was very smug in predicting that these were the people they'd attack if they ever wanted to take Gnutella out (if/when, that is.) What I don't think they've taken into account is the number of nodes that will take their place. Such a strategy might lead to a nice democratization of the network, forcing lurkers to share some content.
  • I'm not sure I understand the question in the article. If you run a Gnutella node, you're not magically exempt from responsibility for what you host. Protocol is completely irrelevant. The only reason G users can get away with more is that they're harder to track down. This could of course be solved by someone with enough money and desire to take out lots and lots of small, personal nodes.

    Note that they wouldn't actually have to take anyone to court, by the way. A quick call to your ISP would be enough to take care of the problem.

  • Can you imagine the nightmare of implementing Gnutella using multicast (if such a thing were possible on today's net, which it's not.) You'd still have all the traffic problems you have with the current system, only now they'd be weighing down routers. Even if the traffic were routed over a breadth of multicast addresses (which would eat into the supply), imagine what happens when you subscribe to a channel.

    Not only does your connection get hosed, but your ISP/IT dept. probably shuts down multicast for the afternoon and has some very stern things to say to you.

  • Gnutella uses no form of multicasting at the session layer or otherwise. Multicasting, when a protocol uses IP at the network layer, by definition must be performed at layer 2/3, or else it's unicast, period

    I believe the buzzword is "application level multicast." A lot of companies (eg FastForward) are trying to implement this sort of thing as the solution to the lack of multicast support on the backbone.

  • any layer 3 multicast scheme is going to be more efficient than gnutella currently is by virtue of the fact that physical network topology can be taken into account at layer 3.

    So all you're after is an architecture that's aware of network topography. You're absolutely correct in that Gnutella is currently very poor in the way it builds the network and distributes messages. Layer 3 multicast would be an improvement in this area, but unfortunately it has serious implementation flaws that make it extremely difficult to implement on a large network. However, the real problem with multicast (for this application) is that once you have subscribed to a channel, you have no reliable controls on traffic flow (outside of what your routers are able/configured to allow.) In other words, should the Gnutella client wish to throttle back traffic, you're outa luck (existing traffic reporting mechanisms are going to be inadequate for this application.)

    How is this reducing the burden on routers?

    Handling multicast routing puts a significant burden on a router. The more traffic, and the more complex the multicast routing table, the worse this is. Gnutella represents the worst possible case for a multicast network: many short messages from many sources. Not to mention that when a router fails, the table has to be rebuilt. If the failing component is a switch, this often results in broadcast traffic on the LAN (and this is really the least of the potential difficulties.)

    Better yet, please explain to me how what gnutella does isn't multicast, and how what gnutella does is better for any segment of the network than a good multicast implementation would be?

    It is a form of multicast. It's not layer 3 multicast. Was I unclear there? I'm sorry. Most people call it "application level multicast", although that also implies a certain consciousness of topography. My point was simply that layer 3 multicast has serious flaws that would make it less than ideal for this purpose. That's aside from the fact that it's absolutely not going to be implemented on the major backbones, which makes the issue completely academic. The only real solution is for Gnutella to become more intelligent in the way it routes messages. It's not surprising that it's so stupid; it is, after all, a first-generation technology.

  • OTOH, the bandwidth usage of gnutella searches vs. the total bandwidth available is a very small ratio. For this particular application, I don't think it'll be terribly important, but it's a good thing to think about

    We did some very preliminary research on the scalability of gnutella-style search systems for a project at work. The quickest summary is yes, the search traffic is relatively low -- but it's "low" in direct proportion to the number of nodes. Since each node is supposed to hear searches from the majority of the network (or at least those nodes within a certain distance), it's pretty easy to screw up everybody's day once the network gets large enough. (Rough numbers at the end of this comment.)

    And of course, those nodes connected to narrow pipes can just be blown away by multicasting, whereas at least with the current Gnutella (as inefficient as it is), the TCP acks work as a throttle to control incoming requests.
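
    A back-of-the-envelope version of that proportionality, with all numbers being illustrative assumptions rather than measurements:

      # With flooding, the query traffic each node hears grows roughly
      # linearly with the number of reachable nodes.
      def per_node_load(reachable_nodes, queries_per_min_per_node=0.5, query_bytes=100):
          bytes_per_min = reachable_nodes * queries_per_min_per_node * query_bytes
          return bytes_per_min * 8 / 60 / 1000          # kbit/s

      for n in (1000, 10000, 50000):
          print(n, "nodes ->", round(per_node_load(n), 1), "kbit/s of query traffic")
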

  • We all know that copyrighted material is being shared on both Gnutella and all the other P2P clones.

    When, for example, a song is shared, who is at fault? The person downloading, or the person sharing?

    It seems like the person downloading would be the guilty party (assuming they don't own the CD). If I leave a CD-R on top of my car, and someone takes it, did I commit piracy?

    Also, when exactly do you break the law? When I transfer copyrighted material through my ISP do they "pirate"? Is it only pirating if someone listens to a song?

    I think that brings up an interesting point. If you don't listen, is it piracy, and can they prove you listened to it?

    You would think the RIAA would say enough is enough, charge a minimal fee, and allow people to share files freely (as in speech and as in beer).

    I have no use for songs that are copy "protected"

    --Joey
  • by Joey7F ( 307495 ) on Saturday March 24, 2001 @06:57PM (#342032) Homepage Journal
    Wow one year? In net fads years, that is like what, middle aged?

    I remember the old alpha days, where nothing would work and hardly anything would download, yup glad those days are gone.

    --Joey
  • Gnutella is scalable. That "article by the napster backend guy showed that it's dead in the water" was some interesting mathematics but totally WRONG. When that article was written, there were around 6,000 concurrent users on Gnutella - now there are 25,000 at any given time. The network should have already blown up! See for yourself:
    http://www.limewire.com/hostcount.htm#rolling [limewire.com]

    There is a very active Gnutella Developer Forum where all the true Gnutella developers from all the major clients have been working to improve the protocol and network for months. They have made great progress and will continue to.

    Gnutella was a simple protocol and idea with staying power for the long term. There will be more power and surprises in store.

  • The network grew too large for the original LimeWire crawling algorithm. If we were still taking 30 minutes to do the crawl, the numbers would actually be higher. Our current crawl takes about 6 minutes.
  • Finally.. something we CAN'T slashdot..

    Or can we? *grin*
  • They are just using the file transfer part of the HTTP protocol, surely... not that I think the comparison of them insofar as responsibilities go is invalid, but if we start grouping together high-level concepts (like peer-to-peer vs. central server, for instance) simply because they share a *protocol* in common, isn't that a little strange?

  • Peer-to-peer sharing used to be the norm on the Internet: most hosts on the network participated equally in services like FTP, IRC, talk, mail, and USENET.

    The current view of the world came when we got slow, intermittent dial-up connections, firewalls, Windows, and ISPs. But most of all, it came when companies like ICQ and Hotmail wanted to derive large amounts of money by funneling traffic through their sites.

    So, where are we going? Content providers want central control over distribution and service providers want to tie users to their services. Many ISPs write into their TOS that you aren't allowed to run servers because they are afraid of the content providers, don't want to provide the bandwidth anyway, and want to charge much higher fees to supposedly commercial servers.

    Altogether, the outlook looks pretty bleak to me: the Internet is at risk of turning into a medium where the majority of information is provided by those with money and power. There is one bright spot, though: while less and less relative bandwidth is P2P, there is a lot more bandwidth than 10-20 years ago. So, while in relative terms, almost all audio and video may come from big companies with commercial agendas, there is more bandwidth than ever available for peer-to-peer distribution of text and image content. And while "access rights management" on audio, video, and e-book formats may be in our future, plain ASCII and images are likely not to fall under that either. I hope that's all we need to keep the web available for important non-commercial social functions.

  • I came up with the idea of Gnutella in parallel: I started telling everyone my great idea for secure MMORPGs and unhackable Starcraft, and they said, "Dude, that's already out, it's Gnutella..." Basically, all game traffic in a network is monitored by a ton of random clients... When you log off, your information is stored on other people's PCs. So when you log on and claim you gained a ton of stuff in the interim, a few people need to back you up... so you can't hack the game. Of course, game programmers in general are a slow lot, and there is not much commercial viability in a structure where you can't charge your customers monthly... so we probably won't see it for decades.
  • You can't claim someone else's gold is lower, or you'll be put up for contest. Same way a teacher in elementary school knows who the liar is: he keeps pointing out people doing stuff wrong while everyone else says something different. Liar/hacker detected, and he's no longer allowed to play games with the other kids.
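
    A hypothetical sketch of that majority check (nothing here comes from an actual game or client):

      # A claimed state is accepted only if a majority of the peers that
      # witnessed your last session report the same value.
      from collections import Counter

      def verify_claim(claimed_state, witness_reports):
          if not witness_reports:
              return False
          value, votes = Counter(witness_reports).most_common(1)[0]
          return value == claimed_state and votes > len(witness_reports) // 2
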
  • Somebody should really send the guys at Nullsoft a Llama-shaped cake -- chocolate of course!
  • i had all but given up on gnutella until i came across limewire [limewire.com]. in my opinion this is the best gnutella clone out there. has anyone seen something better?

  • Gnutella is a good protocol and has potential, but as an OpenNap guy, I see a lot more going on there, as the connection protocols are much more stable. And although the best efforts have been shown in the DeCSS case (looking over at my cease-and-desist letter), source code is still gonna be out there despite laws; it's free speech, and since OpenNap is open source, it'll be there, and servers will be there. At last count there were over 1000 OpenNap servers... if the RIAA is ready to file that many lawsuits, then it'll be a busy year.. -Mike Haisley
  • I just started using BearShare to access Gnutella a few weeks ago, and it caught my attention that BearShare users can be accessed via web browser on port 6346. This option can be turned off in the "Uploads" tab. In my opinion, that makes the host a BearShare user is running into a web server.
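
    That "web server" behaviour is also visible from the download side: as I recall the v0.4 spec, a shared file is fetched with a plain HTTP GET against the peer's listen port. The sketch below is an illustration under that assumption (host, index, and filename are placeholders; check the spec for the exact request format):

      # Rough sketch of a Gnutella-style HTTP download request.
      import socket

      def request_file(host, port=6346, index=1, name="example.mp3"):
          req = (f"GET /get/{index}/{name}/ HTTP/1.0\r\n"
                 f"Connection: Keep-Alive\r\n"
                 f"Range: bytes=0-\r\n\r\n")
          s = socket.create_connection((host, port), timeout=10)
          s.sendall(req.encode())
          return s.makefile("rb").read()   # raw HTTP response, headers included
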
  • It seems like every time a technological hurdle pops up, an inevitable response is that broadband access will soon spread to the masses and solve all our problems. While raw speed and brute force certainly do a reasonable job of smoothing over potholes on the digital boulevards, perhaps we should give technical consideration to those who need it most....I personally don't think that broadband will proliferate for quite a while, and that for the next 5-10 years we must continue to cater heavily to the 56K generation.

  • Sorry to poop on the party, but Gnutella's protocol sux. 80 to 90% of the time the client you're connecting to is busy. If you happen to connect, the connection drops. It's the most frustrating piece of garbage ever designed, and it lasted on my BSD box for no more than 4 hours before I rm -R -f'd the muthafucker.
