Gnutella at One Year 89
transient writes: "Gnutella's first birthday passed quietly about 10 days ago. An OpenP2P article reflects on the Gnutella network as a transient extension of the Web, since Gnutella peers use HTTP for file transfer and are essentially Web servers. Seems the network keeps evolving; there's some discussion of the new BearShare Defender and more info on the recent Gnutella virus. If Gnutella peers are Web servers, wouldn't that make Gnutella users who share files equivalent to Web site publishers, with the same responsibilities?"
infoanarchy.org (Score:1)
Re:See (Score:1)
Re:Gnutella scalability and multicast (Score:2)
Multicast is pretty easy to program, not much harder than UDP. At least the system interface is almost exactly the same (you have to manually set the TTL; that's about the only difference I remember).
Getting a multicast feed is harder, but not really harder than NNTP: you find someone who has one and request a tunnel (unless your ISP magically gives you multicast, which is quite rare).
Mind you, this was the state of affairs about 8 years ago, when I did the multicast news software [usenix.org] in 1993~1994. You also frequently needed kernel patches then, but I don't think that is needed on modern Unix-like systems.
It is quite hard to do something with multicast that doesn't suffer congestion problems. It is like doing normal UDP work, where the protocol doesn't help you with packet loss or congestion, except it is far harder to get replies from all receivers (in fact, if you want to scale forever, you can't ever accept any replies from anyone). It's a big old pain, but people do UDP-based systems, and they could do multicast ones as well with more work.
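For the curious, the API symmetry with UDP is easy to see in Python (a minimal sketch; the group address is made up, and setting the TTL is about the only multicast-specific line on the send side):

```python
import socket
import struct

# Hypothetical group address and port, purely for illustration.
GROUP, PORT = "239.1.2.3", 5007

def make_sender(ttl=4):
    # Sending is plain UDP -- the only multicast-specific step
    # is setting the TTL explicitly.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return s

def make_receiver():
    # Receiving adds one step: joining the group (the kernel
    # speaks IGMP on your behalf once you ask for membership).
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s
```

Sending is then just `make_sender().sendto(data, (GROUP, PORT))`, exactly as with unicast UDP.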
Re:Gnutella scalability and multicast (Score:2)
Well, you can get a tunnel from anyone, not just your ISP. It is in the ISP's best interest to be the one to provide it, unless they charge you per bandwidth used.
Last I checked, UUNET gave free multicast to any leased-line customers. That was quite a while ago though; it may have changed. I also know they can do multicast to dial-ups (anyone using Ascend boxes there should be able to do it), but I don't know what the deal on that is.
Re:Http protocol == web server? (Score:1)
I guess my reply boils down to, what sets it apart from other http-based file server applications such that you feel it incorrect to call it an http server?
-j
Gnutella protocol (Score:1)
My proxy is basically a "dedicated" gnutella server + message passing for Linux clients ( gtk_gnutella ) that have no server component. If anyone wants to chat about gnutella protocol specs and extensions, please email me or post here. I'm very interested in the new clients' extensions to the protocol! =)
Re:Time for a 2.0 approach? (Score:2)
For example...
As a straight peer-to-peer network grows, it becomes saturated with traffic. Requests are sent, propagated, and choke the entire network of peer-to-peer clients, usually at the lowest bandwidth level.
I saw this first hand when using a modified Gnutella client to monitor the types and number of queries occurring on the network. The vast majority was crap or outright malicious, and it brought my 1.5Mbps downstream DSL line to a crawl.
But it is possible to have a fully decentralized network that is bandwidth friendly. I am working on it now.
If you try to run this through an established client-server system, lawyers descend like flocks of carrion birds.
Another important aspect of this network is that searching and actual transfer are decoupled. When you find some hits to your query, you are returned a list of locators for that resource. These may be simple HTTP style, or they may be Freenet SHA-1 hash keys. This means that you can find the content you seek in an open, decentralized network, and then obtain it (if it is sensitive data in your country, etc.) in a secure, anonymous manner like Freenet.
And finally, the most important aspect of this network is that it is adaptive to your preferences. A very large problem with Gnutella and other peer based networks is spam and irrelevant results. With this network you continually add peers who respond with relevant, quality information, and drop other peers who provide no value.
At any rate, if you are interested, you can read more about this project. It is called the ALPINE Network [cubicmetercrystal.com] and the main page is at http://cubicmetercrystal.com/alpine/ [cubicmetercrystal.com]
Re:Gnutella scalability and multicast (Score:2)
If it was supported, then the biggest problem would be congestion avoidance.
The congestion avoidance algorithms built into TCP are the only saving grace for the internet backbone as it exists today. With any kind of widely deployed multicast, this becomes very critical to implement and work efficiently.
There has been some progress in this area, but it is a very difficult problem. The IETF has a working group on multicast congestion control. Its work is available here:
http://www.ietf.org/internet-drafts/draft-ietf-rm
Re:infoanarchy.org (Score:2)
Ben Houston has a good page with ideas and links at http://www.exocortex.org/p2p/index.html [exocortex.org]
The Peer to peer working group has their site at http://www.peer-to-peerwg.org/ [peer-to-peerwg.org]
You may also want to check out the O'Reilly OpenP2P page at http://www.oreillynet.com/p2p/ [oreillynet.com]
And of course, I need to shamelessly plug my open source decentralized searching network, the ALPINE Network [cubicmetercrystal.com]
Re:Gnutella scalability and multicast (Score:2)
This would be a horrible scenario!
There is some good information on TCP friendliness and congestion avoidance algorithms here: http://www.psc.edu/networking/tcp_friendly.html
This really is incredibly important. Anything that starves TCP and introduces congestion at a wide level in the internet is going to wreak havoc.
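The page linked above centers on the simple TCP-friendly rate equation; a back-of-envelope Python version (the Mathis et al. approximation, with constants from that work) looks like:

```python
from math import sqrt

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput in bytes/sec:
    rate ~= (MSS / RTT) * 1.22 / sqrt(p).
    A non-TCP flow that sends faster than this under the same
    loss rate will starve competing TCP connections."""
    return (mss_bytes / rtt_s) * (1.22 / sqrt(loss_rate))

# e.g. 1460-byte segments, 100 ms RTT, 1% loss:
# roughly a 178 KB/s ceiling for a "fair" flow
rate = tcp_friendly_rate(1460, 0.1, 0.01)
```

Anything multicasting well above that ceiling into a lossy path is exactly the havoc the parent describes.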
Re:Gnutella scalability and multicast (Score:2)
This is probably true. Most gnutella clients would be on smaller DSL or modem links. These would have a hard time overwhelming bandwidth.
In most cases the problem occurs between different ISP level routers or at the client's link itself.
If the traffic on the multicast channel was consistently greater than 56k, the modem clients' TCP connections would starve, which would not be good from an end-user perspective.
If the traffic was such that a small ISP was using most of their bandwidth (probably outgoing for the multicast to destinations) then all clients of that ISP would be having problems as well.
It really does get pretty tricky quickly. However, there is good progress being made in this area, and perhaps with IPv6 we will begin to see multicast working on a larger scale (I hope so!)
The project I am working on uses unicast to multiple destinations in a way that acts very similar to multicast. However, I had to build some very elaborate mechanisms into the protocols to keep congestion and TCP starvation from occurring, as well as to allow varied-bandwidth links to communicate without the fast ones overwhelming the slower ones.
This is the ALPINE Network [cubicmetercrystal.com], and more extensive information about congestion avoidance is here [cubicmetercrystal.com]
Re:Gnutella scalability and multicast (Score:1)
Yes, it isn't too hard to program. It simply has a reputation for being esoteric. Getting a multicast feed is more work than most people are willing to go through. I remember trying to get one about 2 years ago and not being able to, because the people who could give me one wanted to charge me enormous sums of money (in the $100s to $1000s/mo range).
I want multicast that just works regardless of whether an ISP supports it or not; if the ISP wants to reduce bandwidth usage on their network, they can implement it.
Re:Gnutella scalability and multicast (Score:2)
If it looks like a duck, and quacks like a duck, it's a duck. Gnutella's goal is to get a search packet to be seen by every node connected to the network. That sounds a lot like multicast to me.
At layers 2 and 3, multicast could be implemented by flooding every node on the network with your multicast packet, just like Gnutella does. So the flood goes over a bunch of TCP links instead of a bunch of point-to-point WAN links and broadcast Ethernet links; what's the essential difference here?
Her idea is a sort of automatic tunneling system that leverages IP routing to build the multicast tree out of multicast-aware routers. There don't actually have to be any multicast-aware routers for it to work; they just make the tree more efficient.
I thought of an idea for fixing gnutella awhile ago, which largely involved gnutella nodes forming up into their own multicast trees where the multicast packets traveled over TCP links instead of point-to-point WAN links. When I read that part of Interconnections, I was so struck by the similarity of our ideas that I made a point to talk to her during IETF 50. Hers is a lot better than mine because implementing it at the IP layer leverages existing IP routing to avoid duplication of packets on any given link.
Re:Gnutella scalability and multicast (Score:2)
Why not just have routers drop packets like they do right now for TCP? Nobody ever claimed that multicast had to be reliable.
Re:Gnutella scalability and multicast (Score:2)
You haven't thought through the problem very well. Right now, links involved in a gnutella network often see every single search packet many times, along with all the associated TCP ack packets. How is this reducing the burden on routers?
Gnutella wants every single node that's connected to see every search request. By any definition I can think of, that's anysource multicast. I don't care what you think of the efficiency of multicast, any layer 3 multicast scheme is going to be more efficient than gnutella currently is by virtue of the fact that physical network topology can be taken into account at layer 3.
Why don't you go read the chapter I was referring to before posting again? Better yet, please explain to me how what gnutella does isn't multicast, and how what gnutella does is better for any segment of the network than a good multicast implementation would be?
Re:Gnutella scalability and multicast (Score:2)
Hmmm... Yes, you're correct. With single-source multicast, this has the possibility of an easy solution. With any-source multicast it's a lot harder.
OTOH, the bandwidth usage of gnutella searches vs. the total bandwidth available is a very small ratio. For this particular application, I don't think it'll be terribly important, but it's a good thing to think about.
Re:Gnutella scalability and multicast (Score:2)
Strangely enough, it was UUNET that I asked and I was quoted a pretty hefty price. I asked my ISP (visi.com) first, and they said they had dropped it due to lack of interest and wouldn't pick it back up again just for me. :-) So, I was kind of stuck.
I also think this is too much to go through for multicast to work. It should just work without having to call someone to get a tunnel, and without having to look up a tunnel on some website.
Gnutella scalability and multicast (Score:5)
If easy to program, easy to implement multicast were available, gnutella would've used it and not been nearly as poor in the scalability department. Gnutella is basically a layer 5 implementation of anysource multicast that uses flooding to get its job done.
If anybody is interested, I talked to Radia Perlman at IETF 50 last week, and we would like to try to form a working group around making an RFC out of the simple multicasting protocol she describes in the last chapter of her book 'Interconnections'.
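To see why layer-5 flooding behaves badly, here's a toy Python model (my own simplification, not the real Gnutella wire format): each node forwards a query to every neighbor except the one it came from, with a TTL and duplicate suppression on the message ID. Even a 3-node cycle ends up delivering 5 copies of one query.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.seen = set()    # message IDs already forwarded
        self.received = 0    # how many copies reached this node

    def handle(self, msg_id, ttl, sender=None):
        self.received += 1
        if msg_id in self.seen or ttl <= 0:
            return           # duplicate or expired: drop it
        self.seen.add(msg_id)
        # Flood to everyone except the link the query arrived on.
        for n in self.neighbors:
            if n is not sender:
                n.handle(msg_id, ttl - 1, sender=self)

def link(a, b):
    a.neighbors.append(b)
    b.neighbors.append(a)
```

Every cycle in the overlay produces redundant copies, which is exactly the duplication a real layer-3 multicast tree avoids.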
Re:Time for a 2.0 approach? (Score:1)
As far as keeping traffic down, you only let large nodes talk to large nodes, passing along search requests if they don't have matches for the material being requested. I figure do a first pass search that returns info on the large node right above a small node if found and allow the small node to re-request the search be run past the large node if it isn't happy with the results. Then have the large node pass on the request to other large nodes it knows of.
I dunno, there are quite a few things that need to be figured out for that to work, but I believe that it is a much better model. If anybody is interested in throwing ideas back and forth, please feel free to e-mail me! I would love to discuss the idea and maybe do some implementing.
Re:GNUtella Vs. OpenNap (Score:2)
Re:Hey michael (Score:1)
Time for a 2.0 approach? (Score:3)
As a straight peer-to-peer network grows, it becomes saturated with traffic. Requests are sent, propagated, and choke the entire network of peer-to-peer clients, usually at the lowest bandwidth level. Since there is no central coordinating system to handle the search requests, you eventually get a network that is ass slow and unable to perform to expected levels. If you try to run this through an established client-server system, lawyers descend like flocks of carrion birds. So it seems to me the fix is a hybrid network of servers that are promoted up from a pool of high-bandwidth connections, organized like resistance cells. These client machines would only talk to an upper-level system, transferring a list of songs on the system to its cell leader. This cell leader would be part of a higher-level cell, and would send data about what was in its cell to a higher-level server. Eventually, you hit the top level, where you would have a ring of systems on very high-bandwidth connections.
Search requests would hop to the top level servers, who would talk to each other and fire back the answer. Then the two (client) machines would start swapping data. These top level machines would be updated from below with fresh data, updating their search pool dynamically.
As clients come online, they would find a server, report what they have in their swap folder, and start sharing data. Requests for searches would only go to the highest-bandwidth systems, and then only those that are willing to serve in this capacity. If you come online with a nice fast machine, with a fat network pipe, you can become part of the search network.
Obviously, there would need to be some method of pointing clients to servers, especially if the servers were to dynamically drop on and off the network. I envision that once the software determines that you qualify to be a server, and you check that you do want to participate, it would set you up as a backup server for a functioning system. When that system drops from the network, your machine would find another comparable system and set it up as a backup.
Any thoughts on this? Is it already being done? Should I stop smoking the crack? I know that this would be a nontrivial problem to set up, but it seems that it would remain rather uncentralized and chaotic, yet not be as prone to choking as gnutella is.
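The cell-leader aggregation described above can be sketched in a few lines of Python (class names and structure are my own invention, not any real client's): leaves report their file lists upward once, and searches then hop only across the leader ring.

```python
class Leaf:
    """A low-bandwidth client that only reports its shared files."""
    def __init__(self, files):
        self.files = set(files)

class CellLeader:
    """A high-bandwidth node that indexes its cell, so searches
    never touch the slow leaves directly."""
    def __init__(self):
        self.index = {}  # filename -> set of leaves sharing it

    def register(self, leaf):
        for f in leaf.files:
            self.index.setdefault(f, set()).add(leaf)

    def search(self, name):
        return self.index.get(name, set())

def search_network(leaders, name):
    # Requests travel only across the leader ring; leaves stay idle.
    hits = set()
    for ld in leaders:
        hits |= ld.search(name)
    return hits
```

Actual file transfer would still happen peer-to-peer between the two clients, as in the parent's design.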
Content to be shared (Score:2)
OTOH, sharing large files currently is not a good idea. Modem users will try to leech them and I'm not up that long. That might be solved by clients that require a minimum bandwidth for certain files. Also, I want to have "upload slots" available for the small files I share. Typically, all of the upload slots I provide are filled with uploads of larger files so that nobody will be able to get through to the 10 KB files.
There should also be an automatic ban of people who hammer me with requests.
Re:Time for a 2.0 approach? (Score:1)
I see this clearly as a very interesting development which will lead to greater scalability. It's a kind of combination of a Napster and a Gnutella network; I think this is the way to go.
Re:Gnutella scalability (Score:1)
Sort of like.... (Score:1)
Connect to a server and search from the server (anyone can run a server); files are downloaded from multiple people at once, and download locations are searched for across servers. Servers share the IPs of other known servers.
It works pretty damn well.
FunOne
Re:Gnutella (Score:1)
And the only reason Napster gets any coverage here is because there's "Nap" in the title... no wait, that's stupid, it's because there's "ster" in the title.
Re:Hey michael (Score:1)
Remember, if 100 Anonymous Cowards all say something, it MUST be true!
--
Obfuscated e-mail addresses won't stop sadistic 12-year-old ACs.
Client Interoperability (Score:3)
-OctaneZ
(What I would really like to see is a native application similar to Clip2's Reflector, for both Win32 and Linux, that serves as a "network server" only, using low CPU and large numbers of connections, for people who believe in the Gnutella idea and are graced with high-speed connections.)
Has Gnutella source code been released yet? (Score:1)
usefulness of a transient web? (Score:2)
Re:Who is next? (Score:2)
Re:Hey michael (Score:1)
Gnutella still allows for using something like reflector to increase the index centralisation for performance reasons while remaining fundamentally decentralised. Or did you mean scalability of something other than accessible peer count or index size?
"Web servers"? (Score:2)
To what responsibilities are you referring? I don't see how the protocol in use would change anything.
Who is next? (Score:2)
Who is the "they"? Gnutella (-compatible-software) users? Judging by their actions, the RIAA and MPAA don't seem willing to go after end users.
Re:Hey michael (Score:2)
Re:Time for a 2.0 approach? (Score:1)
Here's a paper [egroups.com] I found on the decentralization mailing list; it's a hash table distributed across each node in the network. It's reliable, fast enough, and scalable. I've thought of a few additions to it which would allow storing the content inside the hash too, and of how to search for data. The original authors may have better ideas, but at least I know it could be made to work :)
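For a flavor of how such a distributed hash table assigns data to nodes, here's a naive consistent-hashing sketch in Python (the general idea only, not the scheme from the linked paper):

```python
import hashlib
from bisect import bisect_right

def h(s):
    # Map any string onto a fixed identifier ring via SHA-1.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class HashRing:
    """Each node owns the arc of the ring between its predecessor's
    hash and its own; a key belongs to the first node clockwise
    from the key's hash."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def owner(self, key):
        ids = [nid for nid, _ in self.ring]
        i = bisect_right(ids, h(key)) % len(self.ring)
        return self.ring[i][1]
```

The point is that any node can compute which peer holds a key locally, so lookups don't need to flood the network at all.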
Music industry tracking individual MP3 sharers (Score:1)
Wanna bet? Check out the article on the Internet Anti-Piracy System (aka Media Tracker) [theregister.co.uk]... also, check out the screenshots here. [7amnews.com]
"The software, developed by the International Federation of the Phonographic Industry (IFPI), mimics all the commonly used and less-well-known file-sharing clients used to share music. The software can also be used to keep an eye on IRC chatrooms and newsgroups, according to New Zealand Web site 7amnews.com, which has obtained what it claims are screenshots of Media Tracker in operation."
and
"Media Tracker builds up a list of tracks, the networks they're being shared on - Napster, Freenet, Gnutella etc. - the sharer's IP address and the name of their host or ISP. The date and time at which a given song was shared is also recorded. All this is held in a database that can be used to cross-check individuals' sharing patterns and to locate ISPs with a high percentage of sharers among their subscribers."
Looks like they may be gearing up to go after individual traders, or at least their ISPs.
See (Score:2)
One year? (Score:5)
Yeah, that seems about how long I've been waiting for this file to download.
:)
Judged by our peers? (Score:1)
That's how ALL cases are won or lost; it's still determined by the judgement of our 'peers.'
So if a peer is making a negative judgment about you, just disconnect them. Problem solved ...
It's still under copyright (Score:1)
why I can never download Crouching Tiger, Hidden Dragon successfully.
The copyright gods look down on infringers. Simply wait 95 years, and you'll be able to download it for free from your local film preservation society.
Oh wait, you're a human. I keep forgetting that mortal humans don't live more than 85 years in most cases, which makes the copyright term quite pointless [8m.com] for implementing the "for limited Times" language of the U.S. Constitution.
Re:Happy Bday (Score:1)
This is easily the most insightful comment I've seen on Slashdot in the past 2 years.
--
FNORD (Score:1)
alt.internet.p2p and alt.gnutella newsgroups (Score:1)
Gnutella scalability (Score:1)
A lot of people think that gnutella isn't scalable enough. But from what I have seen of it, gnutella is less efficient than Napster because it takes up about 500 bytes to 1 KB a second to link hosts and pass requests.
On an analog modem, that is an annoying loss of bandwidth, but on DSL, cable, or an Ethernet dorm room, that would be a trivial amount of bandwidth. As broadband starts to spread, I am sure gnutella will be more and more practical.
How Gnutella is done. (Score:1)
First you need to find the IP address of a server, so you need to connect to a database that holds the list of servers; when downloading a gnutella clone, you get an up-to-date list of servers. Connect to those servers, and they all agree and elect servers in a tree structure based on ping times or IP address hierarchical order, which should reflect geographical location. These servers (which are the same as clients, as everyone is a server and a client in a peer-to-peer environment) replicate data (as an enterprise would) to keep state (send file lists), with redundancy, and elect servers (somewhat as NetBIOS does) based on uptime; they must be recoverable just like dynamic routing protocols such as OSPF, and must have a short convergence time.
mb
Napigator, anyone? (Score:1)
For Linux users, I've had great success with Gnapster [faradic.net] which uses many of the same Musiccity [musiccity.com] servers. Free music for everyone!
------------
Re:The problem lies in (Score:2)
Individual users may be protected with the webserver loophole, but as Gnutella gains popularity along with ease of use (anyone ever use LimeWire?), the lawsuits will pop up.
--
p2p not perfect. (Score:1)
3000 matches.
99% was shit. Took me like 10 min to scan thru it and find what I wanted.
I'd have to say that about 20% was some preteen shit. Different folks, different strokes, but I don't wanna look at the result
"lolita preeteen fucked by daddy.jpeg/mpeg/???"
Especially when I was looking for a live concert recording of the grateful dead.
I didn't find it by the way
Same shit with "regular" porno. I'm a guy, so porno's good in my books, but it does get in the way of searches.
So - wanna kill open "decentralized" networks - just inundate the search results with shit; hell, you can just use file names like
"Buy brittney spears new album fuck tit blowjob preeteen.mp3"
and get some free advertising while you're at it.
Or, just write a gnutella client that seeks and downloads "illicit" material to
Anyways, it's 2am, I'm going to sleep. Dunno why I'm helping the fuckin' pigs, but perhaps this post will encourage others to "fix" the small problems in decentralized p2p programs, like the ability to ban IPs, filter out shit, etc...
Shouts.
I have a shotgun, a shovel and 30 acres behind the barn.
Of course not (Score:1)
---
Re:The problem lies in (Score:1)
---
I See (Score:1)
Re:Gnutella scalability and multicast (Score:2)
Also, a flaw with IPv4 multicasting if used extensively: your PC's NIC ends up accepting traffic for up to 32 multicast groups when it joins just one (which could equal a lot of traffic), as the multicast MAC address is taken from only the last 23 bits of the layer 3 mcast address.
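The 23-bit mapping is easy to demonstrate (a quick Python sketch; 01:00:5e is the standard IANA prefix): since the top 5 bits of the 28-bit group ID are discarded, 32 different IPv4 groups collide onto each Ethernet MAC address, so the NIC accepts traffic it never asked for.

```python
def mcast_mac(ip):
    """Ethernet MAC for an IPv4 multicast group: the fixed prefix
    01:00:5e plus the low 23 bits of the group address. The top
    5 bits of the group ID are lost, so 32 groups share each MAC."""
    o = [int(x) for x in ip.split(".")]
    return "01:00:5e:%02x:%02x:%02x" % (o[1] & 0x7F, o[2], o[3])
```

For example, 224.1.1.1, 239.1.1.1, and 224.129.1.1 all map to the same MAC, 01:00:5e:01:01:01.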
If Radia's idea is based on layer 2/3 multicasting, I feel it's a little bit screwed until we get IPv6 in place. Then again, I've never read "Interconnections", so it could be a totally different system?
But anyway, regardless of the future plans, you're also misguided in your comment. Gnutella uses no form of multicasting at the session layer or otherwise. Multicasting, when a protocol uses IP at the network layer, by definition must be performed at layer 2/3, or else it's unicast, period. Gnutella floods requests, but uses a peer-based system and packet duplication to each peer to perform that flooding: unique packets to each unique destination, even at layer 5. Think about what a system like Gnutella is called: peer to peer.
So your comment sounds good but technically doesn't make any sense. Congratulations to the moderators once again.
Re:Happy Birthday Gnutella. (Score:1)
...and you can send them a reality check too, cuz gnutella gets slower and slower with every overrated slashdot post about it.
--
The problem lies in (Score:1)
Curiously, why hasn't anyone gone after gnutella?? I'm betting they're next, unfortunately.
Re:Hey michael (Score:1)
>It has some serious scalability problems.
Yep. That's what the naysayers were spouting a year ago, when there were 1000 hosts on the network. Now, there are over 20,000 hosts on the network (according to Limewire Hostcount [limewire.com]), and guess what? The network hasn't collapsed.
There will be a saturation point, no doubt. But we haven't hit it yet.
Re:it is now (Score:1)
>being illegal
Napster was meant for sharing MP3s, and MP3s only. On Gnutella, you can share anything you want. I think that's a start at proving that p2p (or at least Gnutella) has more good potential than bad.
Shaun
Re:Has Gnutella source code been released yet? (Score:1)
Several Gnutella clones, though, *are* open-source. Check here for a list of the clones whose source code is available to the public:
http://www.gnutelladev.com/source/ [gnutelladev.com]
Shaun
Re:Time for a 2.0 approach? (Score:1)
>have in their swap folder, and start sharing data. requests
>for searches would only go to the highest bandwith systems,
>and then only those that are willing to serve in this capacity.
>If you come online with a nice fast machine, with a fat network
>pipe, you can become part of the search network.
I can't say much other than this is already being worked on to some extent. It's basically a tiering model, where clients will connect to other clients who have opted to function as "servers" or "superclients," and those servers bear a bit more of the network burden. Those of us with fatter pipes will be able to contribute to the routing, while the folks on dialup can search for files without worrying about their Gnutella client trying to route tens of MBs of traffic per hour.
There are some seriously smart people out there who are making sure this becomes a reality. It could be a few weeks, it could be a few months; but you'll see this in the future.
Shaun
Re:Client Interoperability (Score:1)
>incompatibility that separate, client-specific networks arise?
>What we really need is an agreement between the different
>developers to pass on these extra packets, or agree on a
>central "feature set".
I agree with you 100% on this issue - in fact, "developer fragmentation" was the #1 problem I listed in a September 2000 overview of problems facing Gnutella [shat.net]. (Note: This article is rather outdated, especially in that I no longer use the original Nullsoft client!! BearShare, LimeWire, ToadNode, etc. were not released when I wrote this.)
Luckily, we've come a long way since last September, or at least that's how it appears to me. BearShare and LimeWire, the two most aggressively developed Windows clients, seem to be fully interoperable. They send extra information - host uptime, for example - within the Gnutella packets. But they appear to integrate without any problem. How this extra data affects some of the older clients, I'm not sure; but the leaders in the Gnutella front seem to be "cooperative competitors."
Shaun
Re:peer-to-peer isn't new (Score:2)
>run servers because they are afraid of the content providers,
>don't want to provide the bandwidth anyway, and want to charge
>much higher fees to supposedly commercial servers.
And, I dare say, many ISPs don't give a flying fuck about this particular TOS entry unless you're running a server that's a) taking up inordinate amounts of bandwidth or b) serving illegal material. Even at that, they still won't care about b) unless someone reports it.
A cursory glance at my BearShare hosts at any given moment shows mostly cable/DSL users. Most of those providers forbid running servers, but most of them have no real way to tell, unless you're congesting the network. I was a bit surprised that BearShare's latest version sets the default max number of simultaneous uploads to 10 (I keep mine set at 2) but for the most part, unless you're a total dumbass, running Gnutella isn't going to pop up any bandwidth-sucking red flags at your provider's NOC.
One of Gnutella's strong points - unlike a lot of standard protocols - is that you can dynamically change your listen port to whatever you want, and the changes are effective immediately to the rest of the network. If your ISP blocks/monitors 6346, you can change it to something else. If your ISP blocks that, you can change it again; and for the really paranoid, you could write a dirty VB bot or something to change your listen port every hour. Of course, you *could* do the same for FTP/HTTP/etc servers but it would make it more difficult for your visitors to find you.
Server-ban or no, most if not all ISPs have no reliable way to detect or block Gnutella traffic. I think that's quite an advantage.
As for bandwidth being on the rise, you have to consider that file sizes are increasing as well. 5 years ago, the end user surely couldn't download at 200+K/sec, but 5 years ago, the end user wasn't sharing 250MB pornos with the rest of the world, either. The pipes *are* getting fatter, but so are the files being sent across them.
Shaun
Re:GNUtella Vs. OpenNap (Score:1)
Unfortunately, I no longer run it because the RIAA sent my ISP that infamous letter. 4 days without my 1.5 Mbps DSL was more than I could stand. I just can't get over the fact that they had convinced my ISP that I was actually running an mp3 server, providing my own mp3 files to download!
-mdek.net [mdek.net]
Sorry Gnutella (Score:1)
Talking to people at school, everyone who has EVER tried gnutella (including myself) absolutely hates it. It's hard to use....files RARELY actually download...it's slow, etc.
What's the big deal? Why not post articles on winmx or imesh or other file sharing programs like these that are actually popular and work, and are in the process of replacing napster.
scott
Re:Sorry Gnutella (Score:1)
thanks for playing
scott
Re:Hey michael (Score:1)
Re:p2p not perfect. (Score:1)
You have to be precise in your searches, or you will get crap. It took you 10 whole minutes?? You could have cooked ten pizza pops in that time!!
I'd have to say that about 20% was some preteen shit. Different folks, different strokes, but I don't wanna look at the result "lolita preeteen fucked by daddy.jpeg/mpeg/???"
I think LimeWire allows you to filter adult crap.
Especially when I was looking for a live concert recording of the grateful dead.
I can't see how you're getting porn while searching for the Grateful Dead....
I didn't find it by the way
That's a shame.
Re:Gnutella (Score:1)
Re:The problem lies in (Score:1)
Re:Http protocol == web server? (Score:1)
Note that they wouldn't actually have to take anyone to court, by the way. A quick call to your ISP would be enough to take care of the problem.
Re:Gnutella scalability and multicast (Score:1)
Not only does your connection get hosed, but your ISP/IT dept. probably shuts down multicast for the afternoon and has some very stern things to say to you.
Re:Gnutella scalability and multicast (Score:1)
I believe the buzzword is "application-level multicast." A lot of companies (e.g. FastForward) are trying to implement this sort of thing as the solution to the lack of multicast support on the backbone.
Re:Gnutella scalability and multicast (Score:1)
So all you're after is an architecture that's aware of network topology. You're absolutely correct that Gnutella is currently very poor in the way it builds the network and distributes messages. Layer 3 multicast would be an improvement in this area, but unfortunately it has serious implementation flaws that make it extremely difficult to deploy on a large network. However, the real problem with multicast (for this application) is that once you have subscribed to a channel, you have no reliable controls on traffic flow (outside of what your routers are able/configured to allow). In other words, should the Gnutella client wish to throttle back traffic, you're out of luck (existing traffic reporting mechanisms are going to be inadequate for this application).
How is this reducing the burden on routers?
Handling multicast routing puts a significant burden on a router. The more traffic, and the more complex the multicast routing table, the worse this is. Gnutella represents the worst possible case for a multicast network: many short messages from many sources. Not to mention that when a router fails, the table has to be rebuilt. If the failing component is a switch, this often results in broadcast traffic on the LAN (and this is really the least of the potential difficulties).
Better yet, please explain to me how what Gnutella does isn't multicast, and how what Gnutella does is better for any segment of the network than a good multicast implementation would be?
It is a form of multicast; it's just not layer 3 multicast. Was I unclear there? I'm sorry. Most people call it "application-level multicast," although that also implies a certain consciousness of topology. My point was simply that layer 3 multicast has serious flaws that would make it less than ideal for this purpose. That's aside from the fact that it's absolutely not going to be implemented on the major backbones, which makes the issue completely academic. The only real solution is for Gnutella to become more intelligent in the way it routes messages. It's not surprising that it's so stupid; it is, after all, a first-generation technology.
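To make the "application-level multicast" point concrete: Gnutella floods queries over its peer connections, decrementing a TTL at each hop and suppressing duplicates by message ID, which is multicast delivery implemented entirely in the application with no awareness of the underlying topology. Here is a minimal, hypothetical sketch in Python; the class and variable names are my own, not from any actual Gnutella client, and only the TTL/duplicate-suppression mechanics follow the protocol.

```python
import uuid

TTL = 7  # a typical Gnutella default hop limit


class Node:
    """Toy Gnutella-style peer: floods queries, suppresses duplicates."""

    def __init__(self, name):
        self.name = name
        self.peers = []      # directly connected peers
        self.seen = set()    # message IDs already handled (duplicate suppression)
        self.received = 0    # how many distinct queries this node heard

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def query(self, text):
        # Originate a query with a fresh message ID.
        self.forward(uuid.uuid4().hex, text, TTL, sender=None)

    def forward(self, msg_id, text, ttl, sender):
        if msg_id in self.seen or ttl <= 0:
            return           # drop duplicates and expired messages
        self.seen.add(msg_id)
        self.received += 1
        # Flood to every peer except the one we got the message from.
        for peer in self.peers:
            if peer is not sender:
                peer.forward(msg_id, text, ttl - 1, sender=self)


# A small chain of 5 peers: one query reaches every node within TTL hops,
# and duplicate suppression keeps each node from hearing it twice.
nodes = [Node(i) for i in range(5)]
for a, b in zip(nodes, nodes[1:]):
    a.connect(b)
nodes[0].query("grateful dead")
print([n.received for n in nodes])  # → [1, 1, 1, 1, 1]
```

Note what is missing compared to layer 3 multicast: no router state, but also no topology awareness — the same query may cross the same physical link many times on different peer connections, which is exactly the inefficiency discussed above.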
Re:Gnutella scalability and multicast (Score:1)
We did some very preliminary research on the scalability of Gnutella-style search systems for a project at work. The quickest summary is yes, the search traffic is relatively low -- but it's "low" in direct proportion to the number of nodes. Since each node is supposed to hear searches from the majority of the network (or at least from those nodes within a certain distance), it's pretty easy to screw up everybody's day once the network gets large enough.
And of course, those nodes connected to narrow pipes can simply be blown away by multicast, whereas with the current Gnutella (as inefficient as it is), the TCP ACKs at least act as a throttle on incoming requests.
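The linear-scaling point above lends itself to a back-of-envelope calculation: if every node hears one copy of every query issued within its horizon, per-node load grows in direct proportion to the number of reachable nodes. The numbers below are illustrative assumptions, not measurements from the actual study.

```python
def queries_heard_per_second(nodes_in_horizon, queries_per_node_per_min):
    """Each node hears one copy of every query issued within its horizon,
    so per-node load = (nodes in horizon) x (per-node query rate)."""
    return nodes_in_horizon * queries_per_node_per_min / 60.0


# Assumed rate: each user issues one search every two minutes (0.5/min).
for n in (1_000, 10_000, 100_000):
    rate = queries_heard_per_second(n, queries_per_node_per_min=0.5)
    print(f"{n:>7} nodes -> {rate:8.1f} queries/sec heard per node")
```

Under these assumed numbers, a 100x growth in the horizon means a 100x growth in what every node (and every modem link) must absorb, which is why flood-based search stops scaling long before the backbone does.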
When copyrights are broken, who is at fault? (Score:1)
When, for example, a song is shared, who is at fault? The person downloading, or the person sharing?
It seems like the person downloading would be the guilty party (assuming they don't own the CD). If I leave a CD-R on top of my car and someone takes it, did I commit piracy?
Also, when exactly do you break the law? When I transfer copyrighted material through my ISP do they "pirate"? Is it only pirating if someone listens to a song?
I think that brings up an interesting point. If you don't listen, is it piracy, and can they prove you listened to it?
You would think the RIAA would say enough is enough, charge a minimal fee, and allow people to share files freely (as in speech and as in beer).
I have no use for songs that are copy "protected"
--Joey
Happy Bday (Score:4)
I remember the old alpha days, when nothing would work and hardly anything would download. Yup, glad those days are gone.
--Joey
Gnutella is Scalable, Alive, Well and Growing (Score:1)
http://www.limewire.com/hostcount.htm#rolling [limewire.com]
There is a very active Gnutella Developer Forum where all the true Gnutella developers from all the major clients have been working to improve the protocol and network for months. They have made great progress and will continue to.
Gnutella is a simple protocol and idea with staying power for the long term. There will be more power and surprises in store.
Re:Gnutella is Scalable, Alive, Well and Growing (Score:1)
Whew.. (Score:1)
Or can we? *grin*
Http protocol == web server? (Score:1)
peer-to-peer isn't new (Score:1)
The current view of the world came when we got slow, intermittent dial-up connections, firewalls, Windows, and ISPs. But most of all, it came when companies like ICQ and Hotmail wanted to derive large amounts of money by funneling traffic through their sites.
So, where are we going? Content providers want central control over distribution and service providers want to tie users to their services. Many ISPs write into their TOS that you aren't allowed to run servers because they are afraid of the content providers, don't want to provide the bandwidth anyway, and want to charge much higher fees to supposedly commercial servers.
Altogether, the outlook looks pretty bleak to me: the Internet is at risk of turning into a medium where the majority of information is provided by those with money and power. There is one bright spot, though: while less and less bandwidth is P2P in relative terms, there is far more bandwidth than 10-20 years ago. So, while almost all audio and video may come from big companies with commercial agendas, there is more bandwidth than ever available for peer-to-peer distribution of text and image content. And while "access rights management" on audio, video, and e-book formats may be in our future, plain ASCII and images are unlikely to fall under it. I hope that's all we need to keep the Web available for important non-commercial social functions.
Gnutella can be expanded into MMORPGS (Score:1)
The key is there is no expensive main server (Score:1)
Happy Birthday Gnutella. (Score:1)
Limewire is bad ass (Score:1)
I had all but given up on Gnutella until I came across LimeWire [limewire.com]. In my opinion this is the best Gnutella clone out there. Has anyone seen anything better?
GNUtella Vs. OpenNap (Score:1)
Re:The problem lies in (Score:1)
broadband: just an easy way out? (Score:1)
Gnutella Sux Ass (Score:1)