The Internet

Gnutella's Challenge

Gnutella News sent in an excerpt from a Clip2 DSS report about Gnutella's evolution and condition: "The network has neither smoothly scaled nor catastrophically collapsed since average traffic grew to regularly exceed dial-up modem bandwidth in August 2000. Instead, the network persists in a fragmented state comprised of numerous continuously evolving responsive segments, the largest of which typically contains hundreds of hosts. We estimate at present that unique Gnutella users per day number no less than 10,000 and may range as high as 30,000. We suggest that further technical innovation and wide adoption of this innovation are necessary for the Gnutella network to scale beyond its present state." Read this if you're interested in p2p.
This discussion has been archived. No new comments can be posted.

Gnutella's Challenge

  • by Anonymous Coward
    ...as Richard Stallman would say.
  • by Anonymous Coward
    Gnutella, being a real P2P application, will suffer from scalability problems that a server-based system like Napster can work around.

    There's nothing inherently unscalable about P2P, it's just Gnutella's broadcast searching that doesn't scale. Freenet should scale quite nicely--the network loosely sorts files by key across machines, so you can have a linear search that homes in on a machine holding the file, instead of broadcasting the search to everyone.
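
    A rough sketch of that difference, with made-up data structures (this is not Freenet's actual code): greedy key-based routing follows one path toward the data, while a Gnutella-style broadcast asks every node.

      def route_by_key(files_at, neighbors, start, key, max_hops=20):
          """files_at: {node_id: set of keys stored}; neighbors: {node_id: [neighbor ids]}.
          Greedily hop toward the neighbor whose id is numerically closest to the key."""
          current = start
          for _ in range(max_hops):
              if key in files_at[current]:
                  return current                    # found a node holding the data
              current = min(neighbors[current], key=lambda n: abs(n - key))
          return None

      def broadcast(files_at, key):
          # What flooding effectively does: every node sees every query.
          return [n for n, keys in files_at.items() if key in keys]

      files_at = {1: set(), 5: set(), 9: {8}}
      neighbors = {1: [5], 5: [1, 9], 9: [5]}
      print(route_by_key(files_at, neighbors, start=1, key=8))   # 9, after two hops
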

  • by Anonymous Coward
    There's this nifty new spiffariffic thing, you might have heard of it: FTP. Try going to oth.net and searching for your music. Have an FTP client on hand. -facilis descensus Averno
  • by Anonymous Coward
    MN isn't the only system where you must "publish" files and you have things sitting on your hard drive that you don't know about. I believe Freenet works the same way, but with whole files.

    The whole deal about Mojo Nation is bartering for resources, not data. When you contribute to the system, you are contributing resources and services. It actually costs Mojo to publish data and make it available to others, since that is draining some resources from the system. However, the downloaders pay their own way, resource wise, when downloading, so you are not saddled with that cost (unlike other systems, where you pay in your own resources for every download from you!). A goal is a system where even the most popular content can be distributed for little cost to the publisher, but also works for storing data that isn't wildly popular.

    With MN, every file automatically gets redundancy and you don't have to run your own server to keep data in the system. You could even just pay out more Mojo to make sure your blocks stick around. You can't do that with a Napster-like system.

    So Mojo Nation isn't just a "file sharing" program, it's a distributed filesystem built on top of a market for computing resources (especially bandwidth).

  • by Anonymous Coward on Monday November 20, 2000 @06:22AM (#612784)
    I don't think that any of these solves the issue that the gnutella protocol does not scale in its current iteration. Scalability is the real issue. Here are some ideas that would improve the scalability.

    Bandwidth-limited connections. When two gnutella clients connect, they should include in the reply the allocated bandwidth for that connection. The forwarding protocol should not allow more queries to be sent than the bandwidth allows. This would require a form of Russian roulette on the packets--a method of killing queries.

    It is feasible that the client should be able to forward post and response packets. Query packets are the most likely target for such filters. The filters could be implemented in several manners:

    • The choosing is totally random.
    • An artificially intelligent option that learns which requests have been handled and which have not would allow filtering of requests where the files are more readily available.
    • A filtering option that may be unpopular with script kiddies (in fact, probably reviled as censorship), but popular with older or more mature individuals, would perhaps place killfilters on obscene queries that enter your computer. After all, there is no reason that someone who doesn't agree with some actions must facilitate them with his/her own computer.
    I would like to see the analysis of the different queries sent over the network, and some kind of user connectivity.

    Packet filtering would help to solve the current protocol limitation. Since the network is totally connected it would change the dynamics of the gnutella network and make it a more connected place even though there is a higher probability that the queries are never answered.
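
    A minimal sketch of what that bandwidth-limited, "Russian roulette" forwarding might look like, assuming each link advertises its capacity when the clients connect (the class and numbers are illustrative, not part of the real protocol):

      import random

      class Link:
          def __init__(self, bytes_per_sec):
              self.bytes_per_sec = bytes_per_sec   # advertised in the connect reply
              self.used = 0                        # bytes forwarded this second

          def tick(self):
              self.used = 0                        # call once per second to reset the budget

          def maybe_forward(self, query_bytes):
              # The closer the link is to its budget, the more likely the query is
              # killed instead of relayed.
              load = self.used / self.bytes_per_sec
              if load >= 1.0 or random.random() < load:
                  return False                     # query dropped
              self.used += query_bytes
              return True
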

  • I'll be called an elitist and marked as flamebait and troll, but I have to say this.

    Gnutella network traffic exceeds bandwidth capabilities of dial-up users? Drop them. You can't take the heat, get out. Wanna play with the big boys, pay up.

    I'm running Napster, and there's nothing more annoying than downloading from a 56k source, or waiting for a 56k guy to finish. 56k people (for this very same reason in fact) also don't like sharing any files. They can't. Any additional traffic will drive their leeching speeds to the floor.

    That's what they are, leeches. Because they can't share and contribute constructively to the network. Napster keeps a central database, which frees up dial-up users from handling network messages, so they're still tolerable on that service. But on Gnutella there's no place for anyone running 56k or less.

    I for one won't feel sorry about this development. A good portion of users will drop off, but they're just users; no matter how much they want, they can't be servers.

  • Oh, so what - pray tell - is your definition of Peer to Peer?

    And more to the point, if your definition rules out one of the top three peer to peer systems, that would seem to suggest to me that you need a better definition!

    --

  • by Sanity ( 1431 ) on Monday November 20, 2000 @07:19AM (#612787) Homepage Journal
    Take a look at these [freenetproject.org] simulations of Freenet's reliability and performance as the network size increases. You will notice that once the network stabilizes, the network's size has little bearing on the time required to retrieve a piece of data. Other experiments (not yet published) have demonstrated that Freenet appears to scale logarithmically (similar to a binary search tree), which, if accurate, means that the system could probably deal with a network of millions of nodes without any significant performance drop.

    --

  • by Jim McCoy ( 3961 ) on Monday November 20, 2000 @09:50AM (#612788) Homepage
    One of the best features of Mojo Nation [mojonation.net] is that it breaks files up into smaller pieces so that when you want to download a huge file you are not blocked on the limited upstream capacity of the peer at the other end; each agent sends a small chunk of the file, allowing the peer retrieving a file to request multiple pieces in parallel and moving the download speed restriction back to the downstream capacity of the local connection. This sort of distribution system turns a pool of peers into a swarm of ants carrying small pieces of the content. RAID-like error correction protects against peers disappearing and allows for flexible choices about where to go for the pieces of the file. The new 0.920 release [mojonation.net] of the client starts to demonstrate the advantages this has over conventional peer delivery systems.
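
    A toy sketch of that swarming idea; fetch_chunk is a placeholder for whatever wire protocol Mojo Nation actually uses to request one piece from one peer:

      from concurrent.futures import ThreadPoolExecutor

      def fetch_chunk(peer, file_id, index):
          # Placeholder: a real client would request piece `index` of `file_id`
          # from `peer` over the network.
          return b""

      def swarm_download(peers, file_id, num_chunks):
          # Ask different peers for different pieces at the same time, so the limit
          # becomes the local downstream pipe rather than one peer's upstream pipe.
          chunks = [None] * num_chunks
          with ThreadPoolExecutor(max_workers=len(peers)) as pool:
              futures = {pool.submit(fetch_chunk, peers[i % len(peers)], file_id, i): i
                         for i in range(num_chunks)}
              for future, i in futures.items():
                  chunks[i] = future.result()
          return b"".join(chunks)
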

    One downside to swarm delivery systems is that data is "published"; simple sharing of a common filebase (a la Napster and Gnutella) is not possible. Someone has to upload the pieces to the system in the first place for them to be available, because the system does not do the "let me take a look through your hard disk for things to give to others" kind of file sharing found in other P2P systems. jim

  • by jht ( 5006 ) on Monday November 20, 2000 @05:47AM (#612789) Homepage Journal
    Gnutella, being a real P2P application, will suffer from scalability problems that a server-based system like Napster can work around. If Napster gets too popular, they can always add fatter pipes and bigger servers. But Gnutella is bandwidth-constrained since there is no central server farm tracking all the users.

    The exchanges in Napster themselves may in fact be peer-to-peer, but we need to remember that they have big honking servers arbitrating the connections.

    Gnutella's design is terrific (and a great hack), but unless they can re-jigger things to knock the slow connections down in priority (or some comparable solution), they're doomed to be a victim of their own success. I guess the other possibility would be for a minimum bandwidth requirement for the software to enforce. Perhaps some enterprising person will write a Gnutella that only allows, say, 144 Kbps and up connections on the network.

    It would be interesting, though cruel, to relegate all the dialup people to second-class citizen status, but it would allow Gnutella to scale a lot past the existing limits.

    - -Josh Turiel
  • bah, rapid prototyping on a software product that was NEVER originally intended for how it is being used today. Nor was Napster, I know, but they are making money on it.

    It is a known fact that taking a step back in the development process can take a HUGE hit on development time. Who the hell wants to do that w/a product that has no intent to be released as an important project in the first place?

    /me is sticking to IRC ;)
  • Dropped queries would have to be cached for, let's say 10 minutes, to defeat people sending 10 copies of the same query to make sure it goes through.
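
    A small sketch of such a duplicate-query cache; the 10-minute figure comes from the comment above, everything else is illustrative:

      import time

      class SeenQueries:
          def __init__(self, ttl_seconds=600):      # ~10 minutes
              self.ttl = ttl_seconds
              self.seen = {}                        # query_id -> expiry time

          def is_duplicate(self, query_id):
              now = time.time()
              self.seen = {q: t for q, t in self.seen.items() if t > now}   # purge expired
              if query_id in self.seen:
                  return True                       # resend within the window: ignore it
              self.seen[query_id] = now + self.ttl
              return False
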

    --

  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Monday November 20, 2000 @11:31AM (#612792) Homepage Journal
    then why is it that I'm always on the same network fragment as that idiot spammer who returns your search request with an html extension but containing a stupid advertisement?
  • Let Gnutella split into multiple networks. It worked for IRC, it will work here, and it will work for similar problems in the future. Any problem that doesn't lend itself well to subdivision is probably badly specified. Don't forget that the Internet is a network of networks, and it works well for a reason.

    Yes, the Internet works for a reason, and that reason is that I can inject my TCP/IP packet into it at any point and reasonably expect it to reach any other point.

    This is why IRC is less popular than, say, AIM or ICQ, and always will be.

    The Internet is the opposite of subdivision; if subdivision were the right approach, we'd all be using BBSes again, and even Fidonet wouldn't exist.

    -
  • Er, last time /I/ checked, if I were running AIM and ICQ /on the same box/ I still couldn't talk to myself.

    This tells me you missed my point; with AIM you can talk to all AIM users, not just Efnet AIM users or Dalnet AIM users or etc.

    (Or to more than one person simultaneously.)

    This tells me you don't know how to use AIM or ICQ.
    -
  • Maybe I'm just an idiot but it seems to me that the easiest way to solve this problem is to have two flavors of Gnutella: a server flavor and a client flavor. The client is pretty much just what the napster program is - a dumb app that simply finds a server and submits queries but does no real work except to maintain a list of what it provides. The server program should be what Gnutella is, a peer-to-peer system that maintains a loose association of machines with a record of where all the data on the network lies, including all the clients.

    This would facilitate a hybrid network with servers run by anybody that chooses to run one, in any country, so it is safe to say that the servers cannot be shut down by an authority, especially since anyone can just set up another and join it to the network. This way we only have the server-machines communicating so it reduces the load on the network and brings gnutella's problems back to a manageable level. The client machines simply find themselves a server and then figure out where best to link to the network and go from there.
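
    A bare-bones sketch of that two-flavor split (class and method names are made up, not from any existing client): server-flavor nodes index what their attached clients share, and only servers exchange traffic with each other.

      class ServerNode:
          def __init__(self):
              self.index = {}            # filename -> set of client addresses
              self.peers = []            # other ServerNodes in the loose association

          def register(self, client_addr, filenames):
              # A client-flavor node just reports its shared list to one server.
              for name in filenames:
                  self.index.setdefault(name, set()).add(client_addr)

          def search(self, name, hops=3):
              hits = set(self.index.get(name, set()))
              if hops > 0:
                  for peer in self.peers:            # only server-to-server chatter
                      hits |= peer.search(name, hops - 1)
              return hits

      s1, s2 = ServerNode(), ServerNode()
      s1.peers.append(s2)
      s2.register("10.0.0.7", ["song.mp3"])
      print(s1.search("song.mp3"))                   # {'10.0.0.7'}
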

    Seems that gnutella's problem is that it is too distributed. Granted, a p2p system can idealistically work, but I don't think we have the bandwidth for it. If the network was more static than it is it would also work but since it is ever changing it makes it much harder to track everything.

    So, has anyone ever tried anything like this before? Did it work? If not can those problems be solved? Any networking gurus out there care to take a shot at this? It can't be a new idea, it's just too obvious.
  • The solution is ersatz election of servers by some means. This would cause the servers to change randomly. There still would really be no server.

    If we could trust what people said about the bandwidth of their links, the election wouldn't actually be too hard.

  • Splitting into subnetworks is not only feasible and desirable, it has already been done in at least one case that I know of (and use).
    In your opinion; I have my own. I do not believe this ABMnet of yours is comparable; you yourself have described a number of critically different aspects, such as the fact that the user commits to it in a way, it's not faceless, etc., etc. It's a DEFINED niche product with centralization...GNUtella simply is not.

    Hmm. Sounds just like the greater internet as a whole, but evolving at a faster rate.
    No, it's not at all like the internet and it's hardly "rapidly" evolving.

    So, why exactly is this bad? Are you an old mainframe guy who still hasn't gotten over the idea of individuals having power
    Gotta love rhetoric. Free the "people" from the tyranny of the admins! Gimme a break. I'm not nearly that old. What I haven't gotten over, and never will, are the fundamentals of success. Perhaps YOU are the one that is out of touch with reality? After all, empirically speaking, GNUtella hardly works for anyone.

    You can propose all these theoretical means by which GNUtella can "fix" itself, but it's simply not reality yet. Furthermore, a great many of these theories at least require a significant change in the code base. So please, watch your tongue.

    As for your assertion that this is just for "warez/mp3/porn", well, those three things, specifically porn, drive the vast majority of all network traffic now. What else is there that would encourage such large-scale sharing of files? This is reality. Deal with it.
    I never said GNUtella is just for warez/mp3/porn. What I said, if you care to read, is that GNUtella's real value is for IRC-type people (i.e., _groups_ of people that know each other online and are somewhat technically competent) for getting warez/mp3/porn.

    Nowhere did I imply that this was somehow "evil". Though I do have some reservations about outright piracy, I simply never even hinted at it in this post. You, obviously, know as well as I do, that this is where the bulk of the bandwidth on these so-called p2p services goes. It's a relevant fact. Deal with it.

  • by FallLine ( 12211 ) on Monday November 20, 2000 @06:28AM (#612798)
    GNUtella may be an interesting idea, but it's nothing more than a hack. Splitting into subnetworks is both infeasible and undesirable. First, you really can't compare it to IRC. IRC is highly centralized, whereas everything about GNUtella is distributed. IRC can, and does, scale for many thousands of users effectively; GNUtella does not (it responds like crap with any significant number of users). Secondly, you're thinking of the term "network" too rigidly. There is no network admin, no physical location, no centralization. In short, it's a ragtag and volatile collection of different IP addresses. There isn't a way to rigidly enforce the number of users in GNUtella, so how does one keep the networks divided into neat little units? This also means that it's hard to return to a specific network amongst a number of others. Where might your hotlist users be? Where do you find those with like interests when everything is constantly tossing and turning? Finally, and most importantly, you underestimate the importance of size. When the network can only effectively scale to ~5k users (probably a stretch), and when only one in 10 of those users has broadband that can support a decent number of speedy transfers (especially important when users tend to sign off and on while you're downloading), and when only one in 10 of those users has a sizable collection being shared (seems like most users have the same top pop garbage that everyone else has), you're ultimately reduced to, say, 50 users that you'd actually want to search from. I don't know about you, but 50 users isn't nearly enough. Now you might argue that I'm pulling these numbers out of my ass (and you'd be mostly right), but if you look at the empirical results, it's not far off the mark.

    In my opinion, the only thing that something as trivial as GNUtella is good for, ironically, is the IRC types, who could form pseudo-private, loose-knit "networks" from which they can share warez/mp3/porn with their "friends" without the need for a dependable server (i.e., just join the channel, find an IP, and connect to it).
  • I've had better luck. I'm not saying it's the best system in the world, but I downloaded at least 10 songs over 2 Mb last week. Most were crap (artist's fault, not Gnutella) but I got them.
  • Tor Klingberg on gnutellang.wego.com pointed out that all those keyword requests would use a lot of bandwidth. He suggested you just crawl toward the direction of searches that match your keywords. He also pointed out you could have anti-keywords: search terms you crawl away from.

    Of course, you could expedite the whole thing by searching for stuff that describes you even if you'll never download it.

    I was thinking clients like this could integrate into the current network by using a specially formatted search term right when they first connect -- a properly formatted reply would mean "connect to me instead, I'm neighborhood-aware too".

  • by cnicolai ( 14338 ) on Monday November 20, 2000 @09:05AM (#612801)
    If you have a room full of all different kinds of people, they'll interact more meaningfully if they can wander the room, moving next to like minded people, than if they're stuck in their randomly assigned chair. We should let the gnutella network self-organize like that. Here are some details:

    Have clients keep some keywords about the user. It could be a user-written paragraph, the names of shared files, recent search requests, etc. Clients would also have a "horizon" H: clients within H hops are considered "local". Clients can query other local clients for their keywords, and determine how similar those keywords are to their own (maybe a percent).

    Define a "crawl" to be dropping one (low-keyword-match) direct connection and forming a new direct connection to a local node. You might decrease search response times by crawling repeatedly toward higher keyword-matching nodes.

    Imagine a "speed" setting, measured in crawls per minute. There could also be a "randomness" setting, to misrepresent percent-keyword-match by a random amount for each local node. These settings could decrease over time, so you gradually lock in to a suitable local community without getting caught at the nearest local maximum. This idea is borrowed from simulated annealing, which someone else here probably understands better than I.

    Is it possible to integrate such clients into the existing network, through search and search-response packets with a ttl of H?

    Your horizon defines a neighborhood of local nodes. Their shared files will likely be of interest to you, so your client software might list them. In addition, their _ideas_ might be of interest, so your client software could show you their keywords, and allow instant messaging. There could even be a local neighborhood chat, ignoring chat packets with (hops > H), and sending packets with (ttl = H).

    Usage scenario: I heard a band on the radio; sounded kinda like some other bands A B and C; and the lyrics had something to do with X, although I don't think they used that word. I make sure to put A B and C in my keywords, push up the speed and randomness sliders, and wait for them to settle down. Then I start asking in the chat if anyone knows about .... Maybe someone helps me out, and puts up a sample mp3. I might even ask if there are other bands like that.

    Current Napster/Gnutella/whatever software lets you find songs you've heard of by bands you've heard of. Gnutella neighborhoods could let you find music you've never heard of.

    So; here's the rub: What's the best way to get people to buy into this? With snow just setting in here in Buffalo I have a lot of coding time; what's the best codebase to start from? Who should I convince? (and of course, what am I missing and how could this idea be made better?)

    Thanks for reading this whole long thang.

    Chris

  • The network would probably route around it anyway. There are two types of routing going on in GNet. One is query routing... each query is sent down every link in the net. And there is response routing... each query response is sent over the same path the original query took.

    Let's look at query routing. With each node connecting to ~4 or more others, it's a pretty well connected graph. If a query doesn't get through one way, it will probably get through another way. End users wouldn't really see much of a difference because of this connectedness.

    For instance, if there's a 1 in 10 chance that a packet is dropped at a given node instead of relayed, and each node is connected to 4 others, that's a very small chance that it will never make it through.
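
    A quick back-of-the-envelope check of that claim, under the simplifying assumption that each of the 4 neighbors independently drops the query with probability 0.1:

      drop_prob = 0.1                               # 1 in 10 nodes kills the packet
      neighbors = 4
      p_no_neighbor_relays = drop_prob ** neighbors
      print(p_no_neighbor_relays)                   # 0.0001
      # Over the whole graph the real math is messier (paths overlap), but the
      # intuition that redundancy masks occasional drops holds up.
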

    Responses, on the other hand will have to be sent using the current protocol, or at least have a negligible chance of being dropped (i.e. drop all queries before dropping a response). Perhaps one day there will be a reroute protocol that can get a response back over a different path. Something to think about...

    At any rate, I don't think you'd see any more repeat queries if they occasionally get dropped than you do otherwise.
  • I'd have to agree. I've found Gnutella to be practically useless for months now. Ah well, it wasn't bad when it started...
  • Nuf said. Think up, not down.
  • Is it me, or does Gnutella's Challenge sound like one of the games available on the WOPR in WarGames? I think it was between Falken's Maze and Global Thermonuclear War.
    --
  • Most of your suggestions sound pretty close to what Freenet does. Or at least what it will do when it's finished.
  • Hundreds of monkeys...[snip]...all for the purpose of file sharing

    It's already been done :) [junglemonkey.net]

    - Al

  • P2P and Hierarchy both have their strengths and weaknesses, and particularly clever developers can pool strengths without amplifying weaknesses and get some pretty neat systems...

    I agree completely. It was not my intent to imply that P2P is useless, merely that it's a poor fit for problems where scalability is a major design issue. "Hybrid vigor" is a very real phenomenon in computing.

  • This is a bandwagon that just won't roll very far, and the reason - as usual - is obvious to people who've studied the field for a while. Naively implemented, a P2P protocol tends to generate O(n^2) messages for a given workload, where n is the number of nodes. This can often be brought down to O(n) but only with absolutely top-notch developers and a lot of effort. Better than O(n) is usually impossible.

    By contrast, hierarchical systems tend to hover between O(n) and O(log(n)) depending on the particular problem. This does not necessarily apply only to single-rooted hierarchies, either. A multi-rooted hierarchy tends to exhibit the same scaling behavior, though of course the more roots you have the more you start to look like P2P and share its scaling characteristics.

    The long and the short of it is that P2P just doesn't scale well. Even the best-implemented P2P protocol can merely approach the message efficiency of a naively implemented hierarchical protocol. For large numbers of nodes this results in the P2P implementation simply getting swamped. The only question is how large and how swamped it has to be before it becomes unusable.
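
    A rough illustration of that scaling argument; the counts come from the naive models described above, not from measurements:

      import math

      for n in (100, 1_000, 10_000, 100_000):
          p2p_msgs = n                              # broadcast: every node hears every query
          hier_msgs = math.ceil(math.log2(n))       # hierarchy: ~log2(n) hops to a responsible server
          print(f"n={n:>7}: broadcast ~{p2p_msgs:>7} msgs/query, hierarchy ~{hier_msgs} msgs/query")
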

  • by proclus ( 33875 ) on Monday November 20, 2000 @07:06AM (#612810) Homepage Journal
    I've ported gtk-gnutella to darwin. Here is the link.

    Darwin Gnutella [tripod.com]

    Regards,

    proclus [tripod.com]
  • What do people recommend as a 'good' Linux Gnutella client? I use gtk_gnutella, latest version (0.54?), and its startup time is horrible. It spends so much time in reading the 'cached hosts file'. Also it is a bit high on CPU usage.

    Any suggestions?

    LL
  • It was interesting to see the diagram of a network fragmenting - just as interesting to see their solution. But I'm left wondering how many sys admins are going to implement a reflector server for a service that easily enables the exchange of files without knowledge of the legal status.

    Sure I'd probably run it up and leave it going on a cable line for a while but will this affect the benefits that the reflector would bring to other users and how much is it likely to detract from my network usage?

  • Er, last time /I/ checked, if I were running AIM and ICQ /on the same box/ I still couldn't talk to myself. (Or to more than one person simultaneously.)

    -_QUinn
  • Comment removed based on user account deletion
  • So make the client on the other end provide the information and have the sending client limit the bandwidth. It's still possible to modify, but it's much easier to modify your OWN nefarious client for evil purposes than it is to modify someone else's.
  • It might have something to do with bandwidth. For example, in the UK most users are on dial-ups, whereas in Germany ISDN is widespread, and Germany is also ahead in terms of DSL deployment. Would you be more likely to use Gnutella with a 56K dial-up connection, 128K ISDN, or 768K DSL? I've tried Gnutella using a modem that refuses to connect at more than 33K on my line and the connection gets swamped; when I use my ADSL (I was a trialist for BT here in the UK for the last 2 years) it's much more usable. I think you'll find that the countries with high-speed cheap access via cable or DSL will always come up higher on usage stats for p2p products just because of the bandwidth.
  • Is that it does chop up the files and distribute them. How many people (and I'm not talking about freedom-loving geeks here) really want to be holding 1/4 of some data on their HD that they cannot access themselves? Yes, I know there is the concept of getting paid Mojo for doing that, but do you really think that your average user is going to concern themselves with that? Surely it would be easier and more "user friendly" to have the software recognise when there are multiple copies of the same data (e.g. using an MD5 hash of the file) and pull the data from different machines with offsets, so you might pull half from one machine and half from another if there are two machines with the same data. That way the users have the data, and know they have it, and at the same time you share the bandwidth out; and if one machine goes down you pull the remaining data off another, so you have the redundancy.
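
    A sketch of that suggestion: identify identical copies by content hash, then split the byte ranges across the machines that hold them (helper names are illustrative):

      import hashlib

      def file_hash(path):
          # Identify identical files on different machines by content, not name.
          h = hashlib.md5()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(65536), b""):
                  h.update(block)
          return h.hexdigest()

      def split_ranges(file_size, num_sources):
          # One byte range per peer holding the same hash, so half can come from
          # one machine and half from another.
          step = file_size // num_sources
          ranges = [(i * step, (i + 1) * step - 1) for i in range(num_sources)]
          ranges[-1] = (ranges[-1][0], file_size - 1)   # last range absorbs the remainder
          return ranges
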
  • by CiXeL ( 56313 ) on Monday November 20, 2000 @07:32AM (#612818) Homepage
    all you do is have an option under preferences, a la Napster, that says your bandwidth type, i.e. 56k/cable/dsl/t1/t3/etc. Then whenever you connect to another T1, have it make a strong connection between the two of you. It's network matchmaking. To prevent people from specifying a lower bandwidth than they have, have the program limit your download bandwidth to that specific speed, which should keep people honest while helping to better organize the network.
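
    A sketch of how a client might enforce the declared bandwidth class by capping its own download rate; the rate table is a rough guess, not taken from any real client:

      import time

      DECLARED_RATES = {"56k": 7_000, "cable": 300_000, "dsl": 150_000, "t1": 190_000}  # bytes/sec, approximate

      def throttled_read(read_chunk, declared="cable", chunk=8192):
          # read_chunk is any function that returns up to `chunk` bytes (empty when done).
          limit = DECLARED_RATES[declared]
          start, received = time.time(), 0
          while True:
              data = read_chunk(chunk)
              if not data:
                  return
              received += len(data)
              yield data
              # Sleep just enough to keep the average rate at or below the declared limit,
              # so claiming a slower line than you have only slows your own downloads.
              sleep_for = received / limit - (time.time() - start)
              if sleep_for > 0:
                  time.sleep(sleep_for)
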
  • by StrawberryFrog ( 67065 ) on Monday November 20, 2000 @07:11AM (#612819) Homepage Journal
    > sleazy (gnet2.ath.cx has the exact same TLD as another website whose URL contains the word "goat";

    cx is a country TLD. Why should you call the whole of Christmas Island sleazy because of one goat who lives there?

  • Ummm, no. Scalability is always a problem in any network program, but especially so with p2p programs. Like the internet, gnutella has to exchange information about itself and others, and eventually most of the information exchange becomes useless (redundant) information, and the progress of a query will be impeded by the sheer number of messages. Look at the internet right now - even though it is not collapsing, the main problem is that useless BCP packets are overwhelming the network. So in all, gnutella does have a big scaling problem, and no, the internet doesn't work too well (think reliability and scalability).
  • by LHOOQtius_ov_Borg ( 73817 ) on Monday November 20, 2000 @08:55AM (#612821)
    A compromise between P2P and overlapping hierarchies is possible using automatic assignment of nodes to (multiple) regions based on tasks (as in our Webworld system)... Thus P2P is used for certain things (node discovery across the Net without resorting to a Net-wide broadcast, anonymous filesharing a la Freenet, etc.) and hierarchies for other things (processes which need to have an average messaging time node-to-node to do resource allocation between processing and messaging).

    Efficiency is not the only issue in P2P... anonymity is one, and another is to inject a type of fault-tolerance into a hierarchical system by allowing for more dynamic assignment of hierarchical roles where appropriate...

    Also, since many P2P schemes are built on top of TCP/IP, the option to build a dynamic, hybrid system is much easier, since a hierarchical system lies beneath at the addressing and transport level... You can leverage the messaging efficiency of the hierarchy once you've done discovery through pure P2P, and can also overlay anonymous P2P over the hierarchy for things like Freenet-style file sharing...

    P2P and Hierarchy both have their strengths and weaknesses, and particularly clever developers can pool strengths without amplifying weaknesses and get some pretty neat systems...

  • I think when the average query bandwidth reaches dial-up, Gnutella will collapse. The current average includes actual file serving. When the queries reach dial-up speeds, the network will be unscalable.

  • GNUtella may be an interesting idea, but it's nothing more than a hack. Splitting into subnetworks is both infeasible and undesirable.

    I have to disagree. I think a split would be the best thing that could happen, because then I might search for things that interest me and not be bothered by all those .mp3 files. Sure, the idea of Gnutella is that everyone shares, but a topic division would be for the better of everybody. There are clients out there already that allow you to change the protocol strings to effectively get a private network. Each topic-net would need a centralized access mechanism, but that's not a new problem; Gnutella already has this. Just go to the proper webpage and find the newest IP or IP list.

    /SS
  • Funny thing is, gamers have known this for ages. Well, not the O() reasoning perhaps, but the end effect is there. The more players you have in a p2p network, the worse the gameplay gets once the lower-bandwidth users are saturated. This results in jumpy/inconsistent gameplay. I'm rather shocked people didn't see this problem coming from miles away.
  • by Axemaster ( 88327 ) on Monday November 20, 2000 @06:38AM (#612825)
    Actually, MojoNation [mojonation.net] does something very similar to what you propose. It's still a beta product, and it's still growing, but it looks good so far:

    * Automatic mirroring nodes
    Mojonation block-servers remember what blocks seem to be popular (most requested), and if they don't have them, they may go grab a copy to mirror locally.

    Nodes would automatically mirror data from local (fast) mirrors, so that it's more accessible.

    See above. Data that is popular is automatically mirrored. When data is published to the network, dual-redundancy is used to avoid losing the data if some blocks turn up missing. Think RAID. Well, no, not exactly, but it is somewhat redundant.

    ... 56k clients could connect and ask the "net" of super nodes for the queries on content..

    It's called a content tracker, and anyone can run one on Mojonation. There are two central "master publication trackers" (MPT's) that keep lists of all publication servers, and the clients retrieve this list initially from them. There are possible plans to distribute the MPT's as well.

    Content Security

    All of the content posted to the network would have meta-moderation on it; anyone can classify data, and mark it as such.

    There is currently no 'rating system' in mojonation, but it is something being looked at, barring the technical hurdles in doing so.

    Privacy

    If possible, I'd like to see users IP addresses hidden; only have a unique login name/password setup for security; but this may make hackers/spammers hard to track and ban, but hopefully the meta-moderation would filter out most of it.

    I'm not sure if Mojonation is going to go this route eventually, but if ya use TCP/IP, you can be traced eventually anyway. UDP is unreliable.. As for data privacy, Mojonation actually chops a file up into small blocks, then encrypts those blocks, and distributes them randomly. Then it sends the description and block locations to the master server. In essence, nobody knows what's in each individual block on their server (if they run a storage server); everything is encrypted. I am breezing past all the details [mojonation.net] here, feel free to read more [mojonation.net] about it if you wish.

    Volunteers
    Anybody?


    http://sourceforge.net/projects/mojonation/ [sourceforge.net]
  • With Germany at 20% and the UK at 4%, it seems that these numbers do not reflect the online population. Germany (~88M people) is bigger than the UK (~60M people), but not by that much. The main factor is probably just whatever has become popular with the internet community in the particular country. I am impressed, though, that 1 in 3 Gnutella hosts are outside the US, according to the article.
  • I haven't been able to successfully download files > 2 megs for over 3 months.
    I hope this gets better with all these caching servers for gnutella in development.
  • I had been using Scour since Gnutella started crapping out, but now that Scour is shut down, I've been struggling to find a replacement p2p client. Tried yo!nk, that barely works; n-tella doesn't seem to give me anything... anyone have any golden nuggets out there?
    --DV
    "Kermit the frog, cuz he gets all the hos!"
  • The state of the art in P2P networking is still nascent, and no one has truly solved many of the issues around it. One of the mistakes in the architectures of many solutions these days is to pin their success to a particular network protocol.


    In the Open Source P2P project I am working on, xS (http://xs.dasein.org [dasein.org]), I have built in a pluggable network layer that literally enables the ability to add new protocols to the application on the fly. Thus, if the rest of xS rocks but the network protocol sucks, it is easy to write support for a new protocol!

  • that would be skillz
    .oO0Oo.
  • I see no need to relegate dialups to "second class" citizen status, only perhaps the need to treat them differently.

    The article suggested using the clip2 Reflector Server (or is it servent) for dialups to connect to. An interesting way to propagate this further would be to restrict dialups to *only* be able to connect to reflector servers and also encourage the operation of the reflectors at nodes with a lot of bandwidth to spare.

    This would also allow someone to develop a client that only allows peers of a certain bandwidth (say 144Kbps as was previously suggested) to connect to the network; then the dialups (and really slow DSL customers, sorry Verizon ;-)) could connect to reflector proxies. This would ensure that the network as a whole would remain low-latency and high-bandwidth but that it would still be accessible to all.

  • While I like the idea of Gnutella, the reality is pretty unbearable at the moment. In typical usage where you're connected to a large network, Gnutella is so excruciatingly slow that it's practically useless. Napster might be the "wrong way" to do things, but at least it works.

    If Gnutella is going to succeed it needs to be more intelligent. Nodes shouldn't be hammered with search requests. Nodes need to be scored by their actual throughput, and search data should be cached to make searches quicker.

    Gnutella also needs anonymity and security features to prevent spyware from seeing what's going on, so it should be possible (though not mandatory) for a node to nominate a bunch of anonymizing servers that search and encrypted data packets ping-pong through before reaching their destination.

  • That not many users come from the UK could have something to do with the fact that they pay high per-minute rates for all calls, even local calls to their ISP.

    Apparently BT has a really strong monopoly.

  • by pjrc ( 134994 ) <paul@pjrc.com> on Monday November 20, 2000 @12:24PM (#612834) Homepage Journal
    I've been toying with an idea for P2P filesharing, which involves a truly decentralized reputation-tracking system. The idea is similar to PGP's "web of trust", in which you "know" others based on their public key, and people you know give certifications of others by signing their public keys.

    What good is all that... well, a host could make decisions about which queries to route and which to discard based on any information about the reputation of the originator. Hosts would allow faster sends to downloaders with good reputation. Abusive hosts (Spammers, DoS attacks, etc) would ruin their reputation quickly (or keep recreating new keys all with no reputation).

    Reputation in such a system would be very valuable. Somewhat like slashdot karma, it would appeal to many individuals, who would likely go out of their way to gain reputation signatures, perhaps by providing or mirroring lots of high quality files, attaching good meta-data descriptions to files, etc. The client software would need to have ways for everyone to do moderation on files and users... but unlike slashdot, there would be no universal score, only lots of keys/reputation scores, signed by other users. The software could also automatically detect certain behaviors (files available for download, on-line for long times) of other hosts, and issue reputation points. The idea is that a reputation score is a way to allocate the available resources (mainly bandwidth), to establish an incentive for users to share files and act in ways that benefit the network, and of course to make the network resilient to abuse.

    Now, for a system like this to scale, each host will need a LOT of disk space, to store a giant database of keys and signatures on them, and it would ultimately act like a giant cache. Each host would obviously collect the most positive signatures... the initial communication would be similar to boasting, the requester would send several of the best moderation signatures, hoping that the remote host already knows those people who signed and will therefore offer faster transfers, propagate a query farther, etc.
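
    A toy sketch of what a signed reputation statement could look like; sign() is a placeholder hash here, where a real client would use public-key signatures a la PGP:

      import hashlib, json, time

      def sign(secret_key: bytes, payload: dict) -> str:
          blob = json.dumps(payload, sort_keys=True).encode()
          return hashlib.sha256(secret_key + blob).hexdigest()   # stand-in, not real crypto

      def vouch(my_key_id, my_secret, target_key_id, points, reason):
          statement = {
              "signer": my_key_id,
              "target": target_key_id,
              "points": points,          # e.g. +1 for a clean transfer, good metadata, long uptime
              "reason": reason,
              "time": int(time.time()),
          }
          statement["sig"] = sign(my_secret, statement)
          return statement

      # A host deciding whether to route a query or grant fast downloads could sum
      # the points on statements signed by keys it already trusts, ignoring unknown signers.
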

    Maybe this ultimately works out to be the same as digital cash in MojoNation. I believe it is a different idea, in that it's based on an assumption of abundance.... everybody can win. You can get a great reputation without someone else giving up anything. In a cash system, when you get cash (mojo), someone else gives it up, and the overall philosophy is of scarcity.

    If you have any ideas or thoughts to add to this, please post. Am I totally out in left field here, or does this seem like a reasonable idea?

  • by vsync64 ( 155958 ) <vsync@quadium.net> on Monday November 20, 2000 @05:43AM (#612835) Homepage
    I'm sick of this. "Gnutella's going to collapse! We need new innovation! It doesn't scale!"

    It doesn't have to cover the entire Internet. The fact that you can simply specify a server to contact makes the solution so obvious that I can't believe people are still whining.

    Let Gnutella split into multiple networks. It worked for IRC, it will work here, and it will work for similar problems in the future. Any problem that doesn't lend itself well to subdivision is probably badly specified. Don't forget that the Internet is a network of networks, and it works well for a reason.

  • The horizon for GNUTELLA should be about 5000 hosts. Currently they are getting a few tens or hundreds at most. That sounds like a collapse to me.

    But that ISN'T because P2P is a bad idea, it is purely the bad, partly closed source, implementation and specifications of GNUTELLA.

    There's no particular reason that horizons shouldn't still be 5000+. In fact even more than that, as the GNUTELLA protocol is quite bandwidth inefficient. It may be possible to more than double the number of hosts within the horizon by being more efficient.

    e.g. nearly half of the current overhead is in the TCP packet headers. Sending bigger, less frequent messages would reduce the overhead percentage greatly and give much more "useful" throughput. (If you think Britney Spears is useful ;-)
  • From the article:

    Users continue to query the network primarily for audio, video, image, and program files

    Well, THAT just about covers EVERY file in EXISTENCE! Seriously, though, I thought that video and image would be in the vast majority. We all know that people who would waste their time with the seriously slow download speeds of Gnutella are probably the kind who don't get out much, and, well... you get the picture.

  • There should be a Subnet of Gnutella named ATMNet or ATHNet - Anyone ever heard of this? Would really like to hear more about it, but I can't find any information on the Internet.
  • Shoplifting works great. High bandwidth. Better adrenaline rush, too.

    --

  • What's wrong with client-server, anyway? The only problem with client-server is when you're a 20-year-old college student who thinks it's your God-given right to have your warez/porn/mp3 habit subsidized by your school. Then you find all the servers are always getting shut down. Boo hoo.

    So is it surprising that we don't observe a lot of altruistic behavior from people with this kind of antisocial personality, linked together on an anonymous network, doing things that are illegal? Is it surprising that people who are too cheap to buy a Metallica CD are also too cheap to pay for cable modem, buy CDs, rip them, and put them on the network? Here we have a lot of people who think that information freedom is all about taking, and not about creating and giving back; is it surprising that gnutella is languishing at the 0.x stage?

    If you want information to be free, make some free information. Start a garage band, and stop whining about the nonintuitive user interface in the gnutella software you downloaded for free.

    --

  • by SubtleNuance ( 184325 ) on Monday November 20, 2000 @09:18AM (#612841) Journal
    Here is my 'proposed solution' - as everyone else has one, I thought I'd toss this idea out. Why not extend the Gnutella protocol to include a method to subdivide the existing network? Meaning - instead of randomly collecting other nodes of any type - why not only connect nodes of a certain type, say "Warez" or "MP3Z"? Now if I have 1.2.3.4-MP3 and I choose to connect to the "MP3" subnet of gnutella, I will.

    Clients can query the larger 'unsegmented' net to determine the 'subnetted' network extensions:

    5.6.7.8:warez;
    9.0.1.2:pron;
    3.4.5.6:warez;
    etc.

    This could probably be implemented without breaking the existing clients and network where only Gnutella 'v2' clients would be able to choose a subnet to join. When the "MP3Z" network grows to the breakpoint - someone starts a MP3ZZ network.
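
    A sketch of how a 'v2' client might parse those subnet advertisements, treating the ip:topic lines above as the assumed wire format:

      def parse_subnet_ads(lines):
          # Turn lines like '5.6.7.8:warez;' into {topic: [hosts]}.
          subnets = {}
          for line in lines:
              host, _, topic = line.strip().rstrip(";").partition(":")
              if host and topic:
                  subnets.setdefault(topic, []).append(host)
          return subnets

      ads = ["5.6.7.8:warez;", "9.0.1.2:pron;", "3.4.5.6:warez;"]
      print(parse_subnet_ads(ads))   # {'warez': ['5.6.7.8', '3.4.5.6'], 'pron': ['9.0.1.2']}
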

    As a side note: Has an organization or project formed on any collective level to address these problems? Is there a 'recognized' authority that is guiding the 'official' Gnutella protocol and a reference implementation? Gnutella is a very necessary model to pursue and develop because of the threat to Napster (though OpenNap provides a mechanism to thwart the $RIAA$MPAA$ whores - there is still the problem of having 'servers' to identify and attack (not to mention the problem that Napigator will have when Napster is finally shut down...))

  • Splitting into subnetworks is both infeasible and undesirable.

    Splitting into subnetworks is not only feasible and desirable, it has already been done in at least one case that I know of (and use).

    There is a thriving and growing subnet known as ABMnet, which is a spin-off of binary newsgroups. People primarily trade missing individual posts that don't show up on their individual servers rather than entire binaries. In other words, if you're just missing Something.R23 you now stand a very good chance of finding it on ABMnet instead of filling up the newsgroup with repost requests.

    The S/N ratio is very high because all users are there for one small purpose. You don't have to deal with 8,000 "Britney Spears Topless" searches, and by being a small niche network it promotes a sense of community which encourages more file sharing.

    There is no network admin, no physical location, no centralization. In short, it's a ragtag and volatile collection of different IP addresses.

    Hmm. Sounds just like the greater internet as a whole, but evolving at a faster rate. So, why exactly is this bad? Are you an old mainframe guy who still hasn't gotten over the idea of individuals having power?

    As for your assertion that this is just for "warez/mp3/porn", well, those three things, specifically porn, drive the vast majority of all network traffic now. What else is there that would encourage such large-scale sharing of files? This is reality. Deal with it.

    Your troll was pretty good. Unfortunately, I don't think you were intentionally trolling.

  • I do not believe this ABMnet of yours is comparable ... It's a DEFINED niche product with centralization...GNUtella simply is not.

    First, it's not centralized anymore than Gnutella is. The client is virtually identical. Second, Gnutella was intended to be a niche product. It was written for mp3's and "oh yeah, you can trade other stuff with it, too, I guess". Based on the report it is very arguable that all the non-mp3 trading is what is bogging down the network.

    From your first post:
    GNUtella may be an interesting idea, but it's nothing more than a hack.

    There is a strong implication in your statement, reinforced elsewhere, that because Gnutella can't be everything for everybody it is useless and should be discarded. I have given you one concrete example of a use that fills a very real need, at least as far as users are concerned. Not only does it make one certain type of file sharing much easier, it also substantially increases efficiency by decreasing redundancy on already over-loaded news servers.

    So, you're partly right. Gnutella isn't, and can't be, everything for everybody. It's not supposed to replace http, ftp, etc. It's simply a better means for one type of file transfer.

    Why spend hours wandering acres of mall space looking for something when you can go right to the specialty store that has what you need? Or, to use another analogy, why throw away the allen wrenches just because most people use screws?

    I still don't see anything wrong with splintering protocols if they serve genuine needs, and Gnutella is quickly and admirably growing into roles that it is suitable for.

  • Fact : Napster itself also has multiple servers that share no common link, as well as most other networked services. Everquest has multiple servers to share the load (I wouldn't want to see 60k players on the same server, even if it were a 32-box beowulf cluster - besides, it would be too crowded). There are simply too many clients in the world to have them all hammer the same box and the same pipe. It's a nuisance perhaps, but it is also a Good Thing (tm) since it inherently provides redundancy. One server goes down, you just point your client software to another server.. no single point of failure.
  • I'm sick of this. "Gnutella's going to collapse! We need new innovation! It doesn't scale!"

    Well, it doesn't scale. In its current incarnation Gnutella isn't worth shit. I hear Freenet does nice things with automatic mirroring etc., something which gnutella definitely needs.

    Let Gnutella split into multiple networks.

    Gnutella could be split into multiple networks based on content, one for music, one for movies etc. Geography could (should) also be used, let Europe and America have separate networks, downloading from an overseas modem user is just dumb. Still, the protocol does need work to become usable.

    --

  • When it comes to a system like Gnutella then the increasing fragmentation that has arisen due to bandwidth problems can only be a bad thing in the long run, because it means that what was originally a resource allowing you to search through a huge amount of material becomes much more limited, encouraging people to stop using it.

    Consider a small Gnutella network of about 50 machines. What do you think is going to be on there? Considering most users are leeches who don't want to share their own files, chances are that most of the available content will be popular crap such as Britney Spears or Metallica. There will only be a very limited selection of minority music, and this will mean that people get fed up pretty quickly.

    But when you have a strong, centralised database or a large network then the amount of minority music will be correspondingly larger, meaning that everybody has a wider selection of music, making it a more popular service. There's less temptation to give up on it in this case.

    It's a general property of this kind of system - smaller, local networks just can't offer the benefits that a larger or more centralised network can in terms of content and diversity. And if Gnutella continues to fragment, it'll reach the point where there won't be any point in using it.

  • Much as I love this software, there seems to be an embarrassing amount of pirated stuff. This is unfortunate. It has a lot of much more legitimate uses than this, for legal free data. I fear that the software may be attacked by large corporations who consider it to be solely a piracy tool.
  • I use gnut [mrob.com] under Linux and have written a simple script which connects to http://www.gnutellahosts.com/ [gnutellahosts.com], downloads the top hosts and then fires up gnut with them.

    Over the space of the last six months the percentage of network content has dropped from 95% to now only 48.19%.

    I think this percentage will drop even more in the next 6 months.

    --

  • I think that fragmentation is only bad if it is done in a haphazard manner. If we could implement logically divided gnutella-nets grouped by reasonably specific topics, then we could address both the scalability problems and the fact that it is becoming harder and harder to find what you are looking for in the constant stream of crap that gnutella spews. Whenever a particular segment becomes too popular, it could be further divided into more specific subtopics. Divide and conquer can solve the problems we are seeing with gnutella.
  • The main issue is that the whole gnutella "protocol" was pushed too quickly into a "standard", so it can't be adapted to the changing environment that it has endured - primarily because the people who created it can't touch it, and nobody else can really take a leadership role and be followed. So a proof-of-concept system stagnates and never really advances. Unfortunate, if you ask me, but hey, ultimately it won't matter. Poo on file sharing apps.
  • okay, enough with the Quake humor. One disturbing thing about Gnutella is how the Gnutellanet servers seem so sleazy (gnet2.ath.cx has the exact same TLD as another website whose URL contains the word "goat"; I think you know what I'm talking about). Also, Gnutella has never escaped the 0.x stages (0.4 right now); either that's because they're still working on it, or they're too lazy to make one more improvement and call it 1.0.

    Personally, I think that there should be nag lines whenever you download something ('tis better to give than to receive, leeches can't suck blood forever, et cetera). The condonation of the leech society must stop. You pay taxes to support your home country, so why don't you set up a Gnutellanet server to support Gnutella?

  • As long as there's agreement between software developers, it could be changed. I'm sure that there are many people who have considered improving on Gnutella.
  • by kwj8fty1 ( 225360 ) on Monday November 20, 2000 @05:51AM (#612853) Homepage
    I've been reading tidbits around the net, and I'd like to ask what people think about this:

    Automatic mirroring nodes

    Nodes would automatically mirror data from local (fast) mirrors, so that it's more accessible. It would need to "learn" what files are requested, and then mirror them. What would stop the script kiddies from "rating" the content they want up, so it would be mirrored more often?

    Structure

    If all of the clients are required to keep a copy of the "whole database", it is not feasible without everyone on the network having a T3+, or later OC3+ connection. But as with the data, the nodes keep track of other nodes, but only if the bandwidth permits. 56k clients could connect and ask the "net" of super nodes for the queries on content. No one node should be in control; but many based on the same rule set. You would have to have a setting on the client for "perm super-node", or just "56k browser". Even the 56k browser could contribute to the network however; two 56k modems that are on the same segment of 'net can transmit with very low latency; they can buffer queries from the super nodes, and allow for faster access.

    Content Security

    All of the content posted to the network would have meta-moderation on it; anyone can classify data, and mark it as such. People can also rate classifications, so as to prevent some spam. If a file with the same name shows up on the 'net, it could end up with the same rating. (my_garage_band_called_nirvana_that_nobody_has_heard_of.mp3)

    I'm sure that folks have complex yet effective methods of rating (flame wars may ensue), but I'd be really interested in hearing ideas.


    Privacy

    If possible, I'd like to see users IP addresses hidden; only have a unique login name/password setup for security; but this may make hackers/spammers hard to track and ban, but hopefully the meta-moderation would filter out most of it.

    Volunteers

    Anybody?

    -Eric Johanson - ericj.spambad@cubesearch.com

    This sig for rent
  • I don't understand the following:

    - Why is it the case that it is not possible to find mp3's on web pages (for which good search engines exist) while you can find them on napster or gnutella?

    - Why do people talk about freeriding on gnutella and not on napster?

  • Someone had better think of a way to get Gnutella up and running at 100% efficiency soon or else nobody's going to be able to get free music online... except on IRC and that's no fun :) I have a habit to support here... someone please help! -Duke

  • I wonder why Germany ranks highest among the non-US users... Japan I can see and Canada is in North America but why Germany? Check out this graphic [clip2.com].

    There are some cool charts examining the .net breakdown [clip2.com] (remember that cool poster [thinkgeek.com] from ThinkGeek?) and the breakdown by .edu [clip2.com] as well.
    -Duke

  • The size of the file you are downloading is irrelevant to the Gnutella network: the actual files are sent directly from computer to computer, not over the network. The purpose of caching servers would be to cache what ids have what keys, not the actual data. Probably the reason why nobody can download anything is that the whole thing is so slow and the clients so unstable that everyone gives up on the whole thing after about 5 minutes, which is just long enough to reply to a keyword, handshake, start a download, and have the thing GPF...
  • use CuteMX, http://www.cutemx.com/
    it's much better than Gnutella and Scour
  • True P2P Can't Scale? Take a look at Freenet

    Freenet is not actually P2P. It is a redundant global index similar in fashion to URLs.

  • Actually, for a long time I thought GNUtella was some weird venereal disease, until someone pointed out the GNU and then tella.
    I was pronouncing it GA-NUT-ELLA.
  • After having some issues with my ISP about Napster (basically, they banned it), I decided to give Gnutella a try and downloaded a few clients to see what was best. My conclusion: they all suck. Every last one of them looks like it was written by a retarded monkey whose experience in development was reading "Learn VB in 21 days"... bugs galore, an interface that even I found confusing, and of course the fact that it didn't work. I did a search for a common MP3 (Metallica) to see how many it would return... after 10 minutes and nothing (and no report that it was even still searching), I decided to uninstall the hunk of crap I had downloaded. Some things need to be done: another protocol other than Gnutella needs to be developed... why use that crutch? Make a better one. And write a DECENT CLIENT for the thing. Pity I don't have more time on my hands or I would...
  • It's not as though I was searching for some rare file... any MP3 by Metallica, I figured, would be common enough. The specs for the protocol themselves (yes, I've read them) even say they don't know how well it would work if it got popular. If Gnutella is ever going to be considered a real alternative to soon-to-be-subscription services such as Napster, it has to be A LOT easier to use. Do I know how many hosts I connected to? Not a clue. The docs didn't explain anything and the interface was cluttered. An all-around poor design. Here's my bug report: "Complete rewrite with simpler, more informative interface". Come to think of it, there were *no* docs other than "type a word here"...

    Undoubtedly this will get a reply such as "well, those who should use Gnutella will know how to use it", thus showing off the elitism of Slashdot again.

    Gnutella suffers the same development problem that Linux suffers - it's run by elitists who haven't a clue that the majority of people in the real world don't know (or care) how a computer works, just so long as it's easy to use and works correctly, forever relegating it to "that other software".

  • An idea I came up with over this summer: each client maintains an IP list of other clients running compatible software. You could connect to another client (or one of a few main IP servers) and download a long list of IPs running your software. You could then scan through all those IPs looking for the file you wanted. If an IP didn't respond, it would be taken off your list. Every time you connected to a client, it would ship you a new list of IP addresses; you would weed out the ones you already have and add the rest to your database. Every once in a while your client could scan through and ping all those clients to be sure they were still alive. In the end, you'd have a huge listing of IPs running your software from which to scan, rather than having your search passed on and on and on like in Gnutella, where you don't know where it's going or how successful it's been. The beauty is that you can use a "main server" to locate clients more easily, but it doesn't HAVE to...
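
    A rough sketch of that bookkeeping (Python; the names, and the use of a plain TCP connect as the "ping", are only illustrative):

        # Hypothetical sketch: merge host lists received from other clients
        # and periodically ping entries, dropping ones that no longer answer.
        import socket

        known_hosts = set()              # "ip:port" strings

        def merge_host_list(new_hosts):
            # duplicates disappear automatically because the set is keyed on IP
            known_hosts.update(new_hosts)

        def is_alive(host, timeout=2.0):
            ip, port = host.split(":")
            try:
                with socket.create_connection((ip, int(port)), timeout=timeout):
                    return True
            except OSError:
                return False

        def prune_dead_hosts():
            for host in list(known_hosts):
                if not is_alive(host):
                    known_hosts.discard(host)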
  • It's quite simple, actually. Instead of sending a query through your connected hosts, and then through all of their connected hosts, you directly query every host in the database... Each client would build a database of its connected hosts, and from each host it would download that host's database and merge it in. Using the IP as the primary key prevents any duplicates automatically.

    When you search for the word "linux", your host pings the first host in the host list; if the host is unpingable, it is removed from the database and the next one is pinged. If it does reply to the ping, then you send the query to the host... It returns all the "linux" files it has and then says goodbye. Your client queries the next machine in the list, and so on. This way you only need about the same amount of bandwidth it takes to search a normal search engine like yahoo.com.

    Queries will still take time for those on modems, but at least it will work no matter how dense the network gets or how slow your connection is! (A rough sketch of this loop follows below.)

    Feel free to email me with any insight/comments/questions about this.
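
    Very roughly, the loop described above might look like this (Python sketch; ping and query_host stand in for whatever wire protocol the client actually speaks):

        # Hypothetical sketch: walk the host database directly instead of
        # flooding the query through neighbors.
        def search(keyword, host_db, ping, query_host, max_results=50):
            results = []
            for host in list(host_db):
                if not ping(host):
                    host_db.remove(host)   # dead host: drop it from the list
                    continue
                results.extend(query_host(host, keyword))
                if len(results) >= max_results:
                    break
            return results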
  • Good point. Now perhaps people could stop bitching about how Gnutella doesn't work, pull their thumbs outta their bums, and make it work. I know some of you 31337 H4X0R5 out there have the skills to make it work, but are too concerned with watching other people search for pre-teen porn on your GTK clients. I rant because I openly admit I do not have the skill to do it myself, nor do I have the time to acquire it at present.

    Seems to me that someone *COUGH* ^ Bell *COUGH* is just waiting for Gnutella to die for some reason. Got a bet going, Tawko?

  • What rubbish!

    What good is a product if the core is bad? Oh, it may look nice, such that any fool could use it, but if it barfs every time you do something, users will just walk away. And all because the development effort went into the user interface and not the architecture.

    Is there any reason it couldn't be a command-line app? Probably not. Anyone remember Archie for FTP searching? It was a tool that worked.

  • by tewwetruggur ( 253319 ) on Monday November 20, 2000 @06:21AM (#612872) Homepage
    Ok... as I see it, we've just been able to hook up a monkey to a robotic arm, over the Internet... now, certainly, there must be some useful p2p implementation of this technology, particularly if you blend it with the Infinite Monkey Protocol Suite... perhaps this could be the dawning of a "Speedy Monkey Brain Protocol":

    Hundreds of monkeys, eating bananas, swingin' away on ropes, with their brains hooked together with wireless broadband technology, all for the purpose of file sharing!

    And before anyone gets a chance to say it...

    ...Imagine a Beowulf cluster of those monkeys...

    There... it's been said.

    Thank-you, and good day!

  • by nirvana_am_i ( 256409 ) on Monday November 20, 2000 @06:13AM (#612873)
    One of the reasons that Gnutella flounders is the interface. The creators had a novel idea in building a node-based network for file sharing, but their interface needs some work.

    Napster, and then Scour, both simplified their applications so any nitwit (even some Mac users using Macster) could gain access to the resources.

    A better interface, as well as some way to have the top hosts from gnutellahosts.com automatically be used every time the application is loaded up, is definitely a must.
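
    Just as an illustration of the auto-bootstrap part, a tiny sketch (Python; the file name is hypothetical and the single bootstrap entry is only a placeholder, 6346 being the usual Gnutella port):

        # Hypothetical sketch: on startup, seed the connection list from a
        # cached hosts file plus a couple of well-known bootstrap hosts.
        import os

        BOOTSTRAP_HOSTS = ["gnutellahosts.com:6346"]            # placeholder
        HOSTS_FILE = os.path.expanduser("~/.gnutella_hosts")    # hypothetical

        def load_startup_hosts():
            hosts = list(BOOTSTRAP_HOSTS)
            if os.path.exists(HOSTS_FILE):
                with open(HOSTS_FILE) as f:
                    hosts.extend(line.strip() for line in f if line.strip())
            return hosts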

    What the developers need to do is try a rapid prototyping model. As much as I hated it while doing that damn internship, it really does work. People need to be surveyed on how the application should work. The only way to come up with a good product which catches the broadest audience is to get feedback from that audience.

    That's enough BS from me.
  • "Among US-Centric hosts, COM, NET, and EDU"

    They are counting Canadian ISPs as US-centric. Don't believe those numbers; they are wrong. Canada has the highest Internet use per capita, and also the highest high-speed Internet use per capita - the type of people who use this kind of app. If they did a true breakdown, you would see Canada at #2 easily, and #1 if it were per capita... also easily.
  • Also, let me point out that it shows @Home being 50% of the .com addresses. Half of the @Home subscribers are from Canada.
