The Internet

Death of the P2P net Predicted! Film at 11!

87C751 writes "Cnet has a preachy, whiny piece bemoaning the peer to peer "phenomenon" and its lack of commercialization potential. The humor comes when they claim that bandwidth limitations will ultimately doom P2P (as though bits that traverse through a server somehow take less bandwidth than bits sent from one box directly to another). " Alright, I'm a little softer then the submittor, although I agree with some points. The area that I do question is how much is actually shared - most of the people I see out there are taking, not contributing to the Gnutella and the like.
  • by Anonymous Coward
    Bring on the armageddon,
    can you break the wall down.
    Big business running rampant,
    just look what they have found.
    The Internet, the brave new world
    the thing they most despise.
    To take control they must embrace
    if not they lose their size.

    There's nothing less exciting
    to the people in the world,
    than the prospect of the future
    run by the business whorl.
    We wish we had a say
    in what our life's about.
    Instead we're getting paid
    to shut our goddamned mouth!

    Why, oh why do we
    accept the current greed.
    Where and how can we
    grow out and up in deeds.
    We must find ways to be
    more than another cog
    within machinery
    of business founding fog.

    So gather round, come near
    and understand our plight.
    I'm asking all that hear
    to stand up and to fight.
    Don't let them take away
    the things we call our rights
    It's time to bring them down!
    The walls come down tonight!

    -The Anonymous Poet
  • by Anonymous Coward

    Maybe, but the existence of per-minute call charges in many countries puts a bigger hole in a P2P network model than asymmetric network bandwidth.

  • If peer-to-peer does go away, it will be because service providers (telcos running DSL and cable companies with their cable modems) restrict upload speeds to intolerable levels.

    At my home, upload speeds are limited to 112kbps (about 14kB/s). It isn't terrible, though uploading a single MP3 can bog down your downstream rate too (since the two-way handshaking that goes on takes up a small but significant amount of the TX/RX time).
    --
  • Has anybody done any theoretical research here?

    I'm working on it, both analytically and in simulation. Gnutella requires O(N) bandwidth at each host, or O(N^2) overall. Napster requires O(1) bandwidth at each host, or O(N) overall. The single point of failure indeed buys better network scaling.

    I'd guess that in a P2P network the bandwidth required to carry meta-information would go up O(N^2)

    That depends on the design. Not all designs are as bad as Gnutella. A hierarchical index can provide scalability without being entirely centralized. Caching can probabilistically reduce traffic. Don't assume that just because Gnutella is bad, P2P as a whole is bad.

    The Napster architecture, while introducing a single point of failure (at least from a legal standpoint)

    Napster's architecture has a single point of failure from reliability and scalability standpoints as well. Illegality is not their only weakness.

    centralizes meta information allowing O(N) growth of query bandwidth in nodes, and decentralizes data transfer

    In that respect, Napster is directly analogous to how search engines on the web work. Napster has the same scalability problems as search engines, too: there are just too many documents for any one central point to store all the meta information.

    Yes, there is a better way. I'll publish it once I have a working prototype.
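    In the meantime, a toy back-of-the-envelope model of the flooding-vs-central-index comparison (my own illustration with made-up query rates, not the prototype and not anything from the article):

    def flooding_load_per_host(n_hosts, queries_per_sec):
        # Naive flooding: every host eventually sees every other host's queries,
        # so per-host load grows with the size of the network (O(N) per host).
        return (n_hosts - 1) * queries_per_sec

    def central_index_load(n_hosts, queries_per_sec):
        # Central index: each host only sends its own queries (O(1) per host);
        # the index itself absorbs all of them (O(N) at one point).
        return queries_per_sec, n_hosts * queries_per_sec

    if __name__ == "__main__":
        q = 0.01  # one query per host every 100 seconds (assumed rate)
        for n in (1000, 10000, 100000):
            per_host, at_server = central_index_load(n, q)
            print(n, flooding_load_per_host(n, q), per_host, at_server)

    At 100,000 hosts the per-host query load under flooding comes out about five orders of magnitude higher than under a central index, which is the O(N)-vs-O(1) point in different clothes.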

  • Aggregate bandwidth requirements would change in an Akamai-like caching server system,

    Akamai reduces the amount of wide-area bandwidth used for retrieving content. It doesn't reduce the wide-area bandwidth or server load consumed by searches/lookups and meta-information. The original post in this thread was talking about how much bandwidth is consumed by meta-information, which Akamai does nothing to alleviate.

  • I think it was something along the lines of "Unix|Mainframes are dead".

    I guess IBM and the Linux community don't have the sense to quit.

    Misfit
  • The entire premise of that article is totally flawed... Did CNET start publishing articles about how USENET was going to die because it is not commercially viable, or how IRC has no future because no one makes money on it?

    More ancient readers on slashdot were astute enough to point out the similarity between current P2P file sharing and things like Archie.

    I find it pathetic how these authors seem to have little sense of history about how certain internet applications became popular.

    -Dean
  • Every bit that is downloaded must be uploaded by somebody. I suppose you meant that the number of people downloading is larger than the number of people uploading? That was the claim of the Xerox PARC report.
  • At the risk of being a bit boisterous, this is basically a horseshit argument. If your network is predominantly users connected with 33K modems it has some validity. But you need a high speed connection to conveniently be downloading this sort of file anyway. And downloads are NOT across the gnutella network. They are directly client to client. Search responsiveness can be sluggish, but the bandwidth needed by the protocol is not onerous if you have the high speed connection needed anyway for a reasonable download time.

    To go one step further it isn't the bandwidth and it isn't lack of scalability, it is vulnerability to DoS attacks while the network remains small. The reason I say scalability is not an issue is because if the packet TTL is properly handled then a network of 10 billion nodes is no more congested than a network of 10 thousand nodes. That is because under reasonable assumptions (ttl = 7, 3 or 4 connections per node) a packet won't see more than 10 thousand nodes no matter how large the entire network is.

    Given fixed resources for malicious behavior, it doesn't take a rocket scientist to see that the potential for disruption is diminished as the network gets larger. Conversely, if pinhead journalists can discourage enough people from trying things like gnutella, that helps to limit the size of the network, make it more vulnerable to attack, and serve as a self-fulfilling prophecy.
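    To put rough numbers behind the TTL point (a back-of-the-envelope upper bound that ignores overlap between connections, so real reach is lower):

    def max_nodes_seen(ttl, connections_per_node):
        # First hop fans out to d neighbors; each later hop reaches at most
        # d-1 new nodes per node, so a query's horizon is bounded no matter
        # how big the whole network gets.
        d = connections_per_node
        return sum(d * (d - 1) ** (hop - 1) for hop in range(1, ttl + 1))

    for d in (3, 4, 5):
        print("ttl=7, %d connections: at most %d nodes" % (d, max_nodes_seen(7, d)))
    # ttl=7, 3 connections: at most 381 nodes
    # ttl=7, 4 connections: at most 4372 nodes
    # ttl=7, 5 connections: at most 27305 nodes

    The exact ceiling depends on the fan-out you assume, but the point stands: it is set by the ttl and connections per node, not by the total size of the network.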
  • Perhaps Gnutella and the like need some way to dynamically enable uploads. Maybe something as simple as checking to see how busy your pipe has been for the last half hour, and if it has been idle then open the floodgates.

    This sort of thing will do much better once there are some widespread industry standards on bandwidth shaping. It would be great if there was a way to keep a certain portion of your pipe open for critical traffic, and let the rest be used for whatever. Unfortunately that requires the ISP to coordinate and cooperate with the end user. Yeah, that'll happen.
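    A minimal sketch of what that check might look like (my own illustration; the half-hour window, the threshold, and where the byte counters come from are all assumptions, not anything today's clients expose):

    import time
    from collections import deque

    WINDOW_SECONDS = 30 * 60     # "how busy has the pipe been for the last half hour"
    IDLE_THRESHOLD_BPS = 2000    # below this average upstream rate, call it idle

    samples = deque()            # (timestamp, total_bytes_sent_so_far)

    def record_sample(total_bytes_sent):
        # Call periodically with the interface's cumulative sent-bytes counter.
        now = time.time()
        samples.append((now, total_bytes_sent))
        while samples and samples[0][0] < now - WINDOW_SECONDS:
            samples.popleft()

    def open_the_floodgates():
        # Enable uploads only if the average upstream rate over the window is tiny.
        if len(samples) < 2:
            return False
        (t0, b0), (t1, b1) = samples[0], samples[-1]
        return t1 > t0 and (b1 - b0) / (t1 - t0) < IDLE_THRESHOLD_BPS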
  • Hmm, I wonder if you could use something like moderation points. The more points people give you by downloading your files, the more points you have to spend downloading from other people. It wouldn't be automatic, either: if someone downloads a file from you and it sucks, they don't have to give you a point.
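    Client-side, the bookkeeping could be as dumb as this (purely hypothetical; no real client keeps such a ledger, and the downloader still has to choose to award the point):

    class CreditLedger:
        # Earn a point when a downloader awards one; spend a point per download.
        def __init__(self, starting_points=5):
            self.points = starting_points

        def award_point(self):
            self.points += 1

        def spend_point(self):
            if self.points <= 0:
                raise RuntimeError("no points left - share something first")
            self.points -= 1

    ledger = CreditLedger()
    ledger.spend_point()   # you download something
    ledger.award_point()   # someone liked a file they got from you

    The obvious hole is that points can be gamed, but the same was true of BBS ratios.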
  • When you get down to it, what really is the difference between the peer-to-peer model, and the client-server model. When all the bells and whistles are scraped off, and all you've got left is the core, somewhere there's always a client, and there's always a server. The only real difference is that all the world's a potential server under P2P, a "server" that can't necessarily be controlled by some commercial entity.

    If anyone looked at the graphic in the article, the method they show describing P2P is something akin to using the Metacrawler search to search through many search engines (with the distinction that there are still central databases to search, instead of asking all 4 billion hypothetical systems what they've got). Why? Probably because the Internet was created as a peer-to-peer system. From the lowliest PC-XT running DOS-based software, all the way up to the Sun E10000's they're all "equal" on the 'net; if it can be connected, it can provide content (given that it isn't preempted by some silly Internet Content Provider's rules), yet the peer containing the content is the server, and the peer requesting the content is the client.

    One last thought for Hemos: A bit that gets taken from Gnutella, or its cousins, is a bit that was contributed to it.
  • The humor comes when they claim that bandwidth limitations will ultimately doom P2P (as though bits that traverse through a server somehow take less bandwidth than bits sent from one box directly to another).

    Well, I don't know if you have a shared connection (like a cable modem), but I sure as shit saw the bandwidth going away as more and more people started using Napster.

    The whole model of "overselling" your bandwidth (as an ISP) because most people weren't using it all the time starts to break if the people you are selling it to are using it all the time. Just as dial-in ISPs had to rework their rules of thumb for how many dial-in lines they needed per user because people were spending more time online.

    Napster (and other p2p apps) change the game because they make it easy for people to use all of their bandwidth. Prior to them, you had a much harder time finding sites to leech from (well, maybe not you but certainly the average Internet user). Now, it's easy.

    Hell, one time I was running Gnutella (and actually sharing my files) until people in my office started complaining that their requests (to websites) were timing out, and that the net seemed really slow. I killed Gnutella, and pow, everything was zippy again.

    That scenario can happen all over the net. Popular p2p apps flat out consume more bandwidth.

    Jordan

  • Sure. It becomes warez and hotline.
    Life seems not to have changed much after the lawyers attack. There's proscribed data out there, somewhere, on a constantly changing set of servers used by a bunch of 14 year olds (whether or not I'm referring to physical or mental age is left as an exercise for the reader). Boom. Napster is relegated to the category inhabited by Usenet 7 years ago and becomes an annoyance when people like me bring it up ("remember when you could post to alt.sex.bondage and not lose your important mail in the hellstorm of spam? I used to carry tapes fulla newsgroups up hill for miles, covered in snow...").

    You actually highlight the difference - published (accessible via a known node) vs. unpublished (find it, if you can). And this is really what the current battles are about.

    Napster has a single point of failure. Take out the company of the same name, and most of the work-alikes will run away and hide. The few that don't will be attacked, with legal precedent behind the attackers.

    Freenet has a chance, but it is still too hard to use, too cryptic, too geeky. It works well, but it is a bit similar to PGP - "understand these things, and you'll be able to use the magic to your advantage". Contrast with "click here for more Metallica".

    I do hope these things can be made to work well. The sharing of data between individuals is under attack. Imagine if the telephone had been limited to companies who could pay an intermediary to carry a message. (OK, it isn't quite that bad, but the potential for limiting the evolution of new methods of communication is being held ransom to AOL/Warner and Sony.)

    OK, enough ranting. Bottom line is, warez doesn't change anything, and directories of user-provided data do.

    I think there is a solution to metadata distribution similar to Freenet's method, but without an assumption of being global that keeps the Gnutella problem under control. More to come, maybe.

    -j

  • Back in the Fall of 1969 when the ARPAnet came up for the first time, all networking was P2P. Most networking stayed that way until the WWW technology made a client/server form of networking possible in the middle 1990s. There is absolutely nothing new or unusual about P2P - every basic technology on the Internet was designed to support it from the earliest days.

  • Yet another set of commercial ventures predicts the death of something the use of which they can't figure out how they can charge money for.

    Oh, and I really like how CNet's ``printer friendly'' version of their pages removes the graphics that are associated with the article but leaves the banner ads. Pathetic.


    --

  • As many people have probably said, that's totally unnecessary. You don't need much upload bandwidth to download stuff. Just enough for the occasional ACK packet.

    I always leave uploading on. I just wish that gnapster had a rate limiter built into it. If it did, I'd leave it up most of the time.

  • What's needed is some P2P at the high end, to allow for seamless mirroring and reporting, with the lower end being more hierarchy-based.

    That way (for instance) you ask your local server for a file and it uses a P2P method to retrieve that file across the network of 'big servers', then sends it to you. That way your dial-up connection isn't slowing down the large scale network.
    _____
  • If you're transferring files from a friend and you're a bit bursty and so are they, you end up downloading a lot more slowly than you could in theory. (Imagine that you spend 30 seconds each minute running at half speed and so do they; you only spend (on average) 15 seconds each minute downloading at maximum speed.)

    If you've got a high-bandwidth cache in between you can get a 50% higher throughput. Also, if the file has been cached beforehand (because someone else downloaded it) you can get it at your maximum speed.

    _____
  • -- Don't you hate it when people comment on other people's .sigs??


    Yes, I really, really do


    Just couldn't resist :-)

  • Ah, you are right in that they are different data streams. However, if I open, say, Napster on my cable modem, within minutes there are probably 10-20 users downloading from my box.

    Then, when I try to browse the net, read /. or do other useful things :-) my HTTP requests (upstream) do not get through, or get through very slowly (like, it takes 10 seconds to reach even my local ISP's page). Same goes for request/ACK/whatever packets that are sent, also while downloading! This will slow down your download quite a bit.

    So while you are theoretically correct, it doesn't entirely work that way.
  • These days the attitude seems to be that unless something is a business model then it isn't viable. These P2P services will be around forever now that they exist. People will realize that each individual can pay for a portion of the network-bandwidth-server infrastructure required to maintain the P2P service without requiring a corporation to control it. Of course the people with the least amount of resources (bandwidth for instance) will take more than they give, but I think the system (I guess I'm talking in abstract terms here about some sort of ideal or at least workable P2P service like Napster or Gnutella) will tend to balance itself out. To each as they are able. As broadband connections increase more people will leave their client/server running all the time. I for one haven't seen a problem on the networks, have you? Just because a corp can't create a business model to support some astronomical farce of an IPO doesn't mean something can't be successful.
  • Take a look at mojonation [mojonation.net]. You earn/spend a currency called mojo, backed by your bandwidth/cpu/storage capacity.

    "p2p" (i hate buzzwords) has a bright future!
  • This would be a good topic to discuss with Marc Andreessen, don't you think?

    Yes, I've had that discussion, as I suspect have a lot of /. readers. That's exactly what I was thinking of in writing my message. Andreessen may not have single-handedly revolutionized technology, but he sure started something, despite how much we all might be tempted to say he got lucky, right place at right time, etc. There's probably a bit of truth to both sides of that argument.

    Actually in some respects, I think what Fanning did was more revolutionary: he didn't just put a new user interface on an existing service (Mosaic on WWW), thus making it more usable; he conceived and created (or if you prefer, packaged) a new service. Andreessen+BernersLee==Fanning? ;-)

    I'm just making the point that P2P isn't any different, technologically, than tried and true networking fundamentals, and so the argument that it will fail on technical merit is completely flawed.

    But there is a difference which has a technological symptom: a new and significantly higher demand for a high-bandwidth type of P2P, namely media file exchange. Nothing on the scale of Napster has existed before - an online distributed database in the multi-terabyte range which exists on individual users' home computers, rather than on managed servers, with high-volume data traffic. Napster has caused more trouble for bandwidth management at places like universities than any other service I'm aware of. So although the basic components of the service may be familiar, the emergent behavior of the system is not. So it's not necessarily invalid to argue that it might fail because of technical constraints, not to mention issues like the "tragedy of the commons", although I don't happen to think that'll be the case with P2P "file sharing" in general.

  • Just because some kid slapped a web interface onto a hack of anonymous FTP doesn't suddenly make it a different technology. Just because he made it distributed doesn't make it anything more than simply 'convenient'. Searchable FTP has existed for a long time, also since before the www. Anyone remember the Archie tool? Indexing, and making it transparent is the next obvious step, not some revolutionary break-through.

    If it was so obvious, why didn't someone do it three years ago? Seemingly minor or incremental improvements in the usability or packaging of existing technology can be a breakthrough if the result is that hundreds of thousands of people suddenly become able to do something which they want to be able to do, but couldn't previously.

    I suspect you have a narrow technical definition of what you think constitutes a revolutionary breakthrough. The fact that the recording industry is shaking in its boots right now is proof enough of the revolutionary nature of P2P file exchange. And it's this specific application and incarnation of P2P "technology" that the CNET article is about. Not that I agree with the article itself - I'm simply reacting to your unjustifiably dismissive comment.

    Bits is bits is bits.

    Uh-oh, Nicholas Negroponte is posting on Slashdot now!!!

  • How much is being shared? A whole hell of a lot. Where do you think all the people who "take, take, take" are taking from?

    Here's some stats for my MP3 sharing over HTTP only. This doesn't count what I share on OpenNap [sourceforge.net].

    dwarf:/var/log/apache$ head -1 access.log
    127.0.0.1 - - [22/Oct/2000:06:26:35 -0400] "GET /robots.txt HTTP/1.0" 404 204
    dwarf:/var/log/apache$ tail -1 access.log
    XX.XX.XX.XX - - [27/Oct/2000:10:27:36 -0400] "GET /mp3/Pink_Floyd/ HTTP/1.1" 200 3651
    dwarf:/var/log/apache$ grep -v '^127.0.0.1' access.log | grep '\.mp3' | wc -l
    503

    (I changed the IP address to XX's to protect the identity of the person who made that last request.)

    503 mp3 file transfers (some of which are partials and resumes, of course) in 124 hours -- or about 4 per hour. And that's a very small number compared to the activity I get on OpenNap (which I don't log, currently, but trust me -- it's much more than 4 per hour).

    Those of us who share may be in the minority, but we definitely exist.

  • Ratios... points... offering services to earn virtual currency which you can spend to download information... sounds like a pretty good idea, eh?

    That's what the Mojo Nation [mojonation.com] folks thought, too.

  • If most people are just taking from communities like Napster & the like, how is there any content at all? For these to even work, people need to be giving -- and they are, in copious quantities!

    Yes, MORE people take than give, but as long as some people are giving, these file-sharing systems will continue to flourish.

    Scott

  • As we all know, it doesn't matter that we can freely share things with people we know, or give things away just because we can. No... the Internet... nay the world is all about making the almighty buck! If there is no way to capitalize or expand the market share what's the point? Why bother? Why would anyone in their right mind want to create something and then give it away? These companies (and C|net themselves) are nothing but money-grubbing bastards. If the world had its head on straight they would be the ones who don't have a future....
  • ...that are killing P2P, but its lack of scalability.

    The more people searching through a gnutella-kind-of-P2P network, the more traffic is used by searches and search responses. That's all bandwidth you can't use for file transfers anymore. The bandwidth problem only kicks in because your searches are going through slower links in the P2P chain, making effective searches a real problem.

    There was an article about this [zdnet.com] earlier, which was also posted to Slashdot [slashdot.org].
  • Of course, one of the biggest mistakes they made with gnutella is not putting in some way to block dialup users. There should be some kind of bandwidth checking in the connect code. Maybe have the node being connected to send 50K of random data and time how long it takes to get back the CRC of that data; more than 3 seconds and no connect. That would cut out all the slow users and keep ping times between nodes to a minimum.
    Hmm... maybe I should go hack the gnut code and put up a high bandwidth only net.
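    Something like this, maybe (a rough sketch assuming a plain TCP socket between the two nodes; hypothetical helper, not actual gnut code):

    import os, socket, struct, time, zlib

    PROBE_BYTES = 50 * 1024   # 50K of random data
    MAX_SECONDS = 3.0         # slower than this and we refuse the connection

    def peer_is_fast_enough(conn):
        # Send random data; the peer must echo back its CRC32 within MAX_SECONDS.
        payload = os.urandom(PROBE_BYTES)
        expected = zlib.crc32(payload) & 0xffffffff
        conn.settimeout(MAX_SECONDS)
        start = time.time()
        try:
            conn.sendall(payload)
            reply = conn.recv(4)          # CRC comes back as 4 network-order bytes
        except socket.timeout:
            return False
        if len(reply) != 4 or time.time() - start > MAX_SECONDS:
            return False
        return struct.unpack("!I", reply)[0] == expected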
  • by BilldaCat ( 19181 ) on Thursday October 26, 2000 @04:51AM (#674461) Homepage
    when I suck stuff down, I usually block/disable uploads, because I need that bandwidth.

    I should probably turn it on at night when I'm not using the machine to give back, but I haven't bothered.. there's no penalty if I don't do it, so why should I? I know I'm not alone in that line of thinking, though it may be wrong.
  • That's right up there with 'the Internet is too distributed; it's not centralized enough, and so it's doomed, because it's not commercially viable.'
    and
    'Linux? It's not viable. Free software is not commercially viable'

    Yeah. peer2peer is 'doomed'. Right. It's only going to get better, not worse.

    Damn, though, don't you hate the buzzword? peer2peer? p2p? (that used to mean /point to point/)...... it's just inventive ways of sharing files!

  • do a traceroute sometime and see where you're going -- you may be "connected" to a router in another part of the country... that's what I've seen with speakeasy.net -- their router is located in Seattle, WA, so it's a frame-relay (200ms) jump from a friend's apartment to seattle... ugh.

    I've seen that using some higher-bandwidth programs that require consistent connectivity (vnc) drop like crazy. I believe Darwin networks may be at fault, but it's just as likely that it's the cable modem on my end as well.
  • ... how many people do you know watch a lot of TV but never seem to contribute to TV content? Besides, does all of the 'net have content worth sharing?
  • No, there is no incentive to upload. Why should I waste my time/bandwidth and upload something? What do *I* get out of it? Goodwill to man? Maybe... but even then, I don't go around running happy thinking day to day, "yay, I uploaded a file" more than I smile in triumph, "I can't wait to get home when my download of 50 megs of mp3s is done"

    ---
  • No, you stated that people download more than they upload for laziness and legal reasons. You are now arguing a different point, which I made no statement on.

    ---
  • In a peer-to-peer configuration, both peers serve as both client AND server.

    Not true. One end can be just a server, while the other end need only be a client. Peer-to-peer simply means that 2 machines normally thought of as "clients", e.g., home PCs or workstations, can communicate with each other without the aid of a server machine.

  • Upload and download are different data streams at data-link (modem) level, not at the transport level: TCP is quite chatty and all those ACKs have to go back somehow. If your connection is not to the same computer the upload is going to, ACKs cannot piggyback your packets and have to be sent as stand-alone packets. These packets will have precisely 1 bit of useful information, but still use "minimum packet size" bytes. Quite wasteful!

  • ...you killed Napster, you bastards.

    Or something.
  • The articles on freenet's architecture are a bit vague, but it looks like they do something like a space/time tradeoff to achieve higher performance -- more space is dedicated to data and metadata (by duplicating it across servers) so that the path to any datum is shorter.

    Finding stuff is a different matter, and I suspect part of the solution here is to learn to accept imperfection by design.

    Yes, I see the Freenet design implements a finite TTL on a request. Combined with caching of data, it means the network adapts to more popular data, serving it across fewer hops. So the effective network radius you have to search is limited for popular data, and the TTL limitation puts a hard limit on less popular data searches. This caps the growth of bandwidth usage per user, which is a good thing, but it also means that you can't deterministically find something that actually exists.

    It's anyone's guess to how performance and reliability would be affected by scaling to, say, a good fraction of the current size of the web; I expect there will be some interesting chaotic phenomena that will be uncovered with respect to the precise way parameters such as TTL and cache size are tuned.

    Anyway, my hat's off to the freenet people -- they're in for an interesting ride.
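    For what it's worth, here's a much-simplified toy of the TTL-plus-caching behaviour described above (my own sketch; Freenet's real routing also steers by key closeness, this only shows the popularity/reachability trade-off):

    class Node:
        def __init__(self, name):
            self.name = name
            self.neighbors = []
            self.store = {}        # key -> data; grows as requests pass through

        def request(self, key, ttl):
            if key in self.store:
                return self.store[key]
            if ttl <= 0:
                return None        # the data may exist, but it's beyond our horizon
            for neighbor in self.neighbors:
                data = neighbor.request(key, ttl - 1)
                if data is not None:
                    self.store[key] = data    # cache on the way back
                    return data
            return None

    # a -- b -- c, with the data only on c
    a, b, c = Node("a"), Node("b"), Node("c")
    a.neighbors, b.neighbors = [b], [c]
    c.store["song"] = "...bytes..."
    print(a.request("song", ttl=1))   # None: exists, but the TTL ran out
    print(a.request("song", ttl=2))   # found, and now cached on b and a
    print(a.request("song", ttl=1))   # found instantly from a's own cache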
  • A good lawyer can take down any published set of servers.

    The freenet architecture is interesting because it is even more decentralized and the servers are networked to share metadata. This means that in addition to taking down a number of well known servers, the lawyers will end up in a netherworld where metadata is passed in a highly connected and nondeterministic way.
  • by hey! ( 33014 ) on Thursday October 26, 2000 @05:19AM (#674472) Homepage Journal
    as though bits that traverse through a server somehow take less bandwidth than bits sent from one box directly to another

    The aggregate bandwidth needed for file transfers won't change; it's the bandwidth required for meta-information -- catalogs, searches and search responses, that goes up.

    Has anybody done any theoretical research here? I'd guess that in a P2P network the bandwidth required to carry meta-information would go up O(N^2) -- that is you want to have a network of information distributing nodes that is some fraction of a complete graph. The Napster architecture, while introducing a single point of failure (at least from a legal standpoint), seems closer to optimal from a purely technical standpoint -- it centralizes meta information allowing O(N) growth of query bandwidth in nodes, and decentralizes data transfer.
  • In case you need a refresher:
    http://slashdot.org/articles/00/09/12/1217200.shtml [slashdot.org]

    The problem with gnutella and a lot of P2P is that it assumes all peers are equal. When the primary routing goes through some over-bandwidthed, over-funded .com, peer to peer works okay, but when you're relying on your query to go through some yahoo with a 28.8, it ain't gonna fly too well.

  • My current favorite MP3 sharing program is Audiogalaxy. It has a security and anonymity oriented design, but on the discussion board people are boasting about how many files they are sharing and how many gigs they've shared so far. Contrary to the Gnutella experience, most of these folks seem to be taking advantage of anonymity to share more, rather than less. Of course that could change.

  • by EnderWiggnz ( 39214 ) on Thursday October 26, 2000 @05:00AM (#674475)
    Same Shiite, Different Protocol.

    i remember talking with my father in 1992 about this whole "internet" thing. he thought that no one would be able to make money on it, and that there was no compelling reason for it to be used.

    and then came the web and all hell broke loose in 1994.

    now we've got a different protocol, one that keeps true to the original intent of the internet, and allows "Peer to Peer" sharing.

    geez, the internet has always been peer to peer sharing, this is just allowing us to go back to this philosophy, and allow everyone to truly contribute back, instead of only those with large amounts of cash needed to generate hits.

    so, all of a sudden, we will be back to the model that allows anyone to communicate with anyone else.

    We're taking the power back with P2P. Using the internet for what it's meant to do - communicate, not make a buck...
    tagline

  • In most cases, uploads and downloads don't interfere with each other. They're different data streams. Try it sometime.

    Back in the day when I was running a BBS connected to FidoNet, the preferred protocols for transferring mail and echoes were bidirectional...stuff got sent and received at the same time to minimize long-distance phone bills and maximize a BBS's availability to callers. As long as you weren't using a modem with grossly asymmetric transmit/receive speeds (such as one of USR's HST modems, which ran at speeds up to 16.8 kbps in one direction but only 300 or 450 bps in the other), you'd get decent speeds both ways. There were also bidirectional file-transfer protocols available to callers, such as HSLink. (With me being the leech that I was, though, I usually stuck with ZMODEM. :-) Hell, I even still use ZMODEM occasionally today for the odd task or two.)

    Theoretically, the same ought to hold true over your Internet connection with file transfers today. In practice, though, if you're on a dial-up connection, some modems handle bidirectional traffic better than others. In some (mainly cheaper) modems, not enough processing power is available in the modem's controller to keep up maximum speed both ways. If you're sucking down MP3s/pr0n/warez at 5 kbps and then someone starts sucking files off of your computer, odds are good you'll see at least a slight drop in your download speed. If you're using a winmodem of some kind, it gets even worse as the modem now has to contend with everything else going on in your computer for processor time.

    (Of course, we're all using cable modems or DSL now. :-) These seem to not be affected by this problem as much. About the only time I notice a speed deficiency is if I'm logged into my server from someplace else while it's in the middle of a download...it sometimes takes a second or two for keystrokes from the ssh client to get through. Screen updates, though, are still quick (4x faster than dial-up).)

  • Ok, here was my idea for a whizbang P2P architecture.

    What do we need? Something relatively anonymous. Something relatively stealthy. Something relatively standard, familiar, easy to use.

    So I says to myself FTP. Just hack on some extra functionality that allows pseudo-links to *other* FTP servers. Clients would traverse the filesystem and not know that they were actually getting listings from N servers away (much like Gnutella, but an FTP interface). Hey, why not slap on a new command, say, REVERT, which, when passed with a secret key, makes the FTP server revert into "dumb" normal FTP mode. Great.

    So then I look up the FTP spec. And what do I realize? FTP *already has defined a separation of control and data flow*. FTP *already theoretically supports proxying*. FTP *is* Gnutella effectively. Somebody please read the FTP spec, and implement a server which will transparently do proxying like this (the nested remote filesystem stuff would be nice too - not sure if that is specified by the RFC).

    This has the nice added feature that any law that attempts to strike this down, will have to strike down the FTP protocol...it will then be laughed right out of court.
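    Client-side, the pseudo-link traversal could look roughly like this (a sketch only; the "@ftp://host/path" naming convention is something I'm inventing here, and a real implementation would live in the server):

    from ftplib import FTP
    from urllib.parse import urlparse

    def list_directory(ftp, path):
        # Entries named like "@ftp://otherhost/pub" are treated as pseudo-links:
        # listing them really means listing that directory on the other server.
        result = []
        for entry in ftp.nlst(path):
            if entry.startswith("@ftp://"):
                result.extend(follow_pseudo_link(entry[1:]))
            else:
                result.append(entry)
        return result

    def follow_pseudo_link(url):
        parsed = urlparse(url)
        remote = FTP(parsed.hostname)
        remote.login()                      # anonymous
        try:
            return remote.nlst(parsed.path or "/")
        finally:
            remote.quit()

    # Usage (hypothetical host):
    #   ftp = FTP("ftp.example.org"); ftp.login()
    #   print(list_directory(ftp, "/pub"))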
  • by speek ( 53416 ) on Thursday October 26, 2000 @06:17AM (#674478)
    It depends on how you design your p2p architecture. If you design it like Gnutella, with queries flooded to every connected peer, then yes, your bandwidth usage can go up O(N^2). If, however, you're a bit smarter, and do things like, say, Freenet, then you avoid a lot of that problem. If you want more information about how Freenet works, then go there [sourceforge.net].

    Essentially, the problem comes down to how do you find each other, and how do you find stuff. Finding each other is generally done with centralized services (eg DNS). But, there are other options, including limited multicast, expanding spheres of knowledge (ie you learn about 1 other node, and it tells you the nodes it knows, and they tell you the nodes they know, and so on - this is similar to Freenet). But, once you've found a node to talk to, bandwidth is the same as a non P2P network.

    Finding stuff is a different matter, and I suspect part of the solution here is to learn to accept imperfection by design. No, you can't search everything because that would involve going to every node and querying it, which would be impractical. However, you can spider out through the nearest nodes, and they should be able to point your query in the most promising directions, and you could configure your search to be as far-reaching (and slow) or as near-sighted (and quick) as you like.

    Another point to make is that there is the potential for our bandwidth capabilities to go through the roof in the relatively near future. With fiber and optical switching technology, we could easily see bandwidth essentially being removed as a bottleneck - perhaps in the next 5-10 years.
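    Here's the "spider out through the nearest nodes" search as a toy, with the far-reaching/near-sighted trade-off reduced to a single depth knob (a sketch of the idea only, not Freenet's actual algorithm):

    from collections import deque

    class Peer:
        def __init__(self, files):
            self.files = files
            self.neighbors = []

    def search(start, want, max_depth):
        # Breadth-first over known neighbors: small depth = quick but near-sighted,
        # large depth = far-reaching but slow and chatty.
        seen, queue, hits = {start}, deque([(start, 0)]), []
        while queue:
            peer, depth = queue.popleft()
            hits.extend(f for f in peer.files if want in f)
            if depth < max_depth:
                for n in peer.neighbors:
                    if n not in seen:
                        seen.add(n)
                        queue.append((n, depth + 1))
        return hits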

  • I would leave Napster on all day long if it didn't eat up every ounce of my upload bandwidth (same for Gnutella, but it is a much worse bandwidth hog). This is especially bad because I share the cable modem with my college roommates. Contrary to what people may think, upload bandwidth is somewhat useful, and my roommates do come a knockin' if things slow down. I know what some of you are thinking, but some websites *do* slow down when your upload is saturated. If I could do bandwidth throttling, Napster would be on damn near all the time for me (same goes for Gnutella).

    JOhn
  • I wish more people would realize this. It seems as if the latest interest in P2P networking has been generated by folks like Intel, who seem to want to find a way to make a "new" market out of it. Although it isn't new, I believe the realization of a much more distributed internet (i.e. everyone is a server and a client) is more likely possible now than ever before, as bandwidth to the average household and business continues to grow, in some cases exponentially.
  • by ToLu the Happy Furby ( 63586 ) on Thursday October 26, 2000 @06:20AM (#674481)
    When I use Napster, I only share about half my MP3s (still over 2 GB, and generally my more interesting stuff), because if I share them all it takes abominably long to log in.

    When I use Gnutella, I often don't share at all, because my CPU utilization goes very high when I do, and then I can't listen to the new MP3s I'm getting without skips. (I assume this is due to my computer needing to check every search string that comes through against my list of shared files.)

    Both of these problems are fixable with increased bandwidth and computing power. (Or maybe I just have a buggy version of Gnutella.) I'm very enthusiastic about the possibilities of P2P, and I genuinely try to share as much as possible. While I realize not everyone on Gnutella or Napster is as idealistic, I have a feeling the percentage who are is a good bit higher than the 2% (or whatever) reported. Of course you can't blame CNet for taking the "corporate whore" view of human nature, but in my experience people like to share with each other, and will especially do so whenever it is easy and doesn't have noticeable drawbacks.
  • Basically, it comes down to 2 points:
    1. Napster / Scour are easy to use, by anyone, even complete cluebie luser college kids. They set sharing on by default. Most users are too lazy, don't care, or aren't technically clueful enough to bother changing this.
    2. P2P file sharing systems are less developed, and harder to use; and most of their users are fairly clued up, as a clueless person isn't really going to see the benefit of P2P file sharing over napster. The version of Gnutella I have doesn't even allow uploads (it is quite old though), and AFAIAA most P2P systems don't set sharing on by default.
    As a result, there will always be fewer files accessible via P2P systems until someone releases a client that is easy to use and enables sharing by default.
    Without large numbers of college kids / AOLers / etc. sharing their entire song collections, it's never going to reach the level of success Napster has.
  • The problem with gnutella is not the bandwidth, as the article is saying. Granted, modem users don't have enough bandwidth, but even if you gave every gnutella user a T1 connection, the problem would lie in propagating information up and down the network.

    I believe in a study done on internet routing, about 90% of the routing updates were useless and wasting the bandwidth.

    So in a very superficial view, the problem would seem to be a bandwidth problem, but the bandwidth problem is caused by the stupid routing information being passed around, and this is a problem with the internet as a whole, not just p2p applications.
  • Aggregate bandwidth requirements would change in an Akamai-like caching server system, at least if not all content is transferred with equal probability, because if clients contact the closest (in terms of links) caching server and the content is still available there, it can deliver it over that shorter distance rather than heading to the server.

    If many clients share the same caching server/proxy, and they have similar tastes with regard to what they download, the savings in byte-meters (or whatever unit you choose to use) could be quite significant.

  • While it's true that the local aggregate bandwidth will be unaffected, any P2P protocol with a concept of distance between peers, and file cloning will use less aggregate bandwidth from the perspective of the entire internet.

    I'm sure someone else has coined a clever phrase for this already, but let's refer to this notion of bandwidth * hops in units of MBit-hops (akin to a kilowatt hour).

    If I download a file from my ISP, am I not using fewer megabit hops than if I download the same file from Outer Mongolia? Of course I am. This reduces the overall congestion between my ISP and Outer Mongolia, freeing those megabit hops to either be wasted, or used by someone else.
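    A quick worked example with made-up numbers:

    file_megabits = 800          # a ~100 MB file
    print(file_megabits * 3)     # 2400 megabit-hops from a mirror at my ISP (3 hops away)
    print(file_megabits * 18)    # 14400 megabit-hops from Outer Mongolia (18 hops away)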

    -
  • I just searched this discussion, and no matches for virus. I know it's not entirely on topic, but while we are at it...

    ...what do people think about p2p and malicious code? I mean, just because it says photoshop.exe doesn't mean it is - or that it hasn't been silkroped to launch the next assault on ebay.

    Yeah, the article is bogus - AND it took three of the fucking goons to write it. Total conjecture with some quotes from Forrester. What Husserl would have called "armchair reflection."

    But comparing star versus distributed, bandwidth, reliability and civil rights aside - what about security and vulnerability?

    I mean M$ shipped outlook with vb processing turned on by default - isn't this similar?

    Just curious....

    -Sleen
  • The main problem is sharing just one upload stream on a 56k connection makes things unusable. On a DSL/Cable connection the load is practically unnoticeable, and I think sharing becomes a lot easier.
    --
    DigitalContent PAC [weblogs.com]
  • Intel did a crap job [oreillynet.com] of organizing the working group. Instead of a decentralized, standards-based organization, Intel tried to make it an old-line "scratch our back" hierarchy. P2P developers rejected it.
    --
    DigitalContent PAC [weblogs.com]
  • The P2P system in my sig encourages users to share by use of a distributed karma system (a little bit like a certain web news site ;)

    The release this Sunday will have file sharing enabled.

    0.02,
    Mike.
  • I remember when I was into Hotline (www.hotlinesw.com) there used to be lots of free servers; then folks figured out that you could make people go to a web page and click through on a bunch of ads to get the password for the account that would allow downloading. Most of the big sites went to that model quickly.
    ---
  • Let me see if I've got this straight -

    P2P won't work because the network and the users can't support it at a traffic (hardware/bandwidth) level. This, at a time when more people are buying faster hardware and broadband than ever before, and access, even over POTS, is moving towards the "ludicrously cheap" end of the monthly utility spectrum?

    Granted, most users are selfish assholes who don't, for whatever reason, bother to share the files they have "shared" from other people. The motivations for this behavior elude me - my files download into the same directory that I share by default.

    With the looming goodness of wireless broadband, FTTC (fiber to the curb), and ever-more-powerful personal computers and handheld devices, the whole Chicken Little argument comes down in tatters.

    Rafe

    V^^^^V
  • There are constraining factors on p2p, but these will actually fade away as more people get broadband.

    I think it's also more than that. These reports say that it's too slow, it's too hard, it's too -insert problem-, but they don't realize that the architecture and its implementation are still immature and under heavy development. In the end, the implementation is what makes the architecture viable, and considering current p2p implementations are still in the 0.x release phases (freenet, mojonation, etc.), I think it's a little premature to declare them dead.

    -----
    "People who bite the hand that feeds them usually lick the boot that kicks them"
  • most of the people I see out there are taking, not contributing to the Gnutella and the like

    For every take there is a contribute :-)

  • I like to share also, but I'm also paranoid. (When will the Napster exploits begin?! When will jack-booted FCC thugs come breaking down my door?!) Besides general paranoia, it seemed like I had strange problems with my cable modem when a lot of people started downloading my files.

    So now my napster stays off. Of course, I don't download very much either. Most of my collection came from alt.binaries.blah.blah.mp3.whatever over a 28 and 56K dialup connection. (And what a slow torturous process that was!)

  • My favorite option in slashdot is being able to turn off sigs :-)

  • The whole P2P thing strikes me as going the way of Usenet. Originally just for discussions, as soon as it became practical to stick binaries on it then NNTP became the protocol of choice for warez, pr0n, virii etc. There is a feeling of anonymity (although true anonymity is hard if not impossible to attain) and there's no real "publisher" of the data.

    Almost all regular users pull far more data than they push - I'd guess that a good majority of binary Usenet users never post anything. Lately, the whole thing has become spam central, and the signal-to-noise ratio is terrible.

    Gnutella et al seem to be similar - no regulation (even less than the web), little or no accountability, and more consumers than producers. I'm not so worried about there being too little bandwidth - more that spam and other "noise" will increase to such a level that the system will become unusable, and also that the "powers that be" will find a way to regulate it. It's unfair, but over here in the UK ISPs like Demon are being successfully sued for content on their news hosts - the same could happen to ISPs whose users put illegal material on their P2P servers.

    Just my 1500 lira's worth...
  • Okay, essentially everyone here is going to agree that this CNET article is mostly a bunch of poorly-considered crap. Note that the only "P2P founder" of the four that they mention who agreed to provide quotes for the article said "Well, hopefully it will accelerate the democratization of the media, so there are more Slashdots and fewer CNETs."

    I have a suggestion. Should we call writing an article like this, one that is certain to bring thousands of hits from the Slashdot faithful, followed by inevitable articles that must explain the facts as we almost all agree they are, an example of "karma pimping"?

    If I couldn't break it with a hammer and a blowtorch, you shouldn't be able to patent it.

  • Seriously, come on guys. A couple months ago all the pundits were telling us how great P2P was going to be; now (at least some of them) are telling us why it won't work. RIGHT NOW ALL THEY ARE TALKING ABOUT IS VAPORWARE!

    Why can't the pundits wait until they have something to talk about before they start talking?

    Oh wait, it's because they are pundits.
  • Think of it: common modems are 56K download and only 33K upload. In my country, the standard ADSL offering is 512K download and 128K upload.

    These things do favor a server-centric internet over peer-to-peer connections. The common user is supposed to be a content consumer more than a content producer ( well, honestly this is quite true for 99% of users - including me ).

  • Oh, I don't know about that.

    I've been hearing for years that P2P is bad, evil, a waste of time and bandwidth, etc. This assertion, of course, is nonsense.

    I'm wondering why CNET goes to the trouble to invest so much time and energy in such an article. Are they beholden to some corporate interest that would prefer we only use their "real" servers?

    Just thinking aloud...
  • In a peer-to-peer network, someone must be giving up those bytes or else there would be no bytes to take ;-)

    I understand the client-server model of downloaders never giving back. But when the network is P2P, some other user is providing the bytes to download and is therefore uploading.

  • I'm disturbed by the gratuitous use of the word 'socialist' in this article. P2P is constantly being characterized as having 'socialist roots' and compared to Linux (another 'socialist' movement). As a real live socialist (in the US!), I have to say that it's nice to get credit for such a great movement and great technology, but I don't think it's really deserved. Since when are the ideals of individual freedom and a distaste for corporate rule over communication exclusively socialist? Don't libertarians and liberal capitalists also share these ideals? Libertarians hate it when I say this, but libertarian philosophy (notice the small 'l', this is not the same as the Libertarian political party) and socialism have a lot in common. Most socialists support strong civil liberties (this is not true of the statist-socialists -- Stalinism falls into that category) as do libertarians. The distinction is that socialism has an economic theory as well.

    I think the one thing that free/open source developers have in common (myself included) is that individuals are capable of producing things of value without a profit motive. They can even enjoy producing without making a profit. This is not socialism (although socialists would agree that it is good). People from nearly every economic/political background are capable of engaging in these kinds of activities. I wish that the media would stop calling every non-corporate movement socialism and stop using the word as a scare tactic to keep people away from the scary 'socialist' technologies.

  • I've had to pull the plug on my housemate's computer when he's left Gnutella running while he was at work. We have cable modem and things were slowing down so bad my e-mail kept timing out, or short text-only messages took 30 seconds to download. At first I blamed the ISP, but when I looked at the switch I could see all sorts of activity on his port. I made sure he wasn't doing anything critical and dumped it. That solved my problems that day... It wasn't just his bandwidth being eaten up, but that of the whole household, and any of our neighbors using cable modem, too!

    snakelady
  • The Napster architecture, while introducing a single point of failure (at least from a legal standpoint)

    Napster's centralized server is not [napigator.com] a centralized point of failure thanks to OpenNap [sourceforge.net].

  • A good lawyer can take down any published set of servers.

    The game then becomes whack-a-mole [8m.com]. If the server software is freely available (free as in beer, even!), it _will_ be in warez archives, and other servers _will_ pop up. Think Hotline [bigredh.com].

  • Oh yeah, that's it. That's what people said about the internet.
  • I do too, but more because I have an outgoing quota. One easy way to control p2p would be limited outgoing connections, with really high rates!
  • 1) THAN is in the condition of the IF, THEN is the result of the condition:

    IF (a > [THAN] b) THEN ...
    IF (me SOFTER [THAN] him) THEN ...


    2) CGI has no boolean operators. If you SUBMIT you can't do OR -> submittOR is invalid

    I hope this helps 8)
  • Yes, you are right - however, when @Home caps you (as they do here) to 15k/sec upload, you are screwed completely when someone starts to download at that speed on your cable modem.

    Sorry buddy, it's not a myth.
  • If you read the ACLU brief in Napster, you will see that what RIAA is asking for is to kill P2P.

    They will only permit P2P if there is a mechanism that checks a user's authorization before permitting a P2P transfer.

  • The only way to get decent download speeds (especially if you want Photoshop 6) is to shell out the $80/year and get a decent news-server subscription. Then, you can download to your heart's content -- you just have to wait for someone to post what you want and hope that all the parts are there. (SFV, right Rosie?)

    --
  • by don_carnage ( 145494 ) on Thursday October 26, 2000 @04:56AM (#674512) Homepage
    It's been said before and I'll say it again: the best way to get this whole deal to work is with ratios. Just like in the good old BBS days.

    Oh yeah...then we'll just have some jackass uploading Britney Spears mp3's renamed just to get download points...*sigh*

    --

  • Has anybody done any theoretical research here?

    Ben Houston's P2P Idea Page [exocortex.org]

  • I think people are only concerning themselves with finding and taking what they need and then quickly logging off. Consequently, the files they had set to share are only available for a short time... I do this myself.
  • Sadly, I do agree with you. File sharing (peer to peer) will not survive very long if people only use it when they want something. Some ex-girlfriends out there might also agree with this logic. Basically this technology will only survive if people stop being greedy and start distributing. But then there is human nature :(
  • That was a sad day when, after a few minutes, I realized everything was predominantly banner-fied. This is unfortunately the down side to distributed underground files. It's a seller's market, so to speak.

    Slashdot knows that if you can't get the content of their site, you won't visit it, so they give you an option strategically placed at the top, and it's a good system. Those that are interested click, and vice versa.

    The New York Times wants you to register, as well as have ads, 'cause they're the NYT and think their content is so much more valuable that they can get your valuable demographics. I think that's fair.

    P2P file sharing (no matter what its form) is going to be by nature cut-throat if it can be. FTP sites and Hotline allowed for displaying of goods contingent on you performing something. Napster has no such mechanism, except that the other person just might not be letting you download from them or they are firewalled. IRC has a little more politics: sometimes things can be first come first served, or more accessible through who-you-know, or just as cut-throat as anywhere.

    So, this I'm sure describes every facet of underground -- drugs, prostitution, and yes illegal intellectual property.

    I think it'd be interesting to see how close the mindset of warez leecher and a prostitute are.

    ----

  • The area that I do question is how much is actually shared - most of the people I see out there are taking, not contributing to the Gnutella and the like.

    For my own part, I think there is a certain feeling of "it's OK if I take, but if I share, I'll be caught." It's the difference between finding a $10 bill on the sidewalk and intentionally shortchanging someone. The guilt level is oh so much less.

  • It all depends on the application.
    Peer-to-peer works well in some instances, and star networks (server-based) work well in others. Just because more people see star networks working more often than peer-to-peer doesn't mean they are going to die.
    Two examples:
    1. Directly downloading a file from a friend. Peer-to-peer is by far the fastest way. The server would just be a middle man slowing you down.
    2. An FPS game would be incredibly laggy on a peer-to-peer network because of the size of the overhead. The star network is the better choice here.
    If you think about it, the internet is kinda both a star and a p2p network. There is no 'one central server', but a Peer-to-peer network of servers. So the P2P type of connection is going to die? I don't think so...


    -- Don't you hate it when people comment on other people's .sigs??
  • by rainbowfyre ( 175300 ) on Thursday October 26, 2000 @04:54AM (#674523)
    I think that a lot of people don't share on Gnutella because they honestly don't know how. Once they do, they never really get around to it anyway, because the current setup works for them.

    Napster, and its near equals like Scour, all have sharing set up by default, and they both encourage you to stay online even if you're not using the program. Yesterday, on Scour, a whole bunch of people figured out that I had some somewhat rare anime videos. Instead of logging out when I was done, I just sent people messages that I wouldn't be there to monitor the transfers, and went to sleep. I think this happens more often than people think.

    People love sharing; it makes them feel generous. However, it CAN'T be difficult to do. In Gnutella, it is.

    -Rainbowfyre
  • by Luminous ( 192747 ) on Thursday October 26, 2000 @05:21AM (#674525) Journal
    It seems to me it is becoming more and more common for tech writers to proclaim the death of one thing or another, even when it isn't true. Content, the web, desktop computers, and mp3's have all at one time or another been said to have died, yet as far as I can tell, all are still doing quite well.

    P2P has just scratched the surface. To say it is dead before it even gets out of the starting gate is a level of eagerness that surpasses morbidity.

    There are constraining factors on p2p, but these will actually fade away as more people get broadband. Sharing will become more prevalent when it is made easy and has an obvious level of security (like Napster, where you choose which folder others get access to). Also, as soon as it is decided what can and cannot be shared, that will open things up. I know I get a bit leery when I see people downloading my Juice Newton tunes, wondering if it is actually Juice's lawyers gearing up to sue me.

    P2P may not be the next killer app, but it will become a mainstay of the internet like ftp. So let's stop paying attention to doomsayers who are just trying to be seen as prophets of the internet through Kassandra-like proclamations.

  • many believe that business adoption is necessary for the peer-to-peer concept to be accepted by the mainstream masses, as well as to overcome today's technological barriers.

    Who's "many"?

    Sounds like a biz journalist looking for the Next Big B2B Thing and, coming up empty, bitching about it. Last time I checked Napster was still going strong. If 30M users aren't "the mainstream masses," um, who is?

  • Ratios are practical only when they can be verified and human-approved in a timely fashion. With a BBS this was easy, since transfer speeds were very low and we had fancy-shmancy DOS utils to test/scan/repack/sort the archives. The common file-uploading BBS user was also expected to know a thing or two about PCs, unlike the flood of MP3 lusers who think that zipping their music to save 12 KB is a "leet thing". These two dramatic changes have driven the traditional ratio-based currency system into the flaming pits of Hell: the typical MP3-serving geek doesn't have the time to listen to each and every uploaded MP3 to check its validity and then give download credits to the sender. It's too much hassle for next to nothing.

    However, the concept of "leasing" disk space/bandwidth as in MojoNation seems promising. When people have to pay (even if it's a microscopic fee) to store or transfer files, they'll usually think twice about wasting that precious bandwidth on mindless filler; a toy sketch of the idea follows below. Of course you'll always run into a rich idiot with time to waste and people to piss off, but that's beside the point.
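    To make the credit idea concrete, here is a minimal, hypothetical sketch in Python. The class name, rates, and starting balance are all invented for illustration -- this is not anything MojoNation (or a BBS ratio door) actually implemented.

        # Hypothetical ratio/credit ledger: uploads earn credits, downloads
        # spend them, so pure leeching eventually stalls.
        class CreditLedger:
            def __init__(self, upload_rate=1.0, download_cost=1.0, starting_credit=10.0):
                self.balances = {}                   # peer id -> credit balance
                self.upload_rate = upload_rate       # credits earned per MB served
                self.download_cost = download_cost   # credits spent per MB fetched
                self.starting_credit = starting_credit

            def balance(self, peer):
                return self.balances.setdefault(peer, self.starting_credit)

            def record_upload(self, peer, megabytes):
                """Peer served data to someone else: credit their account."""
                self.balances[peer] = self.balance(peer) + megabytes * self.upload_rate

            def try_download(self, peer, megabytes):
                """Peer wants to fetch data: allow it only if they can pay."""
                cost = megabytes * self.download_cost
                if self.balance(peer) < cost:
                    return False                     # not enough credit -- go share something
                self.balances[peer] -= cost
                return True

        ledger = CreditLedger()
        ledger.record_upload("alice", 50)            # alice shared 50 MB
        print(ledger.try_download("alice", 40))      # True: her uploads earned credit
        print(ledger.try_download("bob", 40))        # False: bob only has the starting 10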
  • One of the big (well, big to me anyway) issues with P2P networking is the number of folks like me whose primary access to the 'net is through their corporate firewall. P2P software is not nearly as useful to me, and others like me, because we plain and simple can't use it -- our firewalls block pretty much any port other than those used for "business". Right now, I can use HTTP, FTP get, and that's about it. And I can well imagine a day when even those will go away.
  • by maddogsparky ( 202296 ) on Thursday October 26, 2000 @05:09AM (#674531)
    I look at P2P as having a ripple effect. Like a ripple in a puddle, each of the water molecules only has an effect on the molecules near to it. It has very little direct influence on molecules far away. However, the laws governing propagation transfer the effect through all the intervening molecules and do affect molecules farther away.

    Okay, enough of the analogy! The point is that long-distance bandwidth (influence) is limited. However, short-distance bandwidth to a limited number of peers is not a limiting factor. So peers only need to look in their local "neighborhood". Since each "peer" has a slightly different "neighborhood",

    drum roll please....

    the information on the P2P network will propagate regardless of bandwidth restrictions on long-range connections.

    Obvious to anyone who understands how news servers work, but apparently not to CNET. (A toy sketch of the idea follows below.)
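    Purely as illustration, here is that neighborhood-by-neighborhood propagation as a toy Python flood. The topology, TTL, and function names are invented; this is flooding in the Gnutella/news-feed style, not any particular product's protocol.

        # Toy flood/gossip propagation: each node talks only to its neighbors,
        # yet a message still reaches the whole connected network hop by hop.
        from collections import deque

        def propagate(neighbors, origin, ttl=7):
            """Return the set of nodes a message reaches, given per-node
            neighbor lists and a time-to-live (hop limit)."""
            reached = {origin}
            queue = deque([(origin, ttl)])
            while queue:
                node, hops = queue.popleft()
                if hops == 0:
                    continue                        # hop limit hit; stop forwarding here
                for peer in neighbors.get(node, []):
                    if peer not in reached:
                        reached.add(peer)
                        queue.append((peer, hops - 1))
            return reached

        # A small chain of "neighborhoods": A only knows B, B knows A and C, ...
        topology = {
            "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"],
        }
        print(propagate(topology, "A"))             # reaches all five nodes
        print(propagate(topology, "A", ttl=2))      # only reaches A, B, and C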

  • People are downloading more than uploading for two reasons:

    1. They are lazy.
    2. They are afraid.

    The penalty for possessing copyrighted material is much less than that for distributing it.

    --

  • The area that I do question is how much is actually shared - most of the people I see out there are taking, not contributing to the Gnutella and the like.

    Of course; after all, most people can't act as a server, even if they have cable or DSL. That's why most of the ISPs offering those two use asymmetric connections (upload much slower than download). That way, users are driven away from acting as any kind of server, but are more than happy to download files and connect to multiplayer games as clients.

    From what I've heard, though, Covad uses restricted SDSL. That's nice; however, it's hard to find a reliable connection like that over here in Verizonland. I've tried to run a Q3 server on my 640/90 kbps down/up DSL connection; it wasn't pretty. My friend kept getting booted off for no reason, and pings were upwards of 300 for the clients.

  • by jabber01 ( 225154 ) on Thursday October 26, 2000 @05:45AM (#674543)
    First off, I DO understand the concept behind Napster, Gnutella and the like...

    However, the whole idea that P2P is at all different than server-to-server is ridiculous. Just TRY to set up a P2P connection on the net without going through an ISP. If you can, then you ARE an ISP. You are a 'server' - whether you have clients or not is irrelevant. Even major corporations today have to go through an ISP for their connection to the backbones. My little workstation has to make just as many hops to get to MAE West as Sony's data center does.

    There is no technical difference between Gnutella and a couple of buddies running anonymous FTP servers on their home machines. There is no technical difference between that and IRC - except for volume of bits. Bits is bits is bits. The difference, the ONLY difference, is that there isn't a corporation extracting an additional toll on the data that's transmitted. Therein lies the 'problem' with P2P.

    If Gnutella and Napster were used to share vacation photos, NOBODY would care. ISPs might jack up their rates based on how much pipe you use, but that's it. If the data transferred wasn't (arguably) someone's 'intellectual property', this would not even be an issue.

    People have been running private FTP servers in a P2P fashion since before the WWW made server-to-server the de facto mode of operation. Before ISPs got on the bandwagon, it was all workstation to workstation, account to account, peer to peer.

    Just because some kid slapped a web interface onto a hack of anonymous FTP doesn't suddenly make it a different technology. Just because he made it distributed doesn't make it anything more than simply 'convenient'. Searchable FTP has existed for a long time, also since before the WWW. Anyone remember the Archie tool? Indexing and making it transparent is the next obvious step, not some revolutionary breakthrough.

    P2P is nothing new, and it is nothing 'different' than what has always been done. Servers talk to each other as 'peers' too, don't they?

    Just because a bunch of corporate types label the same technology in two different ways, depending on whether they get a cut of the profits or not, does not make one way doomed and the other saved. Just because someone calls this 'piracy' and that 'a stable business model' does not make the two ways into different technologies.

    P2P, S2S, B2B... It's all the same technology. It's the same protocols and algorithms. It's all the same bits. The difference is only in who is in CONTROL of THE DATA. He who controls the INFORMATION, controls the Universe.

    As for P2P 'failing' due to low bandwidth at the 'local loop', well, that's just a hot, steaming pile of BS. Ye Olde Bulletin Board Systems (the ORIGINAL P2P networks) thrived on 2400 baud... They thrived even more on 9600; then, when 14.4k came, the Internet had started to mature and began to offer more 'value', farther reach, and more neat stuff. But the BBSes didn't 'fail' -- not due to poor performance or inequitable sharing of files within the communities they supported. In fact, the only times BBSes were put out of business (except by their owners' personal choice), it was due to... (drum roll) PIRACY and kiddie porn.

    The REAL jabber has the /. user id: 13196

  • It's one that people have been making [slashdot.org] about Gnutella for a while (where there is in fact a lot of overhead), but Gnutella's hardly the best example.

    If you haven't looked at freenet [sourceforge.net] yet, then do so. Not only is it peer-to-peer, but it's anonymous, and it's working TODAY. There are smart folks developing it, and they're being very careful not to make the same mistakes Gnutella did.
  • Over and over, we hear journalists pick up on a technical truth -- the fact that P2P networks such as Gnutella are slow because their design limits practical size, that most people are taking but not giving, etc. -- and say, "look! it doesn't work!" Meanwhile, the people who designed these networks in the first place (and many others) are busy finding ways to make the technology work. That's the spirit of this thing -- "Napster is a legal problem? OK, we'll make a decentralized network. The new network protocol is slow? OK, here's one that works..."

    We are just in the time between the identification of the problem and the solution. Expect to see this one figured out.
    -----------

"Here's something to think about: How come you never see a headline like `Psychic Wins Lottery.'" -- Comedian Jay Leno
