Death of the P2P net Predicted! Film at 11!
87C751 writes "Cnet has a preachy, whiny piece bemoaning the peer to peer "phenomenon" and its lack of commercialization potential. The humor comes when they claim that bandwidth limitations will ultimately doom P2P (as though bits that pass through a server somehow take less bandwidth than bits sent from one box directly to another). " Alright, I'm a little softer then the submittor, although I agree with some points. The area that I do question is how much is actually shared - most of the people I see out there are taking, not contributing to Gnutella and the like.
Re:Sounds familiar (Score:1)
can you break the wall down.
Big business running rampant,
just look what they have found.
The Internet, the brave new world
the thing they most despise.
To take control they must embrace
if not they lose their size.
There's nothing less exciting
to the people in the world,
than the prospect of the future
run by the business whorl.
We wish we had a say
in what our life's about.
Instead we're getting paid
to shut our goddamned mouth!
Why, oh why do we
accept the current greed.
Where and how can we
grow out and up in deeds.
We must find ways to be
more than another cog
within machinery
of business founding fog.
So gather round, come near
and understand our plight.
I'm asking all that hear
to stand up and to fight.
Don't let them take away
the things we call our rights
It's time to bring them down!
The walls come down tonight!
-The Anonymous Poet
Re:Server-centric internet connectivity (Score:1)
Maybe, but the existence of per minute call charges in many countries puts a bigger hole in a P2P network model than asym. network bandwidth.
Asymmetric connections (Score:2)
At my home, upload speeds are limited to 112kbps (about 14kB/s). It isn't terrible, though uploading a single MP3 can bog down your downstream rate too (since the two-way handshaking that goes on takes up a small but significant amount of the TX/RX time).
--
Re:Aggregate bandwidth (Score:1)
I'm working on it, both analytically and in simulation. Gnutella requires O(N) bandwidth at each host, or O(N^2) overall. Napster requires O(1) bandwidth at each host, or O(N) overall. The single point of failure indeed buys better network scaling.
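Here's the flavor of that back-of-the-envelope comparison, in case anyone wants to play with the numbers (a rough Python sketch, not my actual simulation; the query rate and network sizes are made up):

# Rough sketch (made-up numbers): per-host query traffic when every query is
# flooded to all N hosts (Gnutella-style, ignoring TTL) versus sent to a
# single index host (Napster-style).
def per_host_load(n_hosts, queries_per_host):
    total_queries = n_hosts * queries_per_host
    flooded   = total_queries        # every host sees every query: O(N) per host
    ordinary  = queries_per_host     # a Napster-style host only sends its own: O(1)
    index_box = total_queries        # ...but the central index eats all of them
    return flooded, ordinary, index_box

for n in (1000, 10000, 100000):
    f, o, i = per_host_load(n, queries_per_host=10)
    print("N=%d: flooded host handles %d, ordinary host %d, index server %d" % (n, f, o, i))

The index server's load is the O(N) that has to be engineered (or paid) for; in a flooded design every host pays it instead.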
I'd guess that in a P2P network the bandwidth required to carry meta-information would go up O(N^2)
That depends on the design. Not all designs are as bad as Gnutella. A hierarchical index can provide scalability without being entirely centralized. Caching can probabilistically reduce traffic. Don't assume that just because Gnutella is bad, P2P as a whole is bad.
The Napster architecture, while introducing a single point of failure (at least from a legal standpoint)
Napster's architecture has a single point of failure from reliability and scalability standpoints as well. Illegality is not their only weakness.
centralizes meta information allowing O(N) growth of query bandwidth in nodes, and decentralizes data transfer
In that respect, Napster is directly analogous to how search engines on the web work. Napster has the same scalability problems as search engines, too: there are just too many documents for any one central point to store all the meta information.
Yes, there is a better way. I'll publish it once I have a working prototype.
Re:Aggregate bandwidth (Score:1)
Akamai reduces the amount of wide-area bandwidth used for retrieving content. It doesn't reduce the wide-area bandwidth or server load consumed by searches/lookups and meta-information. The original post in this thread was talking about how much bandwidth is consumed by meta-information, which Akamai does nothing to alleviate.
Heard something like this in college... (Score:1)
I guess IBM and the Linux community don't have the sense to quit.
Misfit
Substitute USENET or IRC for "P2P" (Score:1)
More ancient readers on slashdot were astute enough to point out the similarity between current P2P file sharing and things like Archie.
I find it pathetic how these authors seem to have little sense of history about how certain internet applications became popular.
-Dean
Re:sycophants (Score:1)
Re:It's not the bandwidth limitations... (Score:1)
To go one step further: it isn't the bandwidth and it isn't lack of scalability, it is vulnerability to DoS attacks while the network remains small. The reason I say scalability is not an issue is that if the packet TTL is properly handled, then a network of 10 billion nodes is no more congested than a network of 10 thousand nodes. That is because under reasonable assumptions (TTL = 7, 3 or 4 connections per node) a packet won't see more than 10 thousand nodes, no matter how large the entire network is.
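If you want to check the arithmetic, the reach cap is just a geometric sum: d neighbors at the first hop, and every node after that forwards to d-1 others. A quick sketch (same assumptions as above):

# Upper bound on the nodes a flooded packet can reach: d neighbors at hop 1,
# then (d-1) forwards per node for each remaining hop.
def max_nodes_reached(d, ttl):
    return sum(d * (d - 1) ** (hop - 1) for hop in range(1, ttl + 1))

print(max_nodes_reached(3, 7))   # 381
print(max_nodes_reached(4, 7))   # 4372

Both figures are comfortably under 10 thousand, which is where the "no more congested" claim comes from.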
Given fixed resources for malicious behavior, it doesn't take a rocket scientist to see that the potential for disruption is diminished as the network gets larger. Conversely, if pinhead journalists can discourage enough people from trying things like gnutella, that helps to limit the size of the network, make it more vulnerable to attack, and serve as a self-fulfilling prophecy.
Re:I admit, I do it.. (Score:1)
This sort of thing will do much better once there are some widespread industry standards on bandwidth shaping. It would be great if there was a way to keep a certain portion of your pipe open for critical traffic, and let the rest be used for whatever. Unfortunately that requires the ISP to coordinate and cooperate with the end user. Yeah, that'll happen.
Re:Ratios... (Score:1)
P2P/Client-Server...What's the difference? (Score:1)
If anyone looked at the graphic in the article, the method they show describing P2P is something akin to using the Metacrawler search to search through many search engines (with the distinction that there are still central databases to search, instead of asking all 4 billion hypothetical systems what they've got). Why? Probably because the Internet was created as a peer-to-peer system. From the lowliest PC-XT running DOS-based software all the way up to the Sun E10000s, they're all "equal" on the 'net; if it can be connected, it can provide content (given that it isn't preempted by some silly Internet Content Provider's rules), yet the peer containing the content is the server, and the peer requesting the content is the client.
One last thought for Hemos: A bit that gets taken from Gnutella, or its cousins, is a bit that was contributed to it.
Haven't you noticed slowdown?!? (Score:1)
Well, I don't know if you have a shared connection (like a cable modem), but I sure as shit saw the bandwidth going away as more and more people started using Napster.
The whole model of "overselling" your bandwidth (as an ISP) because most people weren't using it all the time starts to break if the people you are selling it to are using it all the time. Just as dial-in ISPs had to rework their rules of thumb for how many dial-in lines they needed per user because people were spending more time online.
Napster (and other p2p apps) change the game because they make it easy for people to use all of their bandwidth. Prior to them, you had a much harder time finding sites to leech from (well, maybe not you but certainly the average Internet user). Now, it's easy.
Hell, one time I was running Gnutella (and actually sharing my files) until people in my office started complaining that their requests (to websites) were timing out, and that the net seemed really slow. I killed Gnutella, and pow, everything was zippy again.
That scenario can happen all over the net. Popular p2p apps flat out consume more bandwidth.
Jordan
Napster is a serious point of failure (Score:1)
Life seems not to have changed much after the lawyers' attack. There's proscribed data out there, somewhere, on a constantly changing set of servers used by a bunch of 14 year olds (whether I'm referring to physical or mental age is left as an exercise for the reader). Boom. Napster is relegated to the category inhabited by Usenet 7 years ago and becomes an annoyance when people like me bring it up ("remember when you could post to alt.sex.bondage and not lose your important mail in the hellstorm of spam? I used to carry tapes fulla newsgroups up hill for miles, covered in snow...").
You actually highlight the difference - published (accessible via a known node) vs. unpublished (find it, if you can). And this is really what the current battles are about.
Napster has a single point of failure. Take out the company of the same name, and most of the work-alikes will run away and hide. The few that don't will be attacked, with legal precedent behind the attackers.
Freenet has a chance, but it is still too hard to use, too cryptic, too geeky. It works well, but it is a bit similar to PGP - "understand these things, and you'll be able to use the magic to your advantage". Contrast with "click here for more Metallica".
I do hope these things can be made to work well. The sharing of data between individuals is under attack. Imagine if the telephone had been limited to companies who could pay an intermediary to carry a message. (OK, it isn't quite that bad, but the evolution of new methods of communication is being held ransom to AOL/Warner and Sony.)
OK, enough ranting. Bottom line is, warez doesn't change anything, and directories of user-provided data do.
I think there is a solution to metadata distribution similar to Freenet's method, but without the assumption of being global, which keeps the Gnutella problem under control. More to come, maybe.
-j
P2P was the original application of the Internet (Score:1)
When the Internet came up for the first time, all networking was P2P. Most networking stayed that way until WWW technology made a client/server form of networking possible in the mid-1990s. There is absolutely nothing new or unusual about P2P - every basic technology on the Internet was designed to support it from the earliest days.
Ho Hum... (Score:2)
Yet another set of commercial ventures predicts the death of something they can't figure out how to charge money for.
Oh, and I really like how CNet's "printer friendly" version of their pages removes the graphics that are associated with the article but leaves the banner ads. Pathetic.
--
Re:I admit, I do it.. (Score:1)
As many people have probably said, that's totally unnecessary. You don't need much upload bandwidth to download stuff. Just enough for the occasional ACK packet.
I always leave uploading on. I just wish that gnapster had a rate limiter built into it. If it did, I'd leave it up most of the time.
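Something as simple as a token bucket would do it. This isn't gnapster code, just a sketch of the kind of limiter I mean (the 4 KB/s cap and the chunk size are arbitrary examples):

# Minimal token-bucket throttle for uploads. Illustrative only -- not taken
# from gnapster or any other client.
import time

class UploadThrottle:
    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec          # sustained rate, also the burst size
        self.tokens = bytes_per_sec
        self.last = time.monotonic()

    def wait_for(self, nbytes):
        """Block until nbytes may be sent without exceeding the rate."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

throttle = UploadThrottle(4 * 1024)        # cap uploads at 4 KB/s
# for each 1 KB chunk read from the shared file:
#     throttle.wait_for(len(chunk)); sock.sendall(chunk)

With uploads capped well below the pipe's capacity, there's plenty of upstream left for the ACKs your own downloads need.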
What is needed is _some_ P2P (Score:2)
That way (for instance) you ask your local server for a file and it uses a P2P method to retrieve that file across the network of 'big servers', then sends it to you. That way your dial-up connection isn't slowing down the large scale network.
_____
I disagree about file transfer (Score:2)
If you've got a high-bandwidth cache in between, you can get 50% higher throughput. Also, if the file has been cached beforehand (because someone else downloaded it), you can get it at your maximum speed.
_____
Re:It all depends (Score:1)
Yes, I really, really do
Just couldn't resist
Re:That doesn't make any sense (Score:2)
Then, when I try to browse the net, read
So while you are theoretically correct, it doesn't entirely work that way.
Not everything has to be a business-model (Score:2)
make a market for bandwidth - mojo (Score:1)
"p2p" (i hate buzzwords) has a bright future!
Re:Total ignorant BS? (Score:1)
Yes, I've had that discussion, as I suspect have a lot of /. readers. That's exactly what I was thinking of in writing my message. Andreessen may not have single-handedly revolutionized technology, but he sure started something, despite how much we all might be tempted to say he got lucky, right place at right time, etc. There's probably a bit of truth to both sides of that argument.
Actually, in some respects, I think what Fanning did was more revolutionary: he didn't just put a new user interface on an existing service (Mosaic on WWW), thus making it more usable; he conceived and created (or if you prefer, packaged) a new service. Andreessen + Berners-Lee == Fanning? ;-)
I'm just making the point that P2P isn't any different, technologically, than tried and true networking fundamentals, and so the argument that it will fail on technical merit is completely flawed.
But there is a difference which has a technological symptom: a new and significantly higher demand for a high-bandwidth type of P2P, namely media file exchange. Nothing on the scale of Napster has existed before - an online distributed database in the multi-terabyte range which exists on individual users' home computers, rather than on managed servers, with high-volume data traffic. Napster has caused more trouble for bandwidth management at places like universities than any other service I'm aware of. So although the basic components of the service may be familiar, the emergent behavior of the system is not. So it's not necessarily invalid to argue that it might fail because of technical constraints, not to mention issues like the "tragedy of the commons", although I don't happen to think that'll be the case with P2P "file sharing" in general.
Re:Total ignorant BS? (Score:2)
If it was so obvious, why didn't someone do it three years ago? Seemingly minor or incremental improvements in the usability or packaging of existing technology can be a breakthrough if the result is that hundreds of thousands of people suddenly become able to do something which they want to be able to do, but couldn't previously.
I suspect you have a narrow technical definition of what you think constitutes a revolutionary breakthrough. The fact that the recording industry is shaking in its boots right now is proof enough of the revolutionary nature of P2P file exchange. And it's this specific application and incarnation of P2P "technology" that the CNET article is about. Not that I agree with the article itself - I'm simply reacting to your unjustifiably dismissive comment.
Bits is bits is bits.
Uh-oh, Nicholas Negroponte is posting on Slashdot now!!!
How much is shared? (Score:2)
How much is being shared? A whole hell of a lot. Where do you think all the people who "take, take, take" are taking from?
Here's some stats for my MP3 sharing over HTTP only. This doesn't count what I share on OpenNap [sourceforge.net].
dwarf:/var/log/apache$ head -1 access.log
127.0.0.1 - - [22/Oct/2000:06:26:35 -0400] "GET
dwarf:/var/log/apache$ tail -1 access.log
XX.XX.XX.XX - - [27/Oct/2000:10:27:36 -0400] "GET
dwarf:/var/log/apache$ grep -v '^127.0.0.1' access.log | grep '\.mp3' | wc -l
503
(I changed the IP address to XX's to protect the identity of the person who made that last request.)
503 mp3 file transfers (some of which are partials and resumes, of course) in 124 hours -- or about 4 per hour. And that's a very small number compared to the activity I get on OpenNap (which I don't log, currently, but trust me -- it's much more than 4 per hour).
Those of us who share may be in the minority, but we definitely exist.
Re:Ratios... (Score:2)
Ratios... points... offering services to earn virtual currency which you can spend to download information... sounds like a pretty good idea, eh?
That's what the Mojo Nation [mojonation.com] folks thought, too.
How can everyone be taking? (Score:1)
Yes, MORE people take than give, but as long as some people are giving, these file-sharing systems will continue to flourish.
Scott
It's all about commercialization (Score:1)
It's not the bandwidth limitations... (Score:1)
The more people searching through a gnutella-kind-of-P2P-network, the more traffic is used by searches and search responses. That's all bandwidth you can't use for file transfers anymore. The bandwidth problem only kicks in because your searches are going through slower links in the P2P chain, making effective searches a real problem.
There was an article about this [zdnet.com] earlier, which was also posted to Slashdot [slashdot.org].
Re:It's a common argument (Score:2)
Hmm... maybe I should go hack the gnut code and put up a high bandwidth only net.
I admit, I do it.. (Score:3)
I should probably turn it on at night when I'm not using the machine to give back, but I haven't bothered.. there's no penalty if I don't do it, so why should I? I know I'm not alone in that line of thinking, though it may be wrong.
Yep. (Score:1)
and
'Linux? It's not viable. Free software is not commercially viable'
Yeah. peer2peer is 'doomed'. Right. It's only going to get better, not worse.
Damn, though, don't you hate the buzzword? peer2peer? p2p? (that used to mean
Re:ISPs are biased against P2P (Score:1)
I've seen that when using some higher-bandwidth programs that require consistent connectivity (VNC), connections drop like crazy. I believe Darwin networks may be at fault, but it's just as likely that it's the cable modem on my end as well.
think of p2p like TV... (Score:1)
Re:sycophants (Score:1)
---
Re:sycophants (Score:1)
---
Re:Digging a little deeper: (Score:1)
Not true. One end can be just a server, while the other end need only be a client. Peer-to-peer simply means that 2 machines normally thought of as "clients", e.g., home PCs or workstations, can communicate with each other without the aid of a server machine.
Re:That doesn't make any sense (Score:2)
Oh my god... (Score:1)
...you killed Napster, you bastards.
Or something.
Re:Aggregate bandwidth (Score:2)
Finding stuff is a different matter, and I suspect part of the solution here is to learn to accept imperfection by design.
Yes, I see the Freenet design implements a finite TTL on a request. Combined with caching of data, it means the network adapts to more popular data, serving it across fewer hops. So the effective network radius you have to search is limited for popular data, and the TTL limitation puts a hard limit on less popular data searches. This caps the growth of bandwidth usage per user, which is a good thing, but it also means that you can't deterministically find something that actually exists.
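My mental model of the lookup, grossly simplified (this is not Freenet's actual code; node.neighbors, node.cache and distance_to() are just stand-ins for whatever the real implementation does, and I'm ignoring loop detection entirely):

# Caricature of a Freenet-style lookup: forward toward whichever neighbor
# looks "closest" to the key, decrement a TTL, and cache the answer at every
# node on the return path.
def lookup(node, key, ttl):
    if key in node.cache:
        return node.cache[key]
    if ttl == 0:
        return None                          # rare data may simply not be found
    neighbor = min(node.neighbors, key=lambda n: n.distance_to(key))
    result = lookup(neighbor, key, ttl - 1)
    if result is not None:
        node.cache[key] = result             # popular items migrate toward requesters
    return result

The caching on the return path is what makes popular data cheap, and the TTL is what makes unpopular data probabilistic.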
It's anyone's guess to how performance and reliability would be affected by scaling to, say, a good fraction of the current size of the web; I expect there will be some interesting chaotic phenomena that will be uncovered with respect to the precise way parameters such as TTL and cache size are tuned.
Anyway, my hat's off to the Freenet people -- they're in for an interesting ride.
Re:Napster has no Central Point Of Failure (Score:2)
The freenet architecture is interesting because it is even more decentralized and the servers are networked to share metadata. This means that in addition to taking down a number of well known servers, the lawyers will end up in a netherworld where metadata is passed in a highly connected and nondeterministic way.
Aggregate bandwidth (Score:5)
The aggregate bandwidth needed for file transfers won't change; it's the bandwidth required for meta-information -- catalogs, searches and search responses, that goes up.
Has anybody done any theoretical research here? I'd guess that in a P2P network the bandwidth required to carry meta-information would go up O(N^2) -- that is you want to have a network of information distributing nodes that is some fraction of a complete graph. The Napster architecture, while introducing a single point of failure (at least from a legal standpoint), seems closer to optimal from a purely technical standpoint -- it centralizes meta information allowing O(N) growth of query bandwidth in nodes, and decentralizes data transfer.
P2P Bandwidth != Server Bandwidth (Score:1)
In case you need a refresher: http://slashdot.org/articles/00/09/12/1217200.shtml [slashdot.org]
The problem with gnutella and a lot of P2P is that it assumes all peers are equal. When the primary routing goes through some over-bandwidthed, over-funded .com, peer to peer works okay, but when you're relying on your query to go through some yahoo with a 28.8, it ain't gonna fly too well.
Sharing is what it's all about (Score:2)
My current favorite MP3 sharing program is Audiogalaxy. It has a security- and anonymity-oriented design, but on the discussion board people are boasting about how many files they are sharing and how many gigs they've shared so far. Contrary to the Gnutella experience, most of these folks seem to be taking advantage of anonymity to share more, rather than less. Of course that could change.
S.S.D.P. (Score:3)
i remember talking with my father in 1992 about this whole "internet" thing. he thought that no one would be able to make money on it, and that there is no compelling reason for it to be used.
and then came the web and all hell broke loose in 1994.
now we've got a different protocol, one that keeps true to the original intent of the internet, and allows "Peer to Peer" sharing.
geez, the internet has always been peer to peer sharing, this is just allowing us to go back to this philosophy, and allow everyone to truly contribute back, instead of only those with large amounts of cash needed to generate hits.
so, all of a sudden, we will be back to the model that allows anyone to communicate with anyone else.
We're taking the power back with P2P. Using the internet for what it's meant to do - communicate, not make a buck...
tagline
Re:That doesn't make any sense (Score:1)
Back in the day when I was running a BBS connected to FidoNet, the preferred protocols for transferring mail and echoes were bidirectional...stuff got sent and received at the same time to minimize long-distance phone bills and maximize a BBS's availability to callers. As long as you weren't using a modem with grossly asymmetric transmit/receive speeds (such as one of USR's HST modems, which ran at speeds up to 16.8 kbps in one direction but only 300 or 450 bps in the other), you'd get decent speeds both ways. There were also bidirectional file-transfer protocols available to callers, such as HSLink. (With me being the leech that I was, though, I usually stuck with ZMODEM. :-) Hell, I even still use ZMODEM occasionally today for the odd task or two.)
Theoretically, the same ought to hold true over your Internet connection with file transfers today. In practice, though, if you're on a dial-up connection, some modems handle bidirectional traffic better than others. In some (mainly cheaper) modems, not enough processing power is available in the modem's controller to keep up maximum speed both ways. If you're sucking down MP3s/pr0n/warez at 5 kbps and then someone starts sucking files off of your computer, odds are good you'll see at least a slight drop in your download speed. If you're using a winmodem of some kind, it gets even worse as the modem now has to contend with everything else going on in your computer for processor time.
(Of course, we're all using cable modems or DSL now. :-) These seem to not be affected by this problem as much. About the only time I notice a speed deficiency is if I'm logged into my server from someplace else while it's in the middle of a download...it sometimes takes a second or two for keystrokes from the ssh client to get through. Screen updates, though, are still quick (4x faster than dial-up).)
Re:Total ignorant BS? (Score:2)
What do we need? Something relatively anonymous. Something relatively stealthy. Something relatively standard, familiar, easy to use.
So I says to myself FTP. Just hack on some extra functionality that allows pseudo-links to *other* FTP servers. Clients would traverse the filesystem and not know that they were actually getting listings from N servers away (much like Gnutella, but an FTP interface). Hey, why not slap on a new command, say, REVERT, which, when passed with a secret key, makes the FTP server revert into "dumb" normal FTP mode. Great.
So then I look up the FTP spec. And what do I realize? FTP *already has defined a separation of control and data flow*. FTP *already theoretically supports proxying*. FTP *is* Gnutella, effectively. Somebody please read the FTP spec, and implement a server which will transparently do proxying like this (the nested remote filesystem stuff would be nice too - not sure if that is specified by the RFC).
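To make that concrete, here's roughly what an FXP-style server-to-server copy looks like with stock FTP commands from Python's ftplib (the hosts and filename are made up, and many servers refuse a PORT pointing at a third party, so treat this as an illustration of what the spec allows rather than something guaranteed to work everywhere):

# Rough sketch: tell one server to listen (PASV), point the other server's
# PORT at it, then issue STOR/RETR so the data flows directly between them.
from ftplib import FTP
import re

src = FTP('ftp.example.org')            # server that has the file
src.login()
dst = FTP('ftp.example.net')            # server that will receive it
dst.login()
src.sendcmd('TYPE I')                   # binary mode on both ends
dst.sendcmd('TYPE I')

resp = dst.sendcmd('PASV')              # "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)"
addr = re.search(r'\(([\d,]+)\)', resp).group(1)
src.sendcmd('PORT ' + addr)             # source will connect to the destination

dst.sendcmd('STOR some-file.mp3')       # destination waits for data on its port
src.sendcmd('RETR some-file.mp3')       # source connects and sends
src.voidresp()                          # collect the 226 completion replies
dst.voidresp()
src.quit(); dst.quit()

No third box ever touches the file; the two "servers" are acting as peers, which is exactly the point.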
This has the nice added feature that any law that attempts to strike this down will have to strike down the FTP protocol... it will then be laughed right out of court.
Re:Aggregate bandwidth (Score:3)
Essentially, the problem comes down to how you find each other, and how you find stuff. Finding each other is generally done with centralized services (e.g. DNS). But there are other options, including limited multicast and expanding spheres of knowledge (i.e. you learn about 1 other node, and it tells you the nodes it knows, and they tell you the nodes they know, and so on - this is similar to Freenet). But once you've found a node to talk to, bandwidth is the same as on a non-P2P network.
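The "expanding spheres" part is just a breadth-first crawl over the who-do-you-know graph. A toy sketch, where ask_for_peers() stands in for whatever the real protocol exchange would be:

# Toy "expanding spheres of knowledge" discovery: start from one known peer,
# ask each discovered peer who it knows, stop once we know enough peers.
from collections import deque

def discover(bootstrap_peer, ask_for_peers, max_peers=500):
    known = set([bootstrap_peer])
    queue = deque([bootstrap_peer])
    while queue and len(known) < max_peers:
        peer = queue.popleft()
        for neighbor in ask_for_peers(peer):
            if neighbor not in known:
                known.add(neighbor)
                queue.append(neighbor)
    return known

Once you've got that list of peers, talking to any one of them is just an ordinary point-to-point connection.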
Finding stuff is a different matter, and I suspect part of the solution here is to learn to accept imperfection by design. No, you can't search everything because that would involve going to every node and querying it, which would be impractical. However, you can spider out through the nearest nodes, and they should be able to point your query in the most promising directions, and you could configure your search to be as far-reaching (and slow) or as near-sighted (and quick) as you like.
Another point to make is that there is the potential for our bandwidth capabilities to go through the roof in the relatively near future. With fiber and optical switching technology, we could easily see bandwidth essentially being removed as a bottleneck - perhaps in the next 5-10 years.
Napster = Bandwidth Hog (Score:1)
JOhn
Re:P2P was the original application of the Internet (Score:1)
Re:I admit, I do it.. (Score:3)
When I use Gnutella, I often don't share at all, because my CPU utilization goes very high when I do, and then I can't listen to the new MP3s I'm getting without skips. (I assume this is due to my computer needing to check every search string that comes through against my list of shared files.)
Both of these problems are fixable with increased bandwidth and computing power. (Or maybe I just have a buggy version of Gnutella.) I'm very enthusiastic about the possibilities of P2P, and I genuinely try to share as much as possible. While I realize not everyone on Gnutella or Napster is as idealistic, I have a feeling the percentage who are is a good bit higher than the 2% (or whatever) reported. Of course you can't blame CNet for taking the "corporate whore" view of human nature, but in my experience people like to share with each other, and will especially do so whenever it is easy and doesn't have noticeable drawbacks.
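For what it's worth, the CPU hit looks like an implementation problem more than anything fundamental: index the shared filenames once, and each incoming search becomes a couple of dictionary lookups instead of a substring scan over every file. A toy sketch (not Gnutella code, and a real client would want ranking, partial-word matching, etc.):

# Toy keyword index over shared filenames.
from collections import defaultdict

def build_index(filenames):
    index = defaultdict(set)
    for name in filenames:
        words = name.lower().replace('_', ' ').replace('-', ' ').replace('.', ' ').split()
        for word in words:
            index[word].add(name)
    return index

def search(index, query):
    words = query.lower().split()
    if not words:
        return set()
    hits = set(index.get(words[0], set()))
    for word in words[1:]:
        hits &= index.get(word, set())
    return hits

shared = build_index(["Pink_Floyd-Time.mp3", "time_after_time.mp3"])
print(search(shared, "time"))          # matches both files
print(search(shared, "pink floyd"))    # matches only the first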
Why P2P currently doesn't work as well as napster (Score:1)
Without large numbers of college kids / AOLers / etc. sharing their entire song collections, it's never going to reach the level of success Napster has.
Its the routing, not the bandwidth... (Score:1)
I believe that in a study done on internet routing, about 90% of the routing updates were found to be useless, wasting bandwidth.
So in a very superficial view, the problem would seem to be a bandwidth problem, but the bandwidth problem is caused by the stupid routing information being passed around, and this is a problem with the internet as a whole, not just p2p applications.
Re:Aggregate bandwidth (Score:2)
If many clients share the same caching server/proxy, and they have similar tastes with regard to what they download, the savings in byte-meters (or whatever unit you choose to use) could be quite significant.
Not true (Score:1)
I'm sure someone else has coined a clever phrase for this already, but let's refer to this notion of bandwidth * hops in units of MBit-hops (akin to a kilowatt hour).
If I download a file from my ISP, am I not using fewer megabit hops than if I download the same file from Outer Mongolia? Of course I am. This reduces the overall congestion between my ISP and Outer Mongolia, freeing those megabit hops to either be wasted, or used by someone else.
-
Virus distribution via p2p (Score:1)
...what do people think about p2p and malicious code? I mean, just because it says photoshop.exe doesn't mean it is - or that it hasn't been silkroped to launch the next assault on ebay.
Yeah, the article is bogus - AND it took three of the fucking goons to write it. Total conjecture with some quotes from Forrester. What Husserl would have called "armchair reflection".
But comparing star versus distributed - bandwidth, reliability and civil rights aside - what about security and vulnerability?
I mean M$ shipped outlook with vb processing turned on by default - isn't this similar?
Just curious....
-Sleen
No Bandwidth Is the Problem (Score:1)
--
DigitalContent PAC [weblogs.com]
Corporate Bullying (Score:1)
--
DigitalContent PAC [weblogs.com]
Shameless plug... (Score:2)
The release this Sunday will have file sharing enabled.
0.02,
Mike.
There's no way to even try to make $ (Score:2)
---
Good Grief (Score:1)
P2P won't work because the network and the users can't support it at a traffic (hardware/bandwidth) level. This, at a time when more people are buying faster hardware and broadband than ever before, and access, even over POTS, is moving towards the "ludicrously cheap" end of the monthly utility spectrum?
Granted, most users are selfish assholes who don't, for whatever reason, bother to share the files they have "shared" from other people. The motivations for this behavior elude me - my files download into the same directory that I share by default.
With the looming goodness of wireless broadband, FTTC (fiber to the curb), and ever-more-powerful personal computers and handheld devices, the whole Chicken Little argument comes down in tatters.
Rafe
V^^^^V
Re:Tech Articles or Obituaries? (Score:1)
I think it's also more than that. These reports say that it's too slow, it's too hard, it's too -insert problem here-, but they don't realize that the architecture and its implementation are still immature and under heavy development. In the end, the implementation is what makes the architecture viable, and considering current p2p implementations are still in the 0.x release phases (freenet, mojonation, etc.), I think it's a little premature to declare them dead.
-----
"People who bite the hand that feeds them usually lick the boot that kicks them"
Seems to me... (Score:1)
For every take there is a contribute :-)
Re:maybe on Gnutella.... (Score:1)
So now my napster stays off. Of course, I don't download very much either. Most of my collection came from alt.binaries.blah.blah.mp3.whatever over a 28 and 56K dialup connection. (And what a slow torturous process that was!)
Re:It all depends (Score:1)
Just like Usenet (Score:1)
The whole P2P thing strikes me as going the way of Usenet. Originally just for discussions, as soon as it became practical to stick binaries on it, NNTP became the protocol of choice for warez, pr0n, virii, etc. There is a feeling of anonymity (although true anonymity is hard if not impossible to attain) and there's no real "publisher" of the data.
Almost all regular users pull far more data than they push - I'd guess that a good majority of binary Usenet users never post anything. Lately, the whole thing has become spam central, and the S/R ratio is terrible.
Gnutella et al seem to be similar - no regulation (even less than the web), little or no accountability, and more consumers than producers. I'm not so worried about there being too little bandwidth - more that spam and other "noise" will increase to such a level that the system will become unusable, and also that the "powers that be" will find a way to regulate it. It's unfair, but over here in the UK ISPs like Demon are being successfully sued for content on their news hosts - the same could happen to ISPs whose users put illegal material on their P2P servers.
Just my 1500 lira's worth...
[slightly OT] Karma Comment (Score:1)
I have a suggestion. Should we call writing an article like this - one that is certain to bring thousands of hits from the Slashdot faithful, followed by the inevitable articles that must explain the facts as we almost all agree they are - an example of "karma pimping"?
If I couldn't break it with a hammer and a blowtorch, you shouldn't be able to patent it.
Someone please shoot the pundits! (Score:2)
Why can't the pundits wait until they have something to talk about before they start talking?
Oh wait, it's because they are pundits.
Server-centric internet connectivity (Score:2)
These things do favor a server-centric internet over peer-to-peer connections. The common user is supposed to be a content consumer more than a content producer (well, honestly, this is quite true for 99% of users - including me).
Re:nice one, Hemos (Score:1)
I've been hearing for years that P2P is bad, evil, a waste of time and bandwidth, etc. This assertion, of course, is nonsense.
I'm wondering why CNET goes to the trouble to invest so much time and energy in such an article. Are they beholden to some corporate interest that would prefer we only use their "real" servers?
Just thinking aloud...
What? Someone must be giving... (Score:1)
I understand the client-server model of downloaders never giving back. But when the network is P2P, some other user is providing the bytes to download and is therefore uploading.
Socialism? (Score:1)
I think the one thing that free/open source developers have in common (myself included) is a belief that individuals are capable of producing things of value without a profit motive. They can even enjoy producing without making a profit. This is not socialism (although socialists would agree that it is good). People from nearly every economic/political background are capable of engaging in these kinds of activities. I wish that the media would stop calling every non-corporate movement socialism and stop using the word as a scare tactic to keep people away from the scary 'socialist' technologies.
Re:That doesn't make any sense (Score:1)
Napster has no Central Point Of Failure (Score:2)
The Napster architecture, while introducing a single point of failure (at least from a legal standpoint)
Napster's centralized server is not [napigator.com] a centralized point of failure thanks to OpenNap [sourceforge.net].
Re:Napster has no Central Point Of Failure (Score:2)
A good lawyer can take down any published set of servers.
The game then becomes whack-a-mole [8m.com]. If the server software is freely available (even beer!), it _will_ be in warez archives, and other servers _will_ pop up. Think Hotline [bigredh.com].
Sounds familiar (Score:1)
Re:I admit, I do it.. (Score:1)
coder's rules of spelling (Score:1)
IF (a > [THAN] b) THEN
IF (me SOFTER [THAN] him) THEN
2) CGI has no boolean operators. If you SUBMIT you can't do OR -> submittOR is invalid
I hope this helps 8)
Re:That doesn't make any sense (Score:1)
Sorry buddy, it's not a myth.
RIAA lawyers hate P2p! (Score:1)
They will only permit P2P if there is a mechanism that checks a user's authorization before permitting a P2P transfer.
Re:Those claims are just plain wrong. (Score:1)
--
Ratios... (Score:3)
Oh yeah...then we'll just have some jackass uploading Britney Spears mp3's renamed just to get download points...*sigh*
--
Re:Aggregate bandwidth (Score:1)
Ben Houston's P2P Idea Page [exocortex.org]
I'M First !! (Score:1)
Re:I'M First !! (Score:1)
Re:There's no way to even try to make $ (Score:2)
Slashdot knows that if you can't get the content of their site, you won't visit it, so they give you an option strategically placed at the top, and it's a good system. Those that are interested click, and vice versa.
The New York Times wants you to register, as well as have ads, 'cause they're the NYT and think their content is valuable enough that they can get your valuable demographics. I think that's fair.
P2P file sharing (no matter what its form) is going to be by nature cut-throat if it can be. FTP sites and Hotline allowed for displaying of goods contingent on you performing something. Napster has no such mechanism, except that the other person just might not be letting you download from them or they are firewalled. IRC has a little more politics: sometimes things can be first come, first served, or more accessible through who-you-know, or just as cut-throat as anywhere.
So this, I'm sure, describes every facet of the underground -- drugs, prostitution, and yes, illegal intellectual property.
I think it'd be interesting to see how close the mindset of warez leecher and a prostitute are.
----
Guilty feelings (Score:2)
For my own part, I think there is a certain feeling of "it's OK if I take, but if I share, I'll be caught." It's the difference between finding a $10 bill on the sidewalk and intentionally shortchanging someone. The guilt level is oh so much less.
It all depends (Score:2)
Peer-to-peer works well in some instances, and star networks (server) work well in others. Just because more people see star networks working more often than peer-to-peer doesn't mean peer-to-peer is going to die.
Two examples:
1. Directly downloading a file from a friend. Peer-to-peer is by far the fastest way. The server would just be a middle man slowing you down.
2. An FPS game would be incredibly laggy on a peer-to-peer network because of the size of the overhead. The star network is the better choice here.
If you think about it, the internet is kinda both a star and a p2p network. There is no 'one central server', but a Peer-to-peer network of servers. So the P2P type of connection is going to die? I don't think so...
-- Don't you hate it when people comment on other people's
maybe on Gnutella.... (Score:3)
Napster, and its near equals like Scour, all have sharing set up by default, and they both encourage you to stay online even if you're not using the program. Yesterday, on Scour, a whole bunch of people figured out that I had some somewhat rare anime videos. Instead of logging out when I was done, I just sent people messages that I wouldn't be there to monitor the transfers, and went to sleep. I think this happens more often than people think.
People love sharing; it makes them feel generous. However, it CAN'T be difficult to do. In Gnutella, it is.
-Rainbowfyre
Tech Articles or Obituaries? (Score:4)
P2P has just scratched the surface. To say it is dead before it even gets out of the starting gate is a level of eagerness that surpasses morbidity.
There are constraining factors on p2p, but these will actually fade away as more people get broadband. Sharing will become more prevalent when it is made easy and has an obvious level of security (like Napster, where you choose which folder others get access to). Also, as soon as it is decided what can and cannot be shared, that will open things up. I know I get a bit leery when I see people downloading my Juice Newton tunes, wondering if it is actually Juice's lawyers gearing up to sue me.
P2P may not be the next killer app, but it will become a mainstay of the internet like ftp. So let's stop paying attention to doomsayers who are just trying to be seen as prophets of the internet through Kassandra-like proclamations.
What I thought was particularly dumb (Score:2)
Who's "many"?
Sounds like a biz journalist looking for the Next Big B2B Thing and, coming up empty, bitching about it. Last time I checked Napster was still going strong. If 30M users aren't "the mainstream masses," um, who is?
Re:Ratios... (Score:2)
P2P and corporate netizens (Score:2)
it's like a ripple (Score:3)
Okay, enough of the analogy! The point is that long distance bandwidth (influence) is limited. However, short distance bandwidth to a limited number of peers is not a limiting factor. So, peers only need to look in their local "neighborhood". Since each "peer" has a slightly different "neighborhood",
drum roll pleese....
the information on the P2P network will propagate regardless of bandwidth restrictions on long-range connections.
Obvious to anyone that understands how news servers work, but apparently not CNN.
sycophants (Score:2)
1. They are lazy.
2. They are afraid.
The penalty for possessing copyrighted material is much less than that for distributing it.
--
ISPs are biased against P2P (Score:2)
Of course; after all, most people can't act as a server, even if they have broadband or DSL. That's why most of the ISPs offering those two use asymmetric connections (upload much slower than download). That way, users are driven away from acting as any kind of server, but are more than happy to download files and connect to multiplayer games as clients.
From what I've heard, though, Covad uses restricted SDSL. That's nice; however, it's hard to find a reliable connection to that over here in Verizonland. I've tried to run a Q3 server on my 640/90 kbps down/up DSL connection; it wasn't pretty. My friend kept getting booted off for no reason, and pings were upwards of 300 for the clients.
Total ignorant BS? (Score:5)
However, the whole idea that P2P is at all different from server-to-server is ridiculous. Just TRY to set up a P2P connection on the net without going through an ISP. If you can, then you ARE an ISP. You are a 'server' - whether you have clients or not is irrelevant. Even major corporations today have to go through an ISP for their connection to the backbones. My little workstation has to make just as many hops to get to Mae West as Sony's data center.
There is no technical difference between gnutella and a couple of buddies running anonymous FTP servers on their home machines. There is no technical difference between that and IRC - except for volume of bits. Bits is bits is bits. The difference, the ONLY difference, is that there isn't a corporation extracting an additional toll on the data that's transmitted. There lies the 'problem' with P2P.
If Gnutella and Napster were used to share vacation photos, NOBODY would care. ISPs might jack up their rates based on how much pipe you use, but that's it. If the data transferred wasn't (arguably) someone's 'intellectual property', this would not even be an issue.
People have been running private FTP servers in a P2P fashion since before the WWW made server-to-server the de facto mode of operation. Before ISPs got on the bandwagon, it was all workstation to workstation, account to account, peer-to-peer.
Just because some kid slapped a web interface onto a hack of anonymous FTP doesn't suddenly make it a different technology. Just because he made it distributed doesn't make it anything more than simply 'convenient'. Searchable FTP has existed for a long time, also since before the www. Anyone remember the Archie tool? Indexing, and making it transparent is the next obvious step, not some revolutionary break-through.
P2P is nothing new, and it is nothing 'different' than what has always been done. Servers talk to each other as 'peers' too, don't they?
Just because a bunch of corporate types label the same technology in two different ways, depending on whether they get a cut of the profits or not, does not make one way doomed and the other saved. Just because someone calls this 'piracy' and that 'a stable business model' does not make the two ways into different technologies.
P2P, S2S, B2B... It's all the same technology. It's the same protocols and algorithms. It's all the same bits. The difference is only in who is in CONTROL of THE DATA. He who controls the INFORMATION, controls the Universe.
As for P2P 'failing' due to low bandwidth at the 'local loop', well, that's just a hot, steaming pile of BS. Ye Olde Bulletin Board Systems (the ORIGINAL P2P networks) thrived on 2400 baud. They thrived even more on 9600; then, when 14k came, the Internet had started to mature and began to offer more 'value', farther reach and more neat stuff. But the BBS's didn't 'fail'. Not due to poor performance or inequitable sharing of files within the communities they supported. In fact, the only times BBS's were put out of business (except by their owners' personal choice) it was due to... (drum roll) PIRACY and kiddie porn.
The REAL jabber has the /. user id: 13196
It's a common argument (Score:2)
If you haven't looked at freenet [sourceforge.net] yet, then do so. Not only is it peer-peer, but it's anonymous, and it's working TODAY. There are smart folks developing it, and they're being very careful not to make the same mistakes gnutella did.
Why we keep hearing this (Score:2)
We are just in the time between the identification of the problem and the solution. Expect to see this one figured out.
-----------