The Internet

Clay Shirky Defends P2P

richard writes: "Clay Shirky has responded to Jon Katz's article, Does P2P Suck? (and a WSJ article published the same day), in an article titled "Backlash!" on OpenP2P.com. Shirky says: "P2P means many things to many people. PC users don't have to be second-class citizens. PCs can be woven directly into the Internet. Content can be provided from the edges of the network just as surely as from the center. Millions of small computers can be more reliable than one giant server. Millions of small CPUs can do the work of a supercomputer. ... These are sloppy ideas, ideas that don't describe a technology or a business model, but they are also big ideas, and they are also good ideas.""
  • by Anonymous Coward
    just Jon Katz does.
  • by Anonymous Coward
    peer to peer is so overrated. All they did was make server software easy enough for stupid fucks to use...irc has been doing this for years, but you had to have a clue. now that joe fucknuts can do it, it's a revolution. whatever, i'm sick of "p2p" hype. why don't you post some articles about eGoats, eMilk, iPants and any other hype crap. please, i thought slashdot was above this stupid hype crap, but apparently since "p2p" is so hip, the now sold-out and out of touch slashdot feels it can generate some eRevenue by showing eAds to iConsumers. yay, i'm so eExcited...

    fucking crap...
  • by Anonymous Coward
    I was recently working for a P2P-based startup. Although we were considered by many in the press to be the leader in the P2P market, in reality, the company's only goal seemed to be impressing the bigwigs at .com and trying to convince them that we were hot shit. In the end .com bought us. Boy, were we surprised when we found out how little they paid for us. Boy, were they surprised when they found out how little they got for their money! P2P is cool technology. But until someone comes out with the "killer app" that they can actually make money on, all P2P talk is just a lot of hot air.
  • by Anonymous Coward
    Whatever you can share for free, without any way of stopping it, won't be made. So you'll have all these reliable, fast, anonymous connections without anything to share.

    Human society has been able to make "free" stuff since forever. Where is all this "free" stuff then? Nowhere. You only get stuff made when it can make a return on the time/money/effort invested.

    Don't worry, artists & creators won't go broke. They'll just make stuff that defies digital reproduction.
  • by Anonymous Coward
    Nearly overnight we will have distributed processing, distributed storage and distributed hardware devices.

    A single company will come up with something that actually works and works well and, most importantly, is easy to use. And it will be safe and secure. The programs will run in a sandbox on the users' computers and be totally isolated from the actual computing resources. The file sharing will only see the directory tree that it is pointed at and no other.

    I think that the key to the storage is the directory service. Being able to find the data and then download it is crucial. We need a self-organizing hierarchy. Don't ask me how to do it; if I knew, I would be famous already.

    Distributed processing and distributed hardware already work well; adding them to the distributed storage system will just be icing on the cake.

    And don't forget that a distributed database is a combination of distributed storage and distributed processing.

    Now, imagine a company with 1,000 desktops. They all run this software and join together in a single computer that has 5 TB of storage, and can run 1,000 different processes at 400MHz each. This is more powerful than most supercomputers and the company gets it for free. Of course, a lot of the storage is going to have to be redundant so that the loss of a single computer will have no effect.

    Cool!
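
    A back-of-the-envelope sketch, in Python, of the numbers above. The per-node figures are the comment's own illustrative values; the 3x replication factor is an assumption added here to show how redundancy eats into the aggregate:

        # Illustrative figures from the comment above; replication factor assumed.
        nodes = 1000
        disk_per_node_gb = 5            # ~5 GB per desktop => ~5 TB aggregate
        cpu_mhz_per_node = 400
        replication_factor = 3          # assumed: each block stored on 3 machines

        raw_storage_tb = nodes * disk_per_node_gb / 1000
        usable_storage_tb = raw_storage_tb / replication_factor
        aggregate_mhz = nodes * cpu_mhz_per_node

        print(f"raw storage:    {raw_storage_tb:.1f} TB")
        print(f"usable storage: {usable_storage_tb:.1f} TB after {replication_factor}x replication")
        print(f"aggregate CPU:  {aggregate_mhz:,} MHz across {nodes} processes")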
  • Andreessen's implementation of the IMG tag is an argument FOR screwing with standards, not AGAINST. Or rather, it would appear that screwing with standards is sometimes beneficial, sometimes harmful...

    As far as IMG being the GUI, without question it laid the way for GUI approaches to the net. Witness clickable image maps, the server side of which followed pretty closely on IMG's tail. Without IMG there was no need for GUI approaches so I for one don't mind them being synonymous.

  • Part of the problem is the public's mindset of 'consumer'. The Internet has the ability to promote the average person to an active role, but is being strangled by those who believe it is nothing more than television with a keyboard.

    I think that's more than a mindset; I think that's what most of the public wants to be.

    In fact, that could be the disconnect behind a lot of the dot-com woes. And possibly behind the stalling of a lot of the "systems" in first- and second-world countries.

    As /. users, we are almost all active participants. If there's a conversation going on, we want to be a part of it. If something new is on the horizon, we want to hear about it. We want our thoughts to be feedback.

    Joe and Jane Sixpack don't. They don't want Tivo to help them program their own network; they want to switch on the box and have the network programmed for them. They don't want to worry about schools; they want to send their kid to school and have him or her come back educated. They don't want power generation in their backyard; they want to flip the light switch and have the light come on.

    They make a few decisions about where they live, what they do with their time, etc. but for the most part, they don't want to be heavily involved. They know the schools are broken, but they don't want to be the ones to fix them. They know politics is broken, but their reaction is to drop out and not vote. They just want things to work.

    The active 10% is out here trying to bring new approaches to them, but they don't really care. Too much change is interesting to us, a whirlwind world we enjoy. We have an advantage in it, because we understand a lot more of it. But change is a problem to them. They don't care for instability; even if it's a pain in the ass, they want to get in their car and go pick up groceries and haul them home, because that's what they've always done and doing what they've always done is appealing in a topsy-turvy world. The more we, the active, push for innovation, the more the passives resist.

    A friend of mine is a passive. He's even a computer guy. Back in 1986 I asked him why he didn't get a PC. "I use computers all day at work, I don't want one at home," he said. In 1993 I showed him Usenet. "It's a lot of trouble to read all that stuff, and then have to write back," he said. It seemed like more work to him. Now that he has Internet access, I asked him why he didn't check out /. and other such high-profile sites. But I already knew: he's a passive. A good guy, even a competent, intelligent guy, but he wants life to come to him.

    If we can understand these people, we can improve their lives (and they can pay us dearly to do that), but we can't expect them to take an active role. And so when you say to a passive "...all you have to do to run Linux is..." don't be surprised if the answer you get is "No, I don't have to do anything." So the challenge of not only the open source world, but the entire world, is to create paths of least resistance. If they have to do something, they won't. You have to figure out how to make them want to do something.

    Ugh, I'll stop now before I become Katz.

  • by Jordy ( 440 ) <jordan@NOSPam.snocap.com> on Thursday April 05, 2001 @07:21PM (#311746) Homepage
    Currently I find much of the Internet to be passive in nature. I think our interfaces encourage that.

    Part of the problem is the public's mindset of 'consumer'. The Internet has the ability to promote the average person to an active role, but is being strangled by those who believe it is nothing more than television with a keyboard.

    A lot of the community oriented services such as IRC and email have begun to erode that image, but there really haven't been any new truly successful technologies stressing community in years. The public has been sucked into this thing called 'the web' and been taught that's all there is to this Internet thing.
  • The risk of pandemic infection is present in any environment without diversity. In the computing world, we have more diversity than I think any sane person needs.

    Of course, with every major advance we make there is always an increasing risk of mass destruction. There is an end to all things. Do nothing and an outside force will destroy everything. Do something and at least you are in control.
  • I never particularly liked the term peer-to-peer. I'm not exactly sure who originally coined it, but it seems to cause a lot of confusion with other technologies which sometimes piggyback on top of it.

    P2P, Distributed Aggregation and Distributed Computing are three separate but related things.

    Peer-to-peer is simply a type of network where all nodes on the system are on equal standing with each other. There are no dedicated server machines, no dedicated client machines, but rather everyone is both a server and a client and they communicate with each other as equals.

    This type of system lends itself to a very interesting change in the way someone finds information. Instead of going to a place (e.g. slashdot.org) to get information, you go to the information to get a place.

    Distributed aggregation is a method of intelligently locating and, well, aggregating resources distributed among nodes across a network. Whether these resources are files, CPU time or disk space, the method of aggregation should remain basically the same. This fits in very well with the peer-to-peer model to provide each node with a simple way of locating resources on other machines.

    Distributed computing is a method of using resources distributed among nodes across a network. Distributed aggregation can be thought of as a part of distributed computing, as you have to be able to find the resources to use them, but not all distributed computing systems provide or even need a method of handling dynamic changes in the network. Of course, distributed computing systems are not typically peer-to-peer. Individual nodes on the network rarely communicate with each other to share information, but instead handle jobs in batch fashion and push the results up to a central server.

    Many have argued that peer-to-peer has existed on the Internet since time began and that all things are basically peer-to-peer. This is quite true in some respects. At the protocol level, machines communicate with other machines in a manner that can be considered peer-to-peer, but historically at the application level there has been a very clear line between servers and clients.

    We currently live in a world where the majority of computers are nothing more than glorified dumb terminals utilizing only a small fraction of their computing power. My hope is that one day, the average person won't "use" the Internet, but instead "be" the Internet.

    Of course, that's just my opinion. I could be wrong.
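
    A minimal sketch, in Python, of the "everyone is both a server and a client" model described above. The port number and the wire format are made up for illustration; a real peer would add discovery, framing, and security on top:

        import socket, threading, time

        PORT = 6346  # arbitrary example port

        def serve():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen(5)
            while True:
                conn, addr = srv.accept()
                query = conn.recv(1024).decode()
                conn.sendall(f"peer reply to {query!r}".encode())  # acting as a server
                conn.close()

        def ask(peer_ip, query):
            cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            cli.connect((peer_ip, PORT))                           # acting as a client
            cli.sendall(query.encode())
            return cli.recv(1024).decode()

        threading.Thread(target=serve, daemon=True).start()
        time.sleep(0.2)                 # give the listener a moment to bind
        print(ask("127.0.0.1", "who has foo.txt?"))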
  • Can't we spoof return packets (with the IP of the server that the other node is connected to) in order to create outgoing NAT entries and tunnel back through existing NATed routes on the other end? This probably breaks several RFCs, but it might still work.

    But I think that raw UDP is probably a better solution. Ask a Gnutella user on a 56k modem how "reliable" TCP is.
  • In a fully P2P system, which would require a fixed port for incoming transactions, you can only have one participating machine behind the firewall.

    I don't follow. Why would a fully P2P system require that the open port be fixed? Each system is going to have to be enumerated on the network in some fashion...some other system(s) is(are) going to at least have to have your IP address. Why can't it also have a port number? From a firewall standpoint, each system behind the firewall could have its own forwarded port on the external interface.
  • ...too many P2P implementors are either ignorant or disdainful of related work...

    ...there's still way too much fad-following in the P2P community and not enough solid science or engineering.

    I have to concur. I know a little about networking, but mostly my experience is in high-performance computing. In parallel programming, avoiding unnecessary communication is a way of life. Once I looked at the P2P query problem, I hit upon an interesting approach to remove the broadcast requirement by using an approximate query routing scheme. This has the potential to fix P2P network scaling problems at a fundamental level. So I wrote up a paper [homestead.com] outlining the approach. I politely sent off email to Clip2, OpenP2P, etc., briefly explaining the idea, directing them to the paper, and asking for comments. I've got nothing. Now, maybe it is not much of an idea. I am an outsider in that community, but I really wonder if anyone bothered to read it.

    In particular, everyone seems so focused on the "super-peer" concept with Reflector that they ignore the underlying scaling problem that will still be there with super-peers. As a network becomes larger, it will not be able to tolerate broadcast queries...even with super-peers. Here [darkridge.com] is a nice explanation why.
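
    A quick sketch of the broadcast problem being pointed at, assuming Gnutella-style flooding with illustrative fan-out and TTL values (not numbers from the cited papers): each query is copied to every neighbour, which copies it to every other neighbour, and so on, so traffic per query grows roughly exponentially with the TTL.

        # Rough message count for one flooded query: each hop multiplies the
        # number of forwarded copies by (fanout - 1). Values are illustrative.
        def flood_messages(fanout, ttl):
            return sum(fanout * (fanout - 1) ** hop for hop in range(ttl))

        for fanout, ttl in [(4, 5), (4, 7), (8, 7)]:
            print(f"fanout={fanout} ttl={ttl}: ~{flood_messages(fanout, ttl):,} messages per query")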
  • Well, I am a scientist. I try to read papers without prejudice. I only know that looking over Ridder's paper pointed out an obvious defect in P2P query strategies, namely the need to broadcast queries. Without some intelligent routing protocol, there is no way to avoid this. We can try to fix this by throwing more bandwidth at the problem (via broadband and Reflector), but that really only moves the saturation level a bit higher.

    Though I recognize the deficiency, I do not view it as unassailable, which seems to be Ridder's conclusion. In my paper, you will also see a reference to Clip2's analysis pointing to the same fundamental scaling problems with Gnutella. Like Clip2, I believe it is fixable. I simply offer a different approach than the one they are pursuing. And one that I believe would complement ongoing P2P development efforts.

    My point is that there is a problem with the query strategy, or the parallel algorithm, if you will permit me to borrow terminology from my area of expertise. It is unnecessary to impose such sizable bandwidth requirements on the network when the same query can be effectively processed with a more efficient algorithm. You don't buy an Origin2000 to allow you to run BubbleSort faster. You implement QuickSort and run it on your PC.

    And, yes, I have read the specification. There is very little there that has anything to do with query acceleration at an algorithmic level. The lack of respect that you have shown me by flatly accusing me of ignorance without even bothering to examine my paper simply underscores the point I made earlier in this thread--the developers at work here are too caught up in religious zealotry and clannishness to open their minds to new ideas and constructive criticism.

    Should you bother to read my paper, you will see that I am not motivated by any political calling. I honestly don't have time to use Napster, Gnutella, or any of the rest. This is merely an interesting problem. My purpose is solely to add to the discussion and encourage the P2P development community to question some of the assumptions that they have made. Judge my ideas if you care to. My motivation is simply to see P2P technology advance and contribute if I am able.
  • I didn't. Where is it?
  • Well obviously since UDP is a connectionless protocol, it solves your problem by...

    Uhh

    Uhh, oh hell it actually makes the problem worse. :)
  • Don't be naive, of course they can suppress P2P development. All that is needed is to continue to delay deployment of IPv6. With the limited address space of the current IPv4 there aren't enough "real" addresses, so vast numbers of computers are "read only" to the internet. They can act as clients but are unable to act as servers. These are computers with addresses in the ranges 10.x.y.z, 192.168.x.y, and the more difficult to abbreviate 172.16-31.x.y.

    The hope and strategy of those interests which want the internet to be the next cable TV is to set up the architecture to favor broadcasting and inhibit peer to peer connectivity. To unlock more of the potential of the internet we desperately need to force IPv6 as the norm. Both Apple and Microsoft support it in their new OSes, and Unix OSes like BSD have supported it for some time. It appears the bottleneck right now is with the ISPs, who are loath to spend money to offer something that people don't even know they will want. In fact, since ISPs are generally opposed to users running server applications at all, it shouldn't be surprising that they aren't leading the charge to update the infrastructure to enable precisely that. There are many other important characteristics of IPv6, but for the future of P2P the vastly larger address space (128-bit rather than 32-bit) is the most important.
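
    As an aside, checking whether an address falls in the "read only" ranges listed above is easy with Python's standard ipaddress module; the addresses below are just examples:

        import ipaddress

        # The RFC 1918 private ranges mentioned above.
        PRIVATE = [
            ipaddress.ip_network("10.0.0.0/8"),
            ipaddress.ip_network("172.16.0.0/12"),
            ipaddress.ip_network("192.168.0.0/16"),
        ]

        def is_private(addr):
            ip = ipaddress.ip_address(addr)
            return any(ip in net for net in PRIVATE)

        for addr in ["192.168.1.20", "172.20.3.4", "8.8.8.8"]:
            print(addr, "private (NATed, client-only)" if is_private(addr) else "public")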
  • I made a bookmark to your paper to take a look later this weekend but after noting your reference to that ignorant, and questionably motivated, paper by Jordan Ritter I don't know if it is worth my time. If the goal is to connect everyone to everyone else for every single query then any scheme that is ever proposed will have a damn scaling problem. But the point is that the gnutella protocol makes no such attempt. Do any of you actually bother to read the protocol documents? I apologize for being so peevish but I'm getting tired of so-called experts pontificating on a topic without making the necessary effort to familiarize themselves with the specific details.

    Here is a paper about scaling and the gnutella protocol written by someone who actually bothers with the salient details: Flow Control by S. Osokine [vadem.com]. I haven't finished reading it yet, but I know Sergei well enough to trust he won't go off on meaningless tangents unlike some other papers.

    The reason why you might not have gotten any comment yet is that it is an incredibly busy time for gnutella developers. There is a lot of potential in the original protocol that requires slogging through lots of code (UI in particular). If a sustained effort is not made to improve a client, one risks losing the necessary traction to achieve a scale worth addressing. Personally I want to abstract out a layer to allow for deployment of other applications on top of the gnutella protocol. But time is limited and coding opportunities infinite. Choices and compromises have to be made. Don't get discouraged if your ideas haven't received the attention they deserve yet. Maintain your web page and engage in conversations on the gdf mailing list and eventually it might get read.
  • nothing more than television with a keyboard.
    I think that sums it up. I would not, however, say that IRC and email are changing that image. IRC and email are older than the web. Some would argue that email (and Usenet) drove the Internet into the home, as thousands of university grads were willing to pay a lot for access to email after they graduated. I think it has to be something more recent. You are correct that email and IRC are the tools of online communities. However, I think it is sites like Slashdot that have been promoting communities. While Slashdot may not be a great community site, it has shown people that there is more to the Internet than being consumers.
  • by Bishop ( 4500 ) on Thursday April 05, 2001 @06:45PM (#311758)

    I like the last one in particular:

    ...the majority of computers are nothing more than glorified dumb terminals...
    As a testament to that, my main "workstation" is a K6 233. Workstation is an exaggeration. I use the 233 mostly for www and email. I have a more powerful machine, but it is for games.

    My hope is that one day, the average person won't "use" the Internet, but instead "be" the Internet.
    What do you mean by "be the Internet"? I thought at first that it meant that each person can serve information directly to others. I think that we are pretty close to that now. Many of those people who want to do so run good personal web sites. So I have decided that "to be" the Internet must be something greater. Something that the cyberpunk authors haven't predicted yet. I think that in order "to be" the Internet our interfaces (mostly www browsers) have to change completely. In my mind "to be" anything is an active role. Currently I find much of the Internet to be passive in nature. I think our interfaces encourage that.

    just some random after bed time thoughts

  • Careful now, at this rate he'll be a Slashdot author in a month and half of us will have him filtered out the month after that.


    Disclaimer: I really don't know who the heck Clay Shirky is and I haven't read much of his writing. This is just an observation.

  • but the role of the server has changed

    To a small degree, when applied to internet-based systems. But I think we will still find major server types used in businesses for many long years to come. Although it sounds nice where everyone shares with everyone, in a business this amounts to mass chaos. And consider that companies would have a harder time backing up files scattered all over their network; keeping them in a centralized place keeps the confusion down.

    The internet can change rapidly and often, but most companies don't change that often or that drastically.
  • Actually you're not entirely right, but Shirky is wronger. The IMG tag was entirely Marc Andreessen's invention. There's a sweet little exchange on the W3C list from about 1992 arguing whether the best solution for inline images was OBJECT, IMG or some other alternatives. Andreessen just rode over it all and announced IMG in Mosaic...pretty much his MO for the following five years. (No matter how much you bitch about IE, remember who started screwing with standards first)

    That said, an image doth not a GUI make, and Clay should know better.
  • Pictures are still separate files, but they were not included directly in Web pages as clickable objects until the invention of the IMG tag. Which Andreessen did. First. In Mosaic.

    -clay

  • Ahem.
    "Mosaic was much more sophisticated graphically than other browsers of the time. Like other browsers it was designed to display HTML documents, but new formatting tags like "center" were included. Especially important was the inclusion of the "image" tag which allowed for inclusion of images on Web pages. Earlier browsers allowed the viewing of pictures, but only as separate files. Mosaic made it possible for images and text to appear on the same page. Mosaic also sported a more user-friendly interface, with clickable buttons that let users navigate easily and controls that let users scroll through text with ease. Another innovative feature was the hyper-link. In earlier browsers hypertext links had reference numbers that the user typed in to navigate to the linked document. Hyper-links allowed the user to simply click on a link to retrieve a document."
    (http://web.mit.edu/invent/www/inventorsA-H/andreesen=bina.html)

    It was in fact Mosaic that created "Web interface as GUI", by making clickable pictures.

    -clay
  • There are two basic solutions for dual-NAT communication. Unfortunately neither solution is very useful for a majority of P2P applications.

    1) Use a proxy that is not behind a NAT firewall. This opens up problems all its own, which is fairly obvious.

    2) Use UDP. For a number of reasons, a lot of people hate this idea as well.

    The only hope (currently) is one of two things. First, that NAT firewall vendors will implement a suitable solution to bypass this problem (unlikely) and second, that we all get fixed IP addresses out the ass (about as unlikely).

    So, in short, break out the wallet for those beefy relay servers!

    (or use UDP :)
  • The ALPINE Network [cubicmetercrystal.com] uses a flat direct connection network for searching/discovery operations.

    There are a few tweaks which improve the efficiency of this type of network, such as a reputation/affinity value attached to peers to keep you connected to the best, while quickly filtering out the worst or dissimilar.

    The communication is multiplexed over a single UDP port and can handle hundreds of thousands of concurrent connections at the lowest layer. (higher level ALPINE connections require more overhead, and are restricted to 10,000 to 100,000 depending on user preference)

    At any rate, my point is that you can use a simple packet routing architecture like IP to accomplish a flat, large, directly connected network that is usable.

    If you want higher performance, more efficiency, and greater throughput you would need to start experimenting with some of the advanced network architectures you mention. However, the chance of such a network reaching the masses any time soon is pretty slim. :/
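
    The single-UDP-port multiplexing mentioned above boils down to keying per-peer state on the sender's (ip, port). A sketch of just that shape, in Python; this is not the actual ALPINE code, and the message handling is hypothetical:

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 5678))          # one local port shared by every peer
        peers = {}                     # (ip, port) -> per-peer state

        while True:
            data, addr = sock.recvfrom(2048)
            state = peers.setdefault(addr, {"seen": 0})   # lazily track each peer
            state["seen"] += 1
            # application-level dispatch would go here; reply over the same socket
            sock.sendto(b"ack %d" % state["seen"], addr)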
  • Ok, I should have been more verbose. Here is how it works with UDP through NAT.

    1) Peer_A opens a UDP socket. Sends a packet to a well known server, or servers, that simply send a reply that contains the source IP and port of the packet.

    2) Peer_A records this source IP and Port, as it is what the NAT gateway is masquerading its connection as.

    3) Peer_B does the same. It now has its masqueraded IP and PORT from its NAT gateway.

    4) Peer_A and Peer_B can now send packets to each other at their respective masqueraded IP and PORTs.
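
    A sketch of those four steps in Python. The rendezvous server name and the peer addresses are hypothetical, and the exchange of the two mapped addresses (needed before step 4) is assumed to happen through that same server or some other channel:

        import socket

        RENDEZVOUS = ("rendezvous.example.net", 9000)   # hypothetical well-known server

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 0))

        # Steps 1-2: learn what the NAT gateway is masquerading us as.
        sock.sendto(b"whoami", RENDEZVOUS)
        mapped, _ = sock.recvfrom(256)          # e.g. b"203.0.113.7:31337"
        print("my public mapping:", mapped.decode())

        # Step 4: once the other peer's mapping is known, send to it directly.
        # The outbound packet also opens/refreshes our own NAT entry for replies.
        peer_mapped = ("198.51.100.20", 40000)  # example values, learned via the server
        sock.sendto(b"hello from behind NAT", peer_mapped)
        reply, addr = sock.recvfrom(2048)
        print("got", reply, "from", addr)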
  • by swb ( 14022 )
    A bit harsh, but it's kind of hard to see what the p2p "revolution" is other than Napster. I mean, where else has "p2p" been a success? And even in that case, is Napster *really* a p2p application, with its centralized catalog server? It strikes me as more of a distributed system than a true p2p system with no center.
  • They recently changed their terms of service mandating that Juno can use its customers' computers for that very purpose. They can terminate your service if you don't leave your computer on 24x7 so that it can do the processing and dial in to Juno at whim for more data. The fun details are at www.byte.com/column/BYT20010222S0004 [byte.com].

    Naturally they portray this as a benevolent thing and a chance to be part of their "Virtual Supercomputing Project," which claims to be completely voluntary, despite the fact that their Terms of Service directly contradict this:

    2.5. You expressly permit and authorize Juno to (i) download to your computer one or more pieces of software (the "Computational Software") designed to perform computations, which may be unrelated to the operation of the Service, on behalf of Juno (or on behalf of such third parties as may be authorized by Juno, subject to the Privacy Statement), (ii) run the Computational Software on your computer to perform and store the results of such computations, and (iii) upload such results to Juno's central computers during a subsequent connection, whether initiated by you in the course of using the Service or by the Computational Software as further described below ... you agree not to take any action to disable or interfere with the operation of ... any component of the Computational Software.

    [snip]

    You acknowledge that your compliance with the requirements of this Section 2.5 may be considered by Juno to be an inseparable part of the Service, and that any interference with the operation of the Computational Software (including, but not limited to, any failure to leave your computer turned on at all times) may result in termination or limitation of your use of the Service.

    Happy computing! :)


    Cheers,

  • I've just read a few comments about this reply to Katz' latest ramblings. It seems a lot of people are all too eager to start frothing at the mouth over anything at all. Neither side of this debate is really that surprising.

    Katz is a tabloid journalist. He writes whatever will get the best ratings^W^Wmost comments. People respond accordingly. Now someone from the side he's attacked defends themselves, and we have the same reactions; some defend their position, some attack it.

    I fail to see how any of this is really that remarkable. So p2p is the latest buzzword. So what? So long as we have marketroids who have to make quotas or journalists with deadlines, there'll be buzzwords. And wherever there are buzzwords there will be people to attack or defend them.

    Personally, I'm going back to play with some cool bleeding edge stuff that might just be involved with a buzzword or two. I don't care, since I think it's cool in its own right.

  • The idea of a direct connection means being effectively connected directly to everyone else. This means few or no routers in between. One way to accomplish this would be a massive backbone (a bus system) to which everyone is connected. However, collisions on this bus would be unmanageable. Another idea is a token ring, only gigantic. The most efficient way to do this would be to connect consecutive addresses next to each other in net topology, which is quite restrictive and also pretty implausible. One more idea which has not been implemented currently (or at least not on a large scale) is random connectivity. Every computer would be connected to about 16 others or so (I'm guessing at this) and theoretically you get all computers connected together, but with a formless topology (there's a small sketch of this below). Routing would be quite difficult in this setup.
    No topology that comes to mind seems plausible for direct connections for everyone. Currently, the star-substar topology works well, where local nodes are connected to local routers in a star topology (a hub, or a bunch of dialup users calling in to one center) which is connected to its upstream provider in a similar setup (with an ATM bus system at the top level, but this is irrelevant). This system, though theoretically not as resilient as direct connections, keeps routing tables small and paths relatively short. The idea is to reduce complexity. Most users currently lack the bandwidth to be a user and a router at the same time! When the dream of "broadband in the home" spreads everywhere, perhaps this might be more likely, but right now current topologies satisfy. The only major obstacle is that provider-to-provider connections are not as plentiful as we'd all like. My route to AOL goes through AT&T! You'd think that with a provider my size, they wouldn't have the intermediary network.
    Yet I digress.
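
    For what it's worth, the "connect everyone to ~16 random others" idea above is easy to play with. A small Python simulation (node count and degree are guesses, as in the comment) that builds such a graph and checks reachability with a breadth-first search:

        import random
        from collections import deque

        def random_topology(n_nodes=1000, degree=16):
            links = {n: set() for n in range(n_nodes)}
            for n in range(n_nodes):
                while len(links[n]) < degree:
                    m = random.randrange(n_nodes)
                    if m != n:
                        links[n].add(m)
                        links[m].add(n)          # links are bidirectional
            return links

        def reachable_from(links, start=0):
            seen, queue = {start}, deque([start])
            while queue:
                for m in links[queue.popleft()]:
                    if m not in seen:
                        seen.add(m)
                        queue.append(m)
            return len(seen)

        links = random_topology()
        print(f"{reachable_from(links)} of {len(links)} nodes reachable from node 0")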
  • With wireless networks, all the nodes that want to be connected to each other must transmit or receive on the same frequency in order to be able to expect data (where is the data coming from? always on the same bands). This causes the same fundamental problem as collisions.
    I, however, unlike you, will not rant about your lack of thought prior to posting. I had thought about Wireless LANs when I posted, which is why I did not include that.
  • I admire your wide vocabulary. It is a true sign of your mental superiority.
    [Note for the writer to whom I am responding: The above is sarcastic. I felt the need to explain this since I did not feel you would understand it without an explanation.]
  • I strangely have a feeling that I have more of a clue than you do. If, in fact, you DO have more of a clue than I do, why don't you reveal your nature and clue me in? That would show that you had a clue, whereas currently you are showing only your capacity to flame me, not respond intelligently.
    Moreover, this brings up a philosophical point. Should everyone with an opinion post in response to something? Or only those people who think others care? Everyone should express themselves, IMHO, if nothing more than to have a question answered (or an idea corrected). What is your opinion on this? Please, this time, respond with something more substantial than one line berating my cluelessness.
  • I'm sorry to reveal the truth to you, but your presence in the slashdot community greatly decreases the average IQ. If you refrained from posting in the future, I'm sure everyone would appreciate the lack of bigotry.
  • Let me guess, you are either a bigot without a thought process or you are severely mentally retarded.
  • Actually, I just think before I post, which is why I choose proper words. This also explains the lack of expletives.
  • By the way, how are things over in the UK?
    Are 11th graders there like 11th graders here (I can play too!)
  • It is also good to know that you are a scholar. How is staffordshire? Nice to see you use a stable, well tested version of linux on your box. And it's even better to see that you run ssh but not telnet! I'm proud of you even though you are a bad person.
  • by mindstrm ( 20013 ) on Friday April 06, 2001 @03:00AM (#311779)
    It's the normal everyday sheeple that don't get this. The fact that it's true doesn't make them understand any better...
    You and I and lots of others know how the internet works.. we don't like the 'centralized broadcast' way it's starting to be used.. and don't like how people insist that p2p is something 'new'.. but think about this.

    For mom & pop jones out there.. it IS something new. Sure they could have always done it.. but are just now realizing it. To them, it's NEW. The applications are new... everything is new. So it's good to have articles like this....
  • By definition, are SETI (and distributed.net) considered to be P2P? I would have thought of them as distributed computing; then again, so is P2P...

  • once the shoe shine boys and taxicab drivers start talking about stocks, it's time to get out

    Nice, I never thought of it that way :)
  • by toofast ( 20646 ) on Thursday April 05, 2001 @04:55PM (#311782)
    P2P is a revolution in the making, and traditional businesses are trying to crush it... It's as simple as that.

    It will succeed, however, simply because it's gained enough momentum that it cannot be stopped. And because it cannot be controlled.

  • by s390 ( 33540 )
    You sound so upset that P2P systems have gained such popularity that you're no longer 37337 with IRC and ICQ, etc. I dismiss that as immaturity. Listen up and learn, maybe.

    Clay Shirky is a savvy guy, and he has a point that P2P is a good idea, albeit a sloppy one just now. The main thesis of his article is that power resides not only in centered servers (this is Sun's doomed wishful thinking, exactly), but also exists and is growing on the edges, as widespread use of easily acquired and highly capable software triggers the law of large numbers and a shift past a tipping point, to utterly overwhelm those few evil hegemonists who seek to exert centralized _control_ in order to extract artificial-scarcity-based revenues from the large mass of network-connected people. Your anger is better directed at those who charge $15 per CD, $9 per movie.
  • by s390 ( 33540 )
    Seeing what's going down and writing intelligently about it are two different things. I'm not entirely convinced you did the former, but you are convicted by your own posts of being incapable of the latter. Let me put it in simple terms for you. We're having a discussion here about the potential of peer-to-peer services at the network edge as contrasted with centralized client-server models. It's a current and rather interesting topic. Your negative posts are not advancing the conversation. Please take it outside.
  • One more idea which has not been implemented currently (or at least not on a large scale) is random connectivity. Every computer would be connected to about 16 others or so (I'm guessing at this) and theoretically you get all computers connected together, but with a formless topology. Routing would be quite difficult in this setup.

    Difficult, but far from impossible. Mesh routing - which is really what you're talking about here - is a much-studied field and pretty reasonable solutions to all of the major problems are known. Check out Routing in the Internet by Christian Huitema for a pretty good overview of the relevant theory and practice.

    IMO one of the big problems with P2P is that too many P2P implementors are either ignorant or disdainful of related work in this and other areas - usually both, which is a bad combination. I went to the O'Reilly P2P conference in SF a couple of months ago, and overall it was fantastic, but I did notice one thing. Everyone there seemed very sophisticated about crypto and security etc. but at the same time most were stunningly ignorant about routing, protocol design, performance management, and a bunch of other fields. There were exceptions, of course, don't get me wrong, but there's still way too much fad-following in the P2P community and not enough solid science or engineering.

  • That looks like a nice paper. Thanks!

  • what do you think a supercomputer is? It is thousands of CPU clustered. Only difference is that a supercomputer is pre-packaged.

    I beg to differ. One of the major things differentiating supercomputers from anything else is the presence of *huge* internal memory and I/O bandwidth. That will never be duplicated by a distributed group of machines, and so for a certain very large and important class of programs distributed computation will never be a substitute for supercomputers. Fortunately, there are enough problems out there that *do* partition very easily, so that distributed computing is still worthwhile.

  • I think one of the things people need to get straight in their heads wrt P2P is the difference between temporary and permanent roles. Obviously, if node A is always and forever a server, and node B is always and forever a client, and they're totally incapable of switching roles (e.g. neither even has the code to do so) then that's not P2P. Just as obviously, if there's not even a notion of client and server, if every node is necessarily able to perform in either role at any instant, that's about as P2P as you can get. (Note, however, that with respect to an individual transaction there is still one node acting as client/requester and another acting as server/responder).

    The real battleground is the area in between, where a node may be a server one moment, or a client the next, changing according to the needs of the network (e.g. nodes entering and leaving). Is that P2P? The P2P purists ("peerier than thou", to use Dr. Shirky's term) would say no. More practical people would say yes, or close enough to yes that it doesn't matter. One trend that more and more people are noticing is that many P2P protocols/applications are developing ideas of "supernodes" or "reflectors" or "defenders" (my candidate for stupidest term yet) that, because of their superior resources, are given additional responsibilities. In other cases, certain functions have been partly or completely centralized within a mostly P2P framework, because nobody could figure out how to make that particular piece - usually a location, searching, or indexing piece - scale within a pure P2P paradigm.

    The important thing about P2P is not "oh my god, there's a server, we must eliminate such heresy from our design!" What's important is decentralization and automatic reconfiguration, to avoid bottlenecks and single points of failure. Those are the problems we're trying to solve, remember? If the system is flexible so that work can be redistributed seamlessly from one place to handle either overload or failure, that's "P2P enough for me" even if a picture of the system at any one point in time shows some nodes in server roles and others in client roles. That generally means that each node must be capable of performing the different roles - i.e. the code must be present, the protocols must support it - but whether a given node actually does ever perform a given role doesn't matter.

  • Some of the articles I have read by Shirky are composed entirely of overhyped buzz-acronyms. I am amazed at the quote in the slashdot blurb for this article - he is speaking in English sentences, communicating an IDEA!
    impressive.
  • by superid ( 46543 ) on Thursday April 05, 2001 @05:09PM (#311790) Homepage
    P2P isn't a revolution, it's just another overhyped buzzword, being oversold to zealous people. Money is being dumped into anything that even remotely sounds "peerish". JP Morgan back in the '20s (or earlier?) said something like "once the shoe shine boys and taxicab drivers start talking about stocks, it's time to get out"...well, even my 8 year old knows about dot bombs, and P2P is next.

    SuperID
    Free Database Hosting [freesql.org]
  • by Tyrant Chang ( 69320 ) on Friday April 06, 2001 @12:32AM (#311791)
    I don't think the author truly understands the issues or the words he is using. He claims that millions of computers will be more reliable than one big computer, but what is his definition of reliable? Unless you are talking about non-serializable or non-consistent clusters of computers, the point is moot. Consider a cluster of databases...are they more reliable? No; if one of the computers in the cluster fails, the entire cluster fails because of the basic unsolvability of group membership. And if he is talking about non-consistent clusters of computers, then how will answers be guaranteed?

    Millions of CPUs doing the job of a supercomputer...what do you think a supercomputer is? It is thousands of CPUs clustered. The only difference is that a supercomputer is pre-packaged. A cluster of a million "regular" PCs simply won't be able to scale for most applications (with the notable exception being SETI@HOME), since the cost of routing information quickly overwhelms the useful information that is being passed around.

    He also claims that PCs are second class citizens and they need to be servers. But does he have any idea what this will entail? Think of the security issues...think of the privacy issues...think of the performance issues it will bring to the entire internet. Even if the cost of routing scales linearly, that still sucks because the number of computers that are being connected is increasing exponentially.

    I'm not here to dismiss p2p - I think p2p will have a great future for some applications. But I think the author needs to think hard about his statements - which I think have very little meaning at all. The article seems like too much hot-air religious fervor about p2p.
  • I really thought that his point about making PCs first-class citizens of the 'Net was the most important, and one that needs to be driven home as much as possible. I still think that true freedom for web users (who must always rely on corporate connection providers) will derive from widespread, mainstream adoption of something like Freenet. I argued this point in an article I wrote for freshmeat called the World Free Web [sourceforge.net]. I had hoped that we could jump-start that process by integrating Freenet with web browsers, effectively using Freenet as a huge, decentralized backup to the web - one that was out of any entity's control. I'm still working on getting people to work on this idea, so email me if you are interested...
  • What game is that from? It could be the next ALL YOUR BASE!

    ***
  • I agree that a proxy has its own problems, the least of which is that it is no longer P2P. I'm not certain how UDP is supposed to solve the problem. If the initiator is trying to contact a person behind a NAT, they still can't do it with UDP.

    The only solution I can see is to add a layer between TCP and IP. Call it NATCP (or NCP, if you like nested TLAs). The NATCP data would be a chain of DWORDs used to map back to a machine on the network. In the simplest implementations, it would be the IP address of the machine on the inner network, or be completely separate from the local IP address. 0.0.0.0 would be reserved for termination.

    For those that fear that this takes away from the protection that NAT gives the machines on the inner network: that is not the intention of NAT. NAT is used to give many machines access to the internet from a single IP address. Protection is provided by the firewall (which will often be your NAT server).

    Note that while I put this between TCP and IP, it could easily go on top of TCP (although it more logically goes between the two).

    For those that say, "why not just switch to IPv6", IPv6 doesn't solve this problem. ISPs will still only dole out a single IP address, because they want you to pay for each machine you connect at home (or at work). In this case, you get a single IP address, and you chain the NATCP addresses together, so even if your ISP is behind a NAT, their NATCP address gets prepended to yours. You can thus chain NAT networks with ease. This may be a better solution to the IP address shortage and allocation issues than IPv6.
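
    A sketch of how that NATCP chain could be laid out, in Python. The one-32-bit-word-per-hop layout and the 0.0.0.0 terminator come from the description above; everything else (and the idea itself) is hypothetical, not an existing protocol:

        import socket, struct

        def pack_chain(addrs):
            # Encode a chain of inner-network addresses, terminated by 0.0.0.0.
            words = [socket.inet_aton(a) for a in addrs] + [socket.inet_aton("0.0.0.0")]
            return b"".join(words)

        def unpack_chain(data):
            addrs = []
            for (word,) in struct.iter_unpack("4s", data):
                addr = socket.inet_ntoa(word)
                if addr == "0.0.0.0":            # reserved terminator
                    break
                addrs.append(addr)
            return addrs

        # A peer behind two NAT layers; the ISP's gateway would prepend its entry.
        chain = pack_chain(["10.0.0.5", "192.168.1.7"])
        print(unpack_chain(chain))               # ['10.0.0.5', '192.168.1.7']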

  • by Fjord ( 99230 ) on Thursday April 05, 2001 @06:01PM (#311795) Homepage Journal
    Until P2P comes up with a solution to the NAT problems, it will continue to suck more and more. NATted home networks are going to become more and more commonplace, as handhelds and kids' computers get 802.11b in them and AirPorts or AirStations [buffalotech.com] are sold to home consumers.
  • C'mon. It's surprising how many people, even here, have no understanding of what this is beyond file sharing. That's not much different from saying in 1992 that the net was not much good for anything more than email.

    In the 90's the net (via the web) morphed into the broadcast model. Yahoo or Cnet supplies, I receive. All about content. Imagine if all you were able to use a phone for was to listen to the news or buy something. Kind of sub-optimized.
    The P2P (yuck) stuff is for giving people the ability to communicate directly. Sometimes that communication is about content, sometimes it's just shooting the shit. The point is, they decide for themselves what they want to use it for, or if they use it at all.

    If the technology is useful, then people will figure out how to make money with it. All the bullshit about how nobody has a business model is almost totally irrelevant to anyone that does not have money invested in any of these startups.
    If the technology is useful, people will use it. If not, they won't.

    So a lack of good business models won't have much effect on adoption. Likewise, brilliant business models won't save it if the technology is useless.
  • Um, your recourse is the service they offer to you...
  • P2P is only a revolution for those who don't understand computers. You want something I have...we can trade on IRC, email...etc.
  • If there are interesting, non-warez uses for P2P file sharing that are better than server-based methods, please enlighten me!

    1. I am sure there are many terabytes of movies, music and books available that are no longer protected by copyright laws (they have been liberated [danny.oz.au]). Distribution via a server-based method might be better, more reliable, etc.; however, with no possibility for profit it would not exist.

    2. It might be possible to use some sort of P2P system to distribute the traffic from high-traffic websites more evenly around the internet. (What I am thinking about is kind of like caching, except each web browser would become a cache for machines around it... I don't know if it is possible.)

    I am sure there are more...

  • Money is being dumped into anything that even remotely sounds "peerish".

    Oh really? That's news to me. Actually there's very little money going into P2P right now. For example, early P2P company Infrasearch just sold out to Sun at a firesale price because they couldn't raise money, even with Marc Andreessen opening VC doors for them.

    Since your 8 year old knows about dot bombs, ask him/her to explain current market conditions to you.
  • In an accident of history, both of those movements were transformed in January 1984, and began having parallel but increasingly important effects on the world. That month, a new plan for handling DARPA net addresses was launched. Dreamed up by Vint Cerf, this plan was called the Internet Protocol, and required changing the addresses of every node on the network over to one of the new IP addresses, a unique, global, and numerical address. This was the birth of the Internet we have today.

    I distinctly remember having to learn the IP stack in 1981. And isn't it Vince Cerf [rl.ac.uk]?
  • That's not recourse (going to a higher authority to redress differences) that's consideration (something of value used in an exchange).
    The main thesis of his article is that power resides not only in centered servers (this is Sun's doomed wishful thinking, exactly), but also exists and is growing on the edges, as widespread use of easily acquired and highly capable software triggers the law of large numbers and a shift past a tipping point, to utterly overwhelm those few evil hegemonists who seek to exert centralized _control_ in order to extract artificial-scarcity-based revenues from the large mass of network-connected people.

    So remember this the next time your bluetooth palmphone slows down because your ISP is running their billing on it.

    Sheesh.
  • I hate to be the bearer of bad news, but PCs will never be first-class citizens on the NET until everyone either has Cable, DSL or Fiber Optic running to their house and their PC is reliably connected to the NET. That is why, up till now, PCs have always remained second-class citizens; it's not the PC's fault. Processing power is not a problem here, it's CONNECTIVITY. Imagine if your PC and everyone else's were connected to the NET reliably and 24/7, and each PC had its own unique IP address or some way of identifying it (pray for IPv6). Think of the power, the possibilities; that is where we want to be, but until we get the reliable bandwidth and permanent "on-all-the-time" connections, this is nothing more than a pipe dream.

    Nathaniel P. Wilkerson
    Domain Names for $13
  • The possibility of P2P networks has been around for ages, at least at universities and other persistently connected places. Having been introduced to the Net really for the first time upon entering college (in 1993), it didn't occur to me that my computer was really any different from anyone else's. But I think that's really the way a lot of people do think about it. Here's my little desktop machine; I use it to browse the web, send e-mail, write papers, play games. I turn it off at night, and that's that. People (non-tech people) drop their jaw in wonder when I tell them I run a webserver from under my desk. Or when I connect to my home computer from work. I am similarly in shock when they "forget to bring a file from home". But none of this is cool enough or inconvenient enough to make people change the way they use the machines.

    Enter Napster. Suddenly, everyone has a reason to think of their computer as no different from all the other computers (even if they are going through a centralized server). It becomes clear that there is great utility in being connected, and having access to other machines, both upstream and downstream. Now that the populace has gotten a taste of this, I doubt they'll go back. Napster will be re-implemented as Espra [espra.net] over Freenet [freenetproject.org], and given the much more generalized architecture, peer-to-peer networks will branch out into all kinds of new spheres of influence. I can't wait to watch it happen!

    ---

  • by Raunchola ( 129755 ) on Thursday April 05, 2001 @08:10PM (#311806)
    P2P is a revolution in the making...

    The very concept of peer-to-peer has been around since the early days (very early days) of the Internet. ARPANET was originally intended to be decentralized, in case of Global Thermonuclear Warfare (sorry, just finished watching a DVD of War Games :)), so that if one node died, the others would still be around.

    "P2P" (God I hate that stupid buzzword) is just a commercially friendly term to describe something that's already been around for 30 years. I fail to see the revolution here.

    ...and traditional businesses are trying to crush it... It's as simple as that.

    With the obvious exceptions of the RIAA and the MPAA, what businesses are trying to destroy the peer-to-peer concept? Hell, not many businesses are even getting into the concept. Why? Because, thanks in part to Napster, businesses don't see a lot of worth in the concept, unless they want to trade MP3s (or porn or movies). Granted, Napster is moving along to a subscription-based service, but there are still no guarantees that, in the end, it'll be successful. Maybe if someone develops a peer-to-peer service (yes, I'm aware of Freenet) that isn't being utilized by people trading Metallica MP3s, Jenna Jameson pictures, and Quicktime files of Gladiator, then maybe one of the big players will jump in.

    As I see it, "P2P" (Did I mention that I fucking hate that buzzword?) is just a fad. The concept already exists, people; just ask the people who worked on the ARPANET. Giving the concept a hip new acronym and a few evangelizers doesn't make it any bigger of a revolution than it already is.

    --
  • Did you post this to the decentralization list? I don't think I've seen it there.
  • I just had to check that article out. How can anyone legally be allowed to use your property for their own profit, without any recourse to you? Surely this can not stand up legally, can it? Obviously the only way anyone could get away with that is to do it quietly. Correct me if I am wrong, but I understand TOS do not overrule common law rights, either expressed or presumed?

    So, say I connected my laptop, which carries a legal notice forbidding non-corporate usage, and I use a Juno account. Would that mean Juno is legally liable for illegal access to my machine?
  • If you have an IP address and bandwidth, you're already a citizen. Isn't P2P just trying to make the internet into another internet?
  • I'm sure this idea has been thought of before, but why don't we have an entirely different network, alongside the Internet, dedicated just to P2P services? Maybe have the traffic split at our ISP to either network. It would solve a lot of issues such as bandwidth concerns.

    --
  • I would even reason that "free" ISPs such as Juno, NetZero, and others could make use of this technology to pay for their service instead of the banner ads that they currently sport

    Juno is already considering using distributed computing to pay for their "free" internet service.

    ---
    The AOL-Time Warner-Microsoft-Intel-CBS-ABC-NBC-Fox corporation:
  • SETI, distributed.net, and others of that ilk are distributed computing. Napster, Freenet, Hotline, etc. are distributed storage. Which of these is considered P2P? Both? I would imagine that distributed.net would be kept in the client/server model until users can easily submit their own calculations to be run on the net. SETI (AFAIK) allows you to use a radio tuner card to add data to the search, but you can't use SETI for your own purposes, only to search for little green people.

    On a completely different note, P2P is an entirely new method of data havening (a la freenet). I would challenge the admin of any college campus to try to get a somewhat popular mp3 or zip off of a resident network. People have been doing this in colleges since the PC became prevalent in the home.
  • by babykong ( 163360 ) on Friday April 06, 2001 @03:54AM (#311813) Homepage
    Og has flint

    Zog has shells

    Og trade Zog

    This is the most natural model, and very old.

    Then Og moves other side of mountain. Og and tribe appoint a representative to trade with Zog and tribe.

    Then smart caveman becomes professional representative, calls self merchant.
    Many merchants ally and become company.

    Og and Zog get dsl. Don't need Merchant anymore.

    Everything back to normal.
  • It stands up fine; if you want to connect to their network, that's your choice.
  • This is an important point. I am beginning to believe that B2B is a bastardization of the P2P concept in that the first-class citizens retain their preeminence, and the 'fringe users' stay on the fringe. Napster (the only successful example of P2P yet, IMO) changed the way we all looked at our computers. Suddenly we had a reason to leave the comp on all night, and it enriched our lives without costing a cent. Obviously the corporations could not have this and had to quash Napster immediately.

    In my opinion, there is a war going on between people and corporations over the use of, and rights inherent in, the Internet. People have no clout to stop the corps from doing whatever they want with the net (except for thousands and thousands of posts, which sometimes actually works), but corps, with their gaggle of lawyers, are quick and eager to point out and punish those who take the net and use it in innovative, if costly, ways.

    What started out as 1) a wonderful system of free information dissemination has devolved into either 2a) a way to harness each and every transaction into a money-making proposition, or 2b) a way to get really really neat stuff for free.

    Of course, it will end up as neither, and once again the only people who will realize the full potential of the net will be lawyers, as the net becomes the equivalent of modern television. I hope I'm wrong, but P2P may be reduced to nothing because there is no money to be made without that broker between every transaction...
  • No, IRC and email require me to manually:

    1. Find you
    2. Ensure you have what I want
    3. Request you to send it to me

    This is a very laborious process. The point of computers is that they do these processes for us. The new "P2P" systems are automating these steps away. Older systems don't do that.

    So, P2P is probably the wrong name, but I can't think of a better one :) The point is, the automation of the three steps is a real revolution.
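
    Just to illustrate, here's a rough sketch of those three steps being automated (in Python; the index address, message format, and transfer protocol are all made up for the example, not taken from any real client):

        import socket

        INDEX_HOST, INDEX_PORT = "index.example.net", 8888  # hypothetical central index

        def find_peers(filename):
            """Step 1: ask the index which peers claim to have the file."""
            with socket.create_connection((INDEX_HOST, INDEX_PORT)) as s:
                s.sendall(f"SEARCH {filename}\n".encode())
                reply = s.makefile().readline()
            # Assume the index answers with whitespace-separated "host:port" entries.
            return [entry.split(":") for entry in reply.split()]

        def fetch(filename):
            """Steps 2 and 3: try each peer that claims to have the file and request it."""
            for host, port in find_peers(filename):
                try:
                    with socket.create_connection((host, int(port)), timeout=5) as s:
                        s.sendall(f"GET {filename}\n".encode())
                        return s.makefile("rb").read()
                except OSError:
                    continue  # peer gone or refusing; try the next one
            raise FileNotFoundError(filename)

    The details don't matter; the point is that the user never performs any of those steps by hand.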

  • For Napster, at least, I find ipmasqadm portfw works very well, i.e. port redirection from the firewall to the machine running the napster software.

    This works best in a quasi-P2P system like Napster, which employs central servers, so you can let everyone know which redirected port to connect to. If you have multiple participating machines behind your firewall, you allocate them their own ports and configure the software to give out that port number.

    In a fully P2P system, which would require a fixed port for incoming transactions, you can only have one participating machine behind the firewall. The participating machine would then act as a server for the P2P system for the rest of the network. For home systems, this isn't a problem, unless you want to decentralise your family/flatmates too :)
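
    To make the redirection idea concrete, here's a toy userspace forwarder (purely an illustration; ipmasqadm does this at the kernel level, and the internal address and port below are made up). It accepts the fixed incoming port on the gateway and relays bytes to the one participating machine inside:

        import socket
        import threading

        LISTEN_PORT = 6699               # assumed fixed incoming port; purely illustrative
        INTERNAL_HOST = "192.168.1.10"   # hypothetical participating machine behind the gateway
        INTERNAL_PORT = 6699

        def pipe(src, dst):
            """Copy bytes from src to dst until either side closes."""
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                src.close()
                dst.close()

        def handle(client):
            """Relay a new incoming connection to the internal peer, in both directions."""
            upstream = socket.create_connection((INTERNAL_HOST, INTERNAL_PORT))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

        def main():
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("", LISTEN_PORT))
            server.listen(5)
            while True:
                client, _ = server.accept()
                handle(client)

        if __name__ == "__main__":
            main()

    With multiple participating machines, each one simply gets its own LISTEN_PORT on the gateway, as described above.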
  • I see what you are saying. Maybe I'm missing something.

    A 'fully peer to peer system' would require that NO system is 'enumerated on the network in some fashion'. I presume you are implying some kind of central database of participants in the peer-to-peer system.

    Without such a central database, there are two options for joining the system:

    + You try out candidates to see if they are participants

    + Someone tells you about a participant via some offline method

    I was thinking along the lines that, in the first case, it would be considerably more efficient to only have IP addresses to 'try', rather than IP addresses and port numbers (see the sketch at the end of this comment).

    In the second case the system becomes 'members only'.

    Hmmm. I think I'll go and read the Freenet documentation again....
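
    (Here's the rough sketch mentioned above, just to show why a single well-known port makes the 'try candidates' option cheaper. The port number and handshake are invented for illustration; this is not how Freenet actually bootstraps.)

        import socket

        WELL_KNOWN_PORT = 7070  # assumed fixed port; a real system would define its own

        def find_participant(candidate_ips, timeout=2.0):
            """Return the first candidate that answers a (made-up) HELLO handshake."""
            for ip in candidate_ips:
                try:
                    with socket.create_connection((ip, WELL_KNOWN_PORT), timeout=timeout) as s:
                        s.sendall(b"HELLO\n")
                        if s.makefile().readline().strip() == "WELCOME":
                            return ip
                except OSError:
                    continue  # unreachable or not a participant; keep trying
            return None

    Without a fixed port, every candidate would be an (address, port) pair, and the space to search gets much larger.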
  • I mean, where else has "p2p" been a success?

    Games like StarCraft work on an almost entirely peer-to-peer basis. The server's only role is giving the peers a place to match up with each other, much like Napster. And I think StarCraft qualifies as a commercial success [gamespy.com], partly because of the online multiplayer scene.


  • What game is that from? It could be the next ALL YOUR BASE!

    I doubt it... but it does still crack me up. Anyway, the game is called BattleRangers, and I found the image at zanyvg.overclocked.org [overclocked.org]. Most of the stuff featured there is boring, but I should give credit since that's where I stole the image from. I put the image on my roommate's computer so I could get a shorter URL that would fit within the 120-char sig limit.


  • J.P. Morgan back in the '20s (or earlier?) said something like "once the shoe shine boys and taxicab drivers start talking about stocks, it's time to get out"

    Actually that was Joseph Kennedy, who sold in the summer of 1929, supposedly because a shoeshine boy gave him a stock tip. And the story is probably totally apocryphal - Kennedy himself denied it [nctimes.com].


  • by wytcld ( 179112 ) on Thursday April 05, 2001 @05:14PM (#311822) Homepage
    Shirky says "The invention of the image tag, as part of the Mosaic browser (ancestor of Netscape), brought a GUI to the previously text-only Internet in exactly the same way that, a decade earlier, Apple brought a GUI to the previously text-only operating system." History simplified to the point of almost being wrong does us no service. The image tag was neither a GUI nor invented for Mosaic. It came out of CERN, where the physicists wanted to put illustrations in their documents. Mosaic was just an early implementation of a browser that could run in existing GUI OSes and that implemented the CERN standards.

    A few paragraphs later his theme is "big, sloppy ideas." Yeah, fine: he gets big and sloppy about his ideas of the past, then parallels that distortion to a present he doesn't begin to define, and this passes for analysis? In a really vague way he may be waving his arms in the right direction, but why are we even trying to listen to someone whose prattling skirts the edge of intellectual dishonesty? It's like those old "make millions from the Internet" spams. Sure, you could make millions back in the day, but not by following the advice in those missives. It's because the likes of Shirky have been listened to by too many VCs and editors that the tech economy is so shaky now; false intelligence is more dangerous than ignorance.
  • NAT effectively hides a network of computers behind a single machine. Computers out on the Net can only see the NAT machine, so they can't connect directly to anything else on the local network (the machines behind the NAT server have to initiate all connections). Both Napster and Gnutella have solutions to this (and I imagine every other P2P system does too). Basically the idea is to use a machine that already has a connection to the desired target to request that the target initiate a connection. In the case of Napster the central server can do this; in the case of Gnutella a 'push' request is issued to the network at large. For Napster this works quite well, but Napster is entirely P2P. In Gnutella's case it's less effective.
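
    A minimal sketch of the 'push' idea (the message names and framing here are invented; Gnutella's real PUSH descriptor is binary and more involved):

        import socket

        def relay_push(target_conn, requester_host, requester_port, file_id):
            """Run on a machine that already has a connection to the NATed peer:
            forward a request telling it where to connect back to."""
            target_conn.sendall(f"PUSH {requester_host} {requester_port} {file_id}\n".encode())

        def handle_push(line):
            """Run on the NATed peer: open an *outbound* connection to the requester
            (outbound connections pass through NAT) and upload the file over it."""
            _, host, port, file_id = line.split()
            with socket.create_connection((host, int(port))) as s, open(file_id, "rb") as f:
                s.sendall(f.read())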
  • For Napster this works quite well, but Napster is entirely P2P

    I meant that Napster is not entirely P2P, of course.

  • Whether the average Joe would actually run these services on his home PC is a totally different matter.

    Don't get me wrong, but just bundle such services with the latest version of Windows and turn them "on" by default. That's how a lot of things considered "standard on a PC" became "standard". Joe User won't even notice.
    Actually, let's be honest: Linux distributions do this all the time too. I have yet to see a Linux distribution that doesn't come with a webserver enabled by default (even the small 150 MB distribution I use has one in it, which I disabled, of course).

  • now that joe fucknuts can do it, it's a revolution

    Not to be rude, but I would say that that defines a revolution for almost anything. Once anyone can publish books (printing press) or own a telephone, etc., it does become a sort of revolution, eventually changing society. Of course, plenty of stuff becomes super widely available but doesn't really change our lives, but...
  • I have a "server" connected to my DSL line; this 486/66 is WAY less powerful than even the bottom end PCs available today. What gives it the chance to be useful is the connection. (And Linux + Apache of course ;o)
  • Tim Berners-Lee explicitly states in his homepage FAQ [w3.org] that Andreessen's Mosaic was the first browser to display inline images.
  • Jordy, as the level of interconnection increases, isn't there a concurrent increase in the risk of pandemic viral infection?
  • Because, thanks in part to Napster, businesses don't see a lot of worth in the concept, unless they want to trade MP3s (or porn or movies).

    I guess NextPage is thinking EXACTLY the opposite.
  • Unfortunately, the media is associating P2P with Napster and stealing.

    I dunno - for all the high-brow talk, is there another use for distributed file sharing? Gnutella and Freenet, as far as I can tell, consist of illegally shared MP3s and the same porn that's been passed around Hotline for years. (How many copies of gym.mpg do I need?) And that's besides being slow, buggy, and unstable.

    To my mind, if you have text documents you want to share, you put them on a web page. If you want to distribute demo MP3s for your band, a web page is definitely the way to go. If you're a political dissident, it seems to me that sharing from your own computer is the last thing you'd do; you send the files to someone in a free country to make them available, on a web page or FTP site.

    Am I missing something? Of course, everyone here had all sorts of pious explanations for what they were using Napster for. Uh, yeah right.

    If there are interesting, non-warez uses for P2P file sharing that are better than server-based methods, please enlighten me! I'm going to bed now, though, so flames and accusations of being a paid RIAA agent will go unnoticed (unless you're Freddy Krueger).

    Unsettling MOTD at my ISP.

  • I wonder if the above are really P2P? They all rely on one or more servers to operate and can't do without them. That's why Napster is attackable: there is one central point, the Napster server, which tells you the IP of a file server.

    I did not look up the definition, but for me _true_ P2P would be a network of computers without the need for a dedicated central server to accomplish the task they are up to.
  • SETI recently attended a P2P conference hosted by my company. From what I heard (I could not attend), they consider it to be P2P.
  • Unfortunately, the media is associating P2P with Napster and stealing. On the other hand, the little man loves his free music, so he might get Freenet! It's a double-edged sword, folks!
  • I fail to understand the hype about this "new" peer-to-peer stuff. In 1983/84, I set up a group of Apollo workstations. There was no server. Symbolic links allowed the hard disks on the workstations to act like one big disk, so everything appeared local to everyone. That was over 17 years ago. What's "new" about it?

    Oh, goody. PCs won't be "second-class citizens" anymore. Hmm. In 1996, the PCs (mostly Pentiums, but including a 486 or two) in my small company were all connected to the net. One of the Win95 machines ran an IRC server. Linux boxes ran FTP and various Java client and server apps and bots. The only tasks reserved for the "big servers" (Suns) were DNS, RealAudio (which I later ran from a Pentium as well), and the main Web site. Everything else was distributed, and mostly to PCs.

    So it's not new, and it's not innovative. What's the big deal?

  • by djanossy ( 320994 ) on Thursday April 05, 2001 @07:15PM (#311844)
    If you look at the Internet as our collective brain, with each node representing a neuron (a broad metaphor, I know, but go with me for a second on this), then client-server is like the motor cortex: you can map out a direct link between certain neurons and a particular service of your body, the use of your left leg for example. Damage a particular part of your motor cortex, and your leg won't work. Turn off the Slashdot server, and nobody will see the site.

    But things like memory and thought in the brain are more distributed, just like a P2P system: even if you kill off a bunch of the neurons, the system can still function and even relearn what it has lost. Even if you kill off half the Gnutella nodes, you will probably retain more than half the info that was accessible on them.

    Since distributed processes like thought and memory are considered "higher functions" of the brain, I would state that P2P does not suck and is in fact like a higher function of the Internet (when you look at the net as an extension of our brains), and so of course it is only the intelligentsia (the frontal cortex) that will find it of much use. So be it! If the rest of the world can't figure it out or doesn't want to, that's fine: more bandwidth for me!
  • amount Chaucer got paid to write The Canterbury Tales = $0.00

    amount Van Gogh received from sales of all but 1 painting = $0.00

    copyright fees collected by Bach = $0.00

    ...amount of the St. Matthew Passion which is "borrowed" work from other composers = 1/5

    amount they received in sampling/licensing fees = $0.00

    (amounts converted to USD and adjusted for inflation)


  • You know, you're right, it is stealing. Guess what else? I don't care. I don't say "yesterday I got two songs off of Napster" to my friends; it would probably be more like "yesterday I stole two songs off of Napster, fuck the RIAA!" Fair Use is bullshit, so what? The corporation is the most disgusting invention in the history of mankind. It allows us to escape blame for our actions and has no compassion for individual humans. I'd never steal from a person, a local store, or even a large but not corporately owned company, but a corporation? Every single time I get the chance. I don't think of this in terms of right and wrong, because I don't believe such things are static and unchanging; I simply no longer care about the welfare of corporations. Oh, and by the way, calling us dorks doesn't show much in the wittiness department.

