The Internet

P2P in 2001

nihilist_1137 writes: "ZDNet is reporting that P2P is becoming more widely used in business. "It's now over two years since a few underground song-swapping services put peer-to-peer technology firmly at the forefront of the IT agenda... A look back at some of the more significant P2P stories of 2001 shows that -- although not a new concept -- P2P is starting to assume a very important role in the corporate space, as tech giants scramble to succeed in this new market."" Hard to believe that the Napster battles have been going on for two years now.
  • Frankly, it's about time this technology has been put to good use (not saying that Napster & others are not a good use :) )
    These programs can be invaluable for companies that work on big projects and need to collate information from a wide variety of sources. The Kazaa network has shown just how vast that information can be...
    Anyway, that's my 2 cents. Back to downloading movies...
    • p2p is good technology, but from a management standpoint, p2p is missing one major thing that the client-server model has: control.

      Even if authentication is there, and logging is there, management (at least the ones I have run into) like the idea of a central, impenetrable bastion of information, with big pretty accounting graphs. That is a big barrier to bringing about change in anything other than a purely technology-oriented business.

      AWG

      4 out of 5 doctors think that the 5th one smells.
      • The issue is, how do you prevent some dumb ignorant chair warmer from publishing a confidential company document to the whole world, eh?
  • Intel's P2P library (Score:5, Informative)

    by Pinball Wizard ( 161942 ) on Sunday December 30, 2001 @08:39AM (#2764534) Homepage Journal
    I didn't know that Intel had released a P2P library until I read this article. There's no link to the library from the article, so I looked it up. Turns out the library is released under the BSD license and is hosted on SourceForge [sourceforge.net].
  • I've discovered Gnutella and found it cool: finally, Dragonball GT episodes (which won't ever be available in my country in the near future - about the next fifty years). Way better than Napster, which I once tried after hearing about it.
  • So now distributed computing has this neat new "p2p" hax0r acronym, and the fact that you can write distributed applications if you've got networked computers is news. That Ray Ozzie is planning on writing a distributed application is news. That the army wants to hook GIs together in a wireless network is news.


    Sorry, but does anybody remember CORBA? DCOM? Or any of the zillion other frameworks for writing distributed applications that've been around for over a decade? A whole freaking lot of corporate applications ALREADY DEPLOYED are distributed applications that, in some way or another, are "p2p" applications. The one I'm personally most familiar with is Tivoli, which was a distributed app with installed clients interoperating via a distributed framework as far back as 1992. Does that make us Tivoli people futuristic super-geniuses? No, it doesn't -- because distributed apps have been on people's minds since networking was born. I mean, duhh. But hack together something that lets people swap ripped songs, and *poof* it's a "new wave".


    And does anybody else feel like we've been hearing about soldiers wired together for years and years (and years)?

    • So now distributed computing has this neat new "p2p" hax0r acronym, and the fact that you can write distributed applications if you've got networked computers is news.

      Well said!

      Watch now as the corporate giants wake up, co-opt the methodology, recast it as their own innovation, and file patent suits against any and all they perceive as transgressing their IP.

      Watch the partnerships a la Groove Networks foment:
      http://www.microsoft.com/presspass/press/2001/oct01/10-10GroovePR.asp [microsoft.com]
      and then watch as any work-alike initiatives are crushed in the courtrooms of America.

    • How can this parent not be flamebait? The issue here is not distributed computing. Distributed computing implies that there will be more than one CPU in more than one PC, working together to accomplish a single result. P2P doesn't really match this pattern at all. P2P does two things on your box: it sends and it receives. Now, I'm not trying to oversimplify the deal, but short of their own very special search/connect algorithm, these are glorified FTP server/client combo apps. But they are not truly "distributed".
      God bless 'em for what they are, and what they have allowed me to see and hear. The reason this type of application is newsworthy is because it is the absolute fastest, easiest, and most reliable way for me to access content on the net that I can't find via other channels. These apps make big news because they fell into the laps of everyday citizens, and opened up a whole new world for them.
      pointym5, do you see what I mean? No, wait. I don't care. You're too elite for these things; I don't know why I bother.
      • Distributed computing implies that there will be more than one CPU in more than one PC, working together to accomplish a single result.


        That's certainly one application, but it's by no means the only meaning of "distributed computing". I think the basic idea is that the application code exists around the network on the machines that want/need/request services.

        A network of "simple" point-to-point file transfer agents is not really that simple.

        • if that is your idea of distributed computing, then that includes ANY software (not even related by software or use type) interacting with another piece of software. Not only an FTP client connecting to an FTP server, but a telnet client connecting to an HTTPD server.
          The technical term refers to one application using clock time on two or more processors, in two or more physically separate machines.
          That being said, while I disagree with your usage of the term, it's a free country, and I don't want to impose any view held by any kind of consortium on your vocabulary...
          About the term(s) and its meaning I digress, but as this story was about P2P, I have to agree with the moderation.
          • if that is your idea of distributed computing, then that includes ANY software (not even related by software or use type) interacting with another piece of software.


            No, it doesn't. For example, I don't consider a simple FTP client connecting to an FTP server based on a user-supplied address to be a distributed application. But an automated file downloader that operates off a local preference database and that locates its "servers" by using some search algorithm it runs itself, well that's in the gray area. Of course if the app is able to serve as well as act as a pure client, there's no discussion.
      • The reason this type of application is newsworthy is because it is the absolute fastest, easiest, and most reliable way for me to access content on the net that I can't find via other channels. These apps make big news because they fell into the laps of everyday citizens, and opened up a whole new world for them. Amen... Surprised anybody missed that?
  • So, if Napster has been going for two years, how long have *actual* Peer-to-peer programs been around (rather than the old server-client ones like Napster which have been around for ages)?
  • yeah (Score:3, Funny)

    by Sk3lt ( 464645 ) <pete@adoomedmarinTOKYOe.com minus city> on Sunday December 30, 2001 @10:25AM (#2764656)
    Yeah, P2P is pretty cool. A friend of mine has started a web-design business with someone in America (he lives in Australia) and they chat through ICQ (I know it's not too good, but it allows them to get in touch at a small cost). The guy in America does the graphics while my friend does the templates and programming, and the business is going pretty well.

    It goes to show you that without P2P software we wouldn't have as many online businesses.
  • P2P has caught on in a big way since 2000. It has grown from the closed-protocol, quasi-P2P Napster to a number of file sharing applications. The most popular & reliable seems to be Gnutella. The most attractive part is that free (as in speech) Gnutella clients are available.

    The HTTP protocol & Push facilities in Gnutella are great for the firewall-ridden. With a search engine on the web (find it yourself), we can download shared Gnutella data even with a plain old browser. This feature of the Gnutella protocol (backward compatibility) allows it to bypass the toughest restrictions in corporate gateways, including firewalls and proxies.

    I for one have used gtk-gnutella. That stuff just rocks! Win32 guys also have a free (again, as in speech) client in Gnucleus. All this leads to one small point: P2P is here to stay.
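
    The browser trick works because classic Gnutella file transfers are just HTTP GETs against the servent's listening port. A rough sketch of the idea in Python; the host, port, file index, and filename below are all made-up placeholders, not values from any real servent:

        import urllib.request

        # Hypothetical servent address and shared-file details from a query hit.
        host, port = "192.0.2.10", 6346            # 6346 is the customary Gnutella port
        file_index, file_name = 42, "example.ogg"

        # Classic (v0.4-style) Gnutella transfers are plain HTTP:
        #   GET /get/<file index>/<file name> HTTP/1.0
        url = f"http://{host}:{port}/get/{file_index}/{file_name}"
        with urllib.request.urlopen(url) as resp, open(file_name, "wb") as out:
            out.write(resp.read())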
  • IM usually != P2P (Score:2, Interesting)

    by mikey573 ( 137933 )
    "One area of P2P that saw plenty of development in 2001 was instant messaging."

    Yahoo! IM, ICQ, AIM, etc. are not P2P. They are pretty server-centric systems.

    I think I'm going to go try Jabber [jabbercentral.org].
    • Yahoo! IM, ICQ, AIM, etc. apparently do allow P2P when doing file transfers.

    • I can't speak for the other IM software, but AIM is definitely P2P. Although it connects to a server to notify you of buddies coming online, once you're happily typing to a buddy, you can "directly connect" and bypass the central server altogether, and even sign off if you so desire. The P2P capabilities of AIM include sharing out directories, connecting using VoIP, and sharing images and sounds using the HTML interface.
  • Since when is Napster P2P?
    • Napster may not be P2P for the initial log-on or searches, but all the song downloads are done using a P2P connection.

      If Napster hosted any songs on their system it would have been taken down an awful lot faster!
  • by Dr. Awktagon ( 233360 ) on Sunday December 30, 2001 @12:12PM (#2764989) Homepage

    In the old days, our computers talked to each other. I send you a mail, and my VAX sends it to your Sun. Then, everybody put a PC on their desk, and everything was centralized. I send you an email, it goes to my mail server, then to your mail server, then to your computer.

    Well, now we're back again! Imagine that! Bring out the VCs! Bring out the patents!

    I predict by 2005, we'll see a new form of P2P that uses a Central Peer for maximum performance. Get this folks, we all know how great P2P is, but sometimes it can be inefficient. What if your peer is down? Why not forward your data to a Central Peer, which is a beefy computer that can handle lots of data, and let it worry about the details? So your computers are on the Edge, and the big computer is at the Center of a big conversation. In fact, the Edge computers don't even have to talk to one another, they can just communicate with the Central Peer.

    I dub this exciting new invention: Center/Edge computing. I have a patent, and lawyers.

    Yawn.

  • by Anaplexian ( 101542 ) on Sunday December 30, 2001 @12:23PM (#2765018) Journal
    The folks at OpenCola [opencola.com] have thought up a really cool use of P2P - solving a website's bandwidth problems. The technology allows websites to send parts of a large file to individual users, and then each user uses P2P to get the rest of the file. I think it's a clever way to reduce net congestion. No wonder they're one of Fortune's 25 cool companies of 2001.
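
    The general swarming idea looks roughly like the sketch below. This is not OpenCola's actual protocol, just a toy illustration with invented helper names: the origin publishes a manifest of chunk hashes, seeds the chunks out to early visitors, and any later client rebuilds the file from whichever peers hold the pieces, verifying each chunk as it arrives.

        import hashlib

        CHUNK = 64 * 1024  # arbitrary 64 KB chunk size

        def split(data: bytes):
            """Split a payload into chunks; return (ordered hash manifest, chunk store)."""
            chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
            manifest = [hashlib.sha1(c).hexdigest() for c in chunks]
            return manifest, {h: c for h, c in zip(manifest, chunks)}

        def reassemble(manifest, peers):
            """Fetch each chunk from whichever peer has it, verifying the hash."""
            out = []
            for h in manifest:
                chunk = next(p[h] for p in peers if h in p)   # first peer that has it
                assert hashlib.sha1(chunk).hexdigest() == h   # tamper check
                out.append(chunk)
            return b"".join(out)

        # Toy demo: the origin seeds chunks to two peers; a new client rebuilds the file.
        manifest, store = split(b"x" * 200_000)
        peer_a = {h: store[h] for h in manifest[::2]}    # even-numbered chunks
        peer_b = {h: store[h] for h in manifest[1::2]}   # odd-numbered chunks
        assert reassemble(manifest, [peer_a, peer_b]) == b"x" * 200_000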
    • Having each user cache part of a web site they visit _sounds_ like a good idea, and technically it makes sense, but the economics and psychology of it are kind of iffy. Economically, you, the end user, would be providing a service that you wouldn't get paid for, and wouldn't receive any real benefit from. Now, you might argue that while you wouldn't benefit from helping, you _would_ benefit from others helping, but that's not really true. A website cached on a home pc would be slower to access, not faster, than one on a central server, even if it was a slowass server. All that you are doing is lowering the cost for the website owner. And you're not being paid for it. Psychologically, most people would also be uneasy serving content that they hadn't personally validated, so people would only volunteer to do this for sites they liked and approved of.

      Finally, OpenCola's economics also look a bit iffy. Who pays for Swarmcast? I don't think that the end users will, since they don't get anything. That leaves the website owners. And it seems like those who need it the most, e.g. small university sites that get slashdotted, would be least able to afford it.
  • Preposterous. (Score:5, Informative)

    by jordan ( 17131 ) on Sunday December 30, 2001 @12:53PM (#2765108) Homepage
    P2P is an abstract model for communication between 2 or more points. It exists in our phone switching networks (~100 years), Internet (~25 years), most common network-based software (~20 years), etc. Does your company use sendmail, or maybe even Exchange? SMTP is a P2P transport. Perhaps you read news? NNTP is a P2P transport.

    The notion that Napster (or any other file sharing system) can lay claim to any part of the P2P phenomenon, aside from raising awareness, is absolutely ridiculous. The notion that P2P is just now starting to gain a foothold in businesses is fiendishly drug-induced.

    The hype still continues. Ignorance pervades. What they really mean is "distributed", and even then most reporters are still talking out of their asses.

    --jordan
  • The biggest problem with the conventional P2P programs (Napster, Gnutella, and the like) is they may sort by speed, but they never sort by location. These programs wreaked major havoc on university networks, but they didn't have to. Chances are that 95% of the data transferred could have been done inside the college network without ever hitting the upstream. Certainly, it could have caused bandwidth issues internally, but it was doing that anyway.

    This is relatively simple too. Just measure hops. Find out where the backbone routers are, then separate out any peers that sit inside that boundary, and give priority to those.
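
    One crude way to approximate "prefer local peers" without real hop measurements is to rank candidates by how many leading address bits they share with your own address. The sketch below is an invented example, not code from any actual client; the addresses are placeholders:

        import ipaddress

        def shared_prefix_len(a: str, b: str) -> int:
            """Number of leading bits two IPv4 addresses have in common."""
            diff = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
            return 32 - diff.bit_length()

        local = "10.5.7.20"                                  # our own (campus) address
        peers = ["10.5.9.3", "203.0.113.77", "10.200.1.4"]   # candidate peers

        # Try the most "local looking" peers first.
        peers.sort(key=lambda p: shared_prefix_len(local, p), reverse=True)
        print(peers)   # ['10.5.9.3', '10.200.1.4', '203.0.113.77']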

    -Restil
    • Re:The biggest flaw (Score:3, Interesting)

      by Jordy ( 440 )
      Actually... Napster's backend can sort by network distance (we have the ability to determine the network distance between two arbitrary IPs in log(N) time), but we only enabled it for Internet2.

      So for example, we know Sprint peers with UUnet, and so Sprint users would see Sprint users first, UUnet users next. Doing it at the AS level is far easier than attempting to map the actual hop distance between every arbitrary pair of points on the Internet.
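
      The AS-level idea can be illustrated with a toy lookup. This is not Napster's actual backend; the IP-to-AS map and peering table below are invented. The point is just the ordering: same AS first, directly-peered AS next, everything else last.

          # Invented IP -> AS map and peering table, for illustration only.
          ip_to_as = {"192.0.2.1": "SPRINT", "192.0.2.2": "SPRINT",
                      "198.51.100.9": "UUNET", "203.0.113.5": "OTHERNET"}
          peers_of = {"SPRINT": {"UUNET"}, "UUNET": {"SPRINT"}}

          def distance(my_ip: str, their_ip: str) -> int:
              """0 = same AS, 1 = directly peered AS, 2 = everything else."""
              mine, theirs = ip_to_as[my_ip], ip_to_as[their_ip]
              if mine == theirs:
                  return 0
              if theirs in peers_of.get(mine, set()):
                  return 1
              return 2

          me = "192.0.2.1"
          candidates = ["203.0.113.5", "198.51.100.9", "192.0.2.2"]
          candidates.sort(key=lambda ip: distance(me, ip))
          print(candidates)   # ['192.0.2.2', '198.51.100.9', '203.0.113.5']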
  • Every indication is that the next-generation internet is going to be P2P, probably a Freenet-type model. If we have tough copyright enforcement, it will be at odds with this.

  • by JamieF ( 16832 ) on Sunday December 30, 2001 @03:37PM (#2765574) Homepage
    If you define P2P to include distributed computing, where a central server tells a bunch of nodes what to do (which seems outside of the scope of P2P to me), then wow, P2P has really taken off. Heck, with that definition, Pixar uses P2P render farms for their movies.

    On the other hand, if you define peer-to-peer in a more pure sense, where each node is a peer, doing its own thing and maybe using one or more directory servers or repeaters to find others, then Napster looks like the only winner I can think of, and it's clearly dead now that it's gone legit. Most IM apps look like client-server to me, although they have some P2P aspects such as file transfer... they're not any different from IRC + DCC, really.

    I interviewed with a couple of local (SF) "P2P" companies (really internet-based distributed computing platform vendors) a year ago, and they were having trouble selling their concept even then. Yes, there are CPU-intensive tasks out there that companies would pay to accomplish, but they tend to operate on a lot of data, and that data tends to be sensitive/confidential. One company was refocusing on internal deployments only - using corporate desktops inside the firewall to run distributed tasks at night. That mostly solves the bandwidth and sensitivity issues, although in a WAN environment you might not be able to use remote LANs if the pipe to those LANs is too small for the amount of data being crunched.

    It's hard to think of too many true P2P applications. P2P architectures that don't include central directory servers or reflectors tend not to scale - think back to old LAN protocols that didn't scale well in a WAN context. It's the same problem, but at a higher level. The more scalable protocols use some form of central servers or at least a group of more centralized peers (routers, PDCs, whatever) to find one another. Pure P2P doesn't scale due to network inefficiencies (think Gnutella without repeaters); pure client-server doesn't scale due to node scalability limits. A hybrid such as Napster or the WWW scales very well, though. (The whole web isn't on one big server...)

    With appropriate signatures, open-source software distribution might be a good P2P application. Instead of hunting around for a fast mirror, why not grab it from a peer, provided the signature is valid? Only the signature has to come from the main server (or a mirror).
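
    A sketch of that signed-distribution idea, with hypothetical URLs throughout: fetch only the small checksum from the project's own trusted server, grab the big file from whichever peer answers, and accept it only if the two agree. (A real release would use a detached PGP signature; a plain SHA-256 sum stands in here.)

        import hashlib
        import urllib.request

        # Hypothetical locations -- the checksum comes from the trusted origin,
        # the tarball from any peer or mirror.
        TRUSTED_SUM_URL = "https://project.example.org/release-1.0.tar.gz.sha256"
        PEER_FILE_URL = "http://198.51.100.42:8080/release-1.0.tar.gz"

        def fetch(url: str) -> bytes:
            with urllib.request.urlopen(url) as resp:
                return resp.read()

        expected = fetch(TRUSTED_SUM_URL).split()[0].decode()   # "<hex digest>  <name>"
        payload = fetch(PEER_FILE_URL)

        # Accept the peer's copy only if it matches what the trusted server vouches for.
        if hashlib.sha256(payload).hexdigest() != expected:
            raise ValueError("peer served a corrupted or tampered file")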

    The problem with that is the same one everybody finds when using Bearshare, Kazaa, etc. - upstream bandwidth from peers is very limited. ADSL, "56K" modems, and cable modems all tend to be asymmetric, limiting a P2P network run over them to the collective upstream bandwidth. Imagine 10 people with DSL trying to swap 10 files - no matter how you slice it, everybody might as well be downloading from one guy. A P2P file sharing program called eDonkey2000 tried to avoid the single-source problem Napster and Gnutella face by requesting files by hash rather than by filename, so multiple peers can send you slices of the file even if the name differs, and even if some of them drop out over time. It's nice for big files because you will eventually get all the parts from somebody, but it's still slow.

    I think that perhaps multicasting is the only solution for this. P2P plus multicasting would eliminate the problem of popular servers being swamped by requests.

