Education

Peer-to-Peer for Academia

Andy Oram has a good speech online about peer-to-peer and universities. He discusses a variety of possible research topics under the p2p umbrella and urges university administrators to promote it instead of squashing it.
  • A Peer2Peer application that traded, and then firewalled IP addresses known to belong to RIAA and MPAA companies, or the 'IP Watchdog' companies that work for them?
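    A minimal sketch of that idea, assuming a hypothetical, community-maintained blocklist of CIDR ranges (the ranges below are placeholder documentation addresses, not real RIAA/MPAA space):

      import ipaddress

      # Hypothetical blocklist of CIDR ranges said to belong to enforcement firms.
      # In practice such a list would have to be maintained and shared by the community.
      BLOCKLIST = [
          ipaddress.ip_network("192.0.2.0/24"),     # placeholder documentation range
          ipaddress.ip_network("198.51.100.0/24"),  # placeholder documentation range
      ]

      def peer_allowed(peer_ip):
          """Return False if the peer's address falls inside any blocked range."""
          addr = ipaddress.ip_address(peer_ip)
          return not any(addr in net for net in BLOCKLIST)

      # Drop the connection before exchanging any file metadata.
      if not peer_allowed("198.51.100.7"):
          print("refusing connection from a watched range")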
  • by turbine216 ( 458014 ) <turbine216NO@SPAMgmail.com> on Wednesday October 31, 2001 @10:05AM (#2502416)
    ...peer-to-peer file sharing in a purely academic sense is not discouraged or directly banned. However, internet file sharing programs (beginning with Napster) were banned due to the hit that they put on my school's available bandwidth. With over 12,000 100Mbps dorm room connections, it proved a little bit too easy for the student body to overrun the entire network by queueing up 100+ songs on Napster.

    I would imagine that it is the same for most universities...they don't discourage file sharing in a more academic capacity, but they know that it's going to be used for Napster-esque file sharing, and thus they are forced to implement an overall ban.
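    A rough back-of-the-envelope illustration of that mismatch (the uplink figure is an assumed example for illustration, not the school's actual circuit):

      12,000 dorm ports x 100 Mbps = 1,200,000 Mbps (1.2 Tbps) of aggregate access capacity
      an assumed campus uplink of 155 Mbps (OC-3) is roughly 0.01% of that aggregate

    so even a tiny fraction of students running sustained transfers can saturate the outside link.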
    • The flavour of the month is Kazaa and Morpheus. Both install with sharing turned on by default, so regardless of whether the student understands the implications (they often don't), significant school bandwidth is consumed by people who have no affiliation whatsoever with the school.

      And then people complain about transfer rates.

    • But I agree with the speech. We have analyzed all the protocols used here on the Federal University of Sao Carlos (Brazil) network, and guess what...
      98% of the traffic was the NetBIOS protocol (Samba/Windows machines)...
    • Since several posters have suggested that academic environments are actually open to peer-to-peer (however they might define it), I should give some meta-information about this speech. The organizers of the panel explicitly told me in advance that the audience would have prejudices against the whole P2P concept because of bandwidth issues around file-sharing. The reports of other people on the panel (which were very good) confirmed that there's an assumption at many colleges that P2P==file-sharing and that it's just a problem, not an opportunity. (As some posters point out, there may be a split between researchers and people responsible for day-to-day operations, too.) My speech was specifically directed at overcoming that prejudice (although, nevertheless, I see some people giving me flak for spreading the prejudice, ho-hum).
    • Yes, this is true, but at my university we also don't care about intranet traffic, only traffic that leaves to go out to the rest of the internet. A few of us have set up a DirectConnect server from neo-modus.com, and that allows us to make sure that nobody from outside the university can access our files, so the resident networking people don't care what we do, and as a result all of our downloads are about a factor of 300 faster than they would be off of Napster or Kazaa. It's really a win/win situation. BTW: at last count we had over 4.5 TB online! (1 TB of mostly pr0n)
  • "and its representatives were level-headed enough during discussions of the Anti-Terrorism Act to offer an amendment that specifically gives copyholders this right to be intruders"


    Anyone know the status of this amendment? Did it get tacked onto the bill that passed a few days ago?

    • Retracted (Score:3, Informative)

      by Root Down ( 208740 )
      As I recall, there was an attempt made to tack on a provision that would allow these organizations to inspect your machine, but that the attempt was retracted. There are serious issues in enacting a provision like this - how do they know which are legal copies, for instance. There was a /. post about this a few weeks back. Any other readers may feel free to correct me on this, but I believe that was the 'state of the union' so to speak.
    • They (RIAA and MPAA) must not have contributed enough to various senatorial campaigns. The "United We Take Away Your Rights" act passed without it.
  • by mystery_bowler ( 472698 ) on Wednesday October 31, 2001 @10:09AM (#2502433) Homepage
    Peer-to-peer, like many other technologies, has its advantages and disadvantages. For some purposes (not just file-swapping), it's absolutely ideal (OK, OK so I'm a fan of SETI@Home :) ).

    I just find it rather surprising that academia has taken this long to embrace p2p. It's not as if p2p has been an unknown or undiscussed topic in the realm of computer science. When I was in college, it seemed that the university was eager to stress the importance of object-oriented programming and relational databases...well, as soon as the market stressed their importance. :) As I was taking my mandatory networking classes (which I wish I had paid more attention to), we discussed p2p quite a bit. By my senior year (Waaaaay back in...'98 :) ) there had already been several groups of students who created p2p final projects.

    Is the market the core of the issue? Do colleges only adapt to teaching new technologies quickly when the market demands it? If that's the case, it would seem that more CS degrees would be the equivalent of training at a vocational/technical school.

    • Peer-to-peer, like many other technologies, has its advantages and disadvantages. For some purposes (not just file-swapping), it's absolutely ideal (OK, OK so I'm a fan of SETI@Home :) ).

      Uh, SETI@Home is NOT peer-to-peer.

    • I just find it rather surprising that academia has taken this long to embrace p2p. It's not as if p2p has been an unknown or undiscussed topic in the realm of computer science.

      Here at my school, the academics (from the CS faculty to the French Lit dept.) have known and used p2p for a while. It's the administration that is ignorant of it or fears it.

      A friend of mine worked in doc support of the IT office, and let me know that their worries of Napster et al came from the top, not from the techs. The VP of IT announced that they would be blocking Napster because it was sucking 46% of the network. Then, when it happened, they lied and said that they saw a 46% jump in performance the moment they began blocking.

      Funny that it should have been closer to a 90% jump (since the system was supposedly running at 54% before).
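      For what it's worth, that arithmetic holds up if the 46% figure meant a share of total link capacity:

        non-Napster traffic before the block: 100% - 46% = 54% of the link
        capacity available to that traffic afterwards: 100%
        improvement: 100 / 54 ≈ 1.85, i.e. roughly an 85% jump, not 46%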

      Anyway, the academics have been flying under the radar of the administration and using p2p for a couple of years now. There's even http://educommons.org/, a p2p program at Utah State to allow teachers to swap instructional materials.

  • I have often thought of P2P as being similar to neural networks or the brain. Nodes that are structurally similar and carry info to and fro.

    Do our brains have bandwidth issues? No, because supposedly we only use 10%. Gnutella is always ridiculed because of its overhead, though. But Napster and the rest don't really count because they are centralized. So how does our brain not get overwhelmed, and how can this be applied to P2P?

    • Perhaps the reason we only use 10% of our brain's capacity is due to the bandwidth constraints it currently has...
    • Just imagine if we could implant fiber optics into our spine and major nerves! We'd have the fastest reaction time ever!

      Actually, our brains/nervous system do have 'bandwidth' issues - which is why the doctor does that little 'smack you on the knee with a tiny hammer' test. It's like pinging your brain for a response and seeing how long it takes your brain to respond appropriately.

      P2P networks are the next Big Step in computing if you ask me. Free neighborhood wireless networks will probably be the next step in networking too. We've had a global community with wire-based networking; now it's time to bring community back TO the community.

      • Actually, our brains/nervous system do have 'bandwidth' issues - which is why the doctor does that little 'smack you on the knee with a tiny hammer' test. It's like pinging your brain for a response and seeing how long it takes your brain to respond appropriately.

        Sorry, but that's not how reflexes work. 'Reflexes' do not involve the brain at all. The signal from the hammer-hit goes to your spinal cord and then immediately gets re-routed back to your muscles (in addition to continuing on to your brain).
    • As someone else said, it does have bandwidth limits. An analogy in Gnutella would be 100 people serving a file that only 10 people wanted to download... it's really efficient in that scenario.
      Kind of like people that serve up classical music mp3's :)
    • I know this is OT, but since it comes up so often I thought we would all benefit from knowing that the idea "We only use 10% of our brain!" is a myth [csicop.org].
      The two points snipped from the article:

      1.) Brain imaging research techniques such as PET scans (positron emission tomography) and fMRI (functional magnetic resonance imaging) clearly show that the vast majority of the brain does not lie fallow. Indeed, although certain minor functions may use only a small part of the brain at one time, any sufficiently complex set of activities or thought patterns will indeed use many parts of the brain. Just as people don't use all of their muscle groups at one time, they also don't use all of their brain at once. For any given activity, such as eating, watching television, making love, or reading Skeptical Inquirer, you may use a few specific parts of your brain. Over the course of a whole day, however, just about all of the brain is used at one time or another.
      2.) The myth presupposes an extreme localization of functions in the brain. If the "used" or "necessary" parts of the brain were scattered all around the organ, that would imply that much of the brain is in fact necessary. But the myth implies that the "used" part of the brain is a discrete area, and the "unused" part is like an appendix or tonsil, taking up space but essentially unnecessary. But if all those parts of the brain are unused, removal or damage to the "unused" part of the brain should be minor or unnoticed. Yet people who have suffered head trauma, a stroke, or other brain injury are frequently severely impaired. Have you ever heard a doctor say, ". . . But luckily when that bullet entered his skull, it only damaged the 90 percent of his brain he didn't use"? Of course not.

      As the article says "For a much more thorough and detailed analysis of the subject, see Barry Beyerstein's chapter in the new book Mind Myths: Exploring Everyday Mysteries of the Mind [1999]"
      • Thank you for educating me, instead of modding me down unnecessarily. As for the whole 10% thing, I always understood it to mean that at any given time, 10% of the whole brain is being utilised,
        rather than "that lower left hand glob underneath the cerebellum, the rest is fat".

  • There's nothing quite as satisfying as having academics argue for perfectly logical systems that will also allow me to continue downloading porn, warez and mp3s.

  • the funnel leaks? (Score:1, Informative)

    by Anonymous Coward
    That's right, particularly in the area of commerce, inf./cash has flowed primarily in one direction for decades. Be a shame to break IT up, but we'll [scaredcity.com] be doing our best to help do just that. fud IS dead.

    Meanwhile, you could possibly get some serious p2p going, at this catchy web address [opensourceworks.com], if you are shrewd enough to follow some simple directions.

    Have you seen these face scans, etc... [opensourcenews.com], of the REAL .commIEs? I thought so.

  • Excellent point. (Score:3, Interesting)

    by andres32a ( 448314 ) on Wednesday October 31, 2001 @10:18AM (#2502479) Homepage
    "I'm not surprised that colleges would complain about Napster bandwidth requirements because I hear the same wringing of hands over education in general. I hear there are too many applicants to top colleges. Excuse me, but wouldn't it be good to educate more students? Instead of saying there are too many applicants, why don't you work on increasing the availability of high-quality course offerings? I know you don't have tenure-track positions for all the people awarded doctorates, but it's not your job to offer everyone a position; it's your job to educate them."
    Excellent point.
  • p2p in academia (Score:2, Insightful)

    by Anonymous Coward
    P2P is a topic of some interest for networking and distributed systems researchers. People all over the world have already been working hard on this topic for a few years, and probably for longer than that without even recognizing or uttering the word p2p. The speech strikes me as not terribly timely. Researchers don't give a damn about whether or not Napster is banned on the campus, at least with respect to working on, say, distributed file systems or reliable and efficient p2p routing. For example, at my school, I am working on an open reimplementation of a P2P routing scheme for a project course (the original implementation is closed as it was developed while working for the bad people). How many small teams of graduate students and seniors do you think are doing the same thing at other universities? I would guess a half dozen around the world. Academia is already busy fixing the problems with P2P and it has access to manpower and hardware (cluster machines and network simulators). We don't have to worry that people will be scared to think about this topic.
  • by gillbates ( 106458 ) on Wednesday October 31, 2001 @10:22AM (#2502492) Homepage Journal
    Most universities and ISPs have a CYA approach to P2P. As long as you're not a bandwidth hog, and they don't get complaints, they don't care.

    The real sticking point, however, is what happens when general file-sharing software becomes popular, and people are sending each other pictures of the kids, notes, and all other sorts of digital goodies in addition to music.

    Napster was banned for two reasons: bandwidth and copyright infringement. What's likely to happen in the case of general-purpose P2P apps is that universities and ISPs will start to block the software (such as Gnutella) rather than individual users when they get complaints of copyright infringement, making the public suffer for the actions of the few. Worse, all of those legitimate users of P2P software will be labeled as "pirates."

    • What's likely to happen in the case of general-purpose P2P apps is that universities and ISPs will start to block the software (such as Gnutella) rather than individual users when they get complaints of copyright infringement, making the public suffer for the actions of the few.

      Of course, that's how justice is done nowadays. If a person does something wrong, an entire group gets punished. I can only think of a few exceptions, but that's what happens. Ever since I was in first grade, that's what happens. Someone does something wrong, and all the boys have to stay after -- if a girl does something wrong, the entire class stays after. If an Arab blows a few buildings up, all Arabs get in trouble. I think that the idea is that the group will keep its members in check, so they don't all get in trouble. At least that's what my CIS teacher at the Ottawa County Careerline Tech Center in Holland, Michigan, a big proponent of group justice, said anyway. Another advantage that group justice has over the canonical form of individual responsibility is that the authorities don't have to waste their time investigating -- all they have to do is get a general profile. Group justice is the wave of the future, better get used to it.

    • The best legal defense against this is to find a legitimate academic use for p2p. Like the "high press", "high academia" has, although I know this is not justified, a privileged place in the courts. So, if I'm using a totally open p2p filesharing network to share raw data - which is one of the things that I want to do - and incidentally someone else on the same network is sharing Metallica, then someone whose right to free speech actually counts (a scientist at a respected institution) is being threatened if they try to shut the network down.

      Incidentally, one of the biggest problems in meta-analysis of scientific results, the file cabinet full of nulls, could be dealt with through p2p. In general, it is harder to publish null results than it is to publish positive results, so, if you do meta-analysis on many small published studies, the result is skewed away from null. If everyone makes their raw data available on a filesharing network - or a random subset of people inclined to share it not related to who got positive results - this problem goes away.

      It resurrects the problem of namespaces mentioned in the article in a big way! When the results of science become politically important (say, tobacco research, health effects of PCPs) you have to worry about people with an interest in the topic releasing false data (this is already a problem, but we know who they are). You also have to know who someone is because you have to be able to go to them and ask how to verify their results, even unpublished ones, and so on.
  • more bandwidth (Score:3, Insightful)

    by underpaidISPtech ( 409395 ) on Wednesday October 31, 2001 @10:27AM (#2502516) Homepage
    College administrators have fallen into the same rut as telephone companies that are slow to roll out high-bandwidth lines, or the recording industry that is shutting down Napster. These institutions all find it more profitable to manage scarcity than to offer abundance.
    (emphasis mine)

    That's the problem right there. As resources become abundant, price should drop, availability goes up, the product reaches a wider audience. It took how many years (lack of competition) for Microsoft to ship a decent product? How many DSL providers disappeared? The RIAA and MPAA want to strangle any revolutions in the distribution of their product. What kind of market model is that!?!

    When companies can hold back on the resources they control to keep profits rising, there's a problem.

    • "The RIAA and MPAA want to strangle any revolutions in the distribution of their product. What kind of market model is that!?!"

      For them, the current P2P filesharing isn't a useful/helpful market model at all. If it became completely successful they would go out of business.

      As for them "holding back resources they control to keep profits rising," that's their job, to keep profits rising. if they don't, their fired.

      Not that I'm against file sharing; I've done my fair share of music downloading. But free P2P service isn't something we can reasonably expect them to accept. The biggest problem I have seen with them is what you brought up earlier in your statement: "As resources become abundant, price should drop". As they got a bigger and bigger hold on the market they never dropped their prices. Now they are seeing the problems that caused. If a CD is well priced, I like the music, and want to support the band, I'll buy it. If it's going to cost me more than I'm willing to support, then I go download it. I know a lot of people with this attitude; if they would lower the prices then they would see increases in their number of sales.
      • True. I would buy a hell of a lot more music if it wasn't so damn expensive. As it is now, I rarely download music off the net; most of it is crap (I like vinyl [soundmethod.net])
        and of poor quality. I can understand the reluctance of the companies to fight for their survival in the game, but they really need to wake up and see that this truly is a revolution in information.

        Co-exist not compete.

    • It took how many years (lack of competition) for Microsoft to ship a decent product?

      Twenty-six and counting... :)
  • My university recently imposed a bandwidth cap on outgoing data, which means people can only send files outside the university network at 30K/sec. The reason they did this was that p2p file-sharing applications, and the FTP servers a lot of students were running, were hogging too much bandwidth. They figured the best way to deal with that was to allot users a fixed bandwidth cap and let them deal with it. What really annoys me is that rather than blocking only the real bandwidth hogs, they made everyone pay.

    The thing is that occasionally there does arise a legitimate need to send a large file outside the university. It's really frustrating to have to wait several hours for a file transfer that could have taken 20 minutes. What's odd is that this in no way reduces piracy - people can still download whatever they want at ungodly speeds. I don't understand why they only blocked the sending.

    So far, I don't know of any way to get around the cap, though I've tried a few little things. I don't know how it's implemented, but do let me know if you have any ideas. Or you can just rant at me.

    • Sounds like the best way would be to split up files and send from different nodes. Or perhaps you could use multiple nodes with one computer. I wouldn't know if you could just put a few NICs in your computer and send at, say, 90k, but it's worth a shot. Don't mod me down for being stupid, I'm just thinking out loud.
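      A minimal sketch of the "split it up and send from different nodes" idea, assuming each cooperating sender stays under the 30K/sec cap and the receiver reassembles the pieces (file names and chunk count are illustrative):

        import os

        CHUNK_COUNT = 4  # one piece per cooperating sender, illustrative

        def split_file(path, chunks=CHUNK_COUNT):
            """Split a file into roughly equal pieces; hand each piece to a different sender."""
            size = os.path.getsize(path)
            piece = (size + chunks - 1) // chunks
            names = []
            with open(path, "rb") as src:
                for i in range(chunks):
                    name = "%s.part%d" % (path, i)
                    with open(name, "wb") as dst:
                        dst.write(src.read(piece))
                    names.append(name)
            return names

        def join_file(parts, out_path):
            """Reassemble the pieces on the receiving side, in order."""
            with open(out_path, "wb") as dst:
                for name in parts:
                    with open(name, "rb") as src:
                        dst.write(src.read())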
  • I'm going into CS graduate school next year and my proposed research area is something close to P2P optimisation. Does anyone know of professors already doing research in P2P?

  • by Anonymous Coward
    Growing Trend in Peer-to-Peer Girlfriends

    Stamford, CT - Internet consulting firm Gartner Group predicts that growth in peer-to-peer girlfriends will explode in the coming months. "Right now the P2P girlfriends are in the hands of early adopters in the tech community. We think that by the end of the year they will have reached critical mass and move into the mainstream. We forecast that by 2003, 65% of girlfriends will be peer-to-peer," said consultant Dawn Haisley.
    One of the first movers was Computer Science student Neil Joseph, "I was pretty pissed when she told me she slept with someone else, but when I found out she was one of the new peer-to-peer girlfriends I was geeked. I love being a beta-tester. My friends are telling me I should leave her, but I know they are just jealous."

    The beauty of a peer-to-peer girlfriend is that one peer doesn't know what the other peer is doing. Anonymity is extremely important in maintaining the integrity of the network. Most girlfriends report that the speed between peers is more satisfying in a local network, but anonymity is easier to keep in a world wide network.

    Some techies aren't pleased with P2P girlfriends. "These consultants throw around terms like peer-to-peer and they don't even know what the phrase means," said networking guru Mitch Mead, "P2P girlfriends aren't even a true peer-to-peer network. They are just a client-server model trying to jump on the P2P bandwagon."

    Tom Mansfield agrees, "I had a so-called P2P girlfriend, but she was more like a lyin', cheatin' slut."

  • ... I'd believe that no "technology" or "idea" should be squashed. Isn't that the point of academia? To learn?

    I think all the systems and networks at a university should have a splashing of all the old & new technologies, throughout.
  • Load of nonsense (Score:5, Insightful)

    by Twylite ( 234238 ) <twylite.crypt@co@za> on Wednesday October 31, 2001 @10:57AM (#2502654) Homepage

    It is often that I read knowledgeless prattle on Slashdot ... usually only from fellow commenters. This is not a troll, it is serious criticism of an article that is blatantly wrong. Let's examine Mr. Oram's discussions of P2P ...

    Did Universities try to stop P2P? Napster, certainly. Probably many other file sharing systems too. Why on earth would they do that? Bandwidth, security, liability. I'll elaborate later.

    Mr. Oram asserts that P2P is a great way to overcome limited resources. Then expounds on how Internet2 and IPv6 are going to remove the resource barriers to P2P.

    Is P2P new? No. IRC's DCC extensions have been around for at least 8 years; ytalk is even older. The idea of distributing information on a whole lot of servers without central control is, surprise surprise, the basis for the Web. P2P simply involves direct communication between clients, at most using a server to mediate discovery.
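    A minimal sketch of that last point - a central server used only to mediate discovery, with the transfer itself going directly between peers. The message format, ports, and helper names here are illustrative assumptions, not any particular system's protocol:

      import socket

      registry = {}  # resource name -> (host, port), held only by the discovery server

      def discovery_server(host="0.0.0.0", port=7000):
          """Tiny rendezvous service: peers REGISTER what they serve, others LOOKUP who serves it."""
          srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          srv.bind((host, port))
          while True:
              data, addr = srv.recvfrom(1024)
              parts = data.decode().split()
              if parts[0] == "REGISTER":                  # REGISTER <name> <tcp-port>
                  registry[parts[1]] = (addr[0], int(parts[2]))
                  srv.sendto(b"OK", addr)
              elif parts[0] == "LOOKUP":                  # LOOKUP <name>
                  hit = registry.get(parts[1])
                  reply = ("%s %s" % hit) if hit else "NOTFOUND"
                  srv.sendto(reply.encode(), addr)

      def register(name, tcp_port, server=("127.0.0.1", 7000)):
          """A serving peer announces itself to the discovery server."""
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.sendto(("REGISTER %s %d" % (name, tcp_port)).encode(), server)
          s.recvfrom(1024)

      def serve_file(path, tcp_port=7001):
          """The peer's own server half: answer direct requests for the file it registered."""
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.bind(("0.0.0.0", tcp_port))
          srv.listen(1)
          while True:
              conn, _ = srv.accept()
              conn.recv(1024)                             # ignore the request line in this sketch
              with open(path, "rb") as f:
                  conn.sendall(f.read())
              conn.close()

      def fetch(name, server=("127.0.0.1", 7000)):
          """Ask the server who has the file, then fetch it directly from that peer."""
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.sendto(("LOOKUP %s" % name).encode(), server)
          reply, _ = s.recvfrom(1024)
          if reply == b"NOTFOUND":
              return None
          host, port = reply.decode().split()
          conn = socket.create_connection((host, int(port)))
          conn.sendall(("GET %s\n" % name).encode())
          return conn.makefile("rb").read()

    The server never carries the file itself; once discovery is done, the peers talk directly, which is exactly the distinction being drawn above.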

    I'm going to ignore the anti-DMCA dissertation, because it's been heard before. It also has nothing to do with P2P; just a few specialised services that use P2P as a means to swap copyright information. If it wasn't for people like Mr Oram confusing P2P with specific P2P applications, then P2P as a whole wouldn't have a bad name.

    A little later we hit the "IPv6 will help" argument, to which I can only say: security. Sure, you get rid of NAT. But at the risk of placing your device in the line of fire. Even if it is "secure by default" (so end users don't have to worry too much), it is still accessible from everywhere. That means DOS vulnerable, attack vulnerable when a security hole is found, and each and every individual is responsible for their own security. That doesn't work in corporate of group/organization networking. A central point needs primary control over security for the entire network. NAT, firewalls, and prevention of arbitary data coming IN to the network unsolicited are significant defenses against attack.

    Which brings up the strongest point for universities to deny P2P: they would have to allow access to P2P services (yes, P2P is actually a client and a server on each machine) behind their firewalls, causing a security risk. Typically universities have a limited number of computers providing services behind firewalls, and take care to guard them against attack, and quarantine them in case of breach. With P2P, this approach goes out of the window.

    For the same reason Mr Oram has ignored the security community's hatred of SOAP, a protocol explicitly designed to penetrate those nasty firewalls that administrators put up. Tell me, why don't we just set up a public inbound IP-over-TCP/IP tunnel available on all firewalls so that we can get past them?

    Now Mr Oram turns to debunking the security argument. Totally missing the point of course. You can encrypt and sign until your CPU is blue in the face, and still have zero security because your computer has been compromised. Unless you can adequately secure ALL services on your computer, you are insecure. One of the best ways to secure a service is to shut it down. The more services, the more ports of entry. Not surprisingly, P2P is a service.

    Sendmail and apache serve massive amounts of network traffic every day. They have taken years to mature to a point where they are mostly secure, yet new hacks are found for them every so often. How long until P2P implementations reach this level of maturity, and security?

    The McAfee example is laughable, to say the least. Multitier client-server technology isn't P2P, no matter what this supposed expert wants to believe. Oh yes -- what was that announcement two weeks ago about an attack on the McAfee auto-upgrade feature?

    While most of the assertions regarding bandwidth are true (shock!), Mr Oram is WAY OUT on the University issue. You see, students may be downloading the same amount irrespective of whether they use P2P or FTP ... but there is the issue of UPLOADING. Having administered a network for just a small company at the time of Napsterism, I saw a massive increase in bandwidth use just from Napster fielding and responding to queries, even before local users started downloading the music.

    Finally we conclude by returning to nonsense: Seti@home is P2P?!? In what universe does distributed computing that is farmed out by a central server, and in which none of the computing nodes communicate with each other, get classified as P2P?

    Please, Mr Oram. Understand at least the vaguest basics of a topic before spewing garbage about it.

    • A little later we hit the "IPv6 will help" argument, to which I can only say: security. Sure, you get rid of NAT. But at the risk of placing your device in the line of fire. ... That means DOS vulnerable, attack vulnerable when a security hole is found, and each and every individual is responsible for their own security.

      Glad to know that firewalls don't work unless you have a NAT in there!

      Actually, NAT and individual addressing have nothing to do with security. Firewalls can filter subnets just as easily as they can filter a single IP.

      Also regarding SOAP and HTTP tunneling, etc., you're blowing smoke. If your firewall allows outgoing TCP connections (like all of them?) then you can tunnel protocols. If someone wants to do it, they can. This is a non-issue.
    • This post, as I interpret it, is a description of how current networks have weaknesses and vulnerabilities that lead responsible people to have doubts about opening them up further through P2P. That's a useful point that perhaps I should have said in the speech. But I did lay out at the start: "these systems have created all kinds of new problems."



      So I appreciate Twylite's points, except when they get twisted into a critique by unnecessarily placing issues in opposition to each other (for instance, presenting IPv6 as a threat to security instead of an issue to pursue in addition to security).

  • There are a couple of aspects Oram didn't address here. First, for almost all colleges and universities, there is one unavoidable chokepoint: the line to the outside. The bandwidth issue comes when you have to pay for another pipe to the outside. For a lot of suburban/rural campuses, that means you also have to pay for the installation, and possibly for the connection at the other end. With a finite number of students (constrained by available housing in many instances), that means higher fees to support flat rate connections.

    Second, P2P may work fine within the university with current equipment for current applications. Now add in P2P video, streaming audio, you name it. Now you're talking about multiples, or decades (factors of ten), of new traffic for your new P2P applications. (Almost nobody wants to do great new things with ASCII text, alas!) Soon you will need new switches, routers, all within your on-site network. A $12,000 router may not be too bad, until you need one for every 1,000 users. And if traffic keeps growing, you may need to replace it in 3-5 years. Flat rate fees? Going up!
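    A quick back-of-the-envelope from the numbers above (taking a four-year replacement cycle from the 3-5 year range):

      $12,000 router per 1,000 users, replaced every ~4 years
      = $12,000 / 1,000 / 4 = about $3 per user per year for that one router alone,
        before circuits, switches, and the upstream pipe are counted.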

    You get what you pay for.

    TANSTAAFL.

    Now just how bad do you want more bandwidth?

  • First off, academic bandwidth should be used for academic purposes. Sure, limited personal use is fine but its main purpose isn't entertainment. That being said, I know my university doesn't care if you run SETI@Home but does care when you run Kazaa or other file sharing software. I think that's a good stand by the administrators. My university doesn't force anyone to use their network services. (They actually encourage you to get @Home.) If you want to use P2P software, get @Home or DSL and do whatever you want.

    I'm not sure about the States, but in Canada universities have very small budgets that are being cut yearly. I'd rather the university had a decent network and focused spending on research rather than worry about supporting P2P stuff.
  • Isn't P2P just the current Internet reversed? I see a lot of articles that seem to focus on the software, which is no small part. However, isn't p2p just changing the communications model from a network of dumb terminals attached to smart backbones, to that of a network of intelligent terminals attached to less intelligent backbones?

    Instead of my stupid computer being passive when retrieving information, I can have passive retrieval while aggressively distributing information too. This means that we are all content providers, with high redundancy. This is supposed to be a Good Thing.
    The problem there of course is that that opens up a can of worms on intellectual property, copyright, and all that crap.

    But certain powerful groups want to curtail this, much like the church despaired when literacy reared its ugly head. Too bad for them. We all know that information should be democratized. And only civil disobedience will be able to counter pressures against that democratization.

    Things have gotten worse since television, when our entertainment and our news/information became entwined a little too closely. P2P allows us to change this. But labeling people pirates and copyright thieves is the Old Way. It really is. Forget about the dot-bomb and IPOs; a new found ability to communicate amongst one another is at risk right now.

    We all need to pay for the goods and services we use to access information, and those who work hard to build that infrastructure need to reap the benefits. I think that Freenets and private neighbourhood nets are a good thing, as are commercial ventures, but the actual money value of information will go down simply because it is now so easily reproducible. Profit should be made in its distribution and not in the hoarding of easily gotten patents and copyrights. That does no one any good.

    ramble ramble ramble....gnashing of teeth

    • The problem there of course is that that opens up a can of worms on intellectual property, copyright, and all that crap.

      It only opens such cans of worms if it is abused. The problem is the tendency of the napsterite thugs to confuse providing information with providing entertainment. The confusion is deliberate, because "access to information" sounds like something one may believe they're entitled to, while "access to entertainment" isn't.

      a new found ability to communicate amongst one another is at risk right now.

      If these P2P tools really were being used to "communicate", this wouldn't be an issue. I'd argue that distributing someone else's creative work is not "communicating" at all, it's more like providing a free entertainment service at someone else's expense. No one's trying to ban web servers, because these typically are indeed used for "communication".

      We all need to pay for the goods and services we use to access information, and those who work hard to build that infrastructure need to reap the benefits.

      I'm not clear on what your point is here.

      but the actual money value of information will go down simply because it is now so easily reproducible.

      Not sure on this point either. Maybe you mean "market value"? The utility of information doesn't change.

      Profit should be made in its distribution and not in the hoarding of easily gotten patents and copyrights. That does no one any good.

      The problem with this is that if you're prepared to make the basic assumption that people will act in their own economic interests, then the result would be that everyone would want to distribute and no one would want to create. Obviously, the only sensible and morally acceptable system is one where anyone who does useful work, whether it be distribution or creation, is compensated.

  • IPv6 will definitely help. It will, we hope, bring users' systems out into the open, eliminating the current Network Address Translation system that hides the users.

    I think that if there is anything that will make users' systems less obscurely identified on a network, it will *not* be IPv6. With the power that the general public will have over IP addresses, NAT may be only slightly less useful, and IPs will change so frequently that nobody will be able to figure out where the 'ghost host' went. I, for one, prefer the minuscule amount of obscurity my wireless NAT'd connection provides me when browsing.

    Try setting up a machine that's completely open to cookies and the like, but only use it to occasionally browse the type of sites you normally wouldn't - say Pokemon and Barney sites. Just watch the spam and pop-ups accumulate relative to those subjects. Nah, I'd rather not "log in automatically" or "save your username and password" - disable all those people-tracking devices, and change IPs/MACs on a regular basis.
  • by magi ( 91730 ) on Wednesday October 31, 2001 @11:22AM (#2502817) Homepage Journal
    I'd be more worried about the tendency of some universities to build strong firewalls around their networks that filter out all incoming traffic, thus preventing the use of any private servers and peer-to-peer clients of students as well as researchers.

    Our university [www.utu.fi] did this, which has annoyed especially many computer science students. For me, it closed down my largeish website, together with many CGI programs for research (such as a data equalizer for neural net research) and personal purposes.

    I wrote a long complaint [funet.fi] (in Finnish, sorry) about the problem, but since most people don't need (or don't know they need) the service, they don't care. The students can still put up their web pages on a poorly administered and always outdated main server, which doesn't have any databases or other software, and has very severe restrictions on disk space (on the order of 10 megs while I'd need some 10 gigs).

    I see this also as a serious threat to the development of new Internet services. If you look at most of the existing Internet technologies (http, nntp, smtp, bind...), they were all created in universities as "gray research", often by students. In a tightly firewalled Internet, they might never have made it out.

    Sure, researchers and departments of our university can theoretically have their own servers, if the department's head takes personal official responsibility and the department officially allocates money for the upkeep. This means an absolute ban for almost all "gray research" projects (often parts of larger projects).

    In our case, firewalling was explained by the need for tighter security. However, an easy-to-use unofficial port registration would have solved most of the security problems. It's difficult to say what the real reason is; perhaps over-enthusiasm for "high-end security tech", or perhaps just low interest in administering the system - if the net isn't used it doesn't cause so much work, right?

    Oh, and we pay for our connections, although they are partly subsidized. Well, it might even be profitable for the university. (Note that studying doesn't cost anything here.)
    • I see this also as a serious threat to the development of new Internet services. If you look at most of the existing Internet technologies (http, nntp, smtp, bind...), they were all created in universities as "gray research", often by students. In a tightly firewalled Internet, they might never have made it out.


      As much as I agree that universities should keep their networks open, I have to disagree with this point. Why? Because initial "gray" work can (and probably should) still be done on an isolated network. Not only does it make sure that projects don't accidentally kill the campus or departmental network, it also makes debugging a heck of a lot easier. And, once the prototypical work is done, you can usually convince some professor to beat IT into submission for you. Most departments have a couple of spare boxes lying about (heck, back at my school twenty years ago, there were usually anywhere from 2-3 midicomputers lying about totally unutilized at any time). Hubs are cheap. Linux makes a relatively stable development platform for gray work. So, in the end, I don't see "sealed tight" campus networks as a huge impediment to self-motivated research (unless it's cultural research into the latest works of Limp Bizkit).

      • As much as I agree that universities should keep their networks open, I have to disagree with this point. Why? Because initial "gray" work can (and probably should) still be done on an isolated network.

        Some yes, at least theoretically. If someone makes an ingenious new important system, he could develop it first for some time, and then might get permission to run it on an open server. Yes, possible, in theory.

        In real world, I think most projects are not so "important" or high-end that professors would give them permission at any point. Many of the projects may be (at least initially) hobby-related and professors would not appreciate them much. Notice that the reasons may need to be *very* heavy, so even having written some "new internet protocol" such as http might not qualify.

        It's basically a problem of unnecessary obstacles that demotivate people. If you have to struggle too much to get that one cool service you'd like to do in your limited spare time, you'll probably do something else. This is of course a rather difficult subject to consider generally, but this is my intuition, based on how I do things.
  • The key issue with file sharing is network capacity consumption, not legal issues (which are generally dealt with on a case-by-case basis, and universities are subject to the same--good or bad--laws as everybody else).

    Universities generally aren't concerned with P2P file sharing over Internet2. We have plenty of capacity. No Internet2 core circuit was ever saturated. Congestion on campus connections to GigaPoPs and GigaPoP connections to Abilene is very infrequent and easy to deal with (usually, by upgrading the circuit).

    What universities are concerned about is Internet1 usage. They generally have metered commodity connections that cost a lot of money and are often congested.

    Many universities have unwittingly become information producers for home users on cable and DSL connections, who download a lot of stuff from university dorms. This costs universities serious money while it's hard to argue that it furthers any educational goals.

  • I'm one of the developers on a new p2p network system called Pods.
    Pods is a decentralised P2P network that uses XML to share abstracted resources across nodes that need not supply, or even know of, the resource.

    The Disk and Processor resources are already in place and working well.

    Have a look at it here [sf.net]
  • by srichman ( 231122 ) on Wednesday October 31, 2001 @11:45AM (#2502956)
    Javelin [ucsb.edu] is a generalized framework for fault-tolerant, scalable global computing, a la SETI@home.

    CFS [mit.edu] and PAST [microsoft.com] are P2P read-only file systems a la Napster/Gnutella/Freenet. Both had papers in this year's SOSP [ucsd.edu]. Both are based [mit.edu] on [microsoft.com] log(N) P2P overlay routing/lookup substrates.

    OceanStore [berkeley.edu] seeks to be a more general (writable) global storage system.

    And several P2P conferences [rice.edu] have [ida.liu.se] formed [www.lri.fr] and will continue to form.

    Some of these projects have been going on for years. So you shouldn't buy the "Academic networking/CS researchers are a bunch of P2P haters" line without a few grains of your favorite seasoning.
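    For readers unfamiliar with the log(N) overlay routing/lookup substrates mentioned above (the idea underlying Chord and Pastry), here is a toy consistent-hashing ring in the spirit of those designs; it is an illustration of the concept only, not code from any of the projects listed:

      import hashlib
      from bisect import bisect_right

      M = 2 ** 32  # size of the identifier ring, illustrative

      def ring_id(key):
          """Hash a node address or object name onto the identifier ring."""
          return int(hashlib.sha1(key.encode()).hexdigest(), 16) % M

      class Ring:
          """Each key is owned by the first node clockwise from its identifier (its successor)."""
          def __init__(self, nodes):
              self.points = sorted((ring_id(n), n) for n in nodes)

          def successor(self, key):
              ids = [p[0] for p in self.points]
              i = bisect_right(ids, ring_id(key)) % len(self.points)
              return self.points[i][1]

      # Adding or removing a node only moves the keys adjacent to it on the ring.
      # Real substrates (Chord, Pastry, Tapestry, CAN) add per-node routing tables so a
      # lookup takes O(log N) hops instead of consulting a global list like this toy does.
      ring = Ring(["peer-a:4000", "peer-b:4000", "peer-c:4000"])
      print(ring.successor("lecture-notes.pdf"))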

  • by Anonymous Coward
    I agree with the others who said that there IS a lot of P2P research going on in universities. I know because I am writing a paper on it and was forced to find journal articles on it. The phrase peer-to-peer or P2P is not always used - sometimes it's 'Serverless Distributed File Systems', etc., I think in part because P2P is so misused.

    For more info check out these implementations:
    Farsite, xFS, Frangipani, Intermemory, OceanStore, Eternity Service, India, PAST, Free Haven, Gnutella, Freenet, Pastry, Tapestry, CHORD and CAN. (Not Napster!)
  • Oram's speech is interesting, but offers little that is new.

    His assertion that academia was uninterested in p2p technologies because university administrations acted (responsibly, imho) to prevent the use of tools designed for the illegal redistribution of copyright works is a little disingenuous. Certain areas of computer science research are extremely interested in these technologies (I count myself amongst these - my PhD research involved p2p resource discovery techniques), in particular those which deal with developments in distributed systems.

    However, there is a great deal of hype about the efficacy and efficiency of p2p systems (I consider Oram's article to be an example of such), so it is right that academia should judge these systems on their merits rather than simply accepting the claims at face value.

    For example, Oram's article contains the (uncontroversial) statement that peer-to-peer technologies can not only distribute files, it can also distribute the burden of supporting network connections and then goes on to claim that the overall bandwidth remains the same as in centralised systems - which is rarely the case (in the p2p domain I have studied - resource and service discovery - even the more efficient decentralised systems have a significantly greater communication complexity than do centralised systems).

    Valid research cannot be founded on spurious claims such as these. p2p technologies may have a number of advantages, but forgive us in academia if we don't get very excited about their disadvantages.

  • Wired News yesterday ran an interesting story [wired.com] about how Intel adopted the Napster model to distribute its own multimedia material to its various far-flung offices around the world. They found the system was ten times cheaper than sending the file out from big central servers, and a lot faster as well.

    It was interesting to see the p2p idea moved beyond academic theory and actually implemented in real world situations by a commercial entity with beneficial and measurable results.

    Trickster Coyote
    Reality isn't all it's cracked up to be.
  • Here at my school we only have three T-1 connections to the internet for about 3000 users, and since we're in the middle of nowhere, they are pegged almost 24/7 since we have nothing better to do. So to help with this problem, a group of friends and I got together and hacked together a Gnutella clone that only works in our class B, which we call Stotella [geocities.com].
    Has anyone else done anything like this?
  • by Anonymous Coward
    P2P is more than swapping files efficiently, sweating about copyright and bashing the RIAA. At P2PQ [p2pq.net], for example, they are doing something that uses P2P but which is unrelated to filesharing. This is a perfect example of P2P put to a pure academic purpose. If all the people holed up in dorms in colleges around the country used it as much as they use file-swapping P2P, we would not hear so much of the tired arguments against P2P being trotted out whenever those three characters are strung together.
  • Hey Anonymous Coward, You're right, P2P is more than just bashing the RIAA etc... it is a whole way of thinking, beyond "information should be free"... It's almost like an addition to the saying... Information wants to be free, BUT In order for information systems to work at all, we must tend to their problems and optimize the 'bad code'. I just finished an article, which happens to be on a related train of thought at Shift.com called "P2P Terror: The People Are The Network" and the subheading is "Although the RIAA won the war on Napster, peer-to-peer file sharing survives. Now the U.S. is going to war against a decentralized terrorist threat. Can they win?" I'd really appreciate some feedback, so Checker oot if you can at P2P Terror [shift.com] thanks :)
