Peer-to-Peer for Academia 100
Andy Oram has a good speech online about peer-to-peer and universities. He discusses a variety of possible research topics under the p2p umbrella and urges university administrators to promote this instead of squashing it.
How about... (Score:2)
Re:How about... (Score:3, Funny)
Re:How about... (Score:1)
at my university... (Score:5, Insightful)
I would imagine that it is the same for most universities...they don't discourage file sharing in a more academic capacity, but they know that it's going to be used for Napster-esque file sharing, and thus they are forced to implement an overall ban.
Of course this year it's not napster, (Score:1)
And then people complain about transfer rates.
Re:at my university... (Score:1)
98% of use was the NetBIOS protocol (Samba/Windows machines)...
Re:at my university... (Score:1)
Re:at my university... (Score:2)
"gives copyholders this right to be intruders" (Score:2, Interesting)
Anyone know the status of this amendment? Did it get tacked onto the bill that passed a few days ago?
Retracted (Score:3, Informative)
Re:"gives copyholders this right to be intruders" (Score:1)
Surprised it's taken this long... (Score:4, Interesting)
I just find it rather surprising that academia has taken this long to embrace p2p. It's not as if p2p has been an unknown or undiscussed topic in the realm of computer science. When I was in college, it seemed that the university was eager to stress the importance of object-oriented programming and relational databases...well, as soon as the market stressed their importance.
Is the market the core of the issue? Do colleges only move quickly to teach new technologies when the market demands it? If that's the case, it would seem that more and more CS degrees are the equivalent of training at a vocational/technical school.
Re:Surprised it's taken this long... (Score:1)
Uh, SETI@Home is NOT peer-to-peer.
Re:Surprised it's taken this long... (Score:1)
Here at my school, the academics (from the CS faculty to the French Lit dept.) have known and used p2p for a while. It's the administration which is ignorant/fears it.
A friend of mine worked in doc support of the IT office, and let me know that their worries of Napster et al came from the top, not from the techs. The VP of IT announced that they would be blocking Napster because it was sucking 46% of the network. Then, when it happened, they lied and said that they saw a 46% jump in performance the moment they began blocking.
Funny that it should have been closer to a 90% jump (since the system was supposedly running at 54% before).
Anyway, the academics have been flying under the radar of the administration and using p2p for a couple of years now. There's even http://educommons.org/, a p2p program at Utah State to allow teachers to swap instructional materials.
Article mentions Bandwidth issues (Score:2, Interesting)
Do our brains have bandwidth issues? Supposedly not, since we only use 10% of them. Gnutella is always ridiculed because of its overhead, though, and Napster and the rest don't really count because they are centralized. So how does our brain avoid getting overwhelmed, and how can that be applied to P2P?
Re:Article mentions Bandwidth issues (Score:3, Funny)
Re:Article mentions Bandwidth issues (Score:1)
Re:Article mentions Bandwidth issues (Score:3, Interesting)
Actually, our brains/nervous system do have 'bandwidth' issues - which is why the doctor does that little 'smack you on the knee with a tiny hammer' test. It's like pinging your brain for a response and seeing how long it takes for your brain to respond appropriately.
P2P networks are the next Big Step in computing if you ask me. Free neighborhood wireless networks will probably be the next step in networking too. We've had global community with wire-based networking; now it's time to bring community back TO the community.
Re:Article mentions Bandwidth issues (Score:1)
Sorry, but that's not how reflexes work. 'Reflexes' do not involve the brain at all. The signal from the hammer-hit goes to your spinal cord then immediately gets re-routed back to your muscles (in addition to continuing on to your brain).
Re:Article mentions Bandwidth issues (Score:1)
Kind of like people that serve up classical music mp3's
Debunking 10% of the Brain myth (Score:3, Informative)
The two points snipped from the article:
1.) Brain imaging research techniques such as PET scans (positron emission tomography) and fMRI (functional magnetic resonance imaging) clearly show that the vast majority of the brain does not lie fallow. Indeed, although certain minor functions may use only a small part of the brain at one time, any sufficiently complex set of activities or thought patterns will indeed use many parts of the brain. Just as people don't use all of their muscle groups at one time, they also don't use all of their brain at once. For any given activity, such as eating, watching television, making love, or reading Skeptical Inquirer, you may use a few specific parts of your brain. Over the course of a whole day, however, just about all of the brain is used at one time or another.
2.) The myth presupposes an extreme localization of functions in the brain. If the "used" or "necessary" parts of the brain were scattered all around the organ, that would imply that much of the brain is in fact necessary. But the myth implies that the "used" part of the brain is a discrete area, and the "unused" part is like an appendix or tonsil, taking up space but essentially unnecessary. But if all those parts of the brain are unused, removal or damage to the "unused" part of the brain should be minor or unnoticed. Yet people who have suffered head trauma, a stroke, or other brain injury are frequently severely impaired. Have you ever heard a doctor say, ". . . But luckily when that bullet entered his skull, it only damaged the 90 percent of his brain he didn't use"? Of course not.
As the article says "For a much more thorough and detailed analysis of the subject, see Barry Beyerstein's chapter in the new book Mind Myths: Exploring Everyday Mysteries of the Mind [1999]"
Re:Debunking 10% of the Brain myth (Score:1)
rather than "that lower left hand glob underneath the cerebellum, the rest is fat".
Woohooo! (Score:2, Funny)
the funnell leaks? (Score:1, Informative)
Meanwhile, you could possibly get some serious p2p going, at this catchy web address [opensourceworks.com], if you are shrewd enough to follow some simple directions.
Have you seen these face scans, etc... [opensourcenews.com], of the REAL .commIEs? I thought so.
Excellent point. (Score:3, Interesting)
Excellent point.
p2p in academia (Score:2, Insightful)
Copyright is key... (Score:4, Insightful)
The real sticking point, however, is what happens when general file-sharing software becomes popular, and people are sending each other pictures of the kids, notes, and all other sorts of digital goodies in addition to music.
Napster was banned for two reasons: bandwidth and copyright infringement. What's likely to happen in the case of general-purpose P2P apps is that universities and ISPs will start to block out the software (such as Gnutella) rather than individual users when they get complaints of copyright infringement, making the public suffer for the actions of the few. Worse, all of those legitimate users of P2P software will be labeled as "pirates."
Re:Copyright is key... (Score:1)
What's likely to happen in the case of general-purpose P2P apps is that universities and ISPs will start to block out the software (such as Gnutella) rather than individual users when they get complaints of copyright infringement, making the public suffer for the actions of the few.
Of course, that's how justice is done nowadays. If a person does something wrong, an entire group gets punished. I can only think of a few exceptions, but that's what happens. Ever since I was in first grade, that's what happens. Someone does something wrong, and all the boys have to stay after -- if a girl does something wrong, the entire class stays after. If an Arab blows a few buildings up, all Arabs get in trouble. I think that the idea is that the group will keep its members in check, so they don't all get in trouble. At least that's what my CIS teacher at the Ottawa County Careerline Tech Center in Holland, Michigan, a big proponent of group justice, said anyway. Another advantage that group justice has over the canonical form of individual responsibility is that the authorities don't have to waste their time investigating -- all they have to do is get a general profile. Group justice is the wave of the future, better get used to it.
Re:Copyright is key... (Score:1)
Incidentally, one of the biggest problems in meta-analysis of scientific results, the file cabinet full of null results, could be dealt with through p2p. In general, it is harder to publish null results than positive ones, so a meta-analysis of many small published studies ends up skewed away from null. If everyone made their raw data available on a file-sharing network (or at least a random subset of people, chosen without regard to who got positive results), this problem would go away.
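As a toy illustration of that skew (Python, purely hypothetical numbers, nothing to do with any particular field): simulate 200 small studies of a treatment with zero true effect, then compare the pooled estimate from all the raw data with the pooled estimate from only the "publishable" positive results.

# Toy sketch, hypothetical numbers only: how the file cabinet full of nulls
# skews a naive meta-analysis when only "positive" studies get published.
import random

random.seed(0)
TRUE_EFFECT = 0.0      # assume the treatment actually does nothing
N_STUDIES = 200
N_PER_STUDY = 30

def run_study():
    """Observed mean effect of one small, noisy study."""
    samples = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    return sum(samples) / len(samples)

studies = [run_study() for _ in range(N_STUDIES)]

# Crude publication rule: only clearly "positive" studies leave the file cabinet.
published = [s for s in studies if s > 0.2]

print("pooled estimate, everyone shares raw data:", sum(studies) / len(studies))
print("pooled estimate, published studies only:  ", sum(published) / len(published))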
It resurrects the problem of namespaces mentioned in the article in a big way! When the results of science become politically important (say, tobacco research, or health effects of PCPs), you have to worry about people with an interest in the topic releasing false data (this is already a problem, but we know who they are). You also have to know who someone is because you have to be able to go to them and ask how to verify their results, even unpublished ones, and so on.
more bandwidth (Score:3, Insightful)
(emphasis mine)
That's the problem right there. As resources become abundant, price should drop, availability should go up, and the product should reach a wider audience. It took how many years (lack of competition) for Microsoft to ship a decent product? How many DSL providers disappeared? The RIAA and MPAA want to strangle any revolution in the distribution of their product. What kind of market model is that!?!
When companies can hold back on the resources they control to keep profits rising, there's a problem.
Re:more bandwidth (Score:1)
For them, the current P2P filesharing isn't a useful/helpful market model at all. If it became completely successful, they would go out of business.
As for them "holding back resources they control to keep profits rising," that's their job: to keep profits rising. If they don't, they're fired.
Not that I'm against file sharing; I've done my fair share of music downloading. But a free P2P service isn't something we can reasonably expect them to accept. The biggest problem I have seen with them is what you brought up earlier in your statement: "As resources become abundant, price should drop". As they got a bigger and bigger hold on the market, they never dropped their prices. Now they are seeing the problems that caused. If a CD is well priced, I like the music, and I want to support the band, I'll buy it. If it's going to cost me more than I'm willing to pay, then I go download it. I know a lot of people with this attitude; if they lowered the prices, they would see their sales increase.
Re:more bandwidth (Score:1)
and of poor quality. I can understand the companies' reluctance, fighting as they are for their survival in the game, but they really need to wake up and see that this truly is a revolution in information.
Co-exist not compete.
Re:more bandwidth (Score:3, Funny)
It took how many years (lack of competition) for Microsoft to ship a decent product?
Twenty-six and counting...
Bandwidth cap (Score:1)
The thing is that occasionally there does arise a legitimate need to send a large file outside the university. It's really frustrating to have to wait several hours for a file transfer that could have taken 20 minutes. What's odd is that this in no way reduces piracy - people can still download whatever they want at ungodly speeds. I don't understand why they only blocked the sending.
So far, I don't know of any way to get around the cap, though I've tried a few little things. I don't know how it's implemented, but do let me know if you have any ideas. Or you can just rant at me.
Re:Bandwidth cap (Score:1)
Current Researchers? (Score:1)
I'm going into CS graduate school next year and my proposed research area is something close to P2P optimisation. Does anyone know of professors already doing research in P2P?
Plenty of lyin' cheatin' sluts at my Uni (Score:2, Funny)
Stamford, CT - Internet consulting firm Gartner Group predicts that growth in peer-to-peer girlfriends will explode in the coming months. "Right now the P2P girlfriends are in the hands of early adopters in the tech community. We think that by the end of the year they will have reached critical mass and move into the mainstream. We forecast that by 2003, 65% of girlfriends will be peer-to-peer," said consultant Dawn Haisley.
One of the first movers was Computer Science student Neil Joseph, "I was pretty pissed when she told me she slept with someone else, but when I found out she was one of the new peer-to-peer girlfriends I was geeked. I love being a beta-tester. My friends are telling me I should leave her, but I know they are just jealous."
The beauty of a peer-to-peer girlfriend is that one peer doesn't know what the other peer is doing. Anonymity is extremely important in maintaining the integrity of the network. Most girlfriends report that the speed between peers is more satisfying in a local network, but anonymity is easier to keep in a worldwide network.
Some techies aren't pleased with P2P girlfriends. "These consultants throw around terms like peer-to-peer and they don't even know what the phrase means," said networking guru Mitch Mead, "P2P girlfriends aren't even a true peer-to-peer network. They are just a client-server model trying to jump on the P2P bandwagon."
Tom Mansfield agrees, "I had a so-called P2P girlfriend, but she was more like a lyin', cheatin' slut."
In an academia sense... (Score:2)
I think all the systems and networks at a university should have a sprinkling of all the old and new technologies throughout.
Load of nonsense (Score:5, Insightful)
It is often that I read knowledgeless prattle on Slashdot ... usually only from fellow commenters. This is not a troll, it is serious criticism of an article that is blatantly wrong. Let's examine Mr. Oram's discussions of P2P ...
Did Universities try to stop P2P? Napster, certainly. Probably many other file sharing systems too. Why on earth would they do that? Bandwidth, security, liability. I'll elaborate later.
Mr. Oram asserts that P2P is a great way to overcome limited resources, then expounds on how Internet2 and IPv6 are going to remove the resource barriers to P2P.
Is P2P new? No. IRC's DCC extensions have been around for at least 8 years; ytalk is even older. The idea of distributing information on a whole lot of servers without central control is, surprise surprise, the basis for the Web. P2P simply involves direct communication between clients, at most using a server to mediate discovery.
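For what it's worth, here is a minimal sketch in Python (hypothetical port and message, no discovery or mediation at all) of that point: a "peer" is nothing more than a process that listens like a server and connects out like a client.

# Minimal sketch, hypothetical port and message: a peer is just a process
# that is both a server (listens) and a client (connects out). No discovery,
# no mediation server, nothing fancy.
import socket
import threading

LISTEN_PORT = 9000          # assumption: any free local port
ready = threading.Event()

def serve():
    """Server half of the peer: accept one connection and send a greeting."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", LISTEN_PORT))
    srv.listen(1)
    ready.set()
    conn, _addr = srv.accept()
    conn.sendall(b"hello from the server half of this peer\n")
    conn.close()
    srv.close()

def fetch(host, port):
    """Client half of the peer: connect to another peer and read its greeting."""
    cli = socket.create_connection((host, port))
    data = cli.recv(1024)
    cli.close()
    return data

threading.Thread(target=serve, daemon=True).start()
ready.wait()
print(fetch("127.0.0.1", LISTEN_PORT).decode().strip())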
I'm going to ignore the anti-DMCA dissertation, because it's been heard before. It also has nothing to do with P2P, just a few specialised services that use P2P as a means to swap copyrighted information. If it wasn't for people like Mr Oram confusing P2P with specific P2P applications, then P2P as a whole wouldn't have a bad name.
A little later we hit the "IPv6 will help" argument, to which I can only say: security. Sure, you get rid of NAT, but at the risk of placing your device in the line of fire. Even if it is "secure by default" (so end users don't have to worry too much), it is still accessible from everywhere. That means it is DoS vulnerable, attack vulnerable when a security hole is found, and each and every individual is responsible for their own security. That doesn't work in corporate or group/organization networking. A central point needs primary control over security for the entire network. NAT, firewalls, and prevention of arbitrary data coming IN to the network unsolicited are significant defenses against attack.
Which brings up the strongest point for universities to deny P2P: they would have to allow access to P2P services (yes, P2P is actually a client and a server on each machine) behind their firewalls, causing a security risk. Typically universities have a limited number of computers providing services behind firewalls, and take care to guard them against attack, and quarantine them in case of breach. With P2P, this approach goes out of the window.
For the same reason Mr Oram has ignored the security community's hatred of SOAP, a protocol explicitly designed to penetrate those nasty firewalls that administrators put up. Tell me, why don't we just set up a public inbound IP-over-TCP/IP tunnel available on all firewalls so that we can get past them?
Now Mr Oram turns to debunking the security argument. Totally missing the point of course. You can encrypt and sign until your CPU is blue in the face, and still have zero security because your computer has been compromised. Unless you can adequately secure ALL services on your computer, you are insecure. One of the best ways to secure a service is to shut it down. The more services, the more ports of entry. Not surprisingly, P2P is a service.
Sendmail and Apache serve massive amounts of network traffic every day. They have taken years to mature to a point where they are mostly secure, yet new hacks are found for them every so often. How long until P2P implementations reach this level of maturity, and security?
The McAfee example is laughable, to say the least. Multi-tier client-server technology isn't P2P, no matter what this supposed expert wants to believe. Oh yes -- what was that announcement two weeks ago about an attack on the McAfee auto-upgrade feature?
While most of the assertions regarding bandwidth are true (shock!), Mr Oram is WAY OUT on the university issue. You see, students may be downloading the same amount irrespective of whether they use P2P or FTP ... but there is the issue of UPLOADING. Having administered a network for just a small company at the time of Napsterism, I saw a massive increase in bandwidth use just from Napster fielding and responding to queries, even before local users started downloading the music.
Finally we conclude by returning to nonsense: SETI@home is P2P?!? In what universe does distributed computing, offloaded by a central server and in which none of the computing nodes communicate with each other, get classified as P2P?
Please, Mr Oram. Understand at least the vaguest basics of a topic before spewing garbage about it.
Re:Load of nonsense (Score:2)
Glad to know that firewalls don't work unless you have a NAT in there!
Actually, NAT and individual addressing have nothing to do with security. Firewalls can filter subnets just as easily as they can filter a single IP.
Also, regarding SOAP and HTTP tunneling, etc., you're blowing smoke. If your firewall allows outgoing TCP connections (like all of them do?), then you can tunnel protocols. If someone wants to do it, they can. This is a non-issue.
Re:Load of nonsense (Score:1)
So I appreciate Twylite's points, except when they get twisted into a critique by unnecessarily placing issues in opposition to each other (for instance, presenting IPv6 as a threat to security instead of an issue to pursue in addition to security).
Bandwidth from the institution's perspective? (Score:1)
Second, P2P may work fine within the university with current equipment for current applications. Now add in P2P video, streaming audio, you name it. Now you're talking about multiples, or even orders of magnitude, more traffic for your new P2P applications. (Almost nobody wants to do great new things with ASCII text, alas!) Soon you will need new switches and routers, all within your on-site network. A $12,000 router may not be too bad, until you need one for every 1,000 users. And if traffic keeps growing, you may need to replace it in 3-5 years. Flat rate fees? Going up!
You get what you pay for.
TANSTAAFL.
Now just how bad do you want more bandwidth?
Most P2P should be banned. (Score:2, Interesting)
I'm not sure about the States, but in Canada universities have very small budgets that are being cut yearly. I'd rather the university have a decent network and focus spending on research than worry about supporting P2P stuff.
P2P is Internet upside down (Score:1)
Instead of my stupid computer being passive when retrieving information, I can have passive retrieval while aggressively distributing information too. This means that we are all content providers, with high redundancy. This is supposed to be a Good Thing.
The problem there, of course, is that it opens up a can of worms around intellectual property, copyright, and all that crap.
But certain powerful groups want to curtail this, much like the church despaired when literacy reared its ugly head. Too bad for them. We all know that information should be democratized. And only civil disobedience will be able to counter pressures against that democratization.
Things have gotten worse since television, when our entertainment and our news/information became entwined a little too closely. P2P allows us to change this. But labeling people pirates and copyright thieves is the Old Way. It really is. Forget about the dot-bomb and IPOs; a newfound ability to communicate amongst one another is at risk right now.
We all need to pay for the goods and services we use to access information, and those who work hard to build that infrastructure need to reap the benefits. I think that Freenets and private neighbourhood nets are a good thing, as are commercial ventures, but the actual money value of information will go down simply because it is now so easily reproducible. Profit should be made in its distribution and not in the hoarding of easily gotten patents and copyrights. That does no one any good.
ramble ramble ramble....gnashing of teeth
Re:P2P is Internet upside down (Score:2)
It only opens such cans of worms if it is abused. The problem is the tendencies of the napsterite thugs to confuse providing information with providing entertainment. The confusion is deliberate, because "access to information" sounds like something one may believe they're entitled to, while "access to entertainment" isn't.
a newfound ability to communicate amongst one another is at risk right now.
If these P2P tools really were being used to "communicate", this wouldn't be an issue. I'd argue that distributing someone else's creative work is not "communicating" at all; it's more like providing a free entertainment service at someone else's expense. No one's trying to ban web servers, because these typically are indeed used for "communication".
We all need to pay for the goods and services we use to access information, and those who work hard to build that infrastructure need to reap the benefits.
I'm not clear on what your point is here.
but the actual money value of information will go down simply because it is now so easily reproducible.
Not sure on this point either. Maybe you mean "market value"? The utility of information doesn't change.
Profit should be made in its distribution and not in the hoarding of easily gotten patents and copyrights. That does no one any good.
The problem with this is that if you're prepared to make the basic assumption that people will act in their own economic interests, then the result would be that everyone would want to distribute and no one would want to create. Obviously, the only sensible and morally acceptable system is one where anyone who does useful work, whether it be distribution or creation, is compensated.
IPv6 ? (Score:1)
I think that if there is anything that will make users' systems less obscurely identified on a network, it will *not* be IPv6. With the power that the general public will have over IP addresses, NAT may be only slightly less useful, and IPs will change so frequently that nobody will be able to figure out where the 'ghost host' went. I, for one, prefer the minuscule amount of obscurity my wireless NAT'd connection provides me when browsing.
Try setting up a machine that's completely open to cookies and the like, but only use it to occasionally browse the type of sites you normally wouldn't, say Pokemon and Barney sites. Just watch the spam and pop-ups accumulate relative to those subjects. Nah, I'd rather not "log in automatically" or "save your username and password": disable all those people-tracking devices, and change IPs/MACs on a regular basis.
Firewalling universities a big problem (Score:4, Insightful)
Our university [www.utu.fi] did this, which has especially annoyed many computer science students. For me, it closed down my largeish website, together with many CGI programs for research (such as a data equalizer for neural net research) and personal purposes.
I wrote a long complaint [funet.fi] (in Finnish, sorry) about the problem, but since most people don't need (or don't know they need) the service, they don't care. Students can still put their web pages on a poorly administered and always outdated main server, which doesn't have any database or other software, and has very severe restrictions on disk space (on the order of 10 megs, while I'd need some 10 gigs).
I see this also as a serious threat to the development of new Internet services. If you look at most of the existing Internet technologies (http, nntp, smtp, bind...), they were all created in universities as "gray research", often by students. In a tightly firewalled Internet, they might never have made it out.
Sure, researchers and departments of our university can theoretically have their own servers, if the department's head takes personal official responsibility and the department officially allocates money for the upkeep. This means an absolute ban on almost all "gray research" projects (often part of larger projects).
In our case, firewalling was explained by the need for tighter security. However, an easy-to-use unofficial port registration would have solved most of the security problems. It's difficult to say what the real reason is; perhaps over-enthusiasm for "high-end security tech", or perhaps just low interest in administering the system: if the net isn't used, it doesn't cause so much work, right?
Oh, and we pay for our connections, although they are partly subsidized. Well, it might even be profitable for the university. (Note that studying doesn't cost anything here.)
Re:Firewalling universities a big problem (Score:2, Interesting)
As much as I agree that universities should keep their networks open, I have to disagree with this point. Why? Because initial "gray" work can (and probably should) still be done on an isolated network. Not only does it make sure that projects don't accidentally kill the campus or departmental network, it also makes debugging a heck of a lot easier. And, once the prototypical work is done, you can usually convince some professor to beat IT into submission for you. Most departments have a couple of spare boxes lying about (heck, back at my school twenty years ago, there were usually anywhere from 2-3 midicomputers lying about totally unutilized at any time). Hubs are cheap. Linux makes a relatively stable development platform for gray work. So, in the end, I don't see "sealed tight" campus networks as a huge impediment to self-motivated research (unless it's cultural research into the latest works of Limp Bizkit).
Re:Firewalling universities a big problem (Score:2)
Some, yes, at least theoretically. If someone makes an ingenious new important system, he could develop it first for some time, and then might get permission to run it on an open server. Yes, possible, in theory.
In the real world, I think most projects are not so "important" or high-end that professors would give them permission at any point. Many of the projects may be (at least initially) hobby-related, and professors would not appreciate them much. Notice that the justification may need to be *very* strong, so even having written some "new internet protocol" such as http might not qualify.
It's basically a problem of unnecessary obstacles which demotivate people. If you have to struggle too much to get that one cool service you'd like to build in your limited spare time, you'll probably do something else. This is of course a rather difficult subject to consider in general, but this is my intuition, based on how I do things.
It's the commodity connection (Score:2)
Universities generally aren't concerned with P2P file sharing over Internet2. We have plenty of capacity. No Internet2 core circuit was ever saturated. Congestion on campus connections to GigaPoPs and GigaPoP connections to Abilene is very infrequent and easy to deal with (usually, by upgrading the circuit).
What universities are concerned about is Internet1 usage. They generally have metered commodity connections that cost a lot of money and are often congested.
Many universities have unwittingly become information producers for home users on cable and DSL connections, who download a lot of stuff from university dorms. This costs universities serious money while it's hard to argue that it furthers any educational goals.
Re:It's the commodity connection (Score:1)
Pods (Score:1)
Pods is a decentralised P2P network, built around XML, for sharing abstracted resources across nodes that need not supply or even know of the resource.
The Disk and Processor resources are already in place and working well.
Have a look at it here [sf.net]
Academic P2P research (Score:3, Informative)
CFS [mit.edu] and PAST [microsoft.com] are P2P readonly file systems a la Napster/Gnutella/Freenet. Both had papers in this year's SOSP [ucsd.edu]. Both are based [mit.edu] on [microsoft.com] log(N) P2P overlay routing/lookup substrates.
OceanStore [berkeley.edu] seeks to be a more general (writable) global storage system.
And several P2P conferences [rice.edu] have [ida.liu.se] formed [www.lri.fr] and will continue to form.
Some of these projects have been going on for years. So you shouldn't buy the "Academic networking/CS researchers are a bunch of P2P haters" line without a few grains of your favorite seasoning.
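To put a little flesh on the log(N) claim above, here is a rough sketch in Python (not the actual CFS/PAST/Chord code, and the parameters are made up) of a Chord-style lookup: nodes sit on a hash ring, each keeps "fingers" at power-of-two distances, and every hop moves at least as far as the nearest finger without overshooting, so a lookup touches roughly O(log N) nodes.

# Rough sketch only, not the actual Chord/CFS/PAST code; ids and parameters
# are made up. Shows how finger-table routing resolves a key in a few hops.
import hashlib
from bisect import bisect_left

M = 16                        # id space is 0 .. 2**M - 1
RING = 2 ** M

def node_id(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

nodes = sorted(node_id("node-%d" % i) for i in range(100))

def successor(ident):
    """First node clockwise from ident on the ring."""
    i = bisect_left(nodes, ident % RING)
    return nodes[i % len(nodes)]

def finger_table(n):
    """Node n's fingers: successor(n + 2**k) for k = 0 .. M-1."""
    return [successor((n + 2 ** k) % RING) for k in range(M)]

def lookup(start, key, hops=0):
    """Greedily route through fingers until the key's successor is reached."""
    target = successor(key)
    if start == target:
        return start, hops
    def progress(f):
        d = (f - start) % RING
        return d if d <= (target - start) % RING else -1
    best = max(finger_table(start), key=progress)   # furthest finger not past target
    return lookup(best, key, hops + 1)

node, hops = lookup(nodes[0], node_id("some-file"))
print("key lives on node %d, found in %d hops (N = %d nodes)" % (node, hops, len(nodes)))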
plenty of P2P research (Score:1, Informative)
For more info check out these implementations:
Farsite, xFS, Frangipani, Intermemory, OceanStore, Eternity Service, India, PAST, Free Haven, Gnutella, Freenet, Pastry, Tapestry, CHORD and CAN. (Not Napster!)
Academic research takes a more measured view (Score:1)
Oram's speech is interesting, but offers little that is new.
His assertion that academia was uninterested in p2p technologies because university administrations acted (responsibly, imho) to prevent the use of tools designed for the illegal redistribution of copyrighted works is a little disingenuous. Certain areas of computer science research are extremely interested in these technologies (I count myself amongst them; my PhD research involved p2p resource discovery techniques), in particular those which deal with developments in distributed systems.
However, there is a great deal of hype about the efficacy and efficiency of p2p systems (I consider Oram's article to be an example of such), so it is right that academia should judge these systems on their merits rather than simply accepting the claims at face value.
For example, Oram's article contains the (uncontroversial) statement that peer-to-peer technologies can distribute not only files but also the burden of supporting network connections, and then goes on to claim that the overall bandwidth remains the same as in centralised systems, which is rarely the case (in the p2p domain I have studied, resource and service discovery, even the more efficient decentralised systems have a significantly greater communication complexity than centralised systems do).
Valid research cannot be founded on spurious claims such as these. p2p technologies may have a number of advantages, but forgive us in academia if we don't get very excited once we weigh up the disadvantages.
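For a very rough sense of the gap being described, here is a back-of-envelope sketch (Python, with completely made-up parameters, not figures from my research) comparing the messages needed to resolve a single query under a central index, a log(N) DHT, and Gnutella-style flooding.

# Back-of-envelope only; the parameters below are assumptions, not measurements.
# Messages needed to resolve one query under three discovery designs.
import math

N = 10000            # peers in the system (assumed)
FANOUT = 5           # average new neighbours reached per flooding hop (assumed)
TTL = 7              # typical Gnutella-style time-to-live (assumed)

central_index = 2                                        # one request, one reply
dht_lookup = math.ceil(math.log2(N))                     # roughly log2(N) routed hops
flooding = sum(FANOUT ** h for h in range(1, TTL + 1))   # worst-case fan-out

print("central index :", central_index, "messages")
print("DHT lookup    :", dht_lookup, "messages (~log2 N)")
print("flooding      :", flooding, "messages (upper bound)")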
Intel Inspired by Napster (Score:2)
It was interesting to see the p2p idea moved beyond academic theory and actually implemented in real world situations by a commercial entity with beneficial and measurable results.
Trickster Coyote
Reality isn't all its cracked up to be.
been there done that (Score:1)
has anyone else done anything like this?
P2P is about more than huge files (Score:1, Interesting)
P2P Thinking (Score:1)