The Internet

uServ -- P2P Webserver from IBM

ryantate writes: "Some folks over at IBM have been working on the uServ Project, which provides 'high availability web hosting ... using existing web and internet protocols', meaning you can serve a website from your desktop and people can get at it with a standard Web browser and without special software. They claim the system, which works from behind firewalls and when you are offline (provided you can convince other peers to 'replicate' your site), is in active use by about 900 people within IBM. Here's the white paper."
This discussion has been archived. No new comments can be posted.

  • only a resounding second.
    Looks interesting. But doesn't this undermine what admins are trying to do when they put up a firewall?
    • Re:bummer (Score:2, Insightful)

      It doesn't look like anything revolutionary to me. It's just a distributed system and dynamic DNS put together, and the coordinator still needs to be online at all times.
    • this reminds me ... I was 3rd-level technical support for a law firm, and a lawyer calls me and asks if we have a socks4 or socks5 proxy.

      Being of a suspicious nature, I had to inquire why a lawyer who usually wants to know how to make a word bold in WordPerfect needs to know about our proxy config. Well, he says, I just installed Personal Web Server and....

  • No fair. (Score:4, Funny)

    by tcd004 ( 134130 ) on Sunday December 02, 2001 @02:06AM (#2643133) Homepage
    Only AOL Time Warner Turner, News Corp, Verizon [lostbrain.com] and Oprah should be allowed to make Web pages.

    Everyone else doing it will just mess stuff up!

    tcd004
    • Re:No fair. (Score:5, Informative)

      by Bi()hazard ( 323405 ) on Sunday December 02, 2001 @05:47AM (#2643336) Homepage Journal
      This is exactly the purpose of uServ: if you read through the documents written by uServ's designers, you'll see that this is intended to be web publishing for the masses. While many slashdot readers would host a site themselves, the average user can't do that. Hosts such as Geocities are corporate behemoths who have shown themselves ready to trample individual users whenever they find it convenient. IBM's visionaries hope to use the new resources available to home users, namely cable and DSL connections capable of moving enough data to distribute sites, to implement an unregulated, power-to-the-people version of Geocities.

      Isn't this the entire purpose of the internet: a distributed, uncontrollable network allowing anyone to share information with anyone else? Don't be fooled by the scant description offered on the front page or any preconceived notions about what distributed filesharing systems do. This isn't a client/server program like gnutella; it relies on basic internet protocols to use the dormant resources of clients as servers. Coordinating servers will be set up not only by IBM, but by individual power users like the typical slashdotter: someone with a spare computer to use as a dedicated server, and enough knowledge to run it well. The dream of uServ's creators is nothing less than freeing the server side of the internet from the chains of money, nothing less than making web serving as cheap and easy as web browsing. Nothing less than the liberation of content from the hands of the powerful.

      See for yourself in the document [ibm.com] by the researchers Bayardo, Somani, Gruhl, and Agrawal. Their ultimate vision is a system taken for granted by the end user in the same way DNS is now. A complex solution to a serious problem, but one so easy to use, effective, reliable, and hidden in the background that anything else is unimaginable to the end user. Think of what will be possible when we have a large, community-driven, self-sufficient, unregulated section of the internet. Censorship will be impossible, even for restrictive nations such as China. Using its revolutionary peer-to-peer proxying technology, uServ will be able to dynamically create tunnels and anonymous proxies as easily as it can create webpages. Today Napster can be shut down, but one million users in a hundred countries with most of their traffic completely legitimate cannot be stopped. Today political dissidents can be tracked by oppressive governments, but a distributed network with built-in anonymity and trail obfuscation created by dozens of cooperating users in different countries can guarantee anonymity. Today the internet can to a large extent be controlled by those with money and power, but a mature uServ would bring us close to realizing the internet's original vision, where everyone is equal.

      • This isn't a client/server program like gnutella;

        Were you perhaps thinking of Napster? Gnutella is about as P2P as it gets; there's no central server, and once two nodes have been introduced (e.g., at least one of them has added the other to its host list) they can reconnect even if everyone else is shut down. Granted, it may take a while if the original network was >> 2 nodes--but it doesn't take a very large fraction of the network to self-connect within a reasonable time.

        -- MarkusQ

  • A big company like IBM to back it up. IBM definitely has the funds to take on the RIAA and the MPAA. And if this is less of a pain to use than, say, gnutella/mojonation, it will prove to be a lot more popular.
  • by Rosco P. Coltrane ( 209368 ) on Sunday December 02, 2001 @02:08AM (#2643139)
    "uServ -- P2P Webserver from IBM"

    I can't wait to see the RIAA try to sue IBM. God I love this industry ...

    • Sure IBM is the 800 pound gorilla of the tech world, but wouldn't the MPAA and RIAA be a 950,000 kilo crocodile compared to it?
      • "Sure IBM is the 800 pound gorilla of the tech world, but wouldn't the MPAA and RIAA be a 950,000 kilo crocodile compared to it?"

        The RIAA is a business association, so it's more like a very large community of monkeys, the sum of all the monkeys weighing probably as much as your crocodile.

        But seriously though, the RIAA is not so stupid as to sue IBM. No no, instead they would sue uServ users one by one (or simply threaten them with a couple of C&Ds). They only go after people when they're sure to win, like when they went after Napster.

  • by Astral Traveller ( 540334 ) on Sunday December 02, 2001 @02:09AM (#2643140)
    Freenet [freenetproject.org] already does all this, and in addition, provides for complete anonymity and encryption. It can also be tunneled over just about any other protocol (instead of being limited to HTTP like uServ). It is still under heavy development, but already contains a wealth of information. This is one of the few truly great open-source projects in development today.
    • Also, it appears that uServ is not open source. So maybe it's not so good after all.
    • by whiteben ( 210475 ) on Sunday December 02, 2001 @02:24AM (#2643169)
      I agree that uServ doesn't represent any stunning advances in collaboration technologies. It makes use of proxy servers, peering, and HTTP: not exactly bleeding edge tech.


      On the other hand, it's not Freenet, either. Freenet is a platform which guarantees that data is survivable (lawyer-proof) and secure. uServ doesn't seem to be concerned with either. It's primarily a way for users who aren't very technologically savvy to publish content. That's it. Useful in its own way.


      BEN

    • not to be a troll but...

      in this case the difference is that this works, and freenet still isn't usable by even a decent minority of people, let alone a majority.

      -davidu
    • Basically if you create a bridge between your browser and a gnutella client, you could do the same. Just build an HTTP server into the gnutella client and you can search and open documents using your browser. Of course you'd want some additional functionality (like sharing the cached documents you requested, and some naming scheme so that you get the index.html you are looking for), but technically it doesn't get much more difficult than that. It would be a nice project for somebody working on their master's thesis. (A skeleton of this is sketched below.)

      Of course the thing is that nobody has bothered to do so so far. As pointed out above, it is a really simple combination of what we already have. Yet it takes some creativity (courtesy of IBM) to think of doing it. That's what I find so interesting about this stuff. Everybody is so busy thinking of websites as a central thing that nobody has even considered decentralization (even though it makes perfect sense).
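      Something like this minimal sketch, in Python; the GnutellaClient class is a made-up stand-in for whatever a real client library would provide:

        # Browser-to-gnutella bridge: a tiny HTTP server that turns
        # /search?q=... into a gnutella search and /get/<name> into a
        # download. GnutellaClient is hypothetical.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        class GnutellaClient:
            """Stand-in for a real gnutella client library."""
            def search(self, terms):
                return []        # would return filenames matching terms
            def fetch(self, name):
                return b""       # would return the named file's bytes

        client = GnutellaClient()

        class BridgeHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                url = urlparse(self.path)
                if url.path == "/search":
                    terms = parse_qs(url.query).get("q", [""])[0]
                    body = "<br>".join(
                        f'<a href="/get/{n}">{n}</a>'
                        for n in client.search(terms)).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "text/html")
                elif url.path.startswith("/get/"):
                    body = client.fetch(url.path[len("/get/"):])
                    self.send_response(200 if body else 404)
                else:
                    body = b"try /search?q=..."
                    self.send_response(200)
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("localhost", 8080), BridgeHandler).serve_forever()

      Point a browser at http://localhost:8080/search?q=whatever and the "web site" is really the network behind it; the caching and naming schemes are the actual thesis-sized parts.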
    • Yeah, it sounds like a ripoff of Freenet, except that the transmission of data is direct instead of via intermediate nodes and the anonymity/encryption which is integral to Freenet is missing, and it doesn't just drop unpopular data like Freenet. Yeah, otherwise it's just like Freenet.

      Freenet is very interesting in an abstract sort of way, but certain characteristics - e.g. anonymity and most especially data loss - severely limit its usefulness in the real world. Plus, it's not done and doesn't look like it ever will be. I don't think it can be considered "truly great" unless (a) the development team is functional and (b) the result is useful. There are better examples.

      • Freenet is very interesting in an abstract sort of way, but certain characteristics - e.g. anonymity and most especially data loss - severely limit its usefulness in the real world.
        How could the option of anonymity make Freenet any less useful? You don't have to use it if you don't want to. Also, with respect to data loss, the only data Freenet loses is that which is unrequested. If the author wants to ensure that unpopular data is available in Freenet then all they have to do is reinsert it.
        Plus, it's not done and doesn't look like it ever will be. I don't think it can be considered "truly great" unless (a) the development team is functional and (b) the result is useful.
        Perhaps you should educate yourself before you expose your ignorance. Freenet is under heavy development (daily snapshots are released on the website), and many people are using it, including non-techies, on a regular basis.
        • Also, with respect to data loss, the only data Freenet loses is that which is unrequested. If the author wants to ensure that unpopular data is available in Freenet then all they have to do is reinsert it.

          That's just not good enough, for reasons that have already been discussed in this article [platypus.ro] and elsewhere. Reinserting data is not only horribly inefficient but also unreliable. How often do you need to reinsert? You can't know that unless you know what else is going on that will cause old copies to drop off the end of everyone's cache, so you make a pessimistic assumption and spam the network with reinsertions...and it seems to work until someone else starts doing the same things and the caches start turning over faster and IT JUST REALLY SUCKS. Freenet is useful as a data transmission method but not as a data store, and some people want a data store. Get over it.

          Perhaps you should educate yourself before you expose your ignorance.

          I'm on freenet-tech, Ian. I see how people respond when someone asks when Freenet will be finished. I know about the near-total restart when a lot of the original grand plans were found to be fatally flawed. I can almost predict the next one. You're the one who's ignorant, Ian - about what constitutes a useful system and how to provide it.

          • That's just not good enough, for reasons that have already been discussed in this article [platypus.ro] and elsewhere. Reinserting data is not only horribly inefficient but also unreliable. How often do you need to reinsert?
            The number of times you need to reinsert is inversely proportional to the popularity of your data, and proportional to your desire to have people see your output despite its unpopularity. The same is true of life. In practice, data is available on Freenet for long enough that people can evaluate it; if this were not the case, action would need to be taken, but fortunately it isn't.
            Freenet is useful as a data transmission method but not as a data store
            Freenet is not, nor was it ever, intended as a way to store information. Rather it is closer to radio in that it is a good way to get information to people, but not if they aren't interested in it.
            I'm on freenet-tech, Ian.
            Try freenet-dev, that is a more accurate source for information. Either way, it is clear that your agenda is to spread a negative opinion of Freenet regardless of reality. In practice, many are happily using Freenet RIGHT NOW. Is it perfect? No, but we are working to improve it, and it is working well enough for many even now.
            You're the one who's ignorant, Ian - about what constitutes a useful system and how to provide it.
            Tempting as it is, I won't take your bait. Try talking to the significant number of people who are unfamiliar with Freenet internals yet are using it on a daily basis; they will provide you with the counter-argument to your claims.
    • This uServ thing solves none of the privacy problems freenet has "solved".

      -If Employee A serves porn, it will also find its way to (the PC of) Employee B. B has no control over this.
      -Also, Employee B can modify the data of Employee A... oops.
      -It is based on the willingness of Employee B to mirror A.
      -If the site of A is very popular, it must be mirrored many more times, but no mechanism is described for this.

      Freenet has solved most of these problems by encrypting and signing the data in freenet. It distributes the data as it is requested. And I wonder if the system will remain acceptable if it gets more popular. Suppose you run a freenet node because you want to exchange mp3 files. Your computer gets to contain: ??? (you have no control over this)

      Freenet does not work (yet) as far as I can see. If someone can tell me how to set up a standalone node to start with... 0.4 should be able to do this, but it is very much beta now.

      Neither freenet nor userv has solved the problem of how to find information.
  • Piracy issues (Score:2, Informative)

    by neksys ( 87486 )
    It's a neat idea, but realistically, I can't imagine personal "This is my Cat" webpages will be propagated far enough for it to be worthwhile (assuming I'm reading it correctly). Unfortunately, as with many "neat ideas", the only uses that will become widespread are warez/mp3/movie/iso/etc. sites, delegitimizing (to some) the whole idea.

    On the other hand, it may make it just that much harder for the MPAA, RIAA and co. to stop the spread of their property.
    • I agree, in the sense that it will either be propagated to a large extent or not at all. Microsoft mentioned this at some point in a "visionary statement", so at that point everyone will have access to publish their content, and we will come full circle to what we have now: everything, (almost) everywhere. (I'm assuming they will implement this in Win2003..)

      So what if it's distributed? Geocities, etc. don't go down/lag tooooo often, and their content is mostly free anyways.. The difference..? Perhaps a M$ central DB of home pages? (more) ads?

      For do-it-yourself hosting, it still may have some kinks to work out.. Just my two cents..
    • by Anonymous Coward
      I have a "This is my cat" webpage with MPEG/divx movies of my cat, hour-long mp3's of my cat, and the iso image of a blank CD scratched by my cat, so don't insult me.
    • Read the article. Propagation isn't the goal here -- all the system allows for are hot backups in case you turn off your computer, and you have to explicitly set them up.

      The intent of this system is for sharing anything -- pictures, sound clips, etc -- that you want *specific* people to look at.

      You share something, then you tell the person how to get to it.

      (And to the person who modded the previous post "Informative" -- for shame!)
    • Re:Piracy issues (Score:2, Interesting)

      by homebru ( 57152 )
      It's a neat idea, but realistically, I can't imagine personal "This is my Cat" webpages will be propagated far enough for it to be worthwhile

      Forget your cat for a minute and think business environment. This is IBM-developed, remember? Now think about an office project team who need to quickly and easily share documentation files, project plans and schedules.

      Traditionally, the project leaders flood their teams with rivers of emails and attachments. This not only bogs down the corporate mail-servers but also guarantees that half the team will never know which is the latest version of the schedule (since half the team is always new and hasn't been added to the MList yet).

      Also, traditionally, there is so much corporate politics about placing docs on an official web server that it just isn't worth the time to fight those battles while under the gun to get your project out the door. And most project managers of my acquaintance have trouble spelling html, much less writing it to fit corporate standards.

      This new tool would allow "publishing" documents to a team simply by copying them to a directory on the project leader's disk/desk. There, it's done. Followed by a short, small email to the team advising that a new version of the plan or schedule is available. In fact, the most serious problem will be getting mossback project managers to try a new tool instead of continuing to send 10Mb email attachments to a list of hundreds.

      While uServ will never replace the established HTML/web world and cannot hope to replace anonymous peer-to-peer transfers, there is a place for this technology. Let's not fall into the trap of thinking that a tool must replace all other tools in order to be useful.

      • Ok, but in a single building it would be simpler just to use a "world-readable" directory on a file server, right? So this is really only useful where you've got multiple geographically separate offices, right?

        • Re:Piracy issues (Score:2, Interesting)

          by homebru ( 57152 )

          Good thoughts. Yes, you could use a common file server. But then you still have the problem of team member churn. Some members leave, others join. And for each newbie, you would have to remember to get server access. Which, in medium and larger companies, means pushing forms through the bureaucracy, i.e., begging for permission to do your job. And which means that, weeks later, the newbie has another password to remember.

          On the plus side of a central server is the idea that the server will be backed up regularly. [Pause for laughter to die down.]

          Which leads around to the question: "How often are the desktops/laptops backed up?" And the accompanying "Why master project data on un-backed-up desktops/laptops?" And here we see the joining of technologies that uServ gives. Each team member can mirror/publish to a central server box.

          Another angle on this is access-mode. With a browser, your readers get read access. Your docs cannot be modified without your knowledge and permission. With a shared directory, anything is fair game. Including "accidental" deletes and over-writes. Ever lose a fifty page functional spec because some idjit on another team saved to the wrong directory? Very not fun.

          So, yeah, you could use a shared directory for your docs. And you could use a shared directory for software source control. It would be simple. But would you really want to?

  • ... is that the system can only handle static content. I'm sorry ... but 90% of the sites I visit are dynamically generated.

    The Raven.
    • but 90% of the people likely to use this mostly like black backgrounds with flashing red text, the marquee tag, and GIF animations that jump, twirl, bounce, rotate and "burn", plus cursors that turn into comets and leave a trail of Lucky Charms shapes behind them as they move.

      Trust me, they'll be fine with static pages.
    • Yeah, but how many NEED to be dynamically generated? Apart from message boards / discussion-type sites, you can easily use the dynamic code to generate a static page whenever it changes and push that out to the peers (sketched below).

      For example Slashdot: when you go to http://slashdot.org/, not index.pl, you get a static page.

      I would assume this sort of technology would be best used for making sure information that some parties would prefer not to be available is always available (eg. decss code).
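      A minimal sketch of that bake-out idea in Python; render_page() and the peer directories are made-up placeholders for the real template code and replication targets:

        # Re-run the dynamic code only when its output changes, write a
        # static index.html, and copy it to the replica peers.
        import hashlib, pathlib, shutil

        PEERS = ["/mnt/peer1/site", "/mnt/peer2/site"]  # hypothetical replicas

        def render_page():
            return "<html><body>latest stories...</body></html>"  # placeholder

        def publish_if_changed():
            html = render_page()
            digest = hashlib.sha1(html.encode()).hexdigest()
            stamp = pathlib.Path("index.sha1")
            if stamp.exists() and stamp.read_text() == digest:
                return                        # unchanged; don't touch peers
            pathlib.Path("index.html").write_text(html)
            stamp.write_text(digest)
            for peer in PEERS:                # naive copy; rsync is typical
                shutil.copy("index.html", peer)

        publish_if_changed()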
      • For example Slashdot: when you go to http://slashdot.org/, not index.pl, you get a static page.

        Really? Does it say "This page was generated by a Cadre of Rabid Bruins for Webmonger (24302)." for you too?

    • Perhaps uServ can't handle dynamic content, but it can handle a redirect to an Apache server on the same machine (or perhaps even combine the two more integrally: Apache handling the dynamic content, uServ sticking with the static). Anyone who could make dynamic content should be able to handle such a solution.

      BlackGriffen
  • While debugging a nasty client issue, my co-worker said: "Well, I've got these 100 megs worth of logs..." Which would really help me out, but because of all sorts of internal networking issues they would be hard to get. Then he introduced me to uServ. "Here, try this..." And there the logs were. Saved my butt.
    • I am confused. The "killer app" of the P2P server is getting around the fact that your company's intranet is not set up properly to do the one thing it's there to do in the first place (allow you to share information with your co-workers)?

      I could just as easily have said "well, I don't know how to run Apache, so this uServ thing sure saved my ass!"

      To me it just seems like one of those things that are kinda cool, but fairly useless. But then, what do I know?
      • I could just as easily have said "well, I don't know how to run Apache, so this uServ thing sure saved my ass!"

        That's exactly the point. Did you read the whole paper at all, or just count on the rest of us to fill you in?

        "Another challenge, which cannot be underestimated, is keeping the system simple...[Free web hosting sites] require technical expertise, such knowledge of FTP, not held by a typical web user."

        For you or me, this is an absurd idea: not know FTP? C'mon! But try working on a helpdesk some time. I do, for a small ISP and webhosting company, and believe me, it's really like that. It never ceases to amaze me how many people just don't know that "the Innernet" is more than Explorer and Outlook Express (or IE, OE and Front Page, if they've got a web site). This program is for them (but useful for the rest of us too).

        The other way that uServ helped in this particular situation was the not-having-to-use-email-to-send-100Mb-attachments part. I deal w/enough people who can't understand why a) they can't pick up their email because someone sent them a 5Mb attachment (remember, these are dialup users) or b) they're mad because we won't let them send attachments bigger than 5Mb. The last thing you want is for the company's email to be held up for half an hour because there's a 100Mb attachment coming through. Again, for the ordinary user, not you or me, this is the perfect solution.

        Overall, I'm impressed -- this sounds wonderful. The only thing that I can see being a bottleneck to widespread adoption, by people like my dad on dialup, is the need for a subdomain: that's something that definitely requires a techie to set up, and to get a group going. That said, maybe this is something ISPs could offer as an additional service: userv.isp.net. Given limited bandwidth over dialup, this wouldn't be great as an always-on service, but it would be a great way, as the authors suggest, to share pix or similarly large files: "You can pick them up from 7 'til 9 tonight."

        • This program is for them (but useful for the rest of us too)

          a) The comment wasn't from them, it was from someone needing log files to do "debugging" (I am hoping it's not your typical AOL user, then). And there was no mention of email; the files came from a "co-worker", meaning that their IT department was simply not doing its job.
          b) How are they useful for us? I haven't seen any reason yet.

          Anyway, my point was this: being easier to use (supposedly) doesn't make this "technology" better than a traditional HTTP server (or a free service); a service or server that's itself easier to use would fill this role. The usability of this should be judged on its technological merits, not how "drag and drop" the user interface is.

          PS I am well aware that the majority of people cannot use an FTP program. I still believe that the solution to this is not bypassing it, but teaching them to use bloody FTP. If the general population doesn't learn something about computers, then what you and I do is just for our own fun, which is completely fine with me. I do my job; if they (the infamous "user") want to benefit from it, they'll need to make an investment (however insignificant it actually is). It's the 21st century; pointing and clicking should be a required skill. (A good example: the majority of people can't drive for shit. Are car manufacturers to be blamed for that?)
    • Which would really help me out, but because of all sorts of internal networking issues they would be hard to get.

      So in other words, uServ is a fix for IBM's jacked-up intranet? Wouldn't it have been better to put resources into fixing their network in the first place?
      • Re:Um.. (Score:3, Insightful)

        by big_nipples ( 412515 )
        In a standard corporate intranet, what is the preferred method to share files between end users? Far as I can tell, there isn't one. That's the point. Same goes with home users.

        Sure, we can email things. But, as pointed out in the whitepaper, this uses third-party resources -- a mail server.

        FTP? Ok, you teach joe computer user to ftp a file to you -- oh, where are you gonna put it? You need a server somewhere to put it on.

        This thing is designed for average computer users who want to share stuff -- like pictures and log files -- but don't want to take the time to install a web server (or can't tackle the learning curve, or can't install a web server because they've got no static IP, etc, etc.)

        Have a read of the whitepaper linked in the article. It's actually quite a neat idea.
        • In a standard corporate intranet, what is the preferred method to share files between end users? Far as I can tell, there isn't one. That's the point. Same goes with home users.

          Well, everywhere I've worked we used SMB or NFS.
          • NFS -- not gonna work too well when you've got two semi-brain-dead users and one is trying to send the other a huge excel spreadsheet. They're lucky to be able to spell NFS, let alone get it working in a windows environment...

            Using windows sharing is possible, but have you ever tried to get it working on a computer that belongs to one of my previously mentioned semi-brain-dead users? Especially if you can't actually walk up to their PC and do it for them.

            Never mind the case where the file I want is on a cretin's desktop, and I'm logged into a Linux desktop that has been specifically denied access to the Windows Domain for "security reasons".

            IBM's uServ seems to address this nicely -- the company sets up the uServ servers, and installs a nice application on the users' desktops. I ask the user to please "Share" the file I want, and he emails me the URL -- no fuss, no teaching him how to do anything. Seems like a good product to me...
          • smb and nfs lack the scalability you need. All that users want is to put some search terms in a box, hit enter, and click on a link.

            Similarly, they just want to drag a bunch of files to some folder and forget about it, rather than having to share a folder and advertise that they have shared it and that it can be found at some very long, hard-to-remember address. That's too difficult for average users, and they won't share or browse shared stuff.
          • ...we used SMB or NFS

            Congratulations. Glad they work for you. But why do you assume that they will work for everyone?

            In my own experience with a nation-wide network, trying to access files that may be 1000 to 1800 miles and multiple router-hops away is so frustrating that it results in copies being saved locally to avoid the time-outs. The existence of local copies, then, almost assures that they are out of date. And in our shop, the work schedules change too often to rely on out of date information.

    • A few replies (Score:2, Interesting)

      by Halo- ( 175936 )
      I work in IBM development. I was dealing with a guy who works support in another state, who was at a customer's site in another country. Obviously, the powers that be don't want to have lots of nice free data sharing between all these segments, especially since the product I work on is security related. (And before anyone jumps on me about the lack of security of uServ: I was up till 3 AM last night running back and forth between two sites in multiple cars doing a key exchange ceremony using physical tokens for a bank. I understand when using a lightweight system like this is okay.)

      Sure, in my earlier example we could have moved the data in question using existing channels, but you'd be going between three different platforms and three different OSes. Not only that, but a lot of people don't have things like SSH installed. SMB is kinda WinTel-based, which doesn't help me much. NFS has lots of fun things like UDP. Add firewalls into the mix (because we're going between development, support, and customers). Did I mention dynamic IPs? And proxies?
      Granted, I'm not a big Java supporter and would prefer an SSH/SCP tunnel, BUT when I needed the data fast, this was a HELL of a lot easier than setting up a more traditional method. Have you noticed the shift towards "Web Services" in the software world? It's not because doing everything over HTTP/HTTPS ports is the best way, but because damn near everyone has a solution in place to allow that sort of traffic to flow. uServ simply exploits that.

      Oh, about our "jacked up Intranet": Yes, it can be "jacked up" but it's a lot better thought out than any other place I've been. Even the parts running Token Ring. (ewww...)
  • So we have a reputable, giant hardware/software/etc. company backing a P2P filesharing system. Perhaps if the RIAA or MPAA pursue IBM, a standard could be set against their futile attempts to stop filesharing (because we know P2P = piracy) on the Internet.
    • IBM is backing it only in that they are allowing a very small team (right now pretty much one person) in Research to work on it. Not exactly a huge commitment, but it's a step in the right direction. Companies can use this sort of thing to keep large attachments off of overloaded mail servers, so you can justify it in terms of bottom-line costs to keep them from pulling the plug :)
  • How does this protect your privacy? While freenet [sourceforge.net] uses encryption to protect your privacy, ibm uses it to grant or deny access; therein lies the rub, i.e., commercial entities only code for commercial and government interests, while non-commercial entities have better motivations and their code's functionality reflects it.

    BTW, is this released under the GPL? If so, take the best of this or add a layer of encryption to it so that it provides the functionality of privacy as does freenet.

    --turn on your freenet nodes, we've won the war!

  • One part of this that is an interesting idea is having your data replicated by local peers, so that when you are offline the data is still available. This would improve the availability of files in any P2P system, not only in the case where the person is offline; it would also help where a person has popular files: they could be replicated on "friendly" hosts to satisfy demand. This would be great if it were done automatically by a group of co-operating users.
  • Wow, this is what the Internet really needs to become the force for social change that people originally thought it would be. It sucks about the Web that people with popular sites need to pay more for their bandwidth -- meaning that you don't want your personal site to get too many hits.

    Freenet is nowhere near what this sounds like, guys, much as we like the underdog. What is amazing about this is that it relies on already existing infrastructure. I don't want to have to run a Freenet node, wait 20 seconds for a 5k html file to load, and then be dependent on the page being a frequently requested (and thus stored) page. Freenet works best for large, popular files, because the search time then becomes negligible and you are ensured that the file you want will be available. This sounds great for Bob to host his site without worrying that it will disappear if nobody but him reads it; but also, if it turns into the next Hamster Dance, he doesn't have to shell out thousands of dollars for bandwidth costs.

    I use Freenet, but I recognize its limitations. It unfortunately is not the tool for dissent that people hoped it would be, because unpopular files are hard to find.
  • Not very P2P (Score:1, Insightful)

    by Agthorr ( 135998 )
    I can serve a website from my desktop, too! All I have to do is run apache and DynDNS!

    So, let's see what the IBM thingy does... hmm, well, it serves web pages (check), provides dynamic DNS (check), and it distributes the load to other boxes, after you manually set it up to do so (check).

    Sure, the slick interface is a value-add, but I don't really think of this as Peer-To-Peer [openp2p.com]. It'd be a lot more interesting if it automatically distributed the load, replicated the most accessed content, etc. (And the apache-plus-DynDNS recipe really is as small as the sketch below.)
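    A sketch of the DynDNS half in Python, using the classic members.dyndns.org "v2" update interface; the hostname, IP, and credentials are placeholders:

      # Point a dynamic-DNS hostname at the current IP; after that, any
      # web server on this box is reachable by name.
      import urllib.request

      def update_dns(hostname, myip, user, password):
          url = ("https://members.dyndns.org/nic/update"
                 f"?hostname={hostname}&myip={myip}")
          mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
          mgr.add_password(None, url, user, password)   # HTTP basic auth
          opener = urllib.request.build_opener(
              urllib.request.HTTPBasicAuthHandler(mgr))
          return opener.open(url).read().decode()       # e.g. "good 1.2.3.4"

      print(update_dns("mybox.dyndns.org", "203.0.113.7", "user", "secret"))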

    • Until now I haven't seen any P2P program emerge that wasn't illegal or morally questionable, and when a big corporation like IBM finally sticks its head out and shows the world "hey! look at this, peer-to-peer isn't all that bad", you start to fart in their direction. This isn't about what they're doing, but HOW they are doing it. Sure, anyone can put up a website if they want to; this is a completely new way of doing it, and that's what's so cool. Who knows, automatic load distribution and replication might be a thing of the future, but for now just be happy that everyone isn't using Apache and DynDNS for all their information-sharing purposes, or the world would definitely come to a stop (it's called evolution; if you don't like it, look away).
  • Wasn't Sun developing a Java P2P thing (API, protocol, platform, infrastructure, whatever they called it) that was supposed to be the greatest thing ever, solve all humanity's problems, eradicate evil from the face of the earth and cure cancer? What happened to that?

    (On the bright side, P2P seems to be the only one of the stupid X2X acronyms to actually catch on - the combinations of Bs, Cs and 2s were getting pretty obnoxious)
    • It was called JXTA, and really it's just a way to transfer XML around with Java. It's useful, probably. It's still around, but no real visible apps have come of it.
  • This sounds a lot like what MS gave us several years ago. Yawn.
  • by glwtta ( 532858 ) on Sunday December 02, 2001 @02:48AM (#2643197) Homepage
    ... but relevant [bbspot.com]
  • Kind of stupid. (Score:5, Interesting)

    by DarkZero ( 516460 ) on Sunday December 02, 2001 @02:48AM (#2643198)
    The white paper talks about letting people use this program for a fee... but isn't the point of P2P, at least in 90% of cases, to be a way for people who don't have the money for big web servers and T1 lines to serve files and content? It talks about how this is a good alternative to free web hosting services, yet it isn't free, which does not make it a viable option for people who are looking for a FREE web hosting service. If people were willing to pay to serve content, why would they choose this over uploading their files to the server of a web hosting service they would pay for? The biggest and most important difference between those two, it seems, is that this way of hosting content will take up a lot more of your computer's speed and its internet connection than simply uploading your files to a hosting service would.

    If this were a freeware/shareware/open source P2P web hosting program, I'd be thrilled. In fact, I would already have a web page up on it, because I've been looking for just such a solution. But a closed source program that I have to pay a subscription fee for, with a larger fee if I want its fullest abilities? Compared to a hosting service that wants a subscription fee but doesn't take up my internet connection or bog down my computer with continuous server processes, this "P2P Web Hosting (Subscription) Service" is just reinventing the wheel by making it a triangle.

    The whole thing just seems... kind of stupid.

    • Not stupid (Score:4, Interesting)

      by Cato ( 8296 ) on Sunday December 02, 2001 @03:58AM (#2643249)
      uServ only needs a central server to locate individual web servers and set up dynamic DNS accordingly - e.g. to find a replica when the master site is down, or to find a proxy that can accept incoming connections for a firewalled machine. The actual access to web servers is always done via dynamic DNS and HTTP, so there is virtually no cost to the central server (it's only used as machines log in and out of the system, or change proxying/replication relationships).

      The central server (i.e. admin server and dynamic DNS service) could be very low cost - something like the cost of dynamic DNS, which can cost from $0 to $25 per year. Someone like TZO.com could easily offer this (they do a good dynDNS service already).

      The reason this is better than a free hosting service is that you don't subject your readers to adverts, and you can host whatever content you want. The one thing that's missing from this is dynamic load balancing - if you could have 100 other sites replicating a popular open source software site, and have people automatically connect to a nearby low-load site, this would basically *solve the mirroring problem*. If you can make the creation and use of mirrors completely automatic, the non-corporate Web can easily scale to much higher volumes than today, without having to make mirrors visible to the user.

      This does take up more of your bandwidth than central hosting, but that's the whole point of P2P; if this is a problem, apply rate limiting in the web server or the network (a token-bucket sketch is below). Most people use a lot more downstream bandwidth when surfing, so all you need to do is reserve some bandwidth for upstream ACKs and upstream email; the remainder can be used for P2P serving without problems.

      Open source hosting is very reliant on Sourceforge and on people paying for web hosting services - it would be great to see it scale through the application of standard protocols and some smart software. Freenet is a much more radical approach, of course, with some interesting features, but it requires a new client or that someone hosts an HTTP to Freenet gateway - probably both approaches will fit into different niches.
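      The in-server rate limiting is simple enough to sketch in Python; the rate and chunk-size numbers here are arbitrary examples, not from the paper:

        # Token bucket: cap the P2P server's upstream rate so your own
        # traffic (ACKs, email) keeps the bandwidth it needs.
        import time

        class TokenBucket:
            def __init__(self, rate_bps, burst):
                self.rate, self.capacity = rate_bps, burst
                self.tokens, self.last = burst, time.monotonic()

            def throttle(self, nbytes):
                """Block until nbytes may be sent at the configured rate."""
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if nbytes > self.tokens:
                    time.sleep((nbytes - self.tokens) / self.rate)
                    self.tokens = 0   # deficit accrues during the sleep
                else:
                    self.tokens -= nbytes

        bucket = TokenBucket(rate_bps=4096, burst=16384)  # ~4 KB/s upstream

        def send_throttled(sock, data, chunk=1024):
            for i in range(0, len(data), chunk):
                piece = data[i:i + chunk]
                bucket.throttle(len(piece))
                sock.sendall(piece)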
      • Hmmm... well, you pretty much sold me, but I still wonder about the subscription fee. Because they can be changed at any time on a whim, subscription services always bother me, and even more so in the realm of P2P, which can be very finicky depending on its popularity, the generosity of the users, and even the time of day. But otherwise, a damn good argument for uServ.
        • The subscription fee can be $0, of course, just as with dynamic DNS - see www.dhs.org and many other services. However, some people will want to pay for a service to help guarantee its continued existence - of course, any paid-for service can put up its charges, but a free service can disappear...
    • If this were a freeware/shareware/open source P2P web hosting program, I'd be thrilled.

      My apologies if I'm reading you wrong but.... does this mean that you think it's wrong to illegitimately use unlicensed "boxed" software, but that to use shareware in the same way is okay?

      • My apologies if I'm reading you wrong but.... does this mean that you think it's wrong to illegitimately use unlicensed "boxed" software, but that to use shareware in the same way is okay?

        Actually, you kind of are reading me wrong. In the context of my post, the problem I had with uServ was the subscription fee. The white paper states "We believe the uServ service can therefore be profitably offered for a small yearly fee". That seems counterproductive to me. For one thing, it isn't the good alternative to free hosting services that it pretends to be, because it isn't free. That's like saying that Adobe Photoshop is a good alternative to freeware photo editors that have ads in them. Obviously, Photoshop wouldn't be, because it isn't FREE, and the reason people put up with freeware programs that have ads in them is because they cost absolutely nothing. Also, with a subscription service in place, uServ isn't anywhere near as up-front as shareware. With shareware, you test it, you pay for it once, and then you own it forever. With subscription services, IBM could just wait until it had a large user base and then decide to up the yearly subscription fee by a very large number, leaving you either with IBM or right back in the wasteland of free hosting services or desperately trying to host your site off your cable modem.

        In short, I just don't see how a service that makes you pay a subscription fee while taking up your bandwidth and your overall computer speed at the same time is so much better than either putting up with a free hosting service and its ads or just paying for hosting through a web hosting service. Without being free like the majority of P2P file-swapping services are, I just don't see how uServ has an edge over its more traditional web hosting competition.

        • I think what the writers of the whitepaper wanted to do was to suggest possible uses for their tech. Remember, they're just paid to research this stuff -- they're not the ones who decided what to do with it. All of us on the net are the ones who find the use for new tech.

          These guys have only slightly hinted at it being possible to charge a miniscule amount for the service, and Slashdot readers are up-in-arms about evil subscription costs. Chill out a bit -- let's wait to see an internet (as opposed to intranet) implementation before complaining about fees.

          It really seems to me like the IBM researchers *want* the free dynDNS services to add this to their service offering, which would make it a free service.
  • A p2p web mirroring system. Actually a bit different from this: my idea was to have a massive distributed 'cloud' of proxy servers, so that people in sucky countries (China, Saudi Arabia, Australia) could get past national firewalls.

    IMO, the web model of content distribution kind of sucks. Interesting sites that draw a lot of traffic die because they don't have enough bandwidth, or their content isn't 'profitable' enough.

    But on the other hand, isn't this just a stripped-down version of Freenet without the protection? Of course, given how sluggish Freenet is on the current internet, maybe that's the only way to go.

    The holy grail, I think would be a system that still allowed interactive/dynamic content. Imagine a distributed /. :P
  • by Ogerman ( 136333 ) on Sunday December 02, 2001 @03:17AM (#2643222)
    (subject line spoken in a gruff voice like in the old Wendy's commercials)

    I guess that "billion dollars spent on Linux" must be going towards buying IBM execs bigger leather chairs and fine art to decorate the hallways.

    If they want the advantages of Open Source community, they ought to try being part of the community. Lameness.
    • The responsible executive has been dispatched to your home, to personally apologize to you for not living up to your expectations. Afterwards, he will commit seppuku.
  • In the future, the Internet will be destroyed by what is known only as the Slashdot Effect® [slashdot.org]. The Second Dark Ages will begin and the world will imitate the world of Dark Angel [darkangeltheseries.com].

    Luckily, a hero [keanunet.com] from the future has come to the past, obtained a job at IBM [ibm.com], and created uServ [ibm.com]. Slashdot [slashdot.org], you have met your match.

    -Rufus [georgecarlin.com]

  • by burtonator ( 70115 ) on Sunday December 02, 2001 @03:28AM (#2643231)
    This is slightly similar to my Reptile project which was covered a while back on slashdot [slashdot.org]

    The major difference is that we are reusing existing P2P protocols and will provide bindings for JXTA, Freenet, Jabber, etc.

    Content is syndicated between nodes as XML (RSS, etc). An index is kept of all the content so you can run local searches. Actually, we use Hypersonic SQL, so you have a very FAST in-memory index of all this stuff.

    Users publish information into the system by adding an item to their local weblog. Remote users can subscribe to this channel and will receive updates via the P2P layer.

    We are also working on a reputation and distributed public key authentication model. This is obviously very tough and we have been working at it for a while...

    Hopefully we will have another release out soon.

    Anyway.. check it out! [openprivacy.org]
  • For such a verbose description, I can't see too terribly much difference between a Windows user running PWS (MS flak: "It's so easy, you're probably ALREADY RUNNING IT!") and the Google cache, and you don't have to convince the Google cache to peer with you.

    If I see one maggot, it all gets thrown away -- My Girlfriend [nhdesigns.com]
    • She should be more careful with her web page. If she wants to advertise her Web Design expertise, it wouldn't hurt to test the site with Netscape. When I hire people I often go to the sites they mention on their resume and try them out. I imagine others do as well.

      With Netscape 4.79 on Win98, the only thing that you see is the navigation buttons on most of the pages. A quick examination shows that she is improperly closing her table tags, using <table> instead of </table>.
  • A way to combat the /. effect! Yippie!!!

    "Hey guys! I'm going to post a plug on slashdot -- wanna replocate me?" -- this I'm sure won't get a lot of "Sure!" responces... :-P

    Otherwise, a great way to set up mirrors in a hurry.
  • This will be a bit like freenet, but without the anonymity stuff; it will be much more reliable and faster. I think it will be quite a good system for the average person on dialup. As long as the AUPs don't kill it.
  • Why would I want to sacrifice my limited outgoing bandwidth to serve someone else's content? And how would I ensure that my "peer group" actually remains on-line? Sounds like it would make my connection hard to use when I'm on-line and give me no guarantees when I'm off-line.

    The real working business model is, well, web hosting: you pay someone to keep your content on-line. You get reasonably predictable uptime, bandwidth, and services (PHP, etc.). It's not very expensive, you know. You even get it for free if you accept advertising on your pages.

    And the tools to support web hosting and migrate your data are already there: you can use "rsync" to keep your local site in sync with your web hosting service. For really high-end applications, you can replicate the data through a commercial service like Akamai.

    • by Cato ( 8296 )
      Linux and some other OSes have good QoS features, particularly for upstream bandwidth: just allocate (say) half your bandwidth to upstream email (and the important TCP ACKs for your downstream traffic), and the P2P downloads from your machine can use the other half. In fact, you can even allocate 90% to your own traffic but let the P2P traffic 'burst' to use this when you are not using it. The only problem is that Linux QoS is quite hard to use, and most people aren't even aware of what it can do.
  • by Bonker ( 243350 ) on Sunday December 02, 2001 @04:20AM (#2643271)
    Hmmm... I think it's been mentioned that this sounds like Freenet without all the extras thrown in.

    Frankly, there are a few things inhibiting Freenet's popularity when compared to Gnutella and Fasttrack (Is that still running?).

    1. High learning curve: Trying to figure out how to search for freenet keys is a bit of a challenge, especially compared to typing "Matalika" into a Morpheus or Gnutella search window and getting dozens of relevant matches from Lars and co. You don't have critical mass until you have the morons.

    2. Difficult install: I have yet to see a Freenet implementation that didn't require an attendant JRE install of some kind. Worse, it also frequently entails setting up Java class paths, a task that can confuse even Java developers from time to time. Then a user must understand that he usually has to use his or her browser to access Freenet. There is no 'Freenet' icon to point and click.

    3. Difficulty of sharing: It's possible to make entire web pages available via Freenet, but if a Freenet user is firewalled for any reason, it really harms him in terms of being able to participate in the sharing.

    4. Unpopular data doesn't propagate, because the most popular data is shared and replicated most frequently. Warez and mp3s show up, but things like dissident and political theories, text files, and more personal data are lost... even to those who might be interested. (Oddly, Hotline is still a very good place to find these sorts of things. IRC fserves, as well.)

    From what I read of the white-paper, it looks like this project, or an open-source project very similar to it, could solve these problems and still achieve many of Freenet's goals.

    Maybe the OSS community should look into something like this... a moron-safe, web-based file sharing project for the masses that ignores anonymization and encryption in order to gain a more critical mass. Better yet, because of the similarity between the two projects, once the sharing infrastructure was in place, it could accept a Freenet plugin, or vice-versa.

    Just an idea...
    • To keep information in freenet, all you have to do is have a cron job that periodically requests the files. I just request the files and dump 'em to /dev/null. (Sketched below.)
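      In script form it's just as small. This sketch assumes Freenet's local HTTP gateway (fproxy) on its usual localhost:8888, and the key list is made up:

        # Periodically re-request your keys so caching nodes keep them.
        import time, urllib.request

        KEYS = ["KSK@my-unpopular-essay.txt"]      # hypothetical keys
        GATEWAY = "http://localhost:8888/"

        def refresh(keys):
            for key in keys:
                try:
                    with urllib.request.urlopen(GATEWAY + key, timeout=60) as r:
                        r.read()                   # discard: the /dev/null part
                except OSError as exc:
                    print(f"{key}: {exc}")

        while True:
            refresh(KEYS)
            time.sleep(24 * 60 * 60)               # daily, like the cron job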

    • Maybe the OSS community should look into something like this... a moron-safe, web-based file sharing project for the masses that ignores anonymization and encryption in order to gain a more critical mass.

      I'd be interested to know, in general, how one can create any type of p2p tool without having to fear legal problems because what users share might be copyrighted in some countries. Has the MPAA / RIAA ever said anything on that topic? The most popular stuff will probably be copyrighted music and videos. How do I, as a developer, prevent my tool from being used for that type of content? Why do I have to provide solutions for that 'problem' in the first place? Why don't they go after Joe X. who shares movies on IP w.x.y.z? Whenever I create something easy to use, I must fear getting punished for it. Where are Hillary Rosen's suggestions? She was the one who asked p2p developers to work together with content rights owners. This isn't some technical detail, it's the very core problem.
    • I suspect you haven't tried to use Freenet in quite a while. Try downloading a recent snapshot [freenetproject.org]. While Freenet still relies on Java, for most people this just requires installation of an rpm or a quick apt-get. Installation of Freenet itself is pretty easy these days. There is even a .deb in unstable for Debian users, although it is somewhat old. Unpopular data does propagate; if it didn't, systems like Frost [sf.net] wouldn't work, yet they do. As for firewalls, these are not just a problem for Freenet, but for most true P2P systems.

      The current 0.4 snapshots are very impressive, and once a few final bugs are resolved 0.5 will be released.

  • by Anonymous Coward on Sunday December 02, 2001 @04:52AM (#2643296)
    Hi. I work at IBM, and I think you guys are looking at this the wrong way (i.e. the Napster "gimme all your mp3s" perspective).

    When your company has 300,000+ employees, communication can be difficult sometimes, especially when it comes to sharing files. uServ allows you to allocate a semi-permanent "address" for asynchronous access to data, which cuts through several layers of bureaucracy (requesting webspace, etc). Lotus Notes doesn't quite cut it for this type of usage..

    The point is not to anonymously share MP3s.
  • IBM obviously didn't check Google before naming their project. GNU userv [greenend.org.uk] got there first (in 1996).

  • uServ is not for the Internet, because its underlying architecture provides neither encryption nor authentication. But it is a great solution to the Knowledge Management problem of many companies: employees can post documents without overhead.

    I found the most interesting part of the paper to be the underlying Vinci [www10.org] component infrastructure. It focuses on speed and protocol extensibility for distributed applications in a (trusted) intranet environment.

    mailto:frank@fraber.de [mailto], www.fraber.de [fraber.de]

  • by turg ( 19864 )
    Testing my link-check evader: http://www.yahoo.com/ [64.28.67.150]
