The Internet

uServ -- P2P Webserver from IBM 150

ryantate writes: "Some folks over at IBM have been working on the uServ Project, which provides 'high availability web hosting ... using existing web and Internet protocols', meaning you can serve a website from your desktop and people can get at it with a standard Web browser and without special software. They claim the system, which works from behind firewalls and when you are offline (provided you can convince other peers to 'replicate' your site), is in active use by about 900 people within IBM. Here's the white paper."
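The core claim above (serving pages straight from a desktop so any standard browser can fetch them) needs nothing exotic on the serving side. Here is a minimal sketch, assuming nothing about IBM's actual implementation; the port number and directory name are arbitrary choices for illustration:

```python
# Minimal sketch (not IBM's code): serve a personal site straight from a
# desktop directory over plain HTTP, so any standard browser can reach it.
# The port number and SITE_ROOT directory are invented for this example.
import http.server
import socketserver
from functools import partial

PORT = 8080                   # hypothetical port for the personal site
SITE_ROOT = "public_html"     # anything dropped in here becomes the "website"

handler = partial(http.server.SimpleHTTPRequestHandler, directory=SITE_ROOT)

with socketserver.TCPServer(("", PORT), handler) as httpd:
    print(f"Serving {SITE_ROOT} at http://localhost:{PORT}/")
    httpd.serve_forever()
```

What uServ layers on top of something like this, per the write-up, is the coordination: dynamic DNS names, proxying through peers that can accept inbound connections, and replication so the site stays reachable when the desktop is off.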
This discussion has been archived. No new comments can be posted.

  • by Tim_F ( 12524 ) on Sunday December 02, 2001 @03:08AM (#2643138)
    A big company like IBM backing it up makes a difference. IBM definitely has the funds to take on the RIAA and the MPAA, and if this is less of a pain to use than, say, Gnutella or MojoNation, it will prove to be a lot more popular.
  • by Halo- ( 175936 ) on Sunday December 02, 2001 @03:21AM (#2643163)
    While debugging a nasty client issue, my co-worker said: "Well, I've got these 100 megs worth of logs...", which would really have helped me out, but because of all sorts of internal networking issues they were hard to get at. Then he introduced me to uServ: "Here, try this..." And there the logs were. Saved my butt.
  • by Benjiman McFree ( 321140 ) on Sunday December 02, 2001 @03:24AM (#2643166)

    How does this protect your privacy? While Freenet [sourceforge.net] uses encryption to protect your privacy, IBM uses it to grant or deny access; therein lies the rub, i.e. commercial entities only code for commercial and government interests, while non-commercial entities have better motivations and their code's functionality reflects it.

    BTW, is this released under the GPL? If so, take the best of this, or add a layer of encryption to it so that it provides the same privacy functionality as Freenet.

    --turn on your freenet nodes, we've won the war!

  • by whiteben ( 210475 ) on Sunday December 02, 2001 @03:24AM (#2643169)
    I agree that uServ doesn't represent any stunning advances in collaboration technologies. It makes use of proxy servers, peering, and HTTP: not exactly bleeding edge tech.


    On the other hand, it's not Freenet, either. Freenet is a platform which guarantees that data is survivable (lawyer-proof) and secure. uServ doesn't seem to be concerned with either. It's primarily a way for users who aren't very technologically savvy to publish content. That's it. Useful in its own way.


    BEN

  • Kind of stupid. (Score:5, Interesting)

    by DarkZero ( 516460 ) on Sunday December 02, 2001 @03:48AM (#2643198)
    The white paper talks about letting people use this program for a fee... but isn't the point of P2P, at least in 90% of cases, to give people who don't have the money for big web servers and T1 lines a way to serve files and content? It talks about how this is a good alternative to free web hosting services, yet it isn't free, which does not make it a viable option for people who are looking for a FREE web hosting service. If people were willing to pay to serve content, why would they choose this over uploading their files to the server of a web hosting service they pay for? The biggest and most important difference between the two, it seems, is that this way of hosting content will take up a lot more of your computer's resources and your Internet connection than simply uploading your files to a hosting service would.

    If this were a freeware/shareware/open source P2P web hosting program, I'd be thrilled. In fact, I would already have a web page up on it, because I've been looking for just such a solution. But a closed source program that I have to pay a subscription fee for, with a larger fee if I want its fullest abilities? Compared to a hosting service that wants a subscription fee but doesn't take up my internet connection or bog down my computer with continuous server processes, this "P2P Web Hosting (Subscription) Service" is just reinventing the wheel by making it a triangle.

    The whole thing just seems... kind of stupid.

  • by burtonator ( 70115 ) on Sunday December 02, 2001 @04:28AM (#2643231)
    This is slightly similar to my Reptile project which was covered a while back on slashdot [slashdot.org]

    The major difference is that we are reusing existing P2P protocols and will provide bindings for JXTA, Freenet, Jabber, etc.

    Content is syndicated between nodes as XML (RSS, etc.). An index is kept of all the content so you can run local searches. Actually we use Hypersonic SQL, so you have a very FAST in-memory index of all this stuff (a rough sketch of this idea follows the comment).

    Users publish information into the system by adding an item to their local weblog. Remote users can subscribe to this channel and will receive updates via the P2P layer.

    We are also working on a reputation system and a distributed public-key authentication model. This is obviously very tough and we have been working at it for a while...

    Hopefully we will have another release out soon.

    Anyway.. check it out! [openprivacy.org]
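A rough sketch of the local-index idea described in the comment above, not Reptile's actual code: sqlite3 stands in for Hypersonic SQL, and the schema and function names are invented for illustration.

```python
# Rough sketch of a local search index over published weblog items
# (not Reptile's code): sqlite3 stands in for Hypersonic SQL.
import sqlite3

db = sqlite3.connect(":memory:")    # fast, throwaway in-memory index
db.execute("CREATE TABLE items (channel TEXT, title TEXT, body TEXT)")

def publish(channel, title, body):
    # Adding an item to the local weblog; a real node would also syndicate
    # it to subscribers as RSS over the P2P layer.
    db.execute("INSERT INTO items VALUES (?, ?, ?)", (channel, title, body))

def search(term):
    # Local search over everything this node has published or received.
    like = f"%{term}%"
    return db.execute(
        "SELECT channel, title FROM items WHERE title LIKE ? OR body LIKE ?",
        (like, like),
    ).fetchall()

publish("my-weblog", "uServ write-up", "P2P web hosting inside IBM")
print(search("p2p"))    # -> [('my-weblog', 'uServ write-up')]
```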
  • Not stupid (Score:4, Interesting)

    by Cato ( 8296 ) on Sunday December 02, 2001 @04:58AM (#2643249)
    uServ only needs a central server to locate individual web servers and set up dynamic DNS accordingly - e.g. to find a replica when the master site is down, or to find a proxy that can accept incoming connections for a firewalled machine. The actual access to web servers is always done via dynamic DNS and HTTP, so there is virtually no cost to the central server (it's only used as machines log in and out of the system, or change proxying/replication relationships). A toy failover sketch follows this comment.

    The central server (i.e. admin server and dynamic DNS service) could be very low cost - something like the cost of dynamic DNS, which can cost from $0 to $25 per year. Someone like TZO.com could easily offer this (they do a good dynDNS service already).

    The reason this is better than a free hosting service is that you don't subject your readers to adverts, and you can host whatever content you want. The one thing that's missing from this is dynamic load balancing - if you could have 100 other sites replicating a popular open source software site, and have people automatically connect to a nearby low-load site, this would basically *solve the mirroring problem*. If you can make the creation and use of mirrors completely automatic, the non-corporate Web can easily scale to much higher volumes than today, without having to make mirrors visible to the user.

    This does take up more of your bandwidth than central hosting, but that's the whole point of P2P - if this is a problem, apply rate limiting in the web server or the network. Most people use a lot more downstream bandwidth when surfing, so all you need to do is to reserve some bandwidth for upstream ACKs and upstream email - the remainder can be used for P2P serving without problems.

    Open source hosting is very reliant on Sourceforge and on people paying for web hosting services - it would be great to see it scale through the application of standard protocols and some smart software. Freenet is a much more radical approach, of course, with some interesting features, but it requires a new client or that someone hosts an HTTP to Freenet gateway - probably both approaches will fit into different niches.
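A toy sketch of the failover behaviour described above (try the master, fall back to a replica or proxy). This is an assumed reading of the white paper, not uServ's real protocol; the peer table and host addresses are made up.

```python
# Toy failover sketch (assumed behaviour, not uServ's real protocol):
# resolve a site name to its master peer, then fall back to replicas or
# a proxy when the master is unreachable. All hosts below are invented.
import urllib.request

# The kind of table the central/dynamic-DNS server might hand out.
PEERS = {
    "alice.userv.example": {
        "master": "http://192.0.2.10:8080",            # Alice's own desktop
        "replicas": ["http://192.0.2.20:8080",          # co-worker mirroring her site
                     "http://proxy.userv.example:80"],  # proxy for firewalled peers
    }
}

def fetch(site, path="/", timeout=3):
    """Try the master first, then each replica, returning the first response."""
    entry = PEERS[site]
    for base in [entry["master"]] + entry["replicas"]:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            continue   # peer offline, firewalled, or timed out; try the next one
    raise RuntimeError(f"no live peer for {site}")

# fetch("alice.userv.example", "/index.html") returns the page from
# whichever listed peer answers first.
```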
  • by davidu ( 18 ) on Sunday December 02, 2001 @05:12AM (#2643263) Homepage Journal
    not to be a troll but...

    in this case the difference is that this works, while Freenet still isn't usable by any decent minority of people, let alone a majority.

    -davidu
  • by Bonker ( 243350 ) on Sunday December 02, 2001 @05:20AM (#2643271)
    Hmmm... I think it's been mentioned that this sounds like Freenet without all the extras thrown in.

    Frankly, there are a few things inhibiting Freenet's popularity when compared to Gnutella and Fasttrack (Is that still running?).

    1. High learning curve: Trying to figure out how to search for Freenet keys is a bit of a challenge, especially compared to typing "Matalika" into a Morpheus or Gnutella search window and getting dozens of relevant matches from Lars and co. You don't have critical mass until you have the morons.

    2. Difficult install: I have yet to see a Freenet implementation that didn't require an attendant JRE install of some kind. Worse, it also frequently entails setting up Java class paths, a task that can confuse even Java developers from time to time. Then users must understand that they usually have to use a browser to access Freenet. There is no 'Freenet' icon to point and click.

    3. Difficulty of sharing: It's possible to make entire web pages available via Freenet, but if a Freenet user is firewalled for any reason, it really limits their ability to participate in the sharing.

    4. Unpopular data doesn't propagate: Because the most popular data is shared and replicated most frequently, warez and MP3s show up, but things like dissident and political theories, text files, and more personal data are lost... even to those who might be interested. (Oddly, Hotline is still a very good place to find these sorts of things. IRC fserves, as well.)

    From what I read of the white paper, it looks like this project, or an open-source project very similar to it, could solve these problems and still achieve many of Freenet's goals.

    Maybe the OSS community should look into something like this... a moron-safe, web-based file sharing project for the masses that ignores anonymization and encryption in order to reach critical mass. Better yet, because of the similarity between the two projects, once the sharing infrastructure was in place, it could accept a Freenet plugin, or vice versa.

    Just an idea...
  • by Anonymous Coward on Sunday December 02, 2001 @05:52AM (#2643296)
    Hi. I work at IBM, and I think you guys are looking at this the wrong way (i.e. the Napster "gimme all your mp3s" perspective).

    When your company has 300,000+ employees, communication can be difficult sometimes, especially when it comes to sharing files. uServ allows you to allocate a semi-permanent "address" for asynchronous access to data, which cuts through several layers of bureaucracy (requesting webspace, etc). Lotus Notes doesn't quite cut it for this type of usage.

    The point is not to anonymously share MP3s.
  • A few replies (Score:2, Interesting)

    by Halo- ( 175936 ) on Sunday December 02, 2001 @10:42AM (#2643557)
    I work in IBM development; I was dealing with a guy who works support in another state, who was at a customer's site in another country. Obviously, the powers that be don't want to have lots of nice free data sharing between all these segments, especially since the product I work on is security related. (And before anyone jumps on me about the lack of security of uServ: I was up till 3 AM last night running back and forth between two sites in multiple cars doing a key exchange ceremony with physical tokens for a bank. I understand when using a lightweight system like this is okay.)

    Sure, in my earlier example we could have moved the data in question using existing channels, but you'd be going across three different platforms and three different OSes. Not only that, but a lot of people don't have things like SSH installed. SMB is kinda WinTel based, which doesn't help me much. NFS has lots of fun things like UDP. Add firewalls into the mix (because we're going between development, support, and customers). Did I mention dynamic IPs? And proxies?
    Granted, I'm not a big Java supporter, and would prefer an SSH/SCP tunnel, BUT, when I needed the data fast, this was a HELL of a lot easier than setting up a more traditional method. Have you noticed the shift towards "Web Services" in the software world? It's not because doing everything over HTTP/HTTPS ports is the best way, but because damn near everyone already has a solution in place to allow that sort of traffic to flow. uServ simply exploits that.

    Oh, about our "jacked up Intranet": Yes, it can be "jacked up" but it's a lot better thought out than any other place I've been. Even the parts running Token Ring. (ewww...)
  • Re:Piracy issues (Score:2, Interesting)

    by homebru ( 57152 ) on Sunday December 02, 2001 @11:18AM (#2643605)
    It's a neat idea, but realistically, I can't imagine personal "This is my Cat" webpages will be propagated far enough for it to be worthwhile

    Forget your cat for a minute and think business environment. This is IBM-developed, remember? Now think about an office project team who need to quickly and easily share documentation files, project plans and schedules.

    Traditionally, the project leaders flood their teams with rivers of emails and attachments. This not only bogs down the corporate mail-servers but also guarantees that half the team will never know which is the latest version of the schedule (since half the team is always new and hasn't been added to the MList yet).

    Also, traditionally, there is so much corporate politics about placing docs on an official web server that it just isn't worth the time to fight those battles while under the gun to get your project out the door. And most project managers of my acquaintance have trouble spelling HTML, much less writing it to fit corporate standards.

    This new tool would allow "publishing" documents to a team simply by copying them to a directory on the project leader's disk/desktop. There, it's done. Followed by a short, small email to the team advising that a new version of the plan or schedule is available (a small sketch of this workflow follows the comment). In fact, the most serious problem will be getting mossback project managers to try a new tool instead of continuing to send 10MB email attachments to a list of hundreds.

    While uServ will never replace the established HTML/web world and cannot hope to replace anonymous peer-to-peer transfers, there is a place for this technology. Let's not fall into the trap of thinking that a tool must replace all other tools in order to be useful.
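A small illustration of the "copy it into a directory and mail a link" workflow from the comment above. The directory layout and URL scheme are invented; uServ's own publishing mechanics may differ.

```python
# Small sketch of the "copy it and mail a link" workflow (directory layout
# and URL scheme are invented; uServ's own mechanics may differ).
import shutil
from pathlib import Path

SITE_ROOT = Path("public_html/project-x")          # directory the peer web server exposes
SITE_URL = "http://lead.userv.example/project-x"   # hypothetical uServ-style address

def publish_doc(src):
    SITE_ROOT.mkdir(parents=True, exist_ok=True)
    dest = SITE_ROOT / Path(src).name
    shutil.copy2(src, dest)              # that's it: the file is now "published"
    return f"{SITE_URL}/{dest.name}"     # paste this one line into the team email

# publish_doc("schedule-v7.xls")
# -> "http://lead.userv.example/project-x/schedule-v7.xls"
```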

  • Re:Piracy issues (Score:2, Interesting)

    by homebru ( 57152 ) on Sunday December 02, 2001 @03:48PM (#2644175)

    Good thoughts. Yes, you could use a common file server. But then you still have the problem of team member churn. Some members leave, others join. And for each newbie, you would have to remember to get server access. Which, in medium and larger companies, means pushing forms through the bureaucracy, i.e., begging for permission to do your job. And which means that, weeks later, the newbie has another password to remember.

    On the plus side of a central server is the idea that the server will be backed up regularly. [Pause for laughter to die down.]

    Which leads around to the question: "How often are the desktops/laptops backed up?" And the accompanying "Why master project data on un-backed-up desktops/laptops?" And here we see the joining of technologies that uServ gives: each team member can mirror/publish to a central server box.

    Another angle on this is access-mode. With a browser, your readers get read access. Your docs cannot be modified without your knowledge and permission. With a shared directory, anything is fair game. Including "accidental" deletes and over-writes. Ever lose a fifty page functional spec because some idjit on another team saved to the wrong directory? Very not fun.

    So, yeah, you could use a shared directory for your docs. And you could use a shared directory for software source control. It would be simple. But would you really want to?

  • uServ is not for the Internet because its underlying architecture provides neither encryption nor authentication. But it is a great solution to the Knowledge Management problem of many companies: employees can post documents without overhead.

    I found the most interesting part of the paper to be the underlying Vinci [www10.org] component infrastructure. It focuses on speed and protocol extensibility for distributed applications in a (trusted) intranet environment.

    mailto:frank@fraber.de [mailto], www.fraber.de [fraber.de]
