uServ -- P2P Webserver from IBM
ryantate writes: "Some folks over at IBM have been working on the uServ Project, which provides "high availability web hosting ... using existing web and internet protocols", meaning you can serve a website from your desktop and people can get at it with a standard Web browser and without special software. They claim the system, which works from behind firewalls and when you are offline (provided you can convince other peers to 'replicate' your site), is in active use by about 900 people within IBM. Here's the white paper."
This could be what p2p needs to make it (Score:2, Interesting)
I work there. It's pretty useful (Score:2, Interesting)
The encryption is for access controls? (Score:1, Interesting)
How does this protect your privacy? While Freenet [sourceforge.net] uses encryption to protect your privacy, IBM uses it to grant or deny access; therein lies the rub, i.e., commercial entities only code for commercial and government interests, while non-commercial entities have better motivations and their code's functionality reflects it.
BTW, is this released under the GPL? If so, take the best of this, or add a layer of encryption to it so that it provides the same privacy functionality as Freenet.
--turn on your freenet nodes, we've won the war!
Re:Sounds like a ripoff of Freenet (Score:5, Interesting)
On the other hand, it's not Freenet, either. Freenet is a platform which guarantees that data is survivable (lawyer-proof) and secure. uServ doesn't seem to be concerned with either. It's primarily a way for users who aren't very technologically savvy to publish content. That's it. Useful in its own way.
BEN
Kind of stupid. (Score:5, Interesting)
If this were a freeware/shareware/open source P2P web hosting program, I'd be thrilled. In fact, I would already have a web page up on it, because I've been looking for just such a solution. But a closed source program that I have to pay a subscription fee for, with a larger fee if I want its fullest abilities? Compared to a hosting service that wants a subscription fee but doesn't take up my internet connection or bog down my computer with continuous server processes, this "P2P Web Hosting (Subscription) Service" is just reinventing the wheel by making it a triangle.
The whole thing just seems... kind of stupid.
Similar to my Reptile project. (Score:4, Interesting)
The major difference is that we are reusing existing P2P protocols and will provide bindings for JXTA, Freenet, Jabber, etc.
Content is syndicated in between nodes as XML (RSS, etc). An index is kept of all the content so you can run local searches. Actually we use Hypersonic SQL so you have a very FAST in-memory index of all this stuff.
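The local-search idea above can be sketched with an in-memory SQL index. Reptile uses Hypersonic SQL (Java); this analogous Python sketch uses the standard library's sqlite3 in-memory mode, and the channel names, titles, and links are invented for illustration.

```python
import sqlite3

# Sketch of the idea: keep an in-memory SQL index of syndicated items
# so searches run locally and fast, never against the network.
# (Reptile itself uses Hypersonic SQL; sqlite3 here is just an analogy.)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (channel TEXT, title TEXT, link TEXT)")

# Index a couple of hypothetical RSS items pulled from peer nodes.
items = [
    ("peer-weblog", "uServ white paper notes", "http://example.org/userv"),
    ("peer-weblog", "Freenet gateway howto", "http://example.org/freenet"),
]
db.executemany("INSERT INTO items VALUES (?, ?, ?)", items)

# A local search is just a query over the in-memory index.
rows = db.execute(
    "SELECT title FROM items WHERE title LIKE ?", ("%Freenet%",)
).fetchall()
print(rows)  # [('Freenet gateway howto',)]
```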
Users publish information into the system by adding an item to their local weblog. Remote users can subscribe to this channel and will receive updates via the P2P layer.
We are also working on a reputation model and a distributed public-key authentication model. This is obviously very tough, and we have been working at it for a while...
Hopefully we will have another release out soon.
Anyway.. check it out! [openprivacy.org]
Not stupid (Score:4, Interesting)
The central server (i.e. admin server and dynamic DNS service) could be very low cost - something like the cost of dynamic DNS, which can cost from $0 to $25 per year. Someone like TZO.com could easily offer this (they do a good dynDNS service already).
The reason this is better than a free hosting service is that you don't subject your readers to adverts, and you can host whatever content you want. The one thing that's missing from this is dynamic load balancing - if you could have 100 other sites replicating a popular open source software site, and have people automatically connect to a nearby low-load site, this would basically *solve the mirroring problem*. If you can make the creation and use of mirrors completely automatic, the non-corporate Web can easily scale to much higher volumes than today, without having to make mirrors visible to the user.
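The "connect to a nearby low-load site" step could be as simple as picking the least-loaded replica. This is a hypothetical sketch only; the hostnames and the load metric are invented, and uServ's actual replica selection is not specified here.

```python
# Hypothetical sketch of automatic mirror selection: given replicas of a
# popular site, each reporting a current load figure, direct the reader
# to the least-loaded one.
def pick_mirror(mirrors):
    """mirrors: list of (hostname, current_load) tuples; returns a hostname."""
    return min(mirrors, key=lambda m: m[1])[0]

replicas = [
    ("mirror1.example.org", 0.80),
    ("mirror2.example.org", 0.15),
    ("mirror3.example.org", 0.40),
]
print(pick_mirror(replicas))  # mirror2.example.org
```

A real deployment would also need replicas to report load (or have it probed) and would weight network proximity, but the selection itself stays this simple.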
This does take up more of your bandwidth than central hosting, but that's the whole point of P2P - if this is a problem, apply rate limiting in the web server or the network. Most people use a lot more downstream bandwidth when surfing, so all you need to do is to reserve some bandwidth for upstream ACKs and upstream email - the remainder can be used for P2P serving without problems.
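The rate limiting suggested above is classically done with a token bucket: cap upstream P2P serving at some byte rate so ACKs and email keep headroom. A minimal sketch, with the rate and burst numbers chosen arbitrarily; this is an illustration, not uServ's implementation.

```python
import time

# Token-bucket rate limiter: tokens (bytes of send allowance) refill at
# `rate` per second up to `capacity`; a send succeeds only if enough
# tokens are available.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # bytes of allowance added per second
        self.capacity = capacity  # maximum burst size in bytes
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, n):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False  # caller should wait before sending

bucket = TokenBucket(rate=16_000, capacity=32_000)  # ~16 KB/s upstream cap
print(bucket.consume(8_000))   # True: within the burst allowance
print(bucket.consume(64_000))  # False: exceeds capacity, throttle
```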
Open source hosting is very reliant on Sourceforge and on people paying for web hosting services - it would be great to see it scale through the application of standard protocols and some smart software. Freenet is a much more radical approach, of course, with some interesting features, but it requires a new client or that someone hosts an HTTP to Freenet gateway - probably both approaches will fit into different niches.
Re:Sounds like a ripoff of Freenet (Score:2, Interesting)
In this case the difference is that this works, and Freenet still isn't usable by any decent minority of people, let alone a majority.
-davidu
Freenet without the overhead? (Score:5, Interesting)
Frankly, there are a few things inhibiting Freenet's popularity when compared to Gnutella and Fasttrack (Is that still running?).
1. High learning curve: Trying to figure out how to search for Freenet keys is a bit of a challenge, especially compared to typing "Matalika" into a Morpheus or Gnutella search window and getting dozens of relevant matches from Lars and co. You don't have critical mass until you have the morons.
2. Difficult install: I have yet to see a Freenet implementation that didn't require an attendant JRE install of some kind. Worse, it also frequently entails setting up Java class paths, a task that can confuse even Java developers from time to time. Then users must understand that they usually have to use a browser to access Freenet. There is no 'Freenet' icon to point and click.
3. Difficulty of sharing: It's possible to make entire web pages available via Freenet, but if a Freenet user is firewalled for any reason, it really hurts his ability to participate in the sharing.
4. Unpopular data doesn't propagate: Because the most popular data is shared and replicated most frequently, warez and MP3s show up, but things like dissident and political writings, text files, and more personal data are lost... even to those who might be interested. (Oddly, Hotline is still a very good place to find these sorts of things. IRC fserves, as well.)
From what I read of the white paper, it looks like this project, or an open-source project very similar to it, could solve these problems and still achieve many of Freenet's goals.
Maybe the OSS community should look into something like this... a moron-safe, web-based file sharing project for the masses that ignores anonymization and encryption in order to gain a more critical mass. Better yet, because of the similarity between the two projects, once the sharing infrastructure was in place, it could accept a Freenet plugin, or vice-versa.
Just an idea...
Interpreting whitepaper from wrong perspective (Score:3, Interesting)
When your company has 300,000+ employees, communication can be difficult sometimes, especially when it comes to sharing files. uServ allows you to allocate a semi-permanent "address" for asynchronous access to data, which cuts through several layers of bureaucracy (requesting webspace, etc.). Lotus Notes doesn't quite cut it for this type of usage.
The point is not to anonymously share MP3s.
A few replies (Score:2, Interesting)
Sure, in my earlier example we could have moved the data in question using existing channels, but you'd be going across three different platforms and three different OSes. Not only that, but a lot of people don't have things like SSH installed. SMB is kinda WinTel-based, which doesn't help me much. NFS has lots of fun things like UDP. Add firewalls into the mix (because we're going between development, support, and customers). Did I mention dynamic IPs? And proxies?
Granted, I'm not a big Java supporter, and would prefer an SSH/SCP tunnel, BUT when I needed the data fast, this was a HELL of a lot easier than setting up a more traditional method. Have you noticed the shift towards "Web Services" in the software world? It's not because doing everything over HTTP/HTTPS ports is the best way, but because damn near everyone has a solution in place to allow that sort of traffic to flow. uServ simply exploits that.
Oh, about our "jacked up Intranet": Yes, it can be "jacked up" but it's a lot better thought out than any other place I've been. Even the parts running Token Ring. (ewww...)
Re:Piracy issues (Score:2, Interesting)
Forget your cat for a minute and think business environment. This is IBM-developed, remember? Now think about an office project team who need to quickly and easily share documentation files, project plans and schedules.
Traditionally, the project leaders flood their teams with rivers of emails and attachments. This not only bogs down the corporate mail-servers but also guarantees that half the team will never know which is the latest version of the schedule (since half the team is always new and hasn't been added to the MList yet).
Also, traditionally, there is so much corporate politics about placing docs on an official web server that it just isn't worth the time to fight those battles while under the gun to get your project out the door. And most project managers of my acquaintance have trouble spelling HTML, much less writing it to fit corporate standards.
This new tool would allow "publishing" documents to a team simply by copying them to a directory on the project leader's disk/desk. There, it's done. Followed by a short, small email to the team advising that a new version of the plan or schedule is available. In fact, the most serious problem will be getting mossback project managers to try a new tool instead of continuing to send 10Mb email attachments to a list of hundreds.
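The "copy it to a directory and it's published" pattern can be sketched with nothing but the Python standard library: serve the project folder read-only over HTTP, so the team reads it with a plain browser. uServ layers dynamic DNS, firewall traversal, and replication on top of this basic idea; the self-request at the end is just to show the server answering.

```python
import http.server
import socketserver
import threading
import urllib.request

# Serve the current directory read-only over HTTP. Port 0 lets the OS
# pick any free port; a real setup would use whatever the firewall allows.
httpd = socketserver.TCPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Anything copied into the directory is now readable with a browser;
# here we just confirm the listing is served.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
print(status)  # 200
httpd.shutdown()
```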
While uServ will never replace the established HTML/web world and cannot hope to replace anonymous peer-to-peer transfers, there is a place for this technology. Let's not fall into the trap of thinking that a tool must replace all other tools in order to be useful.
Re:Piracy issues (Score:2, Interesting)
Good thoughts. Yes, you could use a common file server. But then you still have the problem of team member churn. Some members leave, others join. And for each newbie, you would have to remember to get server access. Which, in medium and larger companies, means pushing forms through the bureaucracy, i.e., begging for permission to do your job. And which means that, weeks later, the newbie has another password to remember.
On the plus side of a central server is the idea that the server will be backed up regularly. [Pause for laughter to die down.]
Which leads around to the question: "How often are the desktops/laptops backed up?" And the accompanying "Why master project data on un-backed-up desktops/laptops?" And here we see the joining of technologies that uServ gives. Each team member can mirror/publish to a central server box.
Another angle on this is access-mode. With a browser, your readers get read access. Your docs cannot be modified without your knowledge and permission. With a shared directory, anything is fair game. Including "accidental" deletes and over-writes. Ever lose a fifty page functional spec because some idjit on another team saved to the wrong directory? Very not fun.
So, yeah, you could use a shared directory for your docs. And you could use a shared directory for software source control. It would be simple. But would you really want to?
Knowledge Management and Distributed Components (Score:2, Interesting)
I found the most interesting part of the paper to be the underlying Vinci [www10.org] component infrastructure. It focuses on speed and protocol extensibility for distributed applications in a (trusted) intranet environment.
mailto:frank@fraber.de [mailto], www.fraber.de [fraber.de]