Comment Re:Please explain ... (Score 1) 235

Well, the next time you, your girlfriend, daughter, mom, dad, etc. write a web service, perhaps they'll get some leverage out of it :-)

(One interesting demo, not linked on the site but present in the source code on GitHub, is a PubSubHubbub endpoint in the browser, which can subscribe to a PubSubHubbub hub's feed-update notifications for real-time feed tracking. (PSHB deliberately doesn't include any long-polling ad-hockery, so this is otherwise not possible.) The URL of the page is subscribed at the PSHB hub, and when new Atom entries arrive, they're POSTed directly to the page -- or rather, as directly as possible, given that the reversehttp gateway is relaying the POST on.)
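
For anyone curious what that endpoint actually has to do, here's a rough Python/WSGI sketch of the subscriber side: echo hub.challenge back when the hub verifies the subscription, then accept the Atom feed the hub POSTs on each update. In the real demo this logic lives in browser-hosted JavaScript behind the reversehttp gateway; the port and the printing of entry titles here are just illustrative.

    from urllib.parse import parse_qs
    from wsgiref.simple_server import make_server
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def app(environ, start_response):
        if environ["REQUEST_METHOD"] == "GET":
            # Subscription verification: the hub asks us to echo hub.challenge.
            params = parse_qs(environ.get("QUERY_STRING", ""))
            challenge = params.get("hub.challenge", [""])[0]
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [challenge.encode()]

        # Content delivery: the hub POSTs an Atom feed holding the new entries.
        length = int(environ.get("CONTENT_LENGTH") or 0)
        feed = ET.fromstring(environ["wsgi.input"].read(length))
        for entry in feed.findall(ATOM + "entry"):
            print("new entry:", entry.findtext(ATOM + "title", default="(untitled)"))
        start_response("204 No Content", [])
        return [b""]

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()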

Comment Re:FTP anyone (Score 1) 235

You've missed the point: this is one of the situations where reversehttp actually saves the day. The reversehttp gateway is outside the firewall. Services inside the firewall contact the gateway to get it to relay requests in to them, using long-polling (or any other method, current or yet to be invented). The point is that the gateway is in exactly the right place to dole out names/URLs and to handle the routing, filtering and relaying.
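
To make that relay arrangement concrete, here's a minimal Python sketch of the inside-the-firewall end, using long-polling. The /poll and /reply endpoints and the JSON envelope are invented for the example and are not the reversehttp wire format; they're just the shape of the idea.

    import json
    import urllib.request

    # Both endpoints and the JSON envelope below are made up for the example.
    GATEWAY = "http://gateway.example.com"   # sits outside the firewall
    SERVICE = "printer42"                    # name this service wants relayed to it

    def handle(request):
        """Application logic: turn a relayed request into a response."""
        return {"status": 200, "body": "hello from inside the firewall"}

    def run():
        while True:
            try:
                # Long poll: the gateway holds this open until an external
                # client sends a request addressed to our name.
                with urllib.request.urlopen(GATEWAY + "/poll/" + SERVICE,
                                            timeout=300) as resp:
                    request = json.load(resp)
            except OSError:
                continue  # idle poll expired or hiccupped; poll again

            # Hand the response back to the gateway, which relays it on to
            # the external client that made the original request.
            reply = json.dumps(handle(request)).encode()
            urllib.request.urlopen(urllib.request.Request(
                GATEWAY + "/reply/" + request["id"],
                data=reply,
                headers={"Content-Type": "application/json"},
            ))

    if __name__ == "__main__":
        run()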

Comment Re:Please explain ... (Score 1) 235

"Can you explain a scenario where such a set-up makes sense (from a business or usability perspective) and where other protocols are unable to get the job done?"

Anywhere you'd otherwise be configuring apache/DNS/firewall/CGI/reverseproxy rules by hand. There aren't any protocols for that other than reversehttp yet. The goal was to make it as easy to get a name/URL and become a full server in the HTTP network as it is to anonymously participate as a client. Particularly interesting is the way it makes short-lived services suddenly viable: previously, you'd either have to reconfigure your gateway apache instance (or similar) each time a service came and went (or moved!), or ad-hoc up some way of doing roughly what reversehttp does, but on a case-by-case basis.
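
To make the short-lived-service case concrete, here's a hedged sketch of what claiming and later abandoning a name might look like from a script. The registration endpoint, form fields and X-Public-URL header are made up for the example rather than taken from the demo server's interface.

    import time
    import urllib.parse
    import urllib.request

    # Hypothetical registration endpoint on the gateway.
    GATEWAY_REGISTER = "http://gateway.example.com/reversehttp"

    def claim(name, token, lease_seconds=30):
        """Ask the gateway for a short lease on a label; return its public URL."""
        form = urllib.parse.urlencode(
            {"name": name, "token": token, "lease": lease_seconds}).encode()
        with urllib.request.urlopen(GATEWAY_REGISTER, data=form) as resp:
            return resp.headers["X-Public-URL"]

    if __name__ == "__main__":
        url = claim("build-artifacts-1234", token="s3cret")
        print("temporary service reachable at", url)
        for _ in range(10):        # keep renewing while the service is needed...
            time.sleep(20)
            claim("build-artifacts-1234", token="s3cret")
        # ...then just stop: the lease lapses, the name frees itself, and no
        # apache/DNS/firewall configuration on the gateway box ever changed.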

(It's also, by the way, no more difficult to secure the gateway than it is to secure any other web service -- the demo server hands out URL space pretty freely, but obviously if you were deploying it yourself you'd apply normal HTTP access control policies to limit who was allowed to register which names and so on.)
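
By way of illustration, the policy hook could be as small as the following hypothetical check on the registration request, mapping authenticated users to the name prefixes they're allowed to claim; the demo server's own hooks may look nothing like this.

    import base64

    # Hypothetical policy table: which name prefixes each user may register.
    ALLOWED_PREFIXES = {
        "alice": ("alice-", "team-a-"),
        "bob": ("bob-",),
    }

    def may_register(authorization_header, requested_name):
        """Decide whether the Basic-auth user may claim requested_name.

        (Credential verification is assumed to have happened already; this
        only extracts the user name and applies the prefix policy.)
        """
        if not authorization_header or not authorization_header.startswith("Basic "):
            return False
        user = base64.b64decode(authorization_header[6:]).decode().split(":", 1)[0]
        return any(requested_name.startswith(p)
                   for p in ALLOWED_PREFIXES.get(user, ()))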

Comment Re:Connection, yes. Server, no. (Score 1) 235

It'd be useful in the same way decentralised version control systems are useful compared with their centralised counterparts: the replication topologies are no longer constrained to the simple hub-and-spoke model. Think Google Wave, without the centralised server. Think Google Docs, without the centralised server.

Comment Re:Will never happen (Score 1) 235

"3) how is your browser going to access a Database like mysql with php, local on your machine? The pages would have to be cached and static or some other wizard way of doing it."

There's nothing that limits reversehttp to the browser; any HTTP client library can use it. The demo implementation includes clients not just for browser-hosted JavaScript but also for Python and Java. Beyond those, Paul Jones has written hookout for Ruby programs, and Tatsuhiko Miyagawa has written AnyEvent-ReverseHTTP for exposing HTTP services via reversehttp from Perl.
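
To make the non-browser case concrete for the parent's database point: the handler a local program plugs into the relay loop (see the sketch in the "Re:FTP anyone" comment above) can do anything the machine can, including querying a local database. Here's a hedged Python example, with sqlite3 standing in for MySQL to keep it self-contained; the table and the response envelope are invented.

    import json
    import sqlite3

    def handle(request):
        """Answer a relayed request from a database local to this machine.

        The inventory.db file, the stock table and the response envelope
        are all made up for the example.
        """
        db = sqlite3.connect("inventory.db")
        try:
            rows = db.execute("SELECT name, qty FROM stock ORDER BY name").fetchall()
        finally:
            db.close()
        return {
            "status": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps([{"name": n, "qty": q} for n, q in rows]),
        }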

Comment Re:Connection, yes. Server, no. (Score 1) 235

"So people do not like caching proxies, why would they like one in their browser ? Why would they like getting content from another user browser instead of from the original source ?"

What if the other user's browser is the original source? Think something like TiddlyWiki instances running in the browser and synchronising peer-to-peer.

Comment I run www.reversehttp.net (Score 1) 235

From www.reversehttp.net:

"[...] we need to be able to act as if being able to respond to HTTP requests was within easy reach of every program, so that we can notify interested parties of changes by simply sending an HTTP request to them. This is the core idea of web hooks. We need to push the messy business of dealing with long-polling away from the core and toward the edge of the network, where it should be. We need to let programs dynamically claim a piece of URL space [...] and handle requests sent to URLs in that space [...]. Once that's done suddenly asynchronous notification of events is within reach of any program [...], and protocols and services layered on top of HTTP no longer have to contort themselves to deal with the asymmetry of HTTP. They can assume that all the world's a server, and simply push and pull content to and from whereever they please."

The details of getting the requests through the gateway to the serving application are pretty trivial and interchangeable. The new idea is in registering endpoint names. UPnP and STUN cover some of the same space, but IP's addressing is fundamentally different to HTTP's URL-based addressing in that URLs allow recursive delegation of portions of the namespace -- something that IP just can't do (because its address space is flat).
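
The recursive-delegation point is easiest to see with a toy example: a URL namespace can be carved up by prefix, and a delegate can carve its slice up again, which a flat address space can't express. Everything below is made up for illustration.

    # Made-up delegations: the gateway hands /alice/ to one service, and that
    # service has in turn delegated its /photos/ slice to another.
    DELEGATIONS = {
        "/alice/": "alice's service",
        "/alice/photos/": "the sub-service alice delegated /alice/photos/ to",
        "/bob/": "bob's service",
    }

    def route(path):
        """Pick the most specific (longest) delegated prefix for a path."""
        matches = [p for p in DELEGATIONS if path.startswith(p)]
        return DELEGATIONS[max(matches, key=len)] if matches else None

    print(route("/alice/index.html"))           # -> alice's service
    print(route("/alice/photos/2009/cat.jpg"))  # -> the further-delegated sub-service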
