Software

RSS & BT Together? 161

AntiPasto writes "According to this Yahoo! News article, RSS and BitTorrent could be set to join in a best-of-both-worlds content management system for the net. Possible?" Update: 03/17 21:39 GMT by T : Thanks to Steve Gillmor, here's the original story on eWeek to replace the now-dead Yahoo! link.
This discussion has been archived. No new comments can be posted.
  • by tcopeland ( 32225 ) * <tom@th[ ]sleecopeland.com ['oma' in gap]> on Tuesday December 16, 2003 @10:08AM (#7734690) Homepage

    "Now, should an aggregator be polling every 30 minutes? The convention early
    on was no more than once an hour. But newer aggregators either never heard of
    the convention or chose to ignore it. Some aggregators let the users scan
    whenever they want. Please don't do that. Once an hour is enough. Otherwise
    bandwidth bills won't scale."


    Hm. That's interesting. The RubyForge [rubyforge.org] RSS feeds get polled every
    half hour by a couple of folks, e.g.:
    [tom@rubyforge httpd]$ tail -10000 access_log | grep "16/Dec" | grep export |
    grep 66.68 | wc -l
    19
    [tom@rubyforge httpd]$
    Hasn't caused problems yet, but maybe that's because RubyForge only gets about
    30K-40K hits per day, and the feeds get just a fraction of that.
    • by Anonymous Coward
      Also, if you make your feeds static files, rather than dynamic, a modern server is going to have no problems serving it hundreds (or thousands) of times a minute if necessary.
      • The grandparent was speaking of bandwidth bills, not processing time.

        S
        • Consider also that, like Kazaa before it, people are now running "hacked" BitTorrent clients which throttle upload speeds to a stupidly low level. Even if an RSS-driven BitTorrent client were well behaved, it wouldn't be long before an unfriendly one arrived.
          • Even if an RSS-driven BitTorrent client were well behaved, it wouldn't be long before an unfriendly one arrived

            That's a good point, but it's trivial to serve up "broken" or empty RSS if the requests are coming too often... limit by IP (a sketch follows below). It would cost SLIGHTLY more in processing, but would save much in bandwidth, especially if the feed is large.

            S
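
            A minimal sketch of that per-IP throttle, written as Python WSGI middleware; the one-hour window and the empty-feed response are illustrative choices, not anything specified in the thread:

                import time

                WINDOW = 3600                  # one full response per IP per hour
                last_seen = {}                 # ip -> time of last full response

                def throttle(app):
                    def wrapper(environ, start_response):
                        ip = environ.get("REMOTE_ADDR", "?")
                        now = time.time()
                        if now - last_seen.get(ip, 0) < WINDOW:
                            # too soon: hand back a tiny empty feed instead of the real one
                            start_response("200 OK", [("Content-Type", "application/rss+xml")])
                            return [b'<rss version="2.0"><channel></channel></rss>']
                        last_seen[ip] = now
                        return app(environ, start_response)
                    return wrapper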
          • Considering the official BitTorrent client has the --max_upload_rate option, it's not much of a hack. I normally set it to around 15K/sec to prevent it from flooding my upload and making ping times bad for my housemates.
          • by bongoras ( 632709 ) * on Tuesday December 16, 2003 @11:26AM (#7735576) Homepage
            1) BT lets you throttle your upload now. 2) If you do, your download is also throttled. 3) If you want to modify btdownload.py so that it lies about how much it's uploading in an effort to get faster downloads, have fun. It won't help you, because BT itself doesn't trust what the client says; it still sends only as fast as it's getting.
            • "It won't help you because BT itself doesn't trust what the client says, it still sends only as fast as it's getting."

              So explain how I get 230K/s download with only 10K/s upload using the ABC client?

              graspee

              • I get that too on occasion. It depends a lot on the seeder-to-leecher ratio. If there's a relatively high number of complete or near-complete copies and a relatively low number of leechers with low download bandwidth, you can get high speeds down while uploading relatively little. But try to get 230 down while only pushing 2K up. Damn near impossible, and if you do pull it off, a lot of seeders have gotten wise to leechers and will hit you with the ban hammer faster than you can say Sir Tiddlywinks...
              • So explain how I get 230K/s download with only 10K/s upload using the ABC client?

                That isn't unusual. If there are plenty of uploaders with plenty of upstream capacity, you can expect fast downloads pretty much without uploading anything. The BitTorrent idea really comes into play when everyone is trying to download at the same time. It guarantees some level of fairness since other clients will give you faster downloads if you are being generous in uploading to them.

                Strange things do still happen though. S

            • That's the theory, but actual practice is different. Either I get trickling download rates (while upload rates are still high) or I get fast download rates (I'm done while my uploads are still around 10%~20% of the amount I've downloaded).
        • Bandwidth bills on a static page are also trivial.

          A well behaved program won't do GETs on every RSS page, but will do HEADs, compare them to what it already has, and decide from there whether or not to get the new page.

          A HEAD request is very small, and unless you're doing millions of them, this shouldn't be an issue.

          - Serge
          • That was the whole point of the original post (great-great-grandparent, I think).

            Sure, well-behaved clients would do "HEAD" at a moderate interval. But clients just can't be trusted; most users are "gimme gimme gimme"...

            S
          • by welsh git ( 705097 ) on Tuesday December 16, 2003 @12:22PM (#7736129) Homepage
            > A well behaved program won't go GETs on every RSS page, but will do HEADS,
            > compare them to what it already has, and decide from there
            > to get or not get the new page.

            An even better-behaved program will issue a GET with the "If-Modified-Since:" header, which means the server will return "304 Not Modified" if the file hasn't changed, or the actual file if it has... thus doing in one operation what a combined HEAD and follow-up GET would take two to do.
          • by NonaMyous ( 731004 ) on Tuesday December 16, 2003 @12:30PM (#7736206)
            An even better behaved program will use conditional GET instead of HEAD. For more info, see HTTP Conditional Get for RSS Hackers [pastiche.org] :
            The people who invented HTTP came up with something even better. HTTP allows you to say to a server in a single query: "If this document has changed since I last looked at it, give me the new version. If it hasn't just tell me it hasn't changed and give me nothing." This mechanism is called "Conditional GET", and it would reduce 90% of those significant 24,000 byte queries into really trivial 200 byte queries.
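
            What that looks like from the client side, as a minimal sketch using only the Python standard library (the feed URL and function name are invented for illustration):

                import urllib.request
                import urllib.error

                def fetch_if_modified(url, last_modified=None, etag=None):
                    # Returns (body, last_modified, etag); body is None on a 304.
                    req = urllib.request.Request(url)
                    if last_modified:
                        req.add_header("If-Modified-Since", last_modified)
                    if etag:
                        req.add_header("If-None-Match", etag)
                    try:
                        with urllib.request.urlopen(req) as resp:
                            return (resp.read(),
                                    resp.headers.get("Last-Modified"),
                                    resp.headers.get("ETag"))
                    except urllib.error.HTTPError as e:
                        if e.code == 304:      # not modified: nothing to download
                            return None, last_modified, etag
                        raise

            Cache the returned validators between polls and the significant 24,000-byte query collapses into the trivial 200-byte exchange described above.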
    • Slashdot rules their polling times with an iron fist!!! Not that they shouldn't, mind you... hehe
      • The Slashdot polling timer is broken - I fetch every 61 minutes and still get kicked out once every week or so. I appreciate that they want to keep their b/w as low as possible, but for what pretends to be a news site, you have to let people be up to date. Maybe a nice subscriber option (hint hint)?
    • by scrytch ( 9198 ) <chuck@myrealbox.com> on Tuesday December 16, 2003 @10:24AM (#7734856)
      Of course it hasn't caused any problems. It's a couple folks every half hour. Try a few thousand folks every minute (imagine it's a metaserver for some online game, or a blog during a major news event).

      Still, I'm not seeing anything beyond the "duh" factor here. All that needs to happen is for browsers to handle torrent links. Not some souped-up Napster app; a browser, so that I can type in a torrent link and get any web page (or other MIME doc) for the browser to handle. Change the RSS to use the new URL scheme, and there you go. You could also do it as a proxy, but then you run into worse cache coherency issues than with direct support of the protocol: who's to say who has the correct mapping of the content URL to the torrent URL?

      Good luck, mind you, on getting anything but blogs, download sites, and perhaps hobby news sites to jump on board. This issue has been beaten to death in the IETF and many other circles, and it all boils down to content control -- the NY Times simply doesn't want its content mirrored like that.
      • All that needs to happen is for browsers to handle torrent links.... Change the RSS to use the new URL scheme, and there you go.

        I don't see how this is supposed to help. The problem that BitTorrent addresses is different from that faced by a popular RSS service.

        BitTorrent has proven useful so far to preserve bandwidth. It's handy when distributing files greater than 1 megabyte in size - usually much larger, such as 650 meg ISO images. Its effectiveness comes from the fact that individual download
    • by costas ( 38724 ) on Tuesday December 16, 2003 @10:40AM (#7735022) Homepage
      The real problem isn't the polling intervals; it's that most RSS readers/spiders do not respect HTTP 304 (Not Modified). RSS is ideal for ETag/If-Modified-Since behavior, but no, most spiders are still too lazy to implement this.

      My newsbot (in my .sig) creates dynamic RSS feeds, customized for each agent; I thought that was a great feature to give users, but it's getting overused by some spiders hitting the site every 15-20 minutes, w/o listening for 304s...
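
      For the server side, a rough sketch of how even a dynamic feed can honor conditional GETs, again as WSGI; build_feed_xml() is a hypothetical stand-in for whatever generates the per-agent feed:

          import hashlib

          def feed_app(environ, start_response):
              body = build_feed_xml(environ)        # hypothetical per-agent generator
              etag = '"%s"' % hashlib.md5(body).hexdigest()
              if environ.get("HTTP_IF_NONE_MATCH") == etag:
                  # the spider already has this version: 304, no body
                  start_response("304 Not Modified", [("ETag", etag)])
                  return []
              start_response("200 OK", [("Content-Type", "application/rss+xml"),
                                        ("ETag", etag)])
              return [body]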
    • On this subject, there is some [COOL_INSULT_ADJECTIVE] guy over at btse*rch.net (please don't use his product) who wrote a BT search engine that scrapes sites' HTML for new BT links. The problem is that EVERY person who uses this and clicks update EVERY time will be hitting the full site, leading to 10K hits/day. Very bad for the smaller BT sites out there.
    • *** Changes RSS reader's interval to 60 minutes, up from the default setting of 15 ****
  • genius (Score:1, Funny)

    by Anonymous Coward
    Absolute genius. An RSS feed of torrents. I would set one up right now if I had content to share.
  • Neat idea. (Score:5, Interesting)

    by grub ( 11606 ) <slashdot@grub.net> on Tuesday December 16, 2003 @10:13AM (#7734737) Homepage Journal

    This could be carried further into a whole indymedia [indymedia.org] via BT. It would be even harder for governments and industry to silence dissident voices.
    • I like this idea... and it could not only help to sort of colocate indymedia as it exists today, it could help to disseminate media coverage further and further... Wow. The possibilities are really interesting. In a way, this could create the potential for a super-fast, ultra-distributed news source, sort of like what USENET never was... however, I could also see the crap factor potentially skyrocketing with this. It would need a lot of work, or at least some really badass indexing and rating...
    • by STrinity ( 723872 ) on Tuesday December 16, 2003 @10:38AM (#7735006) Homepage
      This could be carried further into a whole indymedia via BT. It would be even harder for governments and industry to silence dissident voices.

      A couple weeks back, Indymedia had an article saying that the Protocols of Zion were created by the Illuminati to throw blame on the Jews while they take over the world.

      There's a fine line between being a dissident and wearing a tin-foil hat, and many of the guys at Indymedia are squarely on the wrong side.
      • by zulux ( 112259 )
        A couple weeks back, Indymedia had an article saying that the Protocols of Zion were created by the Illuminati to throw blame on the Jews while they take over the world.

        Awww....shit......

        [BY THE POWER OF THE ILLUMINATED LIGHT: IMPLEMENT PLAN BETA. PLAN ALPHA HAS BEEN SPOTTED BY THE MASSES]

        • Fnord! I give up. Every time we try to conquer the world, it's the same old thing. I swear two lab mice could do better. That's it, I'm quitting the Illuminated Seers of Bavaria, Berdoo chapter. Here's my pyramid shaped badge, I am so outta here. What'cha gonna do now, all you "No one quits the Illuminati" Geezers?
  • Good concept (Score:2, Interesting)

    by SargeZT ( 609463 ) *
    It is a good concept, by all means. But the BitTorrent development community isn't that impressive. The program is great, but integrating RSS into BitTorrent would require an overhaul of the entire engine. I would love it if this got put into future versions, but I'm not that hopeful.
  • I highly doubt it. (Score:3, Insightful)

    by junkymailbox ( 731309 ) * on Tuesday December 16, 2003 @10:16AM (#7734773)
    The article's idea is simply to make the web (or at least the RSS) distributed, and then query the distributed servers so the refresh can drop from 30 minutes to something faster. But the distributed servers need to be updated too. It may simply be cheaper / more efficient to run more servers.
  • by clifgriffin ( 676199 ) on Tuesday December 16, 2003 @10:16AM (#7734781) Homepage
    ...practical ways. It's a nice program; I've used it on occasion, but it does have its share of bugs.

    And setting up a server isn't exactly easy.

    It really could be a lot better with some work.

    • by Anonymous Coward
      Setting up a tracker isn't hard:

      create .torrent files
      put the .torrent files into a folder.
      run bttracker.py, telling it which torrents are allowed (the location of that folder) and a folder to use for tmp files.

      run btdownload.py like you normally would when resuming a download.

      send the .torrent file to your friends and random websites, post a link to it on /., send an email with it to **AA (attach a goatse/tubgirl pic with that last one too)

      (for more details RTFM! or STFW!)
      • by PierceLabs ( 549351 ) on Tuesday December 16, 2003 @11:11AM (#7735404)
        There are too many steps involved. What's needed is the ability to put content into a deploy directory where things just get torrented and distributed.

        The other problem being the relative difficulty of actually finding those 'random' websites that contain links to the things you'd actually want to download.
        • That's not actually a hard thing to implement, assuming a single tracker or a pre-fetched metafile (.torrent): make a cron job that checks files in a directory for 1) whether a metafile exists and 2) whether it's being seeded/downloaded, and then does each as needed (a sketch follows below). Extra points if you monitor the tracker for anything in that directory and seed only if certain criteria aren't met.
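
          A rough sketch of that cron job, assuming the stock BitTorrent 3.x command-line scripts; the paths and tracker URL are placeholders, and the btmakemetafile.py argument order is an assumption, so check it against your installed version:

              import os
              import subprocess

              DEPLOY_DIR = "/var/deploy"                     # placeholder
              TRACKER = "http://tracker.example.org:6969/announce"

              for name in os.listdir(DEPLOY_DIR):
                  if name.endswith(".torrent") or name.endswith(".pid"):
                      continue
                  path = os.path.join(DEPLOY_DIR, name)
                  meta = path + ".torrent"
                  if not os.path.exists(meta):               # 1) no metafile yet: create it
                      subprocess.call(["btmakemetafile.py", path, TRACKER])
                  pidfile = path + ".pid"
                  if not os.path.exists(pidfile):            # 2) not seeding yet: start a seeder
                      proc = subprocess.Popen(["btdownload.py", "--responsefile", meta,
                                               "--saveas", path])
                      open(pidfile, "w").write("%d\n" % proc.pid)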
    • by Anonymous Coward
      Try out the Hunting of the Snark project [klomp.org] client.

      It has a simple option --share that automatically shares a file or directory through BitTorrent by creating the metainfo file on the fly, launching a mini webserver to serve the .torrent file, and acting as the BitTorrent client that provides the initial seed.

      And it is a nice and simple commandline tool.

    • I've taken a throw-the-baby-out-with-the-bathwater approach and implemented BitTorrent-like download swarming with a server that stores a hierarchical filesystem and keeps transfers tightly regimented by the server:

      http://pdtp.org/ [pdtp.org]

  • Ummm... (Score:5, Funny)

    by leifm ( 641850 ) on Tuesday December 16, 2003 @10:18AM (#7734800)
    I'll believe it when I see it. This idea has been circulating the last few days through the blog world, among the same people who think they're going to crush traditional media with the sheer power of their blogs. I say whatever.
    • If a blog is published in the woods, and no one is around...

      No, I don't think blogs themselves (with an average of 12 readers each) are very powerful. But combined with massive syndication and micro content searchable with keywords, a needle in the haystack could be felt.

      You're right though, most bloggers (er... perhaps myself included) have a very high opinion of their content.

      • That's what makes this whole RSS + BT thing seem particularly arrogant to me; it's like they're saying "my content is so popular that I need to offset my bandwidth costs using BT". Most blogs are ego stroking, with A linking to B, and then everybody sits around and gets off looking at their Google rankings and site logs. RSS is the new Pointcast.
  • by dk.r*nger ( 460754 ) on Tuesday December 16, 2003 @10:21AM (#7734828)
    BitTorrent doesn't scale for very small downloads (less than a few MB, I'd say), due to the tracker.

    The tracker keeps, well, uhm, track of the available pieces of the file, and every client reports in every time it has got, or failed to get, a piece. So using BitTorrent to distribute RSS feeds won't work, because the tracker will take up as much bandwidth as, if not more than, the HTTP request that results in the "Not changed since your version" response.

    Apart from that, well, yes, BitTorrent is great to distribute large files :)
    • I thought he was talking about distributing BitTorrent links through RSS rather than sending each RSS news reader the full content of the page with graphics, etc.

      So you send out a new torrent through RSS referencing your new page instead of the regular RSS content, and your viewers use BitTorrent to work together to get the content from you without putting all the strain on your server. A .torrent file would be a lot smaller than a full RSS feed with images like he was using in the example.

      Makes more s

    • gzip the RSS file. It should reduce bandwidth by over 50%.
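
      Easy enough to verify against your own feed; a quick Python sketch (the filename is a placeholder):

          import gzip, os

          data = open("feed.xml", "rb").read()
          open("feed.xml.gz", "wb").write(gzip.compress(data))
          print(len(data), os.path.getsize("feed.xml.gz"))   # XML usually shrinks well past 50%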
    • Remember that bit of the article on page 1 where he thinks that RSS feeds should be multimedia extravaganzas, and that all the RSS traditionalists who just use it to serve up their blog's headlines are only barely this side of dinosaur status? You'd be back into multi-MB territory in no time if he had his way.
  • stop putting up "graphics, and even multimedia files"! ... or use Akamai [akamai.com] or some other servers.

    but i guess bumming off bittorrent/p2p bandwidth is not a bad idea either.

    • by djh101010 ( 656795 ) on Tuesday December 16, 2003 @10:27AM (#7734894) Homepage Journal
      A base Akamai contract starts at $2,000 a month for a 1Mb/second bandwidth allowance. Not sure if many/any Open Source projects have a budget for such.

      Akamai is great for offloading bandwidth and speeding up customers' page load times, but you're paying for the bandwidth one way or another.
      • Rumor around a previous employer of mine was that Akamai was costing us approx $100,000.00 (CAD) per MONTH.

        (yes, I placed that decimal correctly)

        S
        • Speaking from direct personal experience, a contract for FreeFlow (the a248.g.akamai.net/blah/blah/blah.html type addresses) goes for about $2,000.00 US per 1Mb/s of bandwidth usage (measured at the 95th percentile, so peaks don't kill you). If you want EdgeSuite (where it's your domain name CNAMEd over to Akamai's edge servers, like i.cnn.com for instance), it's slightly more per megabit of bandwidth initially, but cheaper if you go over, say, 10Mb/second.

          It's not cheap, but for us it was cheaper than addin
          • We served on the order of tens of millions of page-views per day, on edgesuite, with secure services, and a shared, akamized, wildcard certificate, all with content caching (on web applications that were 99% dynamic, user-specific content (non-cacheable) -- my CTO was a troll), all for DDoS protection (we peaked at 100Mb/sec when we signed, so we got ripped off -- see the part about my CTO).

            Fun stuff. Good riddance, I say.

            S
  • Konspire2b (Score:5, Informative)

    by Dooferlad ( 101535 ) * on Tuesday December 16, 2003 @10:31AM (#7734936) Homepage Journal
    Konspire2b [sourceforge.net] looks like a better option than BitTorrent for distributing news. You could have a channel mapping to an RSS feed and just wait for the news to come to you: no polling intervals and low bandwidth requirements for the operator. With BitTorrent you still have to poll for updates; Konspire2b removes that requirement.
  • Sorry, guys, but you are basically reinventing USENET over TCP/IP.
    • What moron modded the parent as insightful?

      Does your usenet reader serve news articles to other users?

      No, you need a costly Usenet server architecture. Not only machines, but also huuuge bandwidth. Today's Usenet servers that want to carry a large portion of the world's hierarchies can only get them via dedicated satellite Usenet-only feeds.

      RSS+BT, on the other hand, is a poor server and rich clients that exchange articles among themselves via a p2p network, supervised only by a BT tracker.

      Robert
      • The idea of using an NNTP type protocol for RSS is something I've been pushing for a while. Just change "newsgroup" to "newsfeed" and add a way to authenticate posters and automatically create feeds, and NNTP already takes care of most of the rest of the problems.

        Things RSS has been struggling with like character encoding, attachments (enclosures), scaling, and other issues are things that NNTP solved long ago.

        RSS + Torrent would be an excellent replacement for binaries newsgroups though.
      • by penguin7of9 ( 697383 ) on Tuesday December 16, 2003 @01:37PM (#7737023)
        Does your usenet reader serve news articles to other users?

        Yes: the way people traditionally read USENET news is by becoming a USENET node, downloading articles to the directory hierarchy of the local machine, and then redistributing them to neighboring sites. Reading news by connecting to centralized news servers via a network client happened many years later.

        No, you need a costly usenet servers architecture.

        There is nothing intrinsically "costly" about it: it's something a PDP-11 used to handle and that regularly ran over dial-up.

        Not only machines, but also huuuge bandwidth. Today's Usenet servers that want to carry a large portion of the world's hierarchies can only get them via dedicated satellite Usenet-only feeds.

        Just like a BT solution, you only redistribute those articles that you yourself are interested in.

        The reason why we got a USENET infrastructure with a small number of backbone sites (compared to the readership) that carried everything is simply because a bunch of sites took on that role and carry everything. There is nothing in the protocol or design of USENET that requires it.

        RSS+BT, on the other hand, is a poor server and rich clients that exchange articles among themselves via a p2p network, supervised only by a BT tracker.

        And you believe that BT and the BT tracker scales up to many billions of files on millions of nodes by sheer magic? BT would probably need a lot of work to scale up. And at least USENET doesn't need any supervision by anything--it's completely asynchronous and unsupervised.

        Note that I did not claim that USENET would work any better than RSS+BT--I have no idea whether it would--simply that people are basically reinventing USENET when they combine RSS and BT.

        I actually suspect that there are intrinsic properties of large peer-to-peer news networks that people don't like because that's why USENET became more and more centralized over the years.

        What moron modded the parent as insightful?

        That's what I would ask about your posting. In fact, I would ask what moron wrote it.
    • by Anonymous Coward
      but now we have it so they go over port 80.
      bttracker: default port 80 (though it's often on a higher port)
      rss: just an XML document on a webserver, default port 80

      there probably is a way to proxy the peer-to-peer connection over port 80 as well.

      in the future everything will be on port 80, and the OS will have everything built into it. And the Windows OS will drop the name "Windows", much like people seem to forget that MS SQL Server isn't the same as The SQL Server. The file system will be just a database, the os
    • You mean over HTTP. :)
  • by ph00dz ( 175440 ) on Tuesday December 16, 2003 @10:39AM (#7735014) Homepage
    I always thought that syndicators should take advantage of the current distributed architecture of the newsgroups to syndicate their content... but hey, maybe that's just me. The only real problem is one of authentication -- since you're downloading content from a publicly accessible source one would have to come up with some clever way of making sure you're grabbing content from the author you choose.
    • how about checksums on the author's web site for releases/content?
    • Actually, my free hosting, free-conversant.com [free-conversant.com], doesn't have native RSS support (easily achieved through templating, tho), but with its message-centric approach to content management, it has native NNTP support. It seems to be the opinion of most on here that NNTP is already there. Why not RSS over NNTP?
    • The only real problem is one of authentication -- since you're downloading content from a publicly accessible source one would have to come up with some clever way of making sure you're grabbing content from the author you choose.

      Sounds like a perfect application for PGP/GPG. It'll tell you whether the person you think wrote it actually did, and whether the content has been modified at all.
    • Umm, DSA or RSA signatures? Just put up your key on the blog or whatever.

      Pushing checksums via the web does indeed reduce bandwidth in a best-case scenario, but if someone floods the newsgroup with fake updates, all the aggregators will slam the website like mad, trying (and failing) to verify the MD5 hash.
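
      As a sketch of the verification step, checking a fetched feed against the author's published detached signature before trusting it (the paths are placeholders, and gpg must already know the author's key):

          import subprocess

          def feed_is_authentic(feed_path, sig_path):
              # gpg --verify exits 0 only for a good signature from a known key
              return subprocess.call(["gpg", "--verify", sig_path, feed_path]) == 0

          if not feed_is_authentic("feed.xml", "feed.xml.sig"):
              raise SystemExit("rejecting unsigned or tampered feed")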

  • IRC (Score:4, Interesting)

    by Bluelive ( 608914 ) on Tuesday December 16, 2003 @10:43AM (#7735067)
    Using RSS polling seems to me just a way to fake subscribe/push technology. Why not just use a push technology like IRC? A channel per tracker: just join a channel to get the updates when they are sent. You'd probably still want to use RSS for events that you'd miss while not online for longer periods.
  • fidonet (Score:5, Interesting)

    by mabu ( 178417 ) * on Tuesday December 16, 2003 @10:52AM (#7735152)
    A good analogy would be comparing the setup to Fidonet and their "echo" messageboards. It's a very efficient method to distribute news.

    The key to usefulness however, is enabling technology to prioritize and authenticate the RSS feeds in some way.
    • Re:fidonet (Score:3, Interesting)

      As a former FidoNet node SysOp, I have had a similar idea for a couple of years. I have messed around with the code but never been happy enough with it to put it on SourceForge.

      The idea goes like this:

      If you want to host a RSS feed, you run a program that is basically a peer cache. People hit your IP and "subscribe" to the feed. You give them a list of other subscribers' IPs and the public key for the feed. The client then hits these peers and checks to see who has faster bandwidth. If the pe
    • Whoah. </keanu> (Score:5, Informative)

      by CrystalFalcon ( 233559 ) on Tuesday December 16, 2003 @11:53AM (#7735865) Homepage
      This is the first time I've heard FidoNet mentioned in... must be almost a decade. It's like the huge amateur network (which for a brief period outnumbered the Internet in raw node count, mind you) never existed.

      Anyway, FidoNet was not without its share of problems. The killing bullet, I'd say today, was the social factor: there were too many conservative forces clinging to backwards compatibility at any cost. Everything had to work with the most basic piece of software; this effectively shot progress and evolution dead.

      Not that there weren't attempts. There were. They just weren't successful.

      Anyway, setting up echoes would have the same problems as FidoNet echoes. The number one problem was typical for Slashdot: DUPES!

      Echoes were set up so that one node relayed a message in an echomail forum to its other connected nodes for a particular echo, effectively creating a star topology, different for each forum. However, since each sysop just wanted the echo linked, he would just hook up to somewhere, and forget about it. Then, others would hook up from him, and all of a sudden somebody had hooked up to two different valid uplinks.

      The result? The star topology all of a sudden had a loop in it. Messages would keep circling (since FidoNet used dedicated dialup lines, latency between nodes was typically in the hours range) and dupe filters were created.

      All of those filters and filter-enabling tags were optional, of course. After all, you couldn't mandate an operational node to change its behavior, you could just ask nicely.

      Political play to no ends. :-/

      Anyway, there were many other funny effects with EchoMail. Crosslinking was another - when one echo got linked to another at a node, so that all messages in echo X would enter echo Y at that node and vice versa. The most exotic of these was when a religious echo got crosslinked with a fantasy humor one, through crosslinked physical directories at a node (the FAT pointers for the directories hosting the two echoes pointed to the same location on the disk). Anyway, much hilarious discussion ensued, and not many understood what people were trying to say in the crosslinked echo. :-)

      / former sysop and NEC in FidoNet
      • but I had the impression that my fido setup on OS/2 handled hundreds of thousands of messages a lot faster than my current box does with email, and this while processor speed increased by a factor of 10 and memory by a factor of 50... aaah, I really liked fido technology :)
      • Fidonet was great. I was one of the original systems in the network. One thing that killed Fidonet was that a few less-than-honorable people managed to take control of some of the primary hubs in the network and exert biased influence over it - jacking people on fees and controlling which content ended up being distributed. Fidonet ended up becoming political in nature and there was a minor rebellion, at a time when usenet was gaining attention. A few overzealous fidonet backbone operators rui
  • by mybecq ( 131456 ) on Tuesday December 16, 2003 @11:02AM (#7735286)
    Can somebody explain how RSS and BitTorrent equal a content management system?

    Sounds more like a (possibly improved) content delivery system.

    Too bad the article didn't indicate anything about content management.
  • by Anonymous Coward
    When I read "BitTorrent," I thought "Bitkeeper." Then for RSS, I came up with "Rational Software Solutions." I had this vision of them combining, Voltron-like, to crush the CVS rho-beast dead.

    Like I said, weird.
  • WebTorrent (Score:4, Insightful)

    by seldolivaw ( 179178 ) * <me@NOsPaM.seldo.com> on Tuesday December 16, 2003 @11:27AM (#7735578) Homepage
    I blogged about the possibilities of using BitTorrent to deliver web content [seldo.com] back in April, but I didn't consider RSS. The idea worked out between myself and some friends was a network of transparent proxies as a way of dealing with Slashdot-style "flash crowds". When you request content, your proxy requests the content for you and simultaneously broadcasts the request to nearby machines. If any of those machines have already downloaded the content (some form of timestamp and hash is necessary to ensure it's the correct and authentic version of that URL), then they will send that content to you. This allows servers already under, or expecting, heavy load to push out a new HTTP status message, "use torrent", supplying a (much smaller) torrent file, and lets web servers scale much better under flash crowd conditions.

    The drawback of the WebTorrent idea is that you need some way to group all the images, text and stylesheets together; otherwise you have to make an inefficient P2P request for each one. RSS is a great way of doing that.

    There aren't many details online at the moment of the work we did on the WebTorrent idea; it was mainly an e-mail thread -- get in touch if you'd like details. The project page [seldo.com] is available, but I stopped updating it so it doesn't have all the work that was eventually done.
    • I mentioned the idea of a "WebTorrent" in my /. journal a few days back, there already is a way to make a torrent of the contents of a directory, it is even possible to modify a client to place a priority on specific files inside of a torrent (i.e. index.html). One drawback my roommate thought of though is: How do you "update" a WebTorrent? Once the file is out in the swarm, there is no way you can update the file. You would have to make a completely new torrent for any new version of your webpage.
    • Yeah, I've been thinking about this idea too for a while. My original idea contained cool excuses for me to get to run a brand new domain name registry and make $$$, but it would never have taken off that way and some dumb plans like that have already been tried. So I started thinking about saner, thoroughly open approaches involving, as you say, transparent personal proxies.

      Which could be simpler than one might think:

      Before satisfying a request for the URL "http://www.mysite.com/a/b/foo.jpg" the "hard wa
      • Re:WebTorrent (Score:3, Insightful)

        by seldolivaw ( 179178 ) *
        Even better, why not let the format of the manifest be XML, and let the data compression be handled by HTTP gzip compression? In which case, your JAR files become RSS feeds...
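
        A toy version of such a manifest, built with the Python standard library; the element and attribute names are invented here, not any proposed format:

            import hashlib
            import xml.etree.ElementTree as ET

            def build_manifest(paths):
                # one <asset> per file, carrying a hash so peers can verify what they fetch
                root = ET.Element("manifest")
                for p in paths:
                    digest = hashlib.sha1(open(p, "rb").read()).hexdigest()
                    ET.SubElement(root, "asset", {"href": p, "sha1": digest})
                return ET.tostring(root)

            print(build_manifest(["index.html", "style.css"]))   # placeholder files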
    • A lot of your summary seems already available in HTTP, if crudely. HTTP can:

      • fetch metadata (like an ETag, an opaque token - often a hash - uniquely identifying a version of an HTTP resource), depending on whether or not it's new (via Last-Modified, or If-None-Match in conjunction with known ETags)
      • proxy requests, at the request of the origin server (i.e., "Thanks for your request, but please use this proxy server to get your response: ...") via the 305 response code
      • retrieve "shards" of a resource via partial GETs (using the Range h
    • +1 interesting
  • Next logical step... (Score:3, Interesting)

    by Bugmaster ( 227959 ) on Tuesday December 16, 2003 @11:31AM (#7735630) Homepage
    The next logical step would be to augment HTTP itself to piggyback on top of BT (as suggested by multiple people earlier on this site); this would make slashdotting go away for good. I can see three major problems with both the RSS+BT and HTTP+BT integration schemes: leeching, cracking and discovery. If everyone starts to leech, then BT's advantages are nullified. If someone cracks the client, they can corrupt portions of the feed/website that is being served (checksums solve this problem, but AFAIK they rely on the majority of users being honest). Then there's also the chicken-and-egg problem of discovering the .torrent file (or its equivalent) in the first place: someone still has to serve it so that you can jump-start your torrent madness, and that someone can get slashdotted easily.

    These problems are not insurmountable, but they are not insignificant, either. Thus, I don't think that RSS+BT is the instant-gratification, no-risk paradise that the Yahoo article makes it out to be.

    • If someone cracks the client, they can corrupt portions of the feed/website that is being served

      You don't need to crack anything; just download the source and re-write it however you want... BitTorrent is kinda open source.
      Furthermore, corrupted files being sent out is already happening today, which is why some clients (I use the Shadow BitTorrent client) have an option of banning any user that sends corrupted data. Every now and then, my client will have banned a user or two, so this is already happenin
  • Maybe a kind of event notification service would be useful (I get to it after a few comments...)

    A) Sounds nice, but even without a torrent, using an open source hash algorithm (client and server agree on how to calculate the hash) would provide a way for the client to only download the hash value itself to check for freshness.

    This way:
    1) the author knows how many people have consumed the data and their general geographic distribution.
    2) the author can make a decision to stop publication, which is problematic but at any rate easier to enforce than if he or she starts out authorizing a torrent.
    3) the author is free to pay for bandwidth if he or she will happily serve one copy per user, just not a zillion per poller.

    B) To be sure, it is easy to see who publishes an RSS feed / incites a Torrent download over somebody's infrastructure, whereas it is not so easy to discover the identity of an anonymous coward. You could also publish a pseudo-RSS feed itself exclusively over the torrent network using sequential filenames for more anonymity maybe..um.

    C) Personally I have a current need for frequently updated RSS for a certain application and I'd set up a server that my internal network clients would poll frequently. But I'd still need for one machine to know the instant things change on the web too.

    D) I'm wondering if a hierarchical network of servers might be useful here to publish event notifications. UDP is lossy, and we don't want to lose any events, so that's out, I guess. In NTP, various strata of time servers are used and clients try to sync to Greenwich time (light data) by the best route available. In NNTP, a client usually uses only one news server to get a fat feed, and different server owners often choose to handle only a subset of what's available in the whole world, which might also be the case here (try serving every event of importance to someone in the world... what is the bandwidth needed for that? How many bits to describe it in IP-like dot format?)

    Probably there is another service that does what I'd like and it just flew out my left ear, but it just seemed to me that the best thing would be to combine the lightweight NTP network which lets clients synchronize their understanding of time despite general flakiness, and the NNTP network which allows different servers to decide to serve only certain segments of the worldwide aggregated feed.

    And SIP does a lot of things that might be useful. And there is MDS (metacomputing directory service for the "semantic grid" - pdf [ic.ac.uk] / google's html [216.239.57.104]). And there's Jini ..

    Anyway, we do want to know some things with at least one-minute resolution. (A storm watch? A news headline so we can turn on the TV or video stream?) At TV stations I know, people constantly watch the TV out of the corner of their eye to see if something earth-shattering comes up. How about a chime to tell you to look instead? How else do people get First Post? ;) I'd just like to beat normal notification systems for current events and website updates, for starters, based on a relatively robust and timely mechanism.

    Maybe a low-bandwidth network with some of these characteristics would be useful to distribute update event notifications that filter down to everyone's devices. A big company could have one or two machines consuming a global event notification thread, add events only it knows about, and serve this information on a push or pull basis to all its employees. Hmmm, tasty. Come to think of it, I want something like that for another project too. Does anything already do this?

    One interesting paper (2001) I found is on an emergency notification network based on subscribe/notify messages over SIP, a widespread voice over ip prot

  • by Anonymous Coward
    The main downside comment I have seen on this thread is the issue of trust: either content suppliers don't trust the network (e.g. the NYT comment), or readers don't trust the network (CIA, Evil Bloggers, whatever) not to send them a bogus feed.

    (Note I don't know the details of how BT works, just the general idea - feel free to take this idea and run with it however makes more sense.)

    I like the notion of this happening at the web-server level, which allows it to be generalized to other forms of content distrib
  • RSS feed = newsgroup
    Aggregator = news reader
    Bittorrent = RAR+PAR binaries

    And best of all... no polling! Well, between usenet servers it's mostly a broadcast kind of affair these days...

    Has anyone made a rss2nntp bot yet?

    Of course, IRC is also a remarkably cool medium for timely distribution of small ASCII messages.. The nick/channel bullshit sucks (though usenet "channel"/group takeovers/spams suck even more), but surely it's not beyond the realm of possibilities to build an IRC server that requires people
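
    On the rss2nntp question, a toy bridge is only a few lines with the Python standard library; the news server, group name, and feed URL are all made up here, and a real bot would track GUIDs it has already posted:

        import io
        import urllib.request
        import xml.etree.ElementTree as ET
        import nntplib

        feed = urllib.request.urlopen("http://example.org/feed.xml").read()
        server = nntplib.NNTP("news.example.org")
        for item in ET.fromstring(feed).iter("item"):
            title = item.findtext("title", "")
            link = item.findtext("link", "")
            article = ("From: rss2nntp <bot@example.org>\r\n"
                       "Newsgroups: local.rss\r\n"
                       "Subject: %s\r\n"
                       "\r\n%s\r\n" % (title, link)).encode()
            server.post(io.BytesIO(article))       # one news article per feed item
        server.quit()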
  • modtorrent (Score:2, Insightful)

    by Isbiten ( 597220 )
    What I would like to see is modtorrent for Apache, where you could specify that files larger than 20MB get sent as a .torrent instead. It wouldn't require you to make a .torrent manually; it would create one when a file was first requested and put it in a directory so it was ready to serve the next time someone wanted it. This would work great if you want to have large files such as movies and demos on your site.
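
    Nothing stops you from sketching that behavior outside Apache; here it is as Python WSGI middleware, where the threshold, docroot, and make_torrent() helper are all placeholders:

        import os

        THRESHOLD = 20 * 1024 * 1024      # 20MB, per the suggestion above
        DOCROOT = "/var/www"              # placeholder

        def modtorrent(app):
            def wrapper(environ, start_response):
                path = os.path.join(DOCROOT, environ.get("PATH_INFO", "").lstrip("/"))
                if os.path.isfile(path) and os.path.getsize(path) > THRESHOLD:
                    meta = path + ".torrent"
                    if not os.path.exists(meta):
                        make_torrent(path, meta)   # hypothetical metafile generator
                    start_response("200 OK",
                                   [("Content-Type", "application/x-bittorrent")])
                    return [open(meta, "rb").read()]
                return app(environ, start_response)   # small files: serve normally
            return wrapper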
  • a better match might be using Jabber and the Publish-Subscribe extension (http://www.jabber.org/jeps/jep-0060.html) to allow users to subscribe to a resource and then allow the resource to announce when it has updated, creating a push notification. You could receive the notice using a standard Jabber client, but eventually someone could make an aggregator with an integrated Jabber client that would handle your news subscriptions. Then when you start it up, you poll, and then as long as it is open you don'
  • by ikewillis ( 586793 ) on Tuesday December 16, 2003 @03:04PM (#7738118) Homepage
    The problem with attempting to cobble BitTorrent onto an RSS feed system is that BitTorrent would still utilize a "pull" model for distributing the syndication data, but instead of directly fetching the XML document syndicators would grab a .torrent file. While this may decrease the bandwidth used, it only solves half of the problem. What really needs to be addressed is the "pull" model being used to fetch the RSS document in the first place.

    A better solution would be to eliminate the need for syndicators to constantly poll for RSS updates by using IP multicasting to notify them when the content of a particular RSS feed has changed. Multicast protocols which provide such announcements already exist, such as the Session Announcement Protocol [faqs.org], which could notify those curious about updated RSS feeds. A URL to the updated feed would be provided, and afterwards whatever file transfer protocol you wish could be used to fetch the RSS feed itself, even BitTorrent.
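
    In that spirit, sending a "feed changed" announcement over IP multicast takes only a few lines of Python; the group and port below are arbitrary choices, and this is plain UDP rather than real SAP framing:

        import socket

        MCAST_GRP, MCAST_PORT = "239.255.12.42", 9042   # arbitrary

        def announce(feed_url):
            # one small datagram to the group; listeners then fetch the feed themselves
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 4)
            s.sendto(feed_url.encode(), (MCAST_GRP, MCAST_PORT))
            s.close()

        announce("http://example.org/feed.xml")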

  • The article proposes using BT to transport RSS files, with the goal of reducing the load on extremely popular RSS files. It's a fine idea, except that RSS files are generally very small, and BT incurs such overhead that it's a poor choice for distributing small files.

    The real potential of combining BT & RSS is in the reverse: use RSS to distribute .torrent files, or .torrent URLs. Configure your news aggregator to pass all new .torrents to the BT client, and you've got yourself a media aggregator tha
    • That's *exactly* what I was thinking. It'd be great if torrent websites simply provided an RSS feed of their torrents instead of an HTML page; that way there'd be less HTTP traffic to the server, allowing more bandwidth for the BT trackers to play with.
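
      The aggregator-side half of that is a short loop; the feed URL is made up, and btdownload.py's --url option is from the stock 3.x client, so check your version:

          import subprocess
          import urllib.request
          import xml.etree.ElementTree as ET

          seen = set()                      # a real tool would persist this between runs
          feed = urllib.request.urlopen("http://example.org/torrents.xml").read()
          for item in ET.fromstring(feed).iter("item"):
              link = item.findtext("link", "")
              if link.endswith(".torrent") and link not in seen:
                  seen.add(link)
                  subprocess.Popen(["btdownload.py", "--url", link])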
