Software | The Internet

RSSOwl 1.2 Released

Benjamin Pasero over at RSSOwl.org wrote to tell us that they have released version 1.2 of their RSS/RDF/Atom newsfeed viewer. It looks like a lot of work has gone into this version. Some of the new features are: a fully customizable toolbar with new elements like 'History', new search scopes that allow for more detailed searches, a new 'Linked Mode' that updates the selection in your favorites automatically, support for the Atom 1.0 format, and quite a few others.
  • by trollable ( 928694 ) on Sunday November 06, 2005 @08:40PM (#13965937) Homepage
    I have no problem browsing my favorite sites once or twice a day, and enjoy doing so. What am I missing out on?

    Nothing, if you have only two or three favorite sites. But what if you have fifty of them? Basically, an RSS reader lets you see all the new entries from the blogs and websites you track, and jump quickly to the articles of interest (the first sketch after the comments shows the idea). Now if you're a pure Slashdotter (someone who never reads anything outside Slashdot), then it is not for you.
  • by Noksagt ( 69097 ) on Sunday November 06, 2005 @08:57PM (#13966026) Homepage
    I agree with your points, but would add that an aggregator also gives you some things that a web browser doesn't.

    For one, you can save locally-cached copies of posts. Yes, a web browser also has a cache, but it typically doesn't give you easy, fine-grained control over which content you keep or throw away. Some sites that offer feeds have mediocre connectivity (and feeds were originally promoted partly as a bandwidth saver--you don't download as much content at once). Some authors have a nasty habit of deleting their best content. By archiving it in an aggregator, you can save the best stuff.

    Aggregators also let you search over all relevant feeds and only those feeds. No more dealing with separate search engines, each with its own "advanced search" syntax (or, worse, a very basic or non-existent search).

    Finally, an aggregator lets you apply filters so that the best, most relevant content reaches your eyes and bad/spammy content doesn't. I keep my feeds in Thunderbird and treat some blogs as email--I apply Bayesian filters to particularly noise-filled feeds (such as comment feeds) and sort content topically (a crude keyword version of this kind of filtering is sketched after the comments). Some aggregators eliminate or group related posts that come from different feeds. Some let you push these posts (the ones with the most "buzz") to the top, so you don't miss them.
  • by jacoplane ( 78110 ) on Sunday November 06, 2005 @09:14PM (#13966108) Homepage Journal
    Why do I need a separate program to view this type of content? Doesn't it make more sense to build this functionality into the browser? Personally, I have been using Bloglines [bloglines.com] for a long time (and more recently netvibes [netvibes.com]). Google [google.com] and Microsoft [live.com] also seem to be going this way.

    Of course, as long as an application supports importing and exporting OPML [wikipedia.org] it doesn't matter what you use, because switching is easy (a minimal OPML read/write sketch follows the comments). However, I can't really justify running a whole separate application that seems to do little other than launch Firefox anyway.
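
For readers wondering what an aggregator actually does under the hood, here is a minimal sketch of the core fetch-and-merge loop. It assumes the third-party Python feedparser package, and the feed URLs are placeholders, not anything shipped with RSSOwl:

# Minimal feed-aggregation sketch (assumes the third-party "feedparser" package).
# The feed URLs below are placeholders.
import time
import feedparser

FEEDS = [
    "https://example.com/blog/rss.xml",
    "https://example.org/news/atom.xml",
]

def collect_entries(feed_urls):
    # Fetch every feed and return (timestamp, feed title, entry) tuples.
    items = []
    for url in feed_urls:
        parsed = feedparser.parse(url)  # feedparser handles RSS, RDF and Atom alike
        for entry in parsed.entries:
            stamp = entry.get("published_parsed") or entry.get("updated_parsed")
            when = time.mktime(stamp) if stamp else 0.0
            items.append((when, parsed.feed.get("title", url), entry))
    return items

if __name__ == "__main__":
    # Newest entries first, regardless of which site they came from.
    merged = sorted(collect_entries(FEEDS), key=lambda item: item[0], reverse=True)
    for when, source, entry in merged[:20]:
        print(source, "-", entry.get("title", "(untitled)"), "->", entry.get("link", ""))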
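
The filtering idea from the Thunderbird comment, reduced to a crude keyword classifier rather than a real Bayesian filter; the noise words and topic rules below are made up purely for illustration:

# Crude keyword filter -- not Thunderbird's Bayesian classifier, just an
# illustration of routing feed entries into buckets.
NOISE_WORDS = {"sponsored", "advertisement", "giveaway"}   # assumed spam markers
TOPIC_RULES = {                                            # assumed topics of interest
    "software": {"release", "rss", "atom", "aggregator"},
    "security": {"vulnerability", "exploit", "patch"},
}

def classify(entry_title):
    # Return "noise", a matching topic name, or "other" for a feed entry title.
    words = set(entry_title.lower().split())
    if words & NOISE_WORDS:
        return "noise"
    for topic, keywords in TOPIC_RULES.items():
        if words & keywords:
            return topic
    return "other"

print(classify("RSSOwl 1.2 Release Adds Atom Support"))   # -> "software"
print(classify("Sponsored: amazing giveaway inside"))     # -> "noise"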
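
And the OPML point: a subscription list is just XML with outline elements carrying xmlUrl attributes, so a minimal import/export needs only the standard library. The file name and feed entry below are placeholders, following the common type="rss"/xmlUrl convention:

# Minimal OPML import/export using only the Python standard library.
import xml.etree.ElementTree as ET

def read_opml(path):
    # Return the list of feed URLs found in an OPML subscription file.
    tree = ET.parse(path)
    return [node.get("xmlUrl") for node in tree.iter("outline") if node.get("xmlUrl")]

def write_opml(path, feeds):
    # Write a {title: url} mapping out as a flat OPML file.
    opml = ET.Element("opml", version="1.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "Subscriptions"
    body = ET.SubElement(opml, "body")
    for title, url in feeds.items():
        ET.SubElement(body, "outline", type="rss", text=title, title=title, xmlUrl=url)
    ET.ElementTree(opml).write(path, encoding="utf-8", xml_declaration=True)

write_opml("subscriptions.opml", {"Example Feed": "https://example.com/feed.xml"})
print(read_opml("subscriptions.opml"))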
