
How Much Bandwidth is Required to Aggregate Blogs?

Kevin Burton writes "Technorati recently published that they're seeing 900k new posts per day. PubSub says they're seeing 1.8M. With all these posts per day, how much raw bandwidth is required? Due to inefficiencies in RSS aggregation protocols, a little math is required to understand this problem." And more importantly, with millions of posts, what percentage of them have any real value, and how do busy people find that .001%?
  • All at once (Score:5, Interesting)

    by someonewhois ( 808065 ) * on Sunday August 14, 2005 @05:30PM (#13318003) Homepage
    It would make a lot more sense to have a protocol where you check one file that has a list of links to other XML files; the aggregator figures out which of those URLs have NOT been aggregated yet, then downloads only the XML files with the post-specific info and displays them. That would save a lot of bandwidth, I'm sure.
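
    A minimal sketch of that idea in Python (the JSON index format, URLs, and file names here are made up for illustration; nothing like this is part of RSS or Atom today):

      # Hypothetical "index file" aggregation: fetch a small index of post URLs,
      # then download only the posts we haven't seen before.
      import json
      import urllib.request

      INDEX_URL = "http://example.org/posts/index.json"   # hypothetical index
      SEEN_FILE = "seen_urls.txt"                          # local state

      def load_seen():
          try:
              with open(SEEN_FILE) as f:
                  return {line.strip() for line in f}
          except FileNotFoundError:
              return set()

      def handle_post(xml_bytes):
          print("fetched", len(xml_bytes), "bytes")

      def aggregate():
          seen = load_seen()
          with urllib.request.urlopen(INDEX_URL) as resp:
              index = json.load(resp)        # e.g. {"posts": ["http://.../p1.xml", ...]}
          for url in index["posts"]:
              if url in seen:
                  continue                   # already aggregated, skip the download
              with urllib.request.urlopen(url) as resp:
                  handle_post(resp.read())   # post-specific XML
              seen.add(url)
          with open(SEEN_FILE, "w") as f:
              f.write("\n".join(sorted(seen)))

      if __name__ == "__main__":
          aggregate()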
    • Re:All at once (Score:5, Insightful)

      by ranson ( 824789 ) * on Sunday August 14, 2005 @05:42PM (#13318067) Homepage Journal
      I'm not sure how this would help, because if everyone incorporated generally accepted HTTP practices into their XML generation scripts (e.g., including Last-Modified and/or Expires headers, providing an ETag, etc.), aggregators could use conditional GET (If-Modified-Since) requests to save an unthinkable amount of bandwidth. As it is right now, since most RSS feeds are generated on the fly from some database, that doesn't happen and the aggregators just have to pull the entire XML at regular intervals to ensure nothing was missed. I find it silly that some basic functionality of the WWW like smart caching rules started being ignored when RSS came along.
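
      What that looks like from the aggregator side is tiny; a sketch using only Python's standard library (the feed URL is a placeholder):

        # Conditional GET sketch: send back the Last-Modified / ETag values we saw
        # last time; a well-behaved server answers 304 Not Modified with no body.
        import urllib.request, urllib.error

        FEED_URL = "http://example.org/index.rdf"   # made-up feed URL

        def fetch(feed_url, last_modified=None, etag=None):
            req = urllib.request.Request(feed_url)
            if last_modified:
                req.add_header("If-Modified-Since", last_modified)
            if etag:
                req.add_header("If-None-Match", etag)
            try:
                with urllib.request.urlopen(req) as resp:
                    body = resp.read()               # feed changed: parse the new content
                    return body, resp.headers.get("Last-Modified"), resp.headers.get("ETag")
            except urllib.error.HTTPError as e:
                if e.code == 304:
                    return None, last_modified, etag # unchanged: nothing downloaded
                raise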
      • It's amazing to me how many problems would be solved if applications (client and server) just understood HTTP/1.1 more fully.
      • One way to hugely reduce bandwidth would be to use XMPP publish-subscribe for RSS, rather than HTTP. That way, you don't have to poll every 30 minutes or so to see changes, and you don't have to download a complete RSS file just to get one new article.
    • Re:All at once (Score:2, Interesting)

      by G-Licious! ( 822746 )

      I don't think you need a list of links or even a separate file. An easier solution might be to just pass a format string in a separate link tag on the HTML page announcing the feed. For example, right now we have (taken straight from the linked article):

      <link rel="alternate" type="application/atom+xml" title="Atom" href="http://www.feedblog.org/atom.xml" />
      <link rel="alternate" type="application/rss+xml" title="RSS" href="http://www.feedblog.org/index.rdf" />

      And we could introduce a new r

    • Re:All at once (Score:4, Interesting)

      by broward ( 416376 ) <browardhorne@noSpAm.gmail.com> on Sunday August 14, 2005 @06:21PM (#13318229) Homepage
      The bandwidth isn't going to matter much.

      The blog wave is close to an inflection point, probably within six to twelve months, which means that total bandwidth will probably top out at about TWICE the current rate.

      http://www.realmeme.com/Main/miner/preinflection/blogDejanews.png [realmeme.com]

      I suspect that even now, many blogs are starved for readership as new blogs come online and steal mental bandwidth.

  • by ranson ( 824789 ) * on Sunday August 14, 2005 @05:30PM (#13318009) Homepage Journal
    How much bandwidth is required? A lot less if everyone would take the 5 minutes required to implement GZip compression on their Apache servers. It saves you bandwidth, it speeds up your site for users (especially those on dialup), and it saves the bandwidth of aggregators (assuming they advertise an Accept-Encoding: gzip, deflate header).

    So my plea to the internet community today: make sure your web server is configured to send gzipped content. TFA says he doesn't know how many RSS feeds can support gzip. The answer is easy really: any feed being served by Apache (plus a LOT of other webservers; AOLserver even added gzip support recently). Here's how to set up Apache [whatsmyip.org] and here's where to check [whatsmyip.org] whether your site is using GZip and get an idea of the bandwidth savings you should see. If your site isn't gzipping, show your admin (if it's someone else) the 'how-to' above and ask them to implement it -- it's an absolute no-brainer win-win for everyone that takes no time at all to set up. It's really absurd IMO that it's not enabled in Apache by default.
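
    If you'd rather check from a script than from the web form above, a rough sketch (the URL is a placeholder) that compares the on-the-wire size of a feed with and without gzip:

      # Rough gzip-savings check: request the same URL with and without
      # Accept-Encoding and compare the number of bytes actually transferred.
      import urllib.request

      URL = "http://example.org/index.rdf"   # placeholder feed URL

      def size(url, accept_gzip):
          req = urllib.request.Request(url)
          if accept_gzip:
              req.add_header("Accept-Encoding", "gzip")
          with urllib.request.urlopen(req) as resp:
              data = resp.read()              # urllib does not auto-decompress
              encoding = resp.headers.get("Content-Encoding", "identity")
          return len(data), encoding

      plain, _ = size(URL, False)
      comp, enc = size(URL, True)
      if enc == "gzip":
          print(f"gzip supported: {plain} -> {comp} bytes ({100 - 100 * comp // plain}% saved)")
      else:
          print("server did not gzip the response")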
    • by TCM ( 130219 ) on Sunday August 14, 2005 @05:38PM (#13318054)
      Of course every server is powerful enough that CPU time can't possibly become an issue, right?
      • Of course every server is powerful enough that CPU time can't possibly become an issue, right?

        On moderately busy servers, most have found that mod_gzip helps with both CPU and RAM, since users stay connected to your server for shorter durations, resulting in overall fewer concurrent connections.
        • Do you have _any_ sources to back this up? Compared to keeping a connection state, gzipping is _way_ more expensive. I find it very hard to believe that there is a case where keeping the connection longer was more expensive than gzipping the content.
          • "Compared to keeping a connection state, gzipping is _way_ more expensive. I find it very hard to believe that there is a case where keeping the connection longer was more expensive than gzipping the content."

            I'm prone to agree. But I also suspect that my CTO is going to agree that it's cheaper to pay once for more processing power than it is to pay every day for higher bandwidth use. YMMV, of course. Bandwidth is relatively cheap in some parts of the US, but in other parts of the world it's hideously exp

          • by womby ( 30405 ) on Sunday August 14, 2005 @07:22PM (#13318482)
            With the least intensive compression algorithms, HTML can end up almost 10 times smaller. That results in a 10 times shorter transfer time, which results in 10 times fewer simultaneous connections, which results in 10 times fewer Apache processes, which results in massively reduced memory and processor requirements.

            That unused processor and memory is what would be used to perform the gzip operations. Let's say, for argument's sake, that compressing the output doubles the processor usage (a ridiculously high number): cutting the number of Apache processes by an order of magnitude then only has to reduce CPU requirements by 50% to come out on top.

            If the gzip operation only inflicts a 10% overhead, cutting the Apache processes by ten only needs to free more than 9% to come out on top.

            Look at your server: would cutting the number of Apache processes from 400 to 40 save more than 10% of the CPU usage? Would it save more than 50%?

            [All numbers in this post were selected for ease of calculation, not for their real-world precision.]
          • by jp10558 ( 748604 ) on Sunday August 14, 2005 @08:05PM (#13318633)
            Couldn't you gzip each page once per change? (Obviously no good for dynamic pages, but for blogs each post would only need to be compressed once; unless you get comments like on Slashdot, it's unlikely you'd have to gzip more than once every few minutes or so.) And then serve that file like you would any other file?
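
            Something along those lines is easy for static feeds; a sketch (the paths are made up) that regenerates the .gz copy only when the source file has actually changed:

              # Compress a static feed once per change; a server configured to prefer
              # "index.rdf.gz" via content negotiation can then serve it as a plain file.
              import gzip, os, shutil

              SRC = "/var/www/index.rdf"        # made-up paths
              DST = SRC + ".gz"

              def refresh_gzip(src=SRC, dst=DST):
                  if os.path.exists(dst) and os.path.getmtime(dst) >= os.path.getmtime(src):
                      return False                      # .gz copy is already up to date
                  with open(src, "rb") as fin, gzip.open(dst, "wb", compresslevel=6) as fout:
                      shutil.copyfileobj(fin, fout)     # write the compressed copy once
                  return True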
      • IIRC, you can configure your server to do the compression once per file, instead of every time the page is served.
      • There are gzip accelerator PCI cards available for cases where CPU is an issue. Whether they're cheaper in large clusters than just adding some hosts or getting a bigger pipe, I don't know ... but they're another option.
    • by Madd Scientist ( 894040 ) on Sunday August 14, 2005 @05:40PM (#13318060)
      i used gzip with apache at an old job and we ran into a problem with it... some obscure header problem in conjunction with mod_rewrite.

      so i wouldn't say ANY site using apache... but probably most. the real problem there is with compression load on the servers... gzip compression doesn't just happen you know, it takes CPU cycles that could be being used to just push data rather than encode it.

      • so i wouldn't say ANY site using apache... but probably most. the real problem there is with compression load on the servers... gzip compression doesn't just happen you know, it takes CPU cycles that could be being used to just push data rather than encode it.

        Most web clients accept gzipped content, so if it's static you should gzip by default and store it compressed on the filesystem.

        For browsers that accept compressed content (most of them), serve it as is, and for those that don't you can uncompress the content on the
      • Another nice and strange problem is that IE totally ignores ETag headers on gzipped pages (it does not send an If-None-Match header back).
        So effectively IE requests each and every page again if it's gzipped.

        Nice to know that this bandwidth-reduction solution has the opposite effect...

        See my blog [blogspot.com] for more info.

    • I think you mean "enable". *Implementing* GZip takes a hell of a lot longer than 5 minutes :)
    • With or without gzip, 12.5mbit is easy and cheap. A 2.4ghz Celeron with a 20mbit unmetered Cogent connection goes for $239 US/mth at ServerMatrix. For these big sites complaining about bandwidth, $239 per month is peanuts.
    • by ZorbaTHut ( 126196 ) on Sunday August 14, 2005 @06:06PM (#13318193) Homepage
      As I remember, www.livejournal.com has experimented with gzip compression several times. They've discovered that the price of the CPU far exceeds the price of the bandwidth.

      Bandwidth is cheap. Computers, not so much.
    • Your howto specifically states how to *not* use mod_gzip and to create .gz copies of every page.

      Not so useful on a dynamic site.
    • by epeus ( 84683 ) on Sunday August 14, 2005 @07:55PM (#13318585) Homepage Journal
      If your weblog server implements ETag and Last-Modified, my spider can send a one packet request with the values I last saw from you, and you can send a one packet 304 response if nothing has changed.

      Charles Miller [pastiche.org] explained this well a few years ago.

      (I run the spiders at Technorati).
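
      The server side of that handshake is just as small. A sketch (not Technorati's or any real weblog server's code; just an illustration with Python's standard library, serving a made-up static feed file) of answering 304 when nothing has changed:

        # Minimal 304 handling: compare the client's If-Modified-Since against the
        # feed file's mtime and skip the body when nothing has changed.
        import os
        from email.utils import formatdate, parsedate_to_datetime
        from http.server import BaseHTTPRequestHandler, HTTPServer

        FEED_PATH = "index.rdf"   # made-up path to a static feed file

        class FeedHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                mtime = int(os.path.getmtime(FEED_PATH))
                last_modified = formatdate(mtime, usegmt=True)
                ims = self.headers.get("If-Modified-Since")
                if ims:
                    try:
                        if parsedate_to_datetime(ims).timestamp() >= mtime:
                            self.send_response(304)   # nothing changed: headers only
                            self.end_headers()
                            return
                    except (TypeError, ValueError):
                        pass                          # unparsable date: fall through
                with open(FEED_PATH, "rb") as f:
                    body = f.read()
                self.send_response(200)
                self.send_header("Last-Modified", last_modified)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("", 8080), FeedHandler).serve_forever()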
    • If you want SSL you either need to disable compression for HTTPS requests or do a weird hack with mod_proxy.

      In theory, the two should work together seamlessly. In practice, they don't.

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday August 14, 2005 @05:31PM (#13318011)
    Comment removed based on user account deletion
    • Answer: Not enough to justify the cost of doing it. Which goes to show you that if a site as popular as Slashdot can't save money doing this, no other site on the net should bother converting to XHTML, economically speaking of course.

      "Though a few KB doesn't sound like a lot of bandwidth, let's add it up. Slashdot's FAQ, last updated 13 June 2000, states that they serve 50 million pages in a month. When you break down the figures, that's ~1,612,900 pages per day or ~18 pages per second. Bandwidth savings are as fo
    • by A beautiful mind ( 821714 ) on Sunday August 14, 2005 @05:44PM (#13318079)
      Normally you would be right, but you're pushing at an open door here: CmdrTaco and others are actively working on a new CSS-based layout for Slashdot.
    • by Anonymous Coward on Sunday August 14, 2005 @05:51PM (#13318117)
      The bandwidth savings from using html+css are hugely exaggerated.

      Slashdot is switching to html+css for the front page, but not for any dynamic pages like the one you're on now. Because slashcode was written by totally incompetent programmers, the markup for comment pages is not separated from the logic. Making any changes is therefore a huge undertaking and the people who wrote it are far too busy maintaining the high journalistic standards slashdot is known for to do it.
      • Making any changes is therefore a huge undertaking and the people who wrote it are far too busy maintaining the high journalistic standards slashdot is known for to do it.

        ...+5, nougat-filled sarcasm.
    • It has absolutely sod-all to do with XHTML. HTML 4.01 and XHTML 1.0 are functionally identical. You can use table layouts and <font> elements with XHTML 1.0 and you can use CSS with HTML 4.01.

      You are referring to separating the content and the presentation through the use of stylesheets. This has nothing to do with XHTML, although it would save a hell of a lot of bandwidth if Slashdot implemented it. They are implementing it [slashdot.org].

    • How much bandwidth is /. wasting every month by not creating a standard XHTML page, even though someone created one for them already?

      And from here [alistapart.com]:

      Ask an IT person if they know what Slashdot's tagline is and they'll reply, "News for Nerds. Stuff that Matters." Slashdot is a very prominent site, but underneath the hood you will find an old jalopy that could benefit from a web standards mechanic.

      This is going to sound like a flame - and it isn't meant to be one. But it seems obvious at this point that the people run
    • A more serious question is how much bandwidth /. is wasting by hosting the large quantity of duped articles
  • Slashdot? (Score:4, Insightful)

    by djsmiley ( 752149 ) <djsmiley2k@gmail.com> on Sunday August 14, 2005 @05:31PM (#13318013) Homepage Journal
    "And more importantly, with 9M posts, what percentage of them have any real value, and how do busy people find that .001%?"

    On slashdot.... Oh wait....
    • If you'd check out my blog, you could read about the blogs I've read today thus saving yourself a lot of time.
    • It would make you very rich. Nobody's thinking about crap like that! No sir!

      Like most of life, building networks of trust takes time. Aren't issues like this really part of the problem? Charging for bandwidth... My server has something like 100 gigs of transfer, and unless I get Slashdotted several times a month, is this really a problem? And if I do, why aren't I getting some ads in place to pay for it?
  • 900k a day, not 9m (Score:2, Informative)

    by Anonymous Coward
    order of magnitude out there, fella... better try again with this new fangled "math" stuff
  • by astrashe ( 7452 ) on Sunday August 14, 2005 @05:33PM (#13318024) Journal
    I used to have a blog that I recently shut down because no one read it.

    No one read it, but I got a ton of hits -- all from indexing services. WordPress pings a service that lets lots of indexing systems know about new posts. Some of them -- Yahoo, for example -- were constantly going through my entire tree of posts, and hitting links for months, subjects, and so on.

    It didn't bother me, because the bandwidth wasn't an issue, and it wasn't like they were hammering my vps or anything. It mostly just made it really hard to read the logs, because finding human readers was like looking for a needle in a haystack.

    But bandwidth is cheap, and RSS is really useful, so it seems at least as good a use of the resource as p2p movie exchanges.
    • I think this anecdote might provide a good idea of how many of those blog posts are actually useful.

      Almost none.

      Don't worry about it, guys. If people ever start clamoring for MORE blog posts, you'll know.

    • Are you saying that you read the logs directly/manually?

      See AWStats [sourceforge.net]
    • Who says a whole lot of people need to read your blog? Only a small handful of friends read mine, mostly people I live far away from. It's a weirdly indirect way of keeping in touch with those people (I read theirs, they read mine). Still, I find my blog to be more of a diary to keep track of things that happen in my life, for my own personal purposes, than anything else.
  • by llZENll ( 545605 ) on Sunday August 14, 2005 @05:33PM (#13318026)
    Rather than making all these assumptions, why not just email Bob Wyman and ask him?

    "How much data is this? If we assume that the average HTML post is 150K this will work out to about 135G. Now assuming we're going to average this out over a 24 hour period (which probably isn't realistic) this works out to about 12.5 Mbps sustained bandwidth.

    Of course we should assume that about 1/3 of this is going to be coming from servers running gzip content compression. I have no stats WRT the number of deployed feeds which can support gzip (anyone have a clue?). My thinking is that this reduces us down to about 9Mbps which is a bit better.

    This of course assumes that you're not fetching the RSS and just fetching the HTML. The RSS protocol is much more bloated in this regard. If you have to fetch 1 article from an RSS feed you're forced to fetch the remaining 14 additional posts that were in the past (assuming you're not using the A-IM encoding method, which is even rarer). This floating window can really hurt your traffic. The upside is that you have to fetch less HTML.

    Now let's assume you're only fetching pinged blogs and you don't have to poll (polling itself has a network overhead). The average blog post would probably be around 20k, I assume. If we assume the average feed has 15 items, only publishes one story, and has a 10% overhead, we're talking about 330k per fetch of an individual post.

    If we go back to the 900k posts per day figure we're talking a lot of data - 297G most of which is wasted. Assuming gzip compression this works out to 27.5Mbps.

    That's a lot of data and a lot of bloat which is unnecessary. This is a difficult choice for smaller aggregator developers as this much data costs a lot of money. The choice comes down to cheap HTML indexing with the inaccuracy that comes from HTML, or accurate RSS which costs 2.2x more.

    Update: Bob Wyman commented that he's seeing 2k average post size with 1.8M posts per day. If we are to use the same metrics as above this is 54G per day or around 5Mbps sustained bandwidth for RSS items (assuming A-IM differentials aren't used)."
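
    A quick back-of-the-envelope check of the figures quoted above (bytes per day converted to a sustained bit rate, using the article's own assumptions):

      # Reproducing the quoted estimates: bytes per day -> sustained Mbps.
      def sustained_mbps(bytes_per_day):
          return bytes_per_day * 8 / 86_400 / 1_000_000

      html_total = 900_000 * 150_000   # 900k posts/day * 150 KB HTML  ~= 135 GB/day
      rss_total  = 900_000 * 330_000   # 900k posts/day * 330 KB per feed fetch ~= 297 GB/day

      print(round(sustained_mbps(html_total), 1))   # ~12.5 Mbps sustained
      print(round(sustained_mbps(rss_total), 1))    # ~27.5 Mbps sustained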
    • You're forgetting that most colocation data centers charge you by the 95th percentile. With most of the traffic bunched up during the weekday hours (most likely), the guy is probably paying for many more Mbps than what you're calculating.
  • Some Answers (Score:4, Insightful)

    by RAMMS+EIN ( 578166 ) on Sunday August 14, 2005 @05:35PM (#13318033) Homepage Journal
    ``How Much Bandwidth is Required to Aggregate Blogs?''

    Less than it currently takes, what with pull, HTTP, and XML used instead of more efficient technologies.

    ``what percentage of them have any real value, and how do busy people find that .001%?''

    Using a scoring system, like Slashdot's?

    It's not like all of this is rocket science. It's just that people go along with the hyped technology that's "good enough for any conceivable purpose", ignoring the superior technology that had been invented before and wasn't hyped as much. Nothing new here.
    • Please correct me if I got my facts wrong.

      You got your facts wrong. When feed readers use conditional GET and respect HTTP Last-Modified headers, and when feed publishers use gzip encoding (XML, like most plain text formats, compresses wonderfully), the bandwidth requirement for aggregation is minimal; the technologies themselves, then, are not inefficient; the inefficiency is in how they are being used. And the alternative you hint at, push, is nowhere near being "more efficient" since it would requi

      • :-) Someone on /. takes sigs seriously. Good!

        I can't say I agree with you, though. Maybe I should clarify my points a bit.

        1. Push distribution should be more efficient than pull distribution, because it only sends when something has actually changed. You could argue that pull distribution can be more efficient, because multiple updates can be bundled, but the same can obviously be done with push distribution as well.

        2. XML is more verbose than, for example, s-expressions. RSS is not terrible, but when I eye
          1. "Only send when something has changed" is available and widely implemented right now; it's called the "If-Modified-Since" header. More on why push won't work in a moment.
          2. XML is verbose by design, much as s-expressions are not. And again, with gzip encoding or other compression applied, this is not a problem; feeds compress nicely (and in case you're worried about the performance hit to the server having to gzip the feed, keep in mind you can cache that fairly easily).
          3. I'd rather deal with the bandwidth co
  • By which I assume you mean 9 million....

    the cited article discusses volumes of 900k, i.e.: thousands...

    from whence comes this discrepancy ?
    • from whence comes this discrepancy ?

      Aforementioned discrepancy cometh from thine arse, which be white as the first winter snow.
       
    • Definition of whence: From where.

      So, you can say:

      Whence comes this discrepancy?

      but please don't use

      From whence...

      because it's redundant.
  • by davecrusoe ( 861547 ) on Sunday August 14, 2005 @05:37PM (#13318047) Homepage
    And more importantly, with 9M posts, what percentage of them have any real value, and how do busy people find that x%
    Well, the significant percent is probably much larger than you might think. For example, if you aren't a chef, chances are you won't desire to read anything that relates to cooking. So, knock off X% of all blogs. You might not be interested in knitting, so deduct another X%.

    In actuality, my guess is that there are few blogs you might decide to visit, and of those you do, several may have content you find worthwhile. Remember, worthwhile is all in the perception of the reader - there is no real definition for quality or value. Perhaps through trial and error - in essence digital tinkering - you find and derive your own value.

    cheers, --dave
  • by Lovejoy ( 200794 ) <[moc.liamg] [ta] [yojevolnad]> on Sunday August 14, 2005 @05:45PM (#13318082) Homepage
    Does anyone else wonder why Slashdot editors seem to have it in for blogs? Is it because in Internet years, Slashdot is as old and sclerotic as the Dinomedia? Is Slashdot the Dinomedia of the new media?

    Does anyone else consider it ironic that the Slashdot editorship HATES blogs, but Slashdot is actually a blog?

    Anyone else getting tired of these questions?
    • On the contrary, the questions being raised about the quality of blogs are entirely valid.

      The average blog is just some random joe telling us about his day, or various bits of intellectual sophistry about things he doesn't understand (politics, science, etc).

      Sorry, quantity != quality. A million monkeys at a million typewriters; only a few of them are producing the works of Shakespeare.
  • That's 900,000 posts (Score:4, Informative)

    by epeus ( 84683 ) on Sunday August 14, 2005 @05:57PM (#13318143) Homepage Journal
    I run the spiders at Technorati, and it is 0.9 million posts a day, which Kevin Burton had correct in the post cited. Is this the no dot effect?
  • by Rob Carr ( 780861 ) on Sunday August 14, 2005 @05:58PM (#13318150) Homepage Journal
    Most blogs are both drivel and worthwhile, depending upon the individual reading them (including mine). They become worthwhile in context.

    If a friend is going through cancer treatment, her blog is worthwhile. If you find a youth group leader like yourself and can learn from his posts, his blog is worthwhile. A mother fighting for her health so that she can take care of her two sons and husband can share insights that are worthwhile. Someone fighting depression might have a worthwhile blog. A grandmother might have a view of the world that makes her blog worthwhile, just to get a different view. Perhaps a blog by someone who totally disagrees with you will be worthwhile, just to stretch your mind.

    I've just described why I read the blogs on my blog roll. You can choose differently.

    Top political blogs? You can find them easily among Technorati's top 100 list. Tags at Technorati will let you pick out specialties like science or "Master Blasters" or diabetes or the Tour de France. Google will turn up blogs if you search right, which is the trick for using Google.

    "Worthwhile" is a much more difficult variable to calculate than "bandwidth." Perhaps it's the sheer variety of blogs that makes them interesting, because they are so individual and someone, somewhere will speak to your mind or your heart.

    Worthwhile is what's worthwhile to you, and maybe to very few others. Not everyone will agree, and that's not a bad thing.

  • with 9M posts, what percentage of them have any real value, and how do busy people find that .001%?

    Either I don't understand this question, or it's a completely idiotic question. What the fuck does "real value" mean? The maxim "One man's trash is another man's treasure" is especially important when talking about information--the asymmetry of value from person to person is even bigger than when you're talking about physical goods.

    Considering the second half of the question, though, one might re-phrase t
  • Value (Score:5, Interesting)

    by lakin ( 702310 ) on Sunday August 14, 2005 @06:01PM (#13318167)
    what percentage of them have any real value

    I had for a while held the view that most blogs out there are pointless. Some can be insightful and some are basically used as company press releases, but most are people talking about their day's activities that few people really care about, and a few of my friends have blogs like these. When I asked one what's the point, she said she just blogs stuff she would normally mention to many people on MSN throughout the day. It's not meant to have value to anyone on Slashdot, be hugely insightful, or detail some breathtaking new hack; it's simply another way for her to talk to friends (that doesn't involve repeating herself).
  • "Remember that nut that sat beside you on the bus? They guy that had a water-powered car but was too afraid to go public in case the oil companies came after him? You remember how glad you were when your stop came and you got off? This morning, though, when you walk into town for the Sunday papers, today, that nut is everyone you meet!

    You've just woken up in....The Blogosphere! De-de-de-de, de-de-de-de.....

    The answer to the article's question is: nothing; there's no point in wading through the output of b

  • This is just a bunch of numbers spouted, with no useful context, and then a broad statement made about value.

    The days when 9 megabytes or 5 Mbps sustained for a popular server was considered out of line are long gone. People want to communicate, and they will use whatever resources are needed. How many resources do we use so that we can guarantee that Tuan will get his present from grandma? How many resources do we use so that an arbitrary firm can mail a postcard to everyone in the country? How many resourc

  • by StikyPad ( 445176 ) on Sunday August 14, 2005 @06:22PM (#13318231) Homepage
    search query: blog -1337 -teh -kewl -hugz -omg -bored -lol -lmao -"can't wait to get my drivers license"
    • by Rosco P. Coltrane ( 209368 ) on Sunday August 14, 2005 @06:44PM (#13318322)
      search query: blog -1337 -teh -kewl -hugz -omg -bored -lol -lmao -"can't wait to get my drivers license"

      Ah! I guess you missed the following blog entry then:

      Hi everybody, it's Sunday today and I'm bored. So I guess I'll get on with my homemade engine that runs on water. As you know, it's almost finished, and I expect it to put out as much as 1337 horsepower. The reliability of the motor should be good too: my friend, Ray Kewl in engineering, said it should provide well beyond 10,000 TEH (total engine hours).

      Update: the engine is in the car, and it runs! on nothing but water! OMG I'm so happy! check the pictures and the diagrams to build your own. I can't wait to get my drivers license renewed so I can take it for a spin!

      • Yes, but with a post like that, it should end up on Slashdot in a few days anyway... after every news site has posted it a few days earlier.
        • it should end up on Slashdot in a few days anyway

          It's going to.

          Someone is going to link to the original post [slashdot.org] on their blog. That article will be recopied a few times until any link to Slashdot is lost.

          Some news reporter, hoping to pick up on the "next big thing" will take it to be a legitimate report.

          When you watch the cable news and see an over-hyped story about a car that runs on water, ask yourself if it started out as a joke on Slashdot.

    • [joke] I wonder why nothing comes up when I search for that in Google? [/joke]

      Wait--real life is more humorous--the GOP [rnc.org] is the first listing!
  • semi off topic (Score:3, Interesting)

    by cookiepus ( 154655 ) on Sunday August 14, 2005 @07:26PM (#13318502) Homepage
    Since we're on the subject of blog aggregation, can someone recommend a GOOD way to aggregate?

    Every single RSS aggregator I've come across treats my RSS world similar to an e-mail reader, where each blog is a 'folder' and each entry is equivalent to an e-mail.

    This is decidedly NOT what I want and I don't understand why everyone's writing the same thing.

    My friend is running PLANET, which builds a front page out of the RSS feeds (it looks kind of like the Slashdot front page, where adjacent stories come from different sources and are sorted in chronological order, newest on top).

    PLANET seems to be a server-side implementation. My buddy's running Linux and he made a little page for me but it's not right for me to bug him every time I want to add a feed.

    Is there anything like what I want that would run on Windows? And if not, why the heck not?

    By the same token, why doesn't del.icio.us have any capacity to know when my links have been updated?

    For what it's worth, here's my del.icio.us BLOGS area with some blogs I find good.

    http://del.icio.us/eduardopcs/BLOG [del.icio.us]
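
    For what it's worth, the river-of-news behaviour PLANET provides isn't hard to sketch on the client side either; this assumes the third-party feedparser library and uses placeholder feed URLs:

      # Merge several feeds into one reverse-chronological "front page",
      # the way Planet-style aggregators do.
      import time
      import feedparser   # third-party: pip install feedparser

      FEEDS = [                           # placeholder feed URLs
          "http://example.org/index.rdf",
          "http://example.net/atom.xml",
      ]

      entries = []
      for url in FEEDS:
          parsed = feedparser.parse(url)
          for e in parsed.entries:
              stamp = e.get("published_parsed") or e.get("updated_parsed")
              if stamp:
                  entries.append((stamp, parsed.feed.get("title", url), e.get("title", "")))

      entries.sort(reverse=True)          # newest first
      for stamp, source, title in entries[:50]:
          print(time.strftime("%Y-%m-%d %H:%M", stamp), "-", source, "-", title)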
  • "And more importantly, with millions of posts, what percentage of them have any real value, and how do busy people find that .001%?"

    Busy people don't waste time on blogs. Blogs are the realm of internet kooks ranting about the latest conspiracy behind secret intelligence memos, not sane people with limited free time.
  • and how do busy people find that .001%?

    They don't; they really have better things to do. The media actually does that for us already... What, me worry?
  • As I explained (as long ago as 2000) in Miski: A White Paper [archive.org], we need a system with the following features:
    • Each producer of link suggestions has a unique address, something like channel/user@example.com. (This implies resolution via DNS, but probably people will end up using the URL of an XML file.)
    • The channel address points to the producer's server.
    • The subscriber to a channel tells their server to subscribe to the channel. The subscriber's server talks to the producer's server.
    • When the producer makes
  • The long tail (Score:4, Informative)

    by Eivind Eklund ( 5161 ) on Monday August 15, 2005 @03:28AM (#13319836) Journal
    I think most of these blogs have something of interest to somebody, and that the value of blogs is in their diversity - in a lot of things having value to a small number of people.

    This effect is called the long tail [wikipedia.org] effect, and is visible all over the web. For instance, Amazon.com says that every day it sells more copies of books that didn't sell yesterday than of books that *also* sold yesterday. In other words, they sell (in sum) more of the items selling less than one copy every other day than of the items selling (by type) more than that.

    Eivind.

  • Technorati recently published that they're seeing 900k new posts per day. PubSub says they're seeing 1.8M.

    PubSub later admitted they may have been double-counting.
