Using Google to Calculate Web Decay

scottennis writes: "Google has yet another application: measuring the rate of decay of information on the web. By plotting the number of results at 3, 6, and 12 months for a series of phrases, this study claims to have uncovered a corresponding 60-70-80 percent decay rate. Essentially, 60% of the web changes every 3 months." You may be amused by some of the phrases he notes as exceptional, too.
  • At last! (Score:1, Interesting)

    by ringbarer ( 545020 )
    This kind of thing can be a good application of Google's SOAP interface!
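
    For the curious, here is a rough sketch of how that might have looked with the Google Web APIs of the era. The SOAP service, its doGoogleSearch call, and the estimatedTotalResultsCount field did exist, but the service is long retired, the parameter order here is reconstructed from memory, and the SOAPpy usage is illustrative only:

    # Historical sketch only: assumes the old Google SOAP Search API (api.google.com)
    # and the SOAPpy library, both long retired; the parameter order follows the 2002
    # GoogleSearch.wsdl as best remembered, so treat it as an assumption.
    from SOAPpy import WSDL

    WSDL_URL = "http://api.google.com/GoogleSearch.wsdl"
    LICENSE_KEY = "your-google-api-key"  # placeholder

    def estimated_hits(phrase):
        """Return Google's estimated total result count for an exact-phrase query."""
        proxy = WSDL.Proxy(WSDL_URL)
        result = proxy.doGoogleSearch(
            LICENSE_KEY, '"%s"' % phrase,   # key, query
            0, 1,                           # start, maxResults (only the count matters)
            False, "", False, "",           # filter, restrict, safeSearch, lr
            "latin1", "latin1")             # ie, oe
        return result.estimatedTotalResultsCount

    print(estimated_hits("blessed are the cheesemakers"))
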
  • Are google claiming that they can check through the entire internet inside a timescale of 3 months, ready to check through again at the start of the next quarter?

    Surely this can't be true. Check Google's cached pages - see the dates on there?

    Google is turning into another history book [archive.org].
    • Our weblogs show that google visits our site (www.up.org.nz) at least monthly, and it is by no means a huge traffic-drawing site in the global sense. Its last visit was on 13th April, drawing 1888 hits...

    • Are google claiming that they can check through the entire internet inside a timescale of 3 months, ready to check through again at the start of the next quarter?

      I don't know if that's all that far-fetched. I know Googlebot last hit my site on April 7th, crawled every page in my domain over the course of 12 hours, and current searches of their cache show content I'd updated at that time. They seem to visit every month or so.

      Perhaps it's based on the traffic they detect to a given site through their CGI redirects... but I'm not a large site, my primary webserver is a Pentium 90. :)

      crawl4.googlebot.com - - [07/Apr/2002:13:36:32 -0400] "GET /broken_microsoft_products/ HTTP/1.0" 200 128854 "-" "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"

      • "Perhaps it's based on the traffic they detect to a given site through their CGI redirects... but I'm not a large site, my primary webserver is a Pentium 90. :)


        crawl4.googlebot.com - - [07/Apr/2002:13:36:32 -0400] "GET /broken_microsoft_products/ HTTP/1.0" 200 128854 "-" "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"


        wow, not only are you running your domain off a pentium 90, but you also have reverse DNS lookup turned on in the logs... that's gotta be giving you a decent performance hit, no?


        • wow, not only are you running your domain off a pentium 90, but you also have reverse DNS lookup turned on in the logs... that's gotta be giving you a decent performance hit, no?

          Well, it doesn't actually handle DNS; that's felix, an old 486DX-33 running FreeBSD, port-forwarded behind my gateway (I've only got the one IP address). But yeah, I'm sure each logger thread gets held up waiting for resolution.

          More impressively, dynamic content. (Most of the pages are generated dynamically as shtml through the x-bit hack; nothing sophisticated, mostly just inserting templates and stuff for color scheme because I'm too lazy to type long BODY tags) And anywhere from 2,000 to 5,000 hits per day. And only 48 megs of RAM. And it's a popular Linux distro's default kernel, not recompiled for that machine. Even so, it hardly ever breaks a sweat.

          As you can tell, it's like, zero performance tuning. But it still cranks out a SETI@Home unit every day or two.

          As for reverse DNS itself, yeah, I like it. :) It's a nice luxury.

  • Not exactly decay... (Score:4, Interesting)

    by QuantumFTL ( 197300 ) on Tuesday April 30, 2002 @04:29AM (#3434436)
    It seems to me that in a way, the web is like an organism whose smaller constituents are constantly (or not so constantly, depending on the webmaster) renewing themselves. It's a truly adaptive medium, and thus drastic change over short periods like this, as interest shifts, should be quite expected.

    That said, this is one of the many ways in which Google is an invaluable tool for research. Not just finding information, but generating it. Thanks Google!

    • by Anonymous Coward
      I think that the larger organisms are renewing themselves on a regular basis as well. If you look at large sites - any of the Microsoft bundle, BBC News, Financial Times - they are all changing from hour to hour, or maybe day to day for the non-news pages.

      It's the medium size businesses that don't seem to be grasping the web and the fact that you need to have a site that is dynamic in so far as it keeps people interested and possibly entertained.

      I'm lucky in that the company I work for is a small firm and a publisher, so we have daily news content as well as on-line versions of our weekly and monthly publications (HTML and PDF downloads!) being uploaded all the time - so our web traffic is growing constantly - slowly, but it hasn't seen a decline in the past two years.

      M@t :o)
  • For once, that is on topic. I'm glad to see that the phrase 'bill gates sucks' had the lowest decay rate of the phrases that the guy tested for.
  • I actually always wondered about this. Really interesting, although I guessed that there would be a rapid rate of decay due to the nature of "information." Things get old and pass with time. An interesting application of this would be to keep records over a number of decades and figure out the average life/revival span of certain trends.
    • Information vs WWW (Score:1, Interesting)

      by castlan ( 255560 )
      The nature of information is decidedly ephemeral compared to the static nature of much of the web. Perhaps the surge in Weblogging has altered this dynamic even more than the hypercommercialization, but I'll dispute the 60% figure if it is based only on those four phrases. Much of the early Web was fairly static research and information hosted on .edu domains from what I gather. Since the tide shifted away to .commercialization and tripe, the nature of "information" has little to do with the state of the web, and more to do with tidiness. How much of the Web is long abandoned fan sites and dusty old means abandoned from the "information superhighway"?

      In fact, Information Superhighway would be a great data point for this subject. Another consideration, which would be difficult to accommodate, is the reality of mirrors and shuffling pages to different URLs.

      Most importantly, I strongly hope that your "interesting application" never gets implemented, because I can see no application of the resulting data that doesn't make my blood run cold. Psychological Warfare and hostile advertising are the bane of the Post-WWII US, and (likely) the world. Propaganda is a pernicious technology, and I fear further development in this area.

      Okay, I'll admit that was a touch trollish. Because the Psych. Warfare genie was already released from its Nazi bottle and invited into the US (along with other valuable sciences), it's a little late to advocate repression of this technology. Yet I still reel from my country's increasingly malevolent commercialism, which has spun off from Capitalism without any of Capitalism's redeeming social aspects. I almost want to become a socialist, until I consider that this state of affairs sprung from the National Socialist state.

      In any case, while the WWW may be evolving, it certainly isn't in the Darwinian sense that was likely intended. Vestigial Geocities homepages, long abandoned, are plentiful and less temporary, giving search engines a better shot at crawling them than dynamic, or "living", news portals. This sickly "creature" is more of a construction than the product of evolution (unless you consider pre-Charles Darwin senses of the word). If you want to research the nature of information and survivability/mutability, the Freenet Project would provide a much more fruitful environment, if it ever reached widespread usage. I would have less strenuous objections to classifying Freenet as an "ever-evolving creature".
  • blessed (Score:4, Funny)

    by thanjee ( 263266 ) on Tuesday April 30, 2002 @04:34AM (#3434447) Journal
    How long until all the cheesemakers have fully decayed and are no longer blessed?

    I don't look forward to that day.

    Long live cheese and cheese makers!
  • Web Death (Score:4, Interesting)

    by svwolfpack ( 411870 ) on Tuesday April 30, 2002 @04:35AM (#3434450) Homepage
    It would also be interesting to see how much of the web no longer exists... like at what rate the web is dying. God knows there's enough dead links out there...
    • I seem to recall reading a New Scientist article (in print) that said someone had worked out that the half-life of the web was 18 months, so a given link has a 50% chance of being dead after 18 months.

      Can't find any links unfortunately (the results of a search for anything involving the words "half-life" tend to be somewhat skewed...)
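
      To put a number on that: with an 18-month half-life, the fraction of links still alive after t months is 0.5^(t/18). A quick sketch of that arithmetic:

      # Survival probability for a link, using the 18-month half-life figure quoted above.
      HALF_LIFE_MONTHS = 18.0

      def survival_probability(months):
          return 0.5 ** (months / HALF_LIFE_MONTHS)

      for t in (3, 6, 12, 18, 36):
          print("after %2d months: %2.0f%% of links still alive" % (t, 100 * survival_probability(t)))
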

    • Oh, most of the web is still around... it just looks like pages are decaying because every link you click has already been Slashdotted ;)
    • by inKubus ( 199753 )
      Yeah, in bytes. I wonder how many digits that would be?
    • we're an isp.. i remember the first time someone contacted me about this horrible thing.. they wanted us to redirect all our 404 traffic to a page that would spawn popup spam. seems like that's what half of my web browsing is these days. find a page with links. click a link. a window pops up, and one under. close both, and the main page says the page doesn't exist. *sigh* the next one will work, though.. although it too will spawn a few windows. it's disenchanting to work on these systems when most people are spoiling the experience with their spammy goo. (and no we never sold our 404 traffic). it's kinda sad.. when i get to a plain old apache default error message these days, i get all teary eyed and remember the good old days.

      now it's all about finding open relays to megaphone your get-rich-quick idea that you copied from some other guy to 30 million people, praying that you get at least 40 back. course if you decide to bite just to mess with them, you find that they don't even check the box. what's the point? arggvhhh it's just frustrating. it's completely trashed the fun of having email. and the web.

      and the ghost of the old web, the one with low noise, is not viewed as dead, merely its soul is an HTTP redirect to someone's digital billboard, completely unrelated and unwanted.
    • I suppose the rate at which new links are created is roughly a positive coefficient that outweighs the negative coefficient associated with death of a link.

      Reminds me of calculations for population growth with k_growth and k_death (a rough sketch of that sort of model follows below).

      So, two questions:

      1. What about deliberately short-lived links like the kind of md5-flavored arguments I get from my favorite news sites hoping to track my usage? How do those affect link life statistics?
      2. What's the oldest link on the web?
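
      As a rough sketch of the population-style model alluded to above (the starting count and both rate constants are made up purely for illustration):

      # Net link growth when links are created at rate k_growth and die at rate k_death:
      # N(t) = N0 * exp((k_growth - k_death) * t). All numbers below are illustrative.
      import math

      def link_count(n0, k_growth, k_death, t_months):
          return n0 * math.exp((k_growth - k_death) * t_months)

      n0 = 1000000                      # starting number of links (made up)
      k_growth, k_death = 0.08, 0.05    # per-month rate constants (made up)
      for t in (3, 6, 12):
          print("month %2d: ~%.0f links" % (t, link_count(n0, k_growth, k_death, t)))
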
  • For a few moments, I thought that the phrase "base" (for baseline) on his graph was a reference to "all your base are belong to us." It would have been neat to see how quickly that phrase appeared, then decayed!
  • After reading the article, I found a few things to be disturbing...

    First of all, he showed very little of his actual data. This makes it difficult to tell if his interpretation is correct.

    Thirdly, what the heck was this guy smoking when he came up with these search phrases? Most of them seem to be tangential to the main purpose of most web sites on the internet.

    Finally, Timothy, why didn't you put the foot icon by the story? :)

  • I think one of the flaws in any analysis of decay on the web is the fact that most news sites keep an infinite archive of almost everything they have ever published online. The specific phrases probably don't represent a large enough sample to properly reflect all sites. Sure, he says he used many phrases, but all he gives us is "bill gates sucks", "life's short play hard", "blessed are the cheesemakers", and "late at night". To properly do the study, he should have used a random word or phrase generator and tested the decay of that.

    But, it is interesting to see his results. I can only imagine that if Archive.org [archive.org] did a study like this, they would be able to make a more legitimate conclusion. Perhaps some collaboration is in order?

    • This looks like something tossed together in a few minutes, just to get posted on slashdot =) Very thin on details and data.... Google itself should do this type of analysis - publish something like the zeitgeist [google.com]

      -Berj
    • Doesn't Google keep improving its search algorithm so that only relevant sites are provided in the hits? Did this "researcher" hit the link that includes the filtered out near duplicates?
  • Obligatory Full Text (Score:5, Informative)

    by rosewood ( 99925 ) <<ur.tahc> <ta> <doowesor>> on Tuesday April 30, 2002 @04:41AM (#3434469) Homepage Journal
    I only do this since I know an Angelfire page will get /.'ed and hit its bandwidth limits fast! However, there is a pretty Excel chart on there, so bookmark it and come back much later.

    Web Decay
    by Scott Ennis
    4/26/2002
    Knowing how anxious most companies are to keep their web content "fresh," I was curious how "fresh" the web itself was.

    In order to come up with a freshness rating for the web you need to sample a very large number of pages. Not wanting to do this, I opted to use the Google search engine as a method for reviewing the web as a whole.

    My hypothesis is this: By searching Google using some common English phrases and returning results at various time points, a baseline can be reached for the common rate of freshness of overall web content.

    I took the total number of pages found for each given phrase at 3, 6, and 12 months. I calculated a percentage for each of these points based on the total number of results found with no date specified.

    For example:

        Phrase              3 mos.   6 mos.   12 mos.   Total
        buy low sell high    4700     5470     6200      7830
                              60%      70%      79%      100%

    Note:
    This method excludes any pages which are not text and more specifically, not English text.
    This method relies on a random sampling of phrases.
    Using this methodology I determined that the average rate of decay of the web follows a 60-70-80 percent decline at 3, 6, and 12 months.

    Therefore, if a company wants to maintain a freshness rate on par with the web as a whole, their site content should be updated at the inverse rate. In other words:
    60% of the site should change every 3 months
    70% of the site should change every 6 months
    80% of the site should change every 12 months
    The only way to do this effectively is to either have a very small site, or have a site with dynamically generated information.

    The following graph shows the decay rate for a few phrases. I selected these phrases to display because of their unique characteristics.
    bill gates sucks--This phrase had the lowest decay rate of any phrases I searched.
    life's short play hard--This phrase had the greatest decay rate of any I searched (note: this search was also very small).
    blessed are the cheesemakers--This phrase was relatively small, but demonstrates that quantity of pages may not be important in determining decay rate.
    late at night--This phrase returned the highest number of results of any I searched and yet it also adheres closely to the 60-70-80 rule.

    Conclusion:

    Web content decays at a uniform, determinable rate. Sites wanting to optimize their content freshness need to maintain a rate of freshness that corresponds to the rate of web decay.
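
    As a concrete restatement of the method, using the article's own "buy low sell high" counts (the only raw numbers it shows); everything below is just the percentage arithmetic described above:

    # counts[m] = Google results restricted to roughly the last m months;
    # total = results for the same phrase with no date restriction.
    counts = {3: 4700, 6: 5470, 12: 6200}
    total = 7830

    for months in sorted(counts):
        share = 100.0 * counts[months] / total
        print("last %2d months: %d of %d results = %2.0f%%" % (months, counts[months], total, share))
    # Prints roughly 60%, 70% and 79% -- the "60-70-80" pattern the article reports.
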
    • by Frank T. Lofaro Jr. ( 142215 ) on Tuesday April 30, 2002 @12:10PM (#3436091) Homepage
      Why do so many people use crap like Angelfire, Tripod, Homestead with all their bandwidth limits, restrictions, ads and blocking of remote image loads?

      Not to mention that well over 50% of the time any search engine result that points to Angelfire in particular points to a 404 Not Found. This is much more than what I experience with other sites. Do their users get kicked off often, or just go away, or what? I don't even bother clicking on those results unless it looks like the content is truly compelling. And thank God for Google's cache.

      I can understand if some truly can't afford hosting, but even for these people, Geocities is much better!

      Somehow I doubt the majority of those people using Angelfire, Tripod, etc can't afford hosting.

      Well, after the dot-com world gets a little more squeezed, those sites may no longer exist. Too bad that many people won't bother rehosting their content and will just drop off the web.

      olm.net [olm.net] offers Linux [linux.com] based hosting for under $9/month. No I don't work for them, but I am a (satisfied) customer.

      $9 a month - and you won't piss off your users.

      (Yes I know their other packages are more - but the $9 a month package is better than any of the free services)

      Don't EVEN get me started on organizations and commercial BUSINESSES (ack!) that use free hosting - that is so unprofessional. I don't think I'd want to do business with a company (even a local store) that wouldn't/couldn't pay $9 a month to have a less annoying and more reliable website.

      Of course, some of the content out on the Web isn't even worth $9/month, heck some of it has NEGATIVE worth. ;) Of course, then it isn't worth looking at, so who cares if it is even hosted.
      • "Somehow I doubt the majority of those people using Angelfire, Tripod, etc can't afford hosting." - but for most of the sites - like blogs, pictures of my family and pets - people don't think its worth paying! Also once you change address you lose your search engine rankings.
  • by Seth Finkelstein ( 90154 ) on Tuesday April 30, 2002 @04:42AM (#3434470) Homepage Journal
    For a more extensive (although older) study, take a look at

    Digital libraries and World Wide Web sites and page persistence [informationr.net]

    That said, the Web and its component parts are dynamic. Web documents undergo two kinds of change. The first type, the type addressed in this paper, is "persistence" or the existence or disappearance of Web pages and sites, or in a word the lifecycle of Web documents. "Intermittence" is a variant of persistence, and is defined as the disappearance but reappearance of Web documents. At any given time, about five percent of Web pages are intermittent, which is to say they are gone but will return. Over time a Web collection erodes. Based on a 120-week longitudinal study of a sample of Web documents, it appears that the half-life of a Web page is somewhat less than two years and the half-life of a Web site is somewhat more than two years. That is to say, an unweeded Web document collection created two years ago would contain the same number of URLs, but only half of those URLs point to content. The second type of change Web documents experience is change in Web page or Web site content. Again based on the Web document samples, very nearly all Web pages and sites undergo some form of content change within the period of a year. Some change content very rapidly while others do so infrequently (Koehler, 1999a). This paper examines how Web documents can be efficiently and effectively incorporated into library collections. This paper focuses on Web document lifecycles: persistence, attrition, and intermittence.

    Sig: What Happened To The Censorware Project (censorware.org) [sethf.com]

  • Credibility? (Score:2, Interesting)

    by Gossy ( 130782 )
    Is it me, or does this 'research' simply look like something a bored guy has thrown together from a few minutes' work, then submitted to Slashdot to see if it gets posted?

    From the evidence, he searched for very few phrases. The sample size is way too low to be representative of the web - which some estimates put at several billion more pages than there are people on the planet! There are no signs of more than about 5 different phrases being searched for here...

    Can a few simple searches on Google really generate a large enough sample to draw such large conclusions?

    The report is one page long, hosted on Angelfire. There is no substantial data to back up his claims. Is this report reliable in any way?

    I'm amazed this got posted on the front page of Slashdot..

    • What about the U.S. census? Many surveys of 'scientific' reputation use small sample sets to pose hypotheses. What matters is that a 3rd party either confirms or disaffirms the data. Any takers?

        • Yeah, but statistical samples are usually based on more than four sets or cases. If this study had checked, say, a few dozen search phrases, and was coming back with similar results, I would be a touch more impressed with it. And if he had actually spent more than the apparent 3 minutes every 3 months on this and actually used a couple hundred search phrases AND was still getting the same decay rates, then it might just be indicative of something.

        Kierthos
    • Agreed. I'm a bit in the dark on *how* this guy came up with his numbers.

      I calculated a percentage for each of these points based on the total number of results found with no date specified.

      IMHO, this is a bit vague to be called anything but conjecture.
    • Comment removed based on user account deletion
      • Well, statistically, if you can develop a good enough test criterion, you could determine the rate with a very, very small sample. This is how some of the more reputable firms can survey 250 voting American adults and usually be within 3% of what the American public will do during the upcoming election.
        No, this is because of the (surprising) fact that the accuracy of the survey is dependent only on the sample size, not on the population size. 250 is not a very small sample. The fact that it's 250 out of 100 million or whatever is irrelevant.
    • Actually, it doesn't even make sense. If 60% decays in 3 months, shouldn't 60% of the remaining 40% decay over the next 3 months (for a total of 84% at 6 months)? And then over another 6 months, 84% of what remains would change, giving about 97%, or 1 - (2/5)^4, of the web changed at 12 months if 60% changes every 3 months. Unless there's something very funky going on, the rate of decay should stay constant!
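
      A quick way to see the parent's point: a constant 60%-per-quarter decay rate means the surviving fraction after t months is 0.4^(t/3), which matches the article's figures only at the 3-month mark:

      # If 60% of pages change every 3 months at a constant rate, the fraction changed
      # after t months is 1 - 0.4 ** (t / 3). Compare with the article's 60-70-80 claim.
      claimed = {3: 60, 6: 70, 12: 80}
      for t in (3, 6, 12):
          changed = 100 * (1 - 0.4 ** (t / 3.0))
          print("after %2d months: %.1f%% changed (article claims %d%%)" % (t, changed, claimed[t]))
      # -> 60.0%, 84.0%, 97.4% changed; only the 3-month figure agrees.
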
  • archive.org (Score:3, Interesting)

    by mmThe1 ( 213136 ) on Tuesday April 30, 2002 @04:51AM (#3434487) Homepage
    This makes the job of Archive.org [archive.org]-like sites damn tough.

    P.S. Are we losing information at a rate comparable to its generation...?
  • interesting but... (Score:3, Interesting)

    by lowLark ( 71034 ) on Tuesday April 30, 2002 @04:54AM (#3434494)
    He creates a problem for himself by not providing us with his raw data, making any subsequent verification of the trend difficult. In fact, the one data set he gives us:
    Phrase              3 mos    6 mos    12 mos    Total
    buy low sell high    4700     5470     6200      7830
                          60%      70%      79%      100%
    seems to demonstrate the opposite of the trend that he describes. Indeed, a current search on google [google.com] shows about 1,270,000 results (which makes you wonder when he did his searches, given that the current number of results differs by so many orders of magnitude). The methodology also fails to take into account any growth in the size of the web, which could mask the effects of decay.
  • by Anonymous Coward on Tuesday April 30, 2002 @05:08AM (#3434520)
    It is now official - Netcraft has confirmed: The web is decaying

    Yet another crippling bombshell hit the beleaguered web community when recently IDC confirmed that the web accounts for less than a fraction of 1 percent of all server usage. Coming on the heels of the latest Netcraft survey which plainly states that the web has lost more market share, this news serves to reinforce what we've known all along. The web is collapsing in complete disarray, as further exemplified by failing dead last [samag.com] in the recent Sys Admin comprehensive networking usage test.

    You don't need to be a Kreskin [amdest.com] to predict the web's future. The hand writing is on the wall: the web faces a bleak future. In fact there won't be any future at all for the web because the web is decaying. Things are looking very bad for the web. As many of us are already aware, the web continues to lose market share. Red ink flows like a river of blood. Dot-coms are the most endangered of them all, having lost 93% of their core developers.

    Let's keep to the facts and look at the numbers.

    The web leader Theo states that there are 7000 users of the web. How many users of other protocols are there? Let's see. The number of the web versus other protocols posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 other protocols users. Web posts on Usenet are about half of the volume of other protocols posts. Therefore there are about 700 users of the web. A recent article put the web at about 80 percent of the HTTP market. Therefore there are (7000+1400+700)*4 = 36400 web users. This is consistent with the number of Usenet posts about the web.

    Due to the troubles of Walnut Creek, abysmal sales and so on, the web went out of business and was taken over by Slashdot who sell another troubled web service. Now Slashdot is also dead, its corpse turned over to yet another charnel house.

    All major surveys show that the web has steadily declined in market share. The web is very sick and its long term survival prospects are very dim. If the web is to survive at all it will be among hobbyist dabblers. The web continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, the web is dead.

    Fact: the web is dead.

  • Essentially, 60% of the web changes every 3 months.
    I think that is incorrect, according to the "researcher". He should have said, "Essentially, 60% of the web is getting older every 3 months."
  • by Raedwald ( 567500 ) on Tuesday April 30, 2002 @05:11AM (#3434526)
    I'm not impressed. The article does not define what he means by decay, or how he measured it, except in the vaguest of terms. The analysis of the data is poor; anyone interested in decay would suspect some kind of exponential decay. They would therefore plot the data logarithmically, and perhaps calculate a half-life. Piss poor.
    • The analysis of the data is poor; anyone interested in decay would suspect some kind of exponential decay. They would therefore plot the data logarithmically, and perhaps calculate a half-life. Piss poor.

      So when can we expect to see your rigorous analysis? Or were you just bitching?
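
      For what it's worth, the analysis the grandparent asks for is only a few lines. One reading of the write-up (an assumption, since it never defines its terms) is that 40/30/20 percent of content is left unchanged at 3/6/12 months; fitting ln(fraction) against time then yields a decay constant and a half-life:

      # Hedged sketch of the suggested log-linear fit; requires numpy.
      import numpy as np

      months = np.array([3.0, 6.0, 12.0])
      unchanged = np.array([0.40, 0.30, 0.20])   # 1 - (0.60, 0.70, 0.80), an assumed reading

      slope, intercept = np.polyfit(months, np.log(unchanged), 1)
      print("decay constant: %.3f per month, half-life: %.1f months" % (-slope, np.log(2) / -slope))
      # The fitted intercept lands nowhere near 0 (i.e. 100% unchanged at t = 0), which is
      # itself a sign the 60-70-80 numbers don't follow a simple exponential decay.
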
  • The "Study" does not take into account new web pages that have replaced the old.

    But then again it is an interesting piece of trivia
  • by thegoldenear ( 323630 ) on Tuesday April 30, 2002 @05:14AM (#3434535) Homepage
    Tim Berners-Lee wrote: "There are no reasons at all in theory for people to change URIs (or stop maintaining documents), but millions of reasons in practice.": http://www.w3.org/Provider/Style/URI and advocated creating a web where documents could last, say, 20 years and more
    • by Anonymous Coward
      Nobody's ever going to keep content on the web that's 20 years old. [google.com]
    • by Fweeky ( 41046 ) on Tuesday April 30, 2002 @10:06AM (#3435254) Homepage
      The key to making links that don't rot is to design a URI schema that's both independent of any redesigns of your site and independent of any particular way of doing things.

      Let's look at a few examples.

      The URI to this page is http://slashdot.org/comments.pl?sid=31884&op=Reply&threshold=3&commentsort=3&tid=95&mode=nested&pid=3434535 [slashdot.org] - what is it telling you that it doesn't need to?

      Well, for a start, that .pl is a bad idea. What happens in 4 years' time when Slashdot is running on PHP, or Java, or Perl 7, or a Perl Server Page, or ASP? Then there's the difficult-to-decode query string that tells you nothing about the link other than "this is the information the server needs to locate your page at the moment", and doesn't give you much faith in it living forever.

      Now let's look at an equivalent Kuro5hin [kuro5hin.org] URI.

      http://www.kuro5hin.org/comments/2002/4/29/22137/6511/51/post#here [kuro5hin.org] is a URI to reply to a random comment on k5.

      For a start, you can't tell what application or script is serving you the page, and you can't see what type of file it's linking to; both these things can and will change over time.

      Second, there's a date embedded in there. If the developers ever decide to change the meaning of '/comments', they can use that date as a reference: any URI issued before the change can be mapped onto the new schema or passed off to legacy code.

      Now let's take an apparently good link on my now horribly out-of-date site, aagh.net [aagh.net].

      http://www.aagh.net/php/style/ [aagh.net] links to an article on PHP coding style.

      Certainly, hiding the fact that I'm using PHP to serve this document is good, and shortening the URI to remove the useless query string is good (you can't see one? Good, that's the point). However, this URI may well stop working in a few weeks: I'm planning a redesign, and the old schema may not fit in well with it.

      A short yyyymm in there could have made all the difference; a simple if check on the URI's issue date would keep it working.

      The moral of the story: Think about your URIs when you're designing a site. Try to remove as much data as you can without painting yourself into a corner.
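
      A minimal sketch of the date-keyed mapping idea described above; the URI layout, the cut-over date, and the "/legacy" prefix are all invented for illustration:

      # Route legacy, date-stamped URIs to whatever replaced the old schema; the embedded
      # date tells you which generation of the site a link belongs to.
      import re
      from datetime import date

      SCHEMA_CHANGE = date(2003, 1, 1)   # hypothetical date the "/comments" layout changed
      LEGACY_URI = re.compile(r"^/comments/(\d{4})/(\d{1,2})/(\d{1,2})/(.+)$")

      def route(uri):
          m = LEGACY_URI.match(uri)
          if not m:
              return uri                     # already in the current schema
          issued = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
          if issued < SCHEMA_CHANGE:
              return "/legacy" + uri         # hand off to old code or a redirect table
          return uri

      print(route("/comments/2002/4/29/22137/6511/51/post"))
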
      • The key to making links that don't rot is to design a URI schema that's both independent of any redesigns of your site and independent of any particular way of doing things.

        You can't mod_rewrite a domain name that you have lost control over. If you have a popular site hosted on a university's server, and then you graduate, what do you do? If you put up a site and some Yakkestonian trademark holder takes it from you in WIPO court, forcing you to go to Gandi.net to get a new domain, what do you do?

        • > You can't mod_rewrite a domain name that you have lost control over.

          Nope, that's too bad. You can mod_rewrite a domain name you do have control over, though. You can also see if you can get the new owners to redirect to your new domain.

          If, say, all your URIs start with a date, you ask the new owners to redirect any URI containing a '/yyyy/mm/dd' earlier than the date you lost the domain to your new site. You may not get it all or even most of the time, but the option is still there.

          Alas, this is one of those cases where URNs [ietf.org] would come in handy.
      • Someone with mod points and a clue about how to organize a web site, please mod the parent up as insightful (or informative).

        If I had mod points today, I would do it ... the web needs more thought put into its architecture and less put into its look and feel.
    • URLs: people change hosts, usually due to money concerns

      filenames: people change languages (PHP, Perl, ASP, etc.), site layouts, functionality.

      filenames: intentionally changed to prevent deep linking (heh heh)

      When I change a URI it is usually because I'm changing the logical structure of that program. However I also usually check the referrer logs, and if there has been an outside referral then I will put in a redirect for the old file, and contact the site that had the link to ask them to change it.

      There is no excuse for having broken links on your own site, though it does happen to the best of us :)

      Travis
    • My main site is going on 4 years old and still has the same core pages and link structure as when it was first uploaded. The main page, and certain subpages, are linked from around 100 other sites (the list has grown every time I check it); I rely on these referrals for traffic, but bedamned if I'm gonna chase 'em all down and get everyone to fix their links. Easier to retain the existing structure, or have a duplicate page (old name, same content) if needed to maintain link integrity.

      And this costs me nothing but "oh yeah, must remember to upload index.html as well as index.htm".

      I expect that my sites will be just as valid 20 years from now, assuming Earthlink is still in business and still hosting it. (Yeah, I should get my own domain names, but..)

      If you have dynamic content, your needs might differ. But for informational sites, change for the sake of change is usually a Bad Thing.

  • by weave ( 48069 ) on Tuesday April 30, 2002 @05:20AM (#3434550) Journal
    Do your part to stop web decay. Include this in a cron job. For best results, be sure to brush, I mean touch, three times a day...

    find /var/www/html -name '*' -exec /bin/touch {} \;
    • this highlights a valid point. page changes do not equate to information loss. page changes on a blog, or web-board, news site or olm are almost all additions to an overall mass of indexed content that does not change much, apart from perhaps the ads, and index or contents pages that change regularly. pages such as these /. comments are always changing, but seldom lose information. the information is continually being sorted for relevance by ant-like readers.

      this 'study' suggests to me that there is room for real scientific investigation into the nature of massively webbed information. and google very likely provides a useful tool in the information-scientist's investigative arsenal.

  • The link now says:

    Temporarily Unavailable

    The Angelfire site you are trying to reach has been temporarily suspended due to excessive bandwidth consumption.

    The site will be available again in approximately 2 hours!

  • Study? (Score:4, Insightful)

    by Anonymous Coward on Tuesday April 30, 2002 @05:22AM (#3434556)
    Wow! What a wonderful, in-depth study! Is there any link to a scientific paper on that page that I am missing, or is that everything? I mean, how can someone claim something by just showing us a few numbers and an Excel graph?

    I appreciate the topic very much, but some more material on it is needed. This study wouldn't be complete enough even for high-school homework...

    And look at his homepage (just remove the last part of the url). Most of the pages are more than two years old... that's decay! :)

    Seriously speaking, just look for a few more sources before you accept a story.

  • Study claims ?? (Score:1, Insightful)

    by Anonymous Coward
    this study claims to have uncovered a corresponding 60-70-80 percent decay rate. Essentially, 60% of the web changes every 3 months."

    The guy that submitted this story is the guy that did the study.
  • by BoBaBrain ( 215786 ) on Tuesday April 30, 2002 @05:33AM (#3434574)
    On a similar note, I was curious to see what the CowboyNeal content of the web is. As luck would have it, a precise answer can be found easily.

    Google gives us the following interesting results:

    3,840,000 [google.com] sites contain the word Cheese.

    1,640 [google.ch] sites contain the words CowboyNeal and Cheese.

    Therefore, 4.27083333333333333333333333333e-2% of cheese related sites contain a reference to CowboyNeal.

    As cheese is a randomly chosen word with no special connection to CowboyNeal it is reasonable to assume that 4.27083333333333333333333333333e-2% of all sites contain a reference to The Cowboy (Assuming the number of sites dedicated to CowboyNeal equals the number dedicated to ignoring him).

    So there we have it. The web is 99.957291666666666666666666666667% CowboyNeal free. :)


    I said the results were "precise", not "accurate". :P
  • bored geeks mercilessly devouring the download limit of free sites... I can't help but find it amusing that this guy's decay information has just decayed.
  • I can't even find my page on google anymore. I don't know if it's just because my site's unpopular, or because it has the same name as an online retailer. In any case, it's not searchable anymore, and my guess is that it was removed as "dead".
  • by jukal ( 523582 ) on Tuesday April 30, 2002 @05:45AM (#3434596) Journal
    Once you have put a page on the Web, you need to keep it there indefinitely. Read more [useit.com]. Slow news day, eh?
    • Once you have put a page on the Web, you need to keep it there indefinitely.

      How is this possible if you happen to lose control of the domain? I wrote a letter to Tim Berners-Lee about this issue.

      In "Cool URIs don't change" [w3.org] you wrote that URIs SHOULD never change. However, you left some questions unanswered:

      In theory, the domain name space owner owns the domain name space and therefore all URIs in it.

      However, what happens when ownership of the domain name is suddenly removed from under a user's feet?

      Except insolvency, nothing prevents the domain name owner from keeping the name.

      Wrong. A trademark owner in Yakkestonia can drag a domain name owner into WIPO court and have the domain forcibly transferred. Under ICANN's dispute resolution policy, the plaintiff gets to pick the court, and WIPO has shown itself to find for the plaintiff in an overwhelming majority of cases. This is sometimes called "reverse domain name hijacking."

      And in theory the URI space under your domain name is totally under your control, so you can make it as stable as you like.

      Not if a hosting provider provides only subdomains (or worse yet, subdirectories) and does not offer an affordable hosting package that lets a client use his or her own domain name. Would it be reasonable to construe the "Cool URIs don't change" article as a warning against using such providers?

      John doesn't maintain that file any more, Jane does. Whatever was that URI doing with John's name in it? It was in his directory? I see.

      What is the alternative to this situation for documents that began their lives hosted on an ISP's or university's server space?

      Pretty much the only good reason for a document to disappear from the Web is that the company which owned the domain name went out of business or can no longer afford to keep the server running.

      Or that the hosting provider pulled the document under a strained interpretation of its Terms of Service because the company didn't like the document's content.

      Is there an official W3C answer to these questions?

  • "Temporarily Unavailable

    The Angelfire site you are trying to reach has been temporarily suspended due to excessive bandwidth consumption."

    Imagine that you were renting a building and running a business - a retail store. One day, the owner of the building comes in and padlocks the doors and says "Sorry, you can't re-open till the first of the month - too many people have come into your store".

    What stupidity.
    • If you're running a business, you don't go to Angelfire. In fact, I doubt if they allow anything commercial at all.
      If you're using someone else's building (for no cost, mind), this person certainly has the right to kick you out if he feels "too many people have come by".
  • late at night--This phrase returned the highest number of results of any I searched and yet it also adheres closely to the 60-70-80 rule.
    If he really wanted a large search he should have tried "porn".....
  • by Bowie J. Poag ( 16898 ) on Tuesday April 30, 2002 @07:01AM (#3434691) Homepage


    Looks like 100% of the link mentioned in this article decayed in a little under 5 minutes! ;)
    Cheers,
  • by Per Abrahamsen ( 1397 ) on Tuesday April 30, 2002 @07:35AM (#3434727) Homepage
    I have maintained a number of Google celebrity lists, where celebrities in various categories are ranked based on the number of page hits on Google.

    While the numbers clearly aren't totally random, they are very fragile indeed. Some people have seen a change of two orders of magnitude within a week, and in these cases there have usually been no real-world events that could explain such a change. I guess the Google page-hit numbers depend as much on Google's internal structure as on the number of actual pages on the web.

    So I doubt Google page-hit statistics are a useful research tool. Nonetheless, it can be fun. Here are some Google hall of fame lists:

    PS: Mail me to suggest new entries to the lists.
  • .. I noticed that a paper I wrote a LOT of years ago can still be found online somewhere.. so I suppose that although - on average - web pages do disappear, if those pages contain documents, they will survive the death of their original webpage.

    not that it was an interesting document - just a little paper about nothing important. But still, it's out there.

    My thoughts? I think that as long as a website can be "saved" in some form, its content will be available in other forms for a long amount of time.

    this should make people think, especially those who put copyrights on their webpages, or don't want some information to spread around.

    could we say that information wants to be free as long as it's downloadable?

    hmm..
  • By thoroughly researching the following phrases on www.yahoo.com :-

    Sex
    Warez
    mp3

    I have discovered that, amazingly, my results differ substantially!

    In conclusion, then, it seems that content is ultimately always fresh and there is no indication of decay!
  • by gpmart ( 576795 ) on Tuesday April 30, 2002 @08:08AM (#3434796) Homepage Journal
    In fact, I would argue that good content need not change. Aside from the obvious issues with the small sampling of phrases, the web is, thankfully, not just a series of catch-phrases. In fact, it was designed to carry complex information [vissing.dk] such that it could not be reduced.

    What scares me here is the conclusion that web sites need to change 60% of their content every 3 months. This is not freshness, this is reorganizing for the sake of reorganizing. If you are considering doing this, you had better seriously reconsider your future. It's an interesting study, but a good meme doesn't die simply because the catch-phrases are tired.

    At faculty meetings at our school I sit with a bingo card. On it are a series of catch-phrases. We listen for the catch-phrases and shout out when we have finished our cards. B***SH*T is the game and to reduce your content to a series of reorganized catch-phrases is like having a marketing guy develop foreign policy.

    Anyone willing to write the Perl module that searches for the latest catch-phrases and inserts them randomly into your web content? Yeesh!

  • Ironically, this site on decay adds to the decay.
  • They failed to include one statistic: The decay rate when the Slashdot Effect is applied to a website: 99.998%

    :)
  • by scottennis ( 225462 ) on Tuesday April 30, 2002 @09:24AM (#3435018) Homepage
    The study I posted on Angelfire appears to have reached a bandwidth threshold. I've made the same study available here:

    http://helen.lifeseller.com/webdecay.html [lifeseller.com]

    I've also included a link to the raw data I used.
  • I have never liked the smell of bit-rot, so I like to keep them close by my desk where I can keep them well-watered and pruned. ;)

    For years, whenever I've found an article that I've liked, or data that I thought would be useful later on, I've always either saved the .html file or text off to my hard drive, or (lately) used Adobe Acrobat to get the whole page (preserving graphics and layout in one binary file, rather than 100 extra .gif/.jpg images in a directory somewhere).

    Ryan
  • From the article:


    Therefore, if a company wants to maintain a freshness rate on par with the web as a whole, their site content should be updated at the inverse rate. In other words:
    60% of the site should change every 3 months
    70% of the site should change every 6 months
    80% of the site should change every 12 months
    The only way to do this effectively is to either have a very small site, or have a site with dynamically generated information.


    This seems so totally "if everyone else is jumping off the Brooklyn Bridge, then we should too" that by itself it discredits what sliver of credibility the article had. Using a web-wide average as a guideline for what a particular web site "should do" is meaningless. Web sites should present timely, appropriate information that is useful to those who visit. Some sites deal with material that changes frequently (stock quotes and sports sites should presumably be updated regularly) and some sites deal with material that does not change frequently (no need to redo your tech support documents for long-out-of-production products every week). This notion of "freshness" is ill-defined, poorly measured, and of dubious value.

  • I love how this very page seems to have died... The web is a massive irony generator.
