The Internet

Freecache

TonkaTown writes "Finally, the solution to slashdotting, or just the poor man's Akamai? FreeCache from the Internet Archive aims to bring easy-to-use distributed web caching to everyone. If you've a file that you think will be popular, but far too popular for your ISP's bandwidth limits, you can just serve it as http://freecache.org/http://your.site/yourfile instead of the traditional http://your.site/yourfile and FreeCache will do all the heavy lifting for you. Plus your users get the advantage of swiftly pulling the file from a nearby cache rather than it creeping off your overloaded webserver."
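
The prefix scheme is mechanical, so it's easy to script. A minimal sketch (the helper name is my own, not part of FreeCache):

    # Turn an ordinary file URL into its FreeCache form by prefixing
    # the gateway, per the scheme described above.
    def freecache_url(original_url: str) -> str:
        return "http://freecache.org/" + original_url

    print(freecache_url("http://your.site/yourfile"))
    # -> http://freecache.org/http://your.site/yourfile
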
  • by attaboy ( 689931 ) * on Wednesday May 12, 2004 @01:34PM (#9129036)

    Well, it won't be the solution to Slashdotting, as you can't cache a whole site.

    Please note that you cannot submit a whole site to FreeCache, as in http://freecache.org/http://www.rocklobsters.com/. This will not work, as only index.html will be cached. You have to prefix every item that you want cached separately.

    You can cache an HTML page (index.html) but all the images will pull from the local machine. You could cache each image separately, but the change would have to be made in the site's HTML.

    On the other hand, I don't imagine it would be hard to write some kind of proxy script that grabs the page and changes the HTML to point to FreeCache SRCs for each image/movie (a rough sketch follows this comment)... you could then point to a freecache of that page...

    And of course, this all breaks the second somebody has a site that is heavily CGI-based.

    Still, it's a start. I'll be sure to use it if I ever submit any site of my own to Slashdot ;-) Many thanks to the guys at the Internet Archive for setting this up. You rock!
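
    A purely hypothetical sketch of the proxy idea in this comment: fetch a page and prefix every absolute src with the FreeCache gateway. A real version would need a proper HTML parser and would have to absolutize relative URLs first.

        # Hypothetical sketch: rewrite a page so its images/movies are
        # served through FreeCache. Naive regex approach; illustration only.
        import re
        import urllib.request

        def freecache_page(page_url):
            html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
            # Prefix every absolute src attribute with the FreeCache gateway.
            return re.sub(r'src="(http://[^"]+)"',
                          r'src="http://freecache.org/\1"',
                          html)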


  • by Seoulstriker ( 748895 ) on Wednesday May 12, 2004 @01:36PM (#9129071)
    1. Buy massive amounts of bandwidth
    2. Host extremely popular web sites
    3. ???
    4. PROFIT!!!

    How are they supposed to be making money on this?
  • Caching (Score:1, Insightful)

    by Anonymous Coward on Wednesday May 12, 2004 @01:39PM (#9129123)
    Personally, I believe that Slashdot should really begin caching static versions of the most popular pages itself...
  • /.ed already... (Score:3, Insightful)

    by warpSpeed ( 67927 ) <slashdot@fredcom.com> on Wednesday May 12, 2004 @01:39PM (#9129128) Homepage Journal
    This does not bode well for a caching site that will supposedly help with the /. effect...

  • by hendridm ( 302246 ) on Wednesday May 12, 2004 @01:44PM (#9129206) Homepage
    Yeah, that's fine for sites that can expect the possibility of being linked to, but those sites can often handle the load anyway. It's those small sites (Geocities) hosted on some guy's cable modem, describing how he modded his mom's vibrator into a CD player, that won't make it. Oftentimes these people, myself included, don't really think about or expect being linked to.
  • by zipwow ( 1695 ) <zipwow@gmail . c om> on Wednesday May 12, 2004 @01:45PM (#9129238) Homepage Journal
    It's not about the editors, it's about the authors. You, as an author, can use the FreeCache service by using their style of links in your pages. It doesn't cost you anything to do it, and it's pretty easy to do.

    It's not perfect, and it certainly won't be used by everyone. Still, it's something you can do defensively, especially if you're serving MPEGs of your latest case mod or bear attack or whatever.

    -Zipwow
  • by shoppa ( 464619 ) on Wednesday May 12, 2004 @01:49PM (#9129300)
    http://www.archive.org/, which used to have a one- or two-second response time, is now taking over a minute to return its home page.

    I do not think this is a solution to slashdotting :-)

  • by andycal ( 127447 ) on Wednesday May 12, 2004 @01:53PM (#9129359)
    The problem with that is that if it's new content, Google won't have it yet. FreeCache could be a good way of surviving a /.ing, but the problem (as with all caches) is that the server then doesn't get an accurate count of hits. This matters to some people, particularly people who advertise.

    The cool thing here is that you can say, "Cache just these things" and still have your server supply the html but not the images (or movies).

    But you still have to have a decent pipe.
  • Actually, index.html would only be cached if it is 5MB or greater in size.

    Which is unlikely. So it won't be cached. Nor will the PNG/GIFs.

    Ratboy
  • by Xiadix ( 159305 ) on Wednesday May 12, 2004 @01:54PM (#9129374) Homepage Journal
    That is another stumbling block that will prevent it from saving many websites. If I can't use the FreeCache link, I will be forced to go back to the original link... as will a good percentage of the rest of the /. crowd.

    KevG
  • by Xiadix ( 159305 ) on Wednesday May 12, 2004 @02:00PM (#9129462) Homepage Journal
    Is a publicly available Squid server. If you put any link through the server, such as:

    www.squidserver.com/http://www.doomedsite.com

    The public Squid will cache a copy of it on the first access (like when the approver looks at it). It should look at each request and see if it has a recent cache; if it does, serve that, and if not, get the newest copy and prompt the user for a refresh, or automatically refresh after a set time (5 sec). It will update its cache as the site does, all without your having to upload anything. After a few days, when nobody is utilizing the cache, it can purge it and wait for the next doomed site. (A toy sketch of this logic follows this comment.)

    DISCLAIMER: This may be how FreeCache works, but I can't get to it
    1) because I am at work.
    2) as the comments suggest it is slashdotted.

    KevG
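
    The caching behavior KevG describes fits in a few lines. A toy illustration only, under the assumptions above; FreeCache's actual internals aren't documented here:

        # Toy cache: serve a recent copy if we have one, otherwise refetch.
        # Purging of idle entries is left out for brevity.
        import time
        import urllib.request

        CACHE = {}        # url -> (fetched_at, body)
        MAX_AGE = 300     # consider a copy "recent" for 5 minutes

        def fetch_cached(url):
            entry = CACHE.get(url)
            if entry and time.time() - entry[0] < MAX_AGE:
                return entry[1]                        # recent cache: serve it
            body = urllib.request.urlopen(url).read()  # stale/missing: refetch
            CACHE[url] = (time.time(), body)
            return body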
  • by SpaceLifeForm ( 228190 ) on Wednesday May 12, 2004 @02:03PM (#9129493)
    Which is why it's very important to have a simple, clean, and informative main web page with links to more details. Sites that overload their main page with crap actually drive away viewers.
  • by Phong ( 38038 ) on Wednesday May 12, 2004 @02:07PM (#9129559)
    I looked around the site and didn't see an answer to this question:

    How does this system guard against doctored content coming from the cache sites? Since they allow sites to sign up to become a cache server, wouldn't it be possible for a malicious user to sign up and use some locally-modified code to add a virus to all the .exe files that get sent out from their cache? They could even customize the output of their CGI depending on what domain you are in, making it easy to target specific sites and/or hide their munging from other sites.
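
    One defense a client (or the cache network itself) could apply, speculatively, since nothing on the site describes one: verify the cached bytes against a checksum published by the origin server. The function below is illustrative only.

        # Speculative integrity check: compare a cached download against a
        # SHA-256 checksum obtained from the origin site.
        import hashlib
        import urllib.request

        def verify_cached(cache_url, expected_sha256):
            data = urllib.request.urlopen(cache_url).read()
            if hashlib.sha256(data).hexdigest() != expected_sha256:
                raise ValueError("cached copy does not match origin checksum")
            return data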

  • Re:Taking bets.... (Score:3, Insightful)

    by wo1verin3 ( 473094 ) on Wednesday May 12, 2004 @02:11PM (#9129625) Homepage
    Taken from here [slashdot.org], but it answers your question. If the person seeding removes the file, it would disappear from the cache as well. Maybe they check that the original file link still exists and functions every few hits to the cache?

    Also, it only works for large files, unless this FAQ [archive.org] is out of date:

    What files are being served by FreeCache?
    FreeCache can only serve files that are on a web site. If the link to a file on that web site goes away, so will the file in the FreeCaches. Also, there is a minimum size requirement. We don't bother with files smaller than 5MB, as the saved bandwidth does not outweigh the protocol overhead in those cases.

  • by RAMMS+EIN ( 578166 ) on Wednesday May 12, 2004 @02:40PM (#9130053) Homepage Journal
    I wonder why this continues to be a problem. It should be obvious to any judge that a hosting provider cannot and should not check everything that is uploaded to their servers.

    It may be reasonable to expect them to pull content that is illegal where they are located, but that should be a simple matter: they're notified, they pull the content, no harm done. They may even be required to disclose the identity of the uploader, after which this person can be prosecuted.

    I don't think anything in this scenario is outrageous or unfeasible. What is outrageous and infeasible is holding the host responsible for what the user uploaded. So why is that the way it happens all too often?
  • by tktk ( 540564 ) on Wednesday May 12, 2004 @02:46PM (#9130144)
    I know this has been suggested before, but why doesn't /. at least mirror the first page of the submitted links? I mean, how many people read the article in the first place, and of those, how many continue on to the subsequent pages?

    I'm sick of having to visit /. in order to find a potential site to slashdot to hell.

  • by Russellkhan ( 570824 ) on Wednesday May 12, 2004 @02:48PM (#9130177)
    Yes, the site is down. Yes, it's ironic that this should happen to a site hosting information about a service that's being claimed as a solution to the slashdot effect.

    But I don't think that it really is an indicator. I happen to have read the site yesterday after reading the Petabox [slashdot.org] article, so I think I have some of the basic concepts down. As I understand it, the idea works with cooperation from ISPs (and others) to provide more localized caches of large popular files. The motivation for the ISPs is that by providing the cache, they save on their upstream bandwidth and the associated costs.

    So, while it's funny that we've slashdotted the archive.org server where the FreeCache website is, FreeCache itself is not dependent upon archive.org's bandwidth.

    It's also worth noting that the concept is still in beta and pretty new - I don't think they've got a lot of ISPs on board yet. From what I can tell, it seems a very good concept. The only thing I'd want to make sure of, if I were an ISP, is that my cache is only available to users on my own network (the whole saving-on-bandwidth argument falls apart if you suddenly become a cache for users on other ISPs), but I would think that would be pretty easy to do (a sketch follows this comment).

    For those who haven't yet been able to read about it, here's Google's cache [google.com] of the front page.
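
    The "only my own users" restriction the parent wants amounts to an address check. A sketch, with placeholder network ranges; an ISP would use its own blocks:

        # Sketch: allow cache hits only from the ISP's own address ranges.
        import ipaddress

        LOCAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),       # placeholder
                      ipaddress.ip_network("192.168.0.0/16")]   # placeholder

        def is_local_client(client_ip):
            addr = ipaddress.ip_address(client_ip)
            return any(addr in net for net in LOCAL_NETS)

        # A cache node would serve local clients from cache and redirect
        # everyone else to the origin URL.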
  • by Short Circuit ( 52384 ) <mikemol@gmail.com> on Wednesday May 12, 2004 @03:07PM (#9130437) Homepage Journal
    I'm waiting for the introduction of the resource file. Sort of like a jar file...you can access content in it, but it transfers as a unit.

    An entire site might be stored in a resource file. Or just the files a single page depends on. You could have a meta tag that points to the resource file for a site. Or a hyperlink on the front page to the resource file for an entire site.

    And guess what...if it's over 5MB, Freecache will cache it.

    There will be some conflict with per-MB bandwidth charges for hosts, though. But I'm sure someone will work out a decent solution. (like Freecache. ;) )
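
    The resource-file idea could be approximated today with a plain archive: bundle a page's dependencies into one file, which transfers as a unit and, if it tops 5MB, is big enough for FreeCache to cache. A sketch, with invented file names:

        # Sketch: bundle a page's assets into a single archive.
        import zipfile

        def bundle_site(paths, out="site-bundle.zip"):
            with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
                for p in paths:
                    zf.write(p)   # add each asset to the bundle
            return out

        bundle_site(["index.html", "logo.png", "demo.mpg"])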
  • Re:Some questions (Score:2, Insightful)

    by Anonymous Coward on Wednesday May 12, 2004 @04:22PM (#9131667)
    Who's responsible for ensuring that it doesn't turn into a pr0n/warez stash?

    "warez" usually means material that violates copyright laws. Illegal. Criminal.

    But what have you got against straightforward porn, usually meaning pictures of naked or scantily-clad women?
