The Internet

Freecache

TonkaTown writes "Finally, the solution to slashdotting, or just the poor man's Akamai? Freecache from the Internet Archive aims to bring easy-to-use distributed web caching to everyone. If you've a file that you think will be popular, but far too popular for your ISP's bandwidth limits, you can just serve it as http://freecache.org/http://your.site/yourfile instead of the traditional http://your.site/yourfile, and Freecache will do all the heavy lifting for you. Plus your users get the advantage of swiftly pulling the file from a nearby cache rather than it creeping off your overloaded webserver."
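The whole trick is that URL prefix. A trivial sketch of it in Python (the helper name is ours, not anything Freecache provides):

    def freecache_url(original_url):
        """Prefix an absolute http:// URL with the Freecache gateway."""
        if not original_url.startswith("http://"):
            raise ValueError("Freecache expects a full http:// URL")
        return "http://freecache.org/" + original_url

    print(freecache_url("http://your.site/yourfile"))
    # -> http://freecache.org/http://your.site/yourfile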
This discussion has been archived. No new comments can be posted.

  • by RobertB-DC ( 622190 ) * on Wednesday May 12, 2004 @01:35PM (#9129058) Homepage Journal
    As I understand the setup, the ideal would be for ISPs to install this system on their networks like AOL's infernal content caching, except that it would only cache what the site owner wants cached. It seems like anyone with a static IP could join in the fun, too.

    But would they? I saw this on the new service's message forum [archive.org]:
    I was perusing the content in my cache and checking the detailed status page, and I noticed videos containing illegal content in one of the caches I run. What is freecache.org doing to stop people from mirroring illegal content? I currently run 2 fairly heavily used caches and it looks like only one of them had illegal content. I cleared the cache to purge the problem, but the user just abused the service again by uploading the content again. I know freecache.org cannot be responsible for uploaded content, but there has to be some sort of content management system to make sure freecache doesn't turn into just another way to hide illegal content.

    Whether you believe this guy's story [slashdot.org] or not, it seems like this could subject small ISPs to the sort of problems that P2P has brought to regular users. It's not going to matter who's right -- just the idea of having to go to court over content physically residing on your server is a risk I don't see a marginal ISP being willing to take.

    So we're left with the folks with static IP addresses. They're in even more trouble if John Ashcroft decides to send his boyz over to check for "enemy combatants" at their IP address.

    With the current state of affairs in the US, and the personal risk involved, I'd have to pass on this cool concept.
  • Taking bets.... (Score:2, Interesting)

    by JoeLinux ( 20366 ) <joelinux.gmail@com> on Wednesday May 12, 2004 @01:36PM (#9129080)
    How much you wanna bet this is going to become a haven for bit-torrent seeds? Put 'em up, get 'em to people, get it started, then take 'em down.
  • by Comsn ( 686413 ) on Wednesday May 12, 2004 @01:38PM (#9129107)
    It's pretty good. Lots of the servers are swamped though, so more are needed; anyone can run a Freecache 'node'. It's almost like Freenet, except not anonymous.

    Too bad the status page seems to be down; it's fun to see what clips/games/demos/patches are going around.
  • by Mr_Silver ( 213637 ) on Wednesday May 12, 2004 @01:40PM (#9129141)
    1. Does that mean that Slashdot will now link to potentially low-bandwidth sites using Freecache?
    2. Will you update the Slashdot FAQ on the whole subject of caching, since Google and Freecache seem to feel that the legal risk of site caching is small enough to be a non-issue?
    3. Or are we still going to be relying on people posting links and site content in the comments because the original site has been blown away under the load?
    Inquiring minds would like to know.
  • by ianbnet ( 214952 ) on Wednesday May 12, 2004 @01:42PM (#9129166)
    There are a lot of problems, but for all those "home publishers" on cable or slow DSL accounts, this is great -- they can publish content out to the wide, wild web in a way they could never hope to before.

    I predict this will also get used heavily for less savory content - manifestos and the like that people want to get out there. But we'll see.
  • Some questions (Score:5, Interesting)

    by GillBates0 ( 664202 ) on Wednesday May 12, 2004 @01:42PM (#9129172) Homepage Journal
    Definitely not an adequate solution, given its current condition: slashdotted to hell.

    I have a few questions though, which I guess may be answered on the website:

    1. Can users submit/upload files to be hosted on the Freecache site?

    2. Who's responsible for ensuring that it doesn't turn into a pr0n/warez stash?

    3. Can users request removal of cached content (something not possible with the Google cache)?

  • by dan_sdot ( 721837 ) * on Wednesday May 12, 2004 @01:43PM (#9129191)
    Yes, but the thing that you are not considering is that probably 75% of the Slashdot effect is just people looking at the link for about 5 seconds, then closing the page and moving on to the next story. That means no browsing, so it doesn't matter if the whole page isn't up there. And as far as pictures go, I would guess that a lot of people click on the link even though they are not too interested, see the text, and realize that they are _really_ not interested. So they close the page before they even need pictures.

    In other words, the important stuff, like the rest of the site and the pictures, will be resources used only by those who really care, while those who don't still get a flash of the text for a second and come away with a really general idea.

    After all, that's what the Slashdot effect is: a whole bunch of people who don't really care that much, but want a quick, 5-second look at it.
  • by ACNeal ( 595975 ) on Wednesday May 12, 2004 @01:44PM (#9129214)
    I see dreaded pictures from goatse.cx in the future. This will break the nice convenient domain name clues that Slashdot gives us so we don't accidentally do things like that.

  • by Tinidril ( 685966 ) on Wednesday May 12, 2004 @01:48PM (#9129276)
    What you are proposing won't work. Only the original linked file (or implied index.?) will be cached. For the bulk of the content to be cached, the site owner would have to change all internal links to point to Freecache (a rough sketch of such a rewrite follows this comment).

    The working solution would be for the slashdot editors to give a site owner a heads-up so that they can prepare for the flood.
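    A naive illustration of that link rewrite, assuming a simple regex pass over the page is good enough (a real rewrite would need proper HTML parsing):

        import re

        FREECACHE = "http://freecache.org/"

        def rewrite_links(html):
            """Point every absolute http:// href/src at the Freecache gateway."""
            pattern = re.compile(r'(href|src)="(http://[^"]+)"')
            return pattern.sub(lambda m: '%s="%s%s"' % (m.group(1), FREECACHE, m.group(2)), html)

        page = '<a href="http://your.site/file.zip"><img src="http://your.site/shot.jpg"></a>'
        print(rewrite_links(page))
        # hrefs and srcs now read http://freecache.org/http://your.site/...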
  • Alternative solution (Score:4, Interesting)

    by Ryvar ( 122400 ) on Wednesday May 12, 2004 @01:50PM (#9129307) Homepage
    Create a file format that is basically just the web page plus dependent files, tar'd and gzip'd. Then release browser plugins that automatically take any file with the correct extension and seamlessly ungzip/untar it to the local cache before displaying it like normal. I have yet to understand why nobody has combined this basic idea with BitTorrent. Seems like you could get a lot of mileage out of it.
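    A rough sketch of the packaging half of that idea; the file layout and archive name are made up, and the plugin side is only hinted at:

        import tarfile

        def pack_page(archive_name, files):
            """Bundle an HTML page and its dependencies into one gzipped tarball."""
            with tarfile.open(archive_name, "w:gz") as tar:
                for path in files:
                    tar.add(path)

        def unpack_page(archive_name, cache_dir):
            """What a browser plugin would do: expand the bundle into its local cache."""
            with tarfile.open(archive_name, "r:gz") as tar:
                tar.extractall(cache_dir)

        # pack_page("article.webpage.tar.gz", ["index.html", "style.css", "photo.jpg"])
        # unpack_page("article.webpage.tar.gz", "/tmp/browser-cache")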
  • by Doc Ruby ( 173196 ) on Wednesday May 12, 2004 @01:51PM (#9129324) Homepage Journal
    This use of Freecache is still subject to the actual problem that enables Slashdotting: inadequate planning for scale. Some sites are limited by the cost of effective failover countermeasures, but most are limited by a lack of any planning for even potential Slashdotting, and this use of Freecache still falls prey to that primary problem. And who can remember to prepend "http://freecache.org/" to an entire URL, including its repeated "http://"?

    A better use of Freecache is "under the hood". Make your webserver redirect accesses to your "http://whatever.com/something" to "http://freecache.org/http://whatever.com/something". More sites will be able to plan for that single change to their webserver config than will be able to plan to distribute the freecache.org compound URL, and it won't depend on users correctly using the compound URL. More sites will get the benefit of the freecache.org service. And when freecache.org disappears, or ceases to be free, switching to a competitor will be as easy as changing the config, rather than redistributing a new URL.
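    A minimal sketch of that redirect, using only the Python standard library. One assumption is flagged in the comments: the origin must not redirect Freecache's own fetcher (or the request would loop), and we simply guess that the fetcher can be recognized by its User-Agent.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        ORIGIN = "http://whatever.com"    # your real site (illustrative)
        GATEWAY = "http://freecache.org/"

        class FreecacheRedirect(BaseHTTPRequestHandler):
            def do_GET(self):
                agent = self.headers.get("User-Agent", "")
                if "freecache" in agent.lower():
                    # Assumption: the cache's fetcher identifies itself; serve it directly.
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(b"...file contents...")
                else:
                    # Everyone else gets bounced to the compound Freecache URL.
                    self.send_response(302)
                    self.send_header("Location", GATEWAY + ORIGIN + self.path)
                    self.end_headers()

        # HTTPServer(("", 8080), FreecacheRedirect).serve_forever()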
  • by Anonymous Coward on Wednesday May 12, 2004 @01:55PM (#9129389)
    "Haven't we seen that profit motive destroys pretty much anything useful?"

    No. In fact, it makes many useful things.

    "Before the McInternet, there was a real, useful resource that had great information on it"

    No, before it was commercialized, there was hardly anything on it.

    "Fire, The Wheel, Electricity"

    You said that profit destroys everything. Well, we still have fire, the wheel, and electricity, now, don't we? And thanks to the profit motive, we have iPods, "The Simpsons", and allergy medicines.

  • by curator_thew ( 778098 ) on Wednesday May 12, 2004 @02:07PM (#9129566)

    Freecache is really just a half-baked ("precursor") version of P2P; not in any sense a long term solution, but interesting at least.

    Correct use of P2P with network-based caches (i.e., your ISP installs content caching throughout the network) and improved higher-level protocols (i.e., web browsing actually runs across P2P protocols) would resolve Slashdot-effect-type problems and usher in an age of transparent, ubiquitous, long-lived, replicated content.

    For example:

    Basically, your request (and the requests of thousands of other Slashdot readers) would fetch "closer" copies of content rather than having to reach directly to the end server, because the content request (i.e. HTTP GET) actually splays itself out from your local node to find local and simultaneous sources. In theory, the end server would only deliver one copy into the local ISP's content cache for transparent world-wide replication, and each endpoint would gradually drag replicated copies closer, meaning that subsequent co-located requests ride on the back of prior ones. I'm just repeating the economics of P2P here :-). (A toy sketch of the "closest copy first" lookup follows this comment.)

    In addition to all of this, you'd still have places like the Internet Archive, because they would be "tremendously sized" content caches that do their best to suck up and permanently retain everything, just as the Archive does now.

    Physical locality would still be important: if I were a researcher doing mass data analysis, I'd be better off walking into the British Library and co-locating myself on high-speed wi-fi or local gigabit (or whatever high-speed standard we have in a couple of years' time) to the archive, rather than relying on relatively slower broadband + WAN connections to my house or workplace.

    For example, say I'm doing some research on a type of flying bird and want to extract, process and analyse audiovisual data - this might be a lot of data to analyse.

    Equally, places like the British Library will also have large clusters, so when I walk in there to do this data analysis, I can make use of large-scale co-located computing to help me with the task.

    Nothing here is new: if you think about it, these are logical extensions of existing concepts and facilities.
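    The "closest copy first" behaviour mentioned above, reduced to a toy: the cache tiers and URLs here are invented, and all this shows is the lookup order (local ISP cache, then a wider cache, then the origin).

        import urllib.error
        import urllib.request

        def fetch_nearest(path, tiers):
            """Try each cache tier in order of proximity; the origin comes last."""
            for base in tiers:
                try:
                    with urllib.request.urlopen(base + path, timeout=5) as resp:
                        return resp.read()
                except (urllib.error.URLError, OSError):
                    continue  # no copy at this tier, try the next one
            raise RuntimeError("no tier could serve " + path)

        # tiers = ["http://cache.my-isp.example/",
        #          "http://freecache.org/http://origin.example/",
        #          "http://origin.example/"]
        # data = fetch_nearest("videos/bird.mpg", tiers)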

  • by GeorgeH ( 5469 ) on Wednesday May 12, 2004 @02:46PM (#9130142) Homepage Journal
    1. Install Freecache node at your ISP
    2. Cache extremely popular media files for your customers.
    3. Advertise that customers can access Freecached files from the local network instead of the Internet.
    4. Get more customers and pay less in bandwidth costs.
    5. PROFIT!!!!
  • by infolib ( 618234 ) on Wednesday May 12, 2004 @03:08PM (#9130449)
    How are they supposed to be making money on this?

    It's not a way of making money, it's a way of spending it. It's run by the Internet Archive, founded and funded by Brewster Kahle [wikipedia.org]. It's there for your free enjoyment - revel in the goodness of humanity!
  • Censored (Score:5, Interesting)

    by jdavidb ( 449077 ) on Wednesday May 12, 2004 @03:40PM (#9130931) Homepage Journal

    This would be great if my employer didn't restrict access to archive.org as allegedly being in the "sex" category.

  • by evilviper ( 135110 ) on Wednesday May 12, 2004 @06:18PM (#9133129) Journal
    The problem with voluntary (opt-in) caching systems is that there is little if any incentive for the end user to want to use them.

    What ISPs should really do is sell you a 256K internet connection (or whatever speed you happen to get), but then make all local content available at maximum line speed... In other words, if you use the caching system (which saves the ISP money on the price of bandwidth), you get your files 6x as fast, or better in some cases (a rough sketch of the idea follows this comment).

    I don't see why ISPs don't do that. It seems like everyone would win. It wouldn't just need to be huge files either; they could have a Squid cache too, and not force people to use it via a transparent proxy (most people would actually want to use it, despite the problems with proxy caches).

    Right now, users have an incentive not to use it, mainly because it's another manual step for them, and to a lesser extent because caching systems usually have a few bugs to work out (stale files, incomplete files, etc).

    I know that it would only require minor modifications to current DSL/cable ISPs' systems to accomplish the two zones with different bandwidth.
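    The two-zone idea, reduced to a toy rate picker; the speeds and the list of "local" hosts are illustrative only, and a real ISP would enforce this in its traffic shaper rather than in application code.

        LINE_SPEED_KBPS = 1536    # what the last mile can actually carry
        SOLD_SPEED_KBPS = 256     # what the customer pays for

        def allowed_rate_kbps(url, local_hosts):
            """Full line speed for locally cached content, the sold rate otherwise."""
            host = url.split("/")[2] if "//" in url else url
            return LINE_SPEED_KBPS if host in local_hosts else SOLD_SPEED_KBPS

        local = {"freecache.my-isp.example", "cache.my-isp.example"}
        print(allowed_rate_kbps("http://freecache.my-isp.example/demo.zip", local))  # 1536
        print(allowed_rate_kbps("http://far-away.example/demo.zip", local))          # 256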
  • by lousyd ( 459028 ) on Wednesday May 12, 2004 @10:51PM (#9135471)
    Packaging up entire websites is a problem the Freenet people are working on. When latency shoots through the roof, website "jar" files start sounding good.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...