In addition to my "online" storage pools, I keep a three-way ZFS mirror on encrypted hard drives of everything that is irreplaceable. If you have more than fits on one disk, perhaps use a RAID 1+0 or even 1+5 setup. This story does depend on your being able to connect two complete copies of your backup to one machine at a time, so again, if it's really a lot of data, external RAID enclosures may be in your future. In any case, one of the full copies lives off-site, and every month I grab one of the two at home, swap it with the off-site one, allow it to come up to date with the third, and then push that month's backups to both. Rinse, repeat, like clockwork.
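Concretely, the pool and the monthly swap can be sketched with ZFS commands along these lines (the pool name and device-mapper paths are illustrative assumptions, and the encrypted devices are assumed already unlocked, e.g. via LUKS):

```shell
# Create the three-way mirror across the encrypted devices (names are examples):
zpool create vault mirror \
    /dev/mapper/crypt-a /dev/mapper/crypt-b /dev/mapper/crypt-c

# Monthly swap: take one home leg offline and carry it off-site...
zpool offline vault /dev/mapper/crypt-b

# ...then, with the returned off-site disk connected in its place,
# bring that leg back online; ZFS resilvers only what changed:
zpool online vault /dev/mapper/crypt-b
zpool status vault   # watch the resilver complete before pushing new backups
```

The nice property here is that resilvering is incremental, so the monthly catch-up is proportional to the month's churn, not the pool size.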
They don't hash the whole shebang into one number. Rather, they take a (random) number and use that to generate a set of mutations and then probe for that set of mutations in the leaked document. So now, even if you alter the document further, you probably didn't undo the mutations in question. Even if you did, you probably didn't undo all of them and you almost certainly didn't produce a high-confidence result that it's somebody else's copy.
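A minimal sketch of that idea: derive a recipient-specific set of mutation sites from a seed, apply an inconspicuous mark at each, and score a leaked copy by how many of that recipient's marks survive. Everything here (the zero-width-space mark, word-index derivation, function names) is my illustration, not the actual scheme described above:

```python
import hashlib
import random

MARK = "\u200b"  # zero-width space: invisible in most renderers (illustrative)

def _positions(recipient_id, n_words, k):
    # Reproducibly derive this recipient's set of marked word indices.
    seed = hashlib.sha256(recipient_id.encode()).digest()
    return random.Random(seed).sample(range(n_words), k)

def watermark(text, recipient_id, k=8):
    # Append the mark to k recipient-specific words.
    words = text.split(" ")
    for i in _positions(recipient_id, len(words), k):
        words[i] += MARK
    return " ".join(words)

def match_score(leaked, recipient_id, k=8):
    # Probe only this recipient's sites; fraction of marks still present.
    words = leaked.split(" ")
    hits = sum(1 for i in _positions(recipient_id, len(words), k)
               if words[i].endswith(MARK))
    return hits / k
```

Because detection probes a sparse, recipient-specific subset rather than one global hash, a leaker who scrubs some marks still leaves enough behind to score far above what anyone else's copy would.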
The correct design is neither this reactive monitoring nor the UNIX-standard "oh sure, go ahead!" approach. I contend that the correct approach is a capability system: an application which cannot even name a remote network endpoint unless it was granted a handle to it is in no position to leak data.
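A toy sketch of the principle (all names are mine; a real capability system enforces this at the language or OS level, not by convention):

```python
class NetworkCapability:
    """An unforgeable handle to exactly one endpoint; holding it IS the permission."""
    def __init__(self, host, port):
        self._host, self._port = host, port

    def send(self, payload):
        # A real implementation would open a socket to (self._host, self._port);
        # here we just return a record of the send so the sketch is self-contained.
        return (self._host, self._port, payload)

def run_plugin(plugin, granted_caps):
    # The plugin receives only the handles it was granted. There is no
    # ambient "connect to anywhere" API for it to reach for, so it cannot
    # even name an endpoint outside this dict.
    return plugin(granted_caps)

def uploader(caps):
    # This code can talk to the telemetry endpoint and nothing else.
    return caps["telemetry"].send(b"stats")
```

The point is that exfiltration becomes a type error rather than a policy violation: code without a handle has nothing to call.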
> and users can move freely between them.
The proprietary world has yet to invent a mechanism for that, and it's been a known problem for a long while (decades). Data "liberation" is challenging and, even if you don't think that is a problem, cross-realm authentication is all but nonexistent. They have little incentive to provide these things unless people demand them, and by and large people don't. (And before you bring up LiveJournal's OpenID protocol, I've two things to say: 1) it's not worthy of the trust placed in it, because not all parties strongly authenticate each other, and 2) note that commercial OpenID providers do not, and fundamentally cannot by the nature of the beast, make it easy to transition from an identity rooted at one to an identity rooted at another.)
The only truly distributed bring-your-identity-with-you schemes out there have come from the open, usually academic, world: PGP, SPKI/SDSI, E rights, the Petname system and protocol, and so on. Similarly, shared storage that is secure against its operator is not something social network companies have huge incentives to produce, but it exists in open research: Tahoe-LAFS exists, and Diaspora has made vague promises to be similarly secure.
I'm not sure where the claim about "can't use each other's code" comes from. Perhaps a subtle misunderstanding. While Avida does keep each virtual machine fully isolated from the others, Avida _does_ have explicit support for parasitic behaviors, in the form of code injection into neighboring organisms.
The technology you're looking for is the TLS SNI (Server Name Indication) extension. It's even vaguely supported these days, though there isn't a huge push to deploy it, sadly.
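SNI puts the requested hostname in the ClientHello, so one IP can serve many TLS sites, each with the right certificate. In Python's ssl module this is the server_hostname argument; a minimal client sketch (hostname is just an example):

```python
import socket
import ssl

def fetch_peer_cert(host, port=443):
    """Connect over TLS, sending `host` as the SNI value so a server
    fronting many names can select the matching certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Example (requires network access):
# cert = fetch_peer_cert("example.org")
```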