
A Move to Secure Data by Scattering the Pieces 141

Posted by Zonk
from the can-you-find-me-now dept.
uler writes "The NY Times has an article about an interesting new open source storage project. Unlike data storage mechanisms today that work 'by making multiple copies of data,' the Cleversafe software takes an 'approach based on dispersing data in encrypted slices.' It's an elegant solution and one that's been a long time coming: the software uses algorithmic techniques known by mathematicians since the 70's. Adi Shamir (of RSA) first wrote of information dispersal in his 1979 paper 'How to Share a Secret (pdf).'"
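Shamir's construction is short enough to sketch. Here is a toy (k, n) threshold scheme over a prime field in Python; the prime, the secret, and the parameters are illustrative choices of this sketch, not anything taken from the paper or from Cleversafe:

```python
# Toy Shamir (k, n) secret sharing: a random degree-(k-1) polynomial
# with the secret as its constant term, evaluated at n points.
# NOT hardened crypto -- random.randrange is not a CSPRNG.
import random

P = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is (x=i, f(i)) for i = 1..n.
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(42, k=3, n=5)
assert combine(shares[:3]) == 42   # any 3 of the 5 shares suffice
assert combine(shares[2:]) == 42
```

With fewer than k shares the interpolation is underdetermined: every candidate secret remains equally likely, which is the information-theoretic guarantee the paper proves.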
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Hmmm.... (Score:4, Insightful)

    by Anonymous Coward on Monday August 21, 2006 @10:57AM (#15948595)
    PAR? PAR2?
    • Re:Hmmm.... (Score:5, Informative)

      by Disoculated (534967) <rob@ s c y l l a . o rg> on Monday August 21, 2006 @12:00PM (#15949111) Homepage Journal
Whoever marked this as offtopic was a little quick on the trigger. I believe the coward is referring to PAR [par2.net] files, a method of breaking up data and reassembling it commonly used on newsgroups.
    • Re: (Score:1, Insightful)

      by Anonymous Coward
PAR/PAR2 takes data, chunks it up, and adds RAID-like redundancy. Add a layer of compression for space savings, and these files can be scattered around the net (in fact, they already are), and the original data can be reconstructed from a subset of the pieces, even if that subset is damaged (within calculable limits, of course).

      (wow, unintentional FP even...)

  • I've been out of the freenet loop for a long time, but I thought I remembered reading in its documentation a few years ago that it did this same kind of encrypting and dispersing chunks of data.
  • by andrewman327 (635952) on Monday August 21, 2006 @11:00AM (#15948623) Homepage Journal
Although the goal was different, this is in the spirit of the creation of the Internet. DARPAnet [ideasociety.co.uk] was designed to scatter information to maintain communications. To use a different example, it reminds me of RAID.


    With all of this encryption technology, people still need to remember basic security tips. Use good passwords ("password" could be cracked very quickly even with 128 bit AES), maintain physical security (hardware keyloggers can find out about the manifesto you're writing before you even save the file) and use common sense.


    Before you all ask, yes it does run Linux. The company was actually at Linuxworld.

    • Re: (Score:3, Insightful)

      by Detritus (11846)
      Although the goal was different, this is in the spirit of the creation of the Internet. DARPAnet was designed to scatter information to maintain communications.

      Cite? From what I've read about the original Arpanet, it was designed to allow the sharing of computer resources and data among DoD researchers. It wasn't designed to be a failure-tolerant network, although DARPA funded quite a bit of research in that area.

      • by zacronos (937891)
        I thought that was common knowledge that they wanted to allow sharing of resources in a failure-tolerant way -- after all they didn't want to become reliant on a communication and collaboration technology that could be easily disrupted in wartime. That's just common sense.

        Since you demand a citation -- from the textbook Understanding Computers: Today & Tomorrow, 10th edition [google.com], by Charles S. Parker, page 365 (under Evolution of the Internet), emphasis mine:

        One objective of the ARPANET project was to

        • by Detritus (11846)
          I thought that was common knowledge that they wanted to allow sharing of resources in a failure-tolerant way -- after all they didn't want to become reliant on a communication and collaboration technology that could be easily disrupted in wartime. That's just common sense.

          You need a better textbook. The idea that the Arpanet was designed to be a survivable network is a particularly persistent myth.

          It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to

Sharing data was the function of the Arpanet that the academics were most interested in. The military was interested for different reasons. Remember, this was developed when the cold war was raging full force and the US was worried about a Soviet strike knocking out our military communication grid. It was designed to be failure-tolerant so if one hub was taken out, communication could still happen due to its packet-switching method of transferring data.
        To this day most military/government information is s
    • This was several years ago, but I read a paper, I believe on Slashdot, about a crypto system intended for people like human rights observers working in the field. Basically you would write up your report, call up this program, pass your report to it, and the program would write it in crypto to uninitialized blocks of the file system so that it appeared to be random noise.

      The concept was that the watcher's laptop was likely to be inspected when they left the country. The inspectors wouldn't find anything s
      • You're thinking of Steganography [wikipedia.org]. Somewhere in here is the story you mentioned [google.com].
        • by wwphx (225607)
No, it wasn't steganography, at least in the conventional usage of the term. Basically when you installed your *nix distro, you would not partition the entire disk. This program would write your files to the uninitialized portion of the disk. Since the data was written outside of the partitioned area, it would not be seen in a casual search. Additionally, the data was encrypted in such a way as to appear no different than uninitialized drive space.
      • a crypto system intended for people like human rights observers working in the field.

        That would be Rubberhose [wiretapped.net].

        It scares me that Bruce said he didn't know about it. That means he doesn't want anyone to know. Please tell my kids to be good to their mom and that I love them.
  • From the article: Cleversafe is significant because it is an open-source project -- that is, the technology will be freely licensed, enabling others to adopt the design to build commercial products. This could be a very important OSS tool.
  • by nick_davison (217681) on Monday August 21, 2006 @11:04AM (#15948651)
    Storing data in random locations, often garbled beyond all recognition?

    Clearly Windows ME's memory -l-e-a-k-s- management made it the most secure OS ever. If only they had some way of reconstructing that data when you wanted it back again.
  • by xxxJonBoyxxx (565205) on Monday August 21, 2006 @11:06AM (#15948667)
    The Project uses information dispersal algorithms (IDAs(TM)) to separate data into 11 unrecognizable DataSlices(TM) and distribute them, via secure Internet connections, to 11 storage locations throughout the world, creating a storage grid. With dispersed storage, transmission and storage of data is inherently private and secure. No single entire copy of the data is in one location, and only 6 out of the 11 nodes need to be available in order to perfectly retrieve the data.
    ...like network RAID? The site needs spellchecking - badly - but the encryption seems to be based on a key derived after you do some kind of RSA public/private key sign on.
    • information dispersal algorithms (IDAs(TM))
I'm not sure what they think they're trademarking here. "IDA" is an abbreviation used by many other things, some of them overlapping computer or storage technology. "Information dispersal algorithms" is a term already in common usage. (Just do a search for either term...)
    • Re: (Score:3, Funny)

      Interesting. Why 11?
      "It's one louder."
    • by hey (83763)
      They don't need spell checking; they simply need to add in the version of their
      site where the spelling is "incorrect" in the complementary ways which result in
      conventionally correct English.
    • Is it like RAID? Can one remote location be reconstructed from the other 10?
  • by Red Flayer (890720) on Monday August 21, 2006 @11:07AM (#15948676) Journal
    This concept just adds another layer
  • Freenet? (Score:5, Interesting)

    by BigZaphod (12942) on Monday August 21, 2006 @11:08AM (#15948681) Homepage
    Isn't this basically what freenet does? It encrypts the data into chunks and spreads it around all over the place.

    I was working on a p2p system that worked in a similar manner. I was even thinking of repurposing it for the sake of doing online backups - but frankly the bandwidth just doesn't seem to be there yet to do that sort of thing in a practical manner. That, and I got bored with the project... (but nevermind that). :-)
    • Re:Freenet? (Score:5, Informative)

      by mrogers (85392) on Monday August 21, 2006 @11:35AM (#15948915)
      Freenet uses forward error correction [wikipedia.org], which guarantees that the original data can be reconstructed given a sufficient number of pieces. Shamir's information dispersal algorithm [wikipedia.org] makes the additional guarantee that nothing can be learned about the original data unless you have enough pieces to reconstruct it.
    • by Xenna (37238)
      Funny, I've been thinking of doing the same. P2P encrypted backups. Shouldn't be too difficult. I have a healthy suspicion towards systems that try to be too smart. I don't want my pieces scattered over various systems.

For me a system that allows me and a buddy to back up each other's data without getting access to it would be ideal. Too much work to do myself, though, and apparently you weren't going to do it for me, so I got myself a colo machine instead and use it as an rsnapshot server as well as an openv
      • by rplacd (123904)

        Funny, I've been thinking of doing the same. P2P encrypted backups.

        I give you... All My Data [allmydata.com]. Its distant cousin (Mnet [mnetproject.org]) is still around, but sorta moribund.

  • by Anonymous Coward
    It's '70s not 70's.
    • Re: (Score:2, Informative)

      by joebutton (788717)

      It's '70s not 70's

      Is it, though?

      According to Lynne Truss's Eats, Shoots & Leaves:

Until quite recently, it was customary to write "MP's" and "1980's" - and in fact this convention still applies in America. British readers of The New Yorker who assume that this august publication is in constant ignorant error when it allows "1980's" evidently have no experience of how that famously punctilious periodical operates editorially.

      Having said which, 1980s clearly makes more sense. It's a plural, not a

    • by Red Flayer (890720) on Monday August 21, 2006 @12:04PM (#15949141) Journal
      It's '70s not 70's.
      Not really -- it should be '70s' in all likelihood. The first apostrophe is to represent the missing "19", the second is to denote the possessive that is implied. The term "the 1970's" is a shortening of "the years of the decade we call the 1970s," or "the 1970s' years."

      This gets messy, however, since the word 'years' is implied, and to say during the '70s' will make people wonder which 70 seconds you're talking about, and why it needs to be encapsulated with apostrophes -- is it an idiomatical 70 seconds? Kinda like the Biblical '40 days'?

      For that matter, if you really want to get pedantic, what's the use of referencing the 70s at all if you're not going to bother denoting the scale? I mean, surely not mentioning that it's AD (or CE) is going to confuse people using other calendars... more so than misusing an apostrophe, right?

      Along the same lines, it's just horrific that they'd abbreviate the decade anyway, how are we to know that the writer didn't intend the 1870s, or the 2070s even, if he happens to be living backwards in time?

      Bah, there are grammatical rules, and it's great if everyone follows them, but really, it makes no difference if he spelled it 70's, '70s, or seventies (which is the proper spelling, btw).
      • Along the same lines, it's just horrific that they'd abbreviate the decade anyway, how are we to know that the writer didn't intend the 1870s, or the 2070s even, if he happens to be living backwards in time?
        well, from context one could assume that "the software uses algorithmic techniques known by mathematicians since the 70's" is not referring to knowing something "since" the future, and it's pretty rare to run across data storage algorithms from the 1870s or earlier
        • well, from context one could assume that "the software uses algorithmic techniques known by mathematicians since the 70's" is not referring to knowing something "since" the future, and it's pretty rare to run across data storage algorithms from the 1870s or earlier
          Sure, and from context, one could assume that 70's means '70s. That's the point I was making -- pedantry based on the apostrophe is wasted, since the context makes what the author intended absolutely clear.
  • 6 of 11 (Score:5, Informative)

    by Bonker (243350) on Monday August 21, 2006 @11:09AM (#15948692)
After RTFA, it occurs to me that this is mostly a research project. The goals (and downloadables) include libraries that allow PCs to mount a distributed encrypted filesystem and others.

    In a business example where you know that you can ultimately control the sites where you're storing your partial data, this would be a very good thing.

For the single user attempting to secure his information by using the existing network, there are some downfalls. 6 of 11 slices of the data are needed to reconstruct the whole. Therefore if a party intent on obtaining secret data obtains the majority of the servers, he has the data.

Also, if a disaster wipes out the majority of the servers, leaving five or fewer of the eleven, the data is gone.

    This is a very, very important concept for business storage, but I have to wonder if it scratches any geek itches not already soothed by Truecrypt and Par2.
    • I believe that distributing sensitive information over different servers is a great idea in security in that it allows variety in platforms. Some of the nodes can run Linux, others BSD, and some can run Windows. The servers would be in physically disparate places on different power grids/internet lines. You don't have to put all your eggs in one basket. A hacker would have to compromise many more operating systems in order to get your information. The same is true of any flaw that could kill your data: it h
  • by Red Flayer (890720) on Monday August 21, 2006 @11:09AM (#15948695) Journal
    See Comment 15948676 [slashdot.org]

    Of complexity, but also adds
  • In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable
    from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D

    I use this approach in my sex life, however, rather than obscuring information about D, even knowing one "piece" p reveals way more information than I'd like to have out there. Hell, ever since k-1 got a page on myspace, every potential n+1 knows about me before we even get started.
  • I can only hope that this scheme includes distributed storage of the pointers to all the fragments, too. Distributed data is only as reliable as the metadata that record where the data fragments are located. If the user of the system loses their only copy of the map to their fragments, the data is lost. If, on the other hand, each fragment also includes encrypted pointers to a few other fragments, then decrypting any fragment lets one bootstrap recovery of the entire network of fragments (a good thing
  • by Red Flayer (890720) on Monday August 21, 2006 @11:12AM (#15948718) Journal
    See Comment 15948695 [slashdot.org]Another layer of inefficiency and
  • Like mnet? (Score:5, Insightful)

    by haeger (85819) on Monday August 21, 2006 @11:12AM (#15948719)
    If I'm not mistaken, this was one of the goals with the (now dead?) mnet project. [mnetproject.org]
From what I remember they split up data into multiple pieces, encrypted it and distributed it over a number of nodes, with some redundancy in it. If you know python and are interested in p2p I'm sure there's a lot to be learned from that project.

    .haeger

    • by Jim McCoy (3961)
Actually, mnet was based on mojonation, which used Rabin's IDA for splitting the data in a distributed filesystem. While I created the mojonation architecture (and can actually say "been there, done that, printed the t-shirts...") I can't actually claim precedence on the idea -- the real first ideas for this space came from the Intermezzo system and also from Mark Lillibridge's work at DEC.

      The current incarnation of these ideas can be seen in the Allmydata [allmydata.com] service, which uses Tornado/Raptor codes (very

  • by Anonymous Coward
    Secure Data by Scattering the Pieces

    You mean to tell me that all those hours of defragging my HD's on Windows 98 were actually a waste of time?? ;-)
  • Sure, this will work until someone comes up with an Average White Band exploit. Then it's useless.

    -Peter
  • by Red Flayer (890720) on Monday August 21, 2006 @11:14AM (#15948740) Journal
    See comment 15948718 [slashdot.org]

    an increased risk of loss of data.

    Burma Shave.
  • by thrillseeker (518224) on Monday August 21, 2006 @11:17AM (#15948761)
    ... the ...
  • by greg_barton (5551) * <(greg_barton) (at) (yahoo.com)> on Monday August 21, 2006 @11:17AM (#15948766) Homepage Journal
    I thought about a system to do this a few years ago, but with a little twist: distribution of the pieces would be via computer virus. The pieces would be stored in user's computers, but more importantly in intrusion logs of "secure" systems as well. Retrieval would be a social act, kind of like a treasure hunt. "Hey, geeks of the world, there's this important information out there. Go figure out how to get it!"

    This system could be used for high profile secrets, like government whistle-blower data and the like. Storage would be secret and nearly undetectable because of all the other virus noise. Retrieval would be highly public by necessity, both to make retrieval possible and to publicize the contents of the data.
    • that sounds like that episode of star trek where they discovered a message hidden in the DNA of all humanoid species, that explained why all the aliens in star trek look like humans with little bits of rubber glued to their faces.
  • by thrillseeker (518224) on Monday August 21, 2006 @11:19AM (#15948775)
    ... novelty.
  • Bah, I'm not interested. Moving data around is just another form of transposition cipher. Proven-good cipher systems use both transposition and substitution, preferably on compressed data.

    This only works if the distance between the moved elements is greater than the attacker can cross. Not much different than sending reset passwds unencrypted through emails.

  • by bingbong (115802) on Monday August 21, 2006 @11:24AM (#15948819)
Ross Anderson [cam.ac.uk] of the Computer Security Group at Cambridge University wrote a paper called the Eternity Service. [cam.ac.uk] It has had a few [mff.cuni.cz] different [cypherspace.org] attempts [usenix.org] at implementation, as well as some reworks in terms of design [freehaven.net]. The primary difference is that in the Eternity Service you had no idea what data you had, nor did you have access to the keys. This new concept/design seems to provide more control/granularity for the user. Given the new proposed encryption laws in the UK, I'm not sure this is a good idea.
  • by Alistar (900738)
    I've been doing something like this for years.

    First I would encrypt the original file, split it up into 10-100 pieces, encrypt those, hide them in other files, encrypt those, then store them in random locations around the internet either by emailing a piece to a webmail or uploading to a server somewhere, posting the binary or hex sequence to a forum, things like that.

Heck sometimes I'd repeat the encrypt/split/hide process several times, or even put the last step as hidden. Yes I realize anyone
    • I use it for things that are better off gone forever than being leaked.

      You mean like the details of your encryption/splitting/hiding algorithm?
      • by Alistar (900738)
        Heh,

While, for one, the individual steps may not be perfectly secure, they are certainly far more complex and involve several expert and natural language systems.
        But besides that, I figure if you can find the pieces, put them together in the right order (several times) and decrypt them, then my hat's off to you and I deserve whatever I get for my arrogance in my security.
    • Re: (Score:2, Funny)

      by Nerdfest (867930)
      Osama?
    • by version2 (569804)
      Don't want those Vince Foster memos to fall into the wrong hands now do we?!
  • The problem... (Score:3, Interesting)

    by Fulkkari (603331) on Monday August 21, 2006 @11:55AM (#15949068)

The problem with this idea is bandwidth and speed. You think your broadband is fast, but if you have to download 27 gigabytes of photos, music and stuff, it won't be exactly fast on an 8 Mbps DSL, not to mention 1 Mbps or less. You might wait a couple of hours, but you won't wait a couple of days.

Okay. So you tell me that the amount of available bandwidth will increase? But so will the amount of data that needs to be backed up. And it will grow faster than the bandwidth. Think of homemade movies. You can already fill up your average drive in no time. What do you do then, when you get a HD camera?

    Although the idea isn't a new one, I think it is still neat. It might work for some stuff, but I don't see this becoming mainstream with technologies like Time Machine [apple.com] coming to the end-users.

    • by imsabbel (611519)
Not to mention that you have to upload it, too, which is usually an order of magnitude slower.

Plus you have to upload it more than once (a LOT more than once if you want to be sure) to avoid embarrassing "the important last piece of my backup was on the old 486 of a hobo that got thrown away" situations.
      Learning from normal P2P, if you want to get it back after a year, there should be at least a 10-20 factor of redundancy.
Which leads to another point: that redundancy is of course incredibly wasteful. Just i
      • by Fulkkari (603331)

Not to mention that you have to upload it, too, which is usually an order of magnitude slower.

True. Uploading will be very slow and you would have to consider the fact that depending on the system you might need to upload the same data more than once. However, uploading backups would not be as big a priority to users as restoring them. It could happen all the time slowly in the background. Once all the data, say 80 GB, is uploaded you would only need to update the changes. Say you changed an average of 1

Adi Shamir (of RSA) first wrote of information dispersal in his 1979 paper 'How to Share a Secret (pdf).'

    Oh come on, a paper?

    Everyone knows that if you want to share a secret, you just tell it to a -- eh, never mind. :P

  • by Animats (122034) on Monday August 21, 2006 @12:00PM (#15949107) Homepage

    Not quite, but the coding scheme that makes CDs and DVDs resistant to dust and scratches works much like that. Big blocks have an error correcting code appended, and then the bits of the data plus error correcting code are rearranged and spread widely across the block. So when you lose a contiguous set of bits, you can replace it by using data distributed across the block.

It's a good error correction scheme, but it's not exactly new. Every CD player in the world has this. CDs aren't encrypted (there's no key, just a well-known algorithm), but you could mix encryption in if you wanted. This wouldn't help the error recovery.
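The interleaving trick described above is easy to demonstrate. This is only a rough sketch (the real CIRC coding on CDs is more elaborate, with cross-interleaving and two Reed-Solomon stages), but plain block interleaving alone shows why a contiguous scratch becomes a set of isolated, per-codeword-correctable erasures:

```python
# Toy block interleaving: write symbols row-by-row, read column-by-column.
# A burst of damage in the scrambled stream maps back to one erasure
# per row, which a per-row error-correcting code could then repair.

def interleave(data, rows):
    cols = len(data) // rows
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows):
    cols = len(data) // rows
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

block = list(range(24))                    # 4 codeword "rows" of 6 symbols
scrambled = interleave(block, rows=4)

# A contiguous "scratch" wipes 4 neighbouring symbols on the medium...
damaged = scrambled[:8] + [None] * 4 + scrambled[12:]
restored_order = deinterleave(damaged, rows=4)

# ...but after deinterleaving, the erasures land one per row.
rows_back = [restored_order[i * 6:(i + 1) * 6] for i in range(4)]
assert all(row.count(None) == 1 for row in rows_back)
```

Each row now has a single missing symbol instead of one row losing four, which is exactly the regime a modest error-correcting code handles.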

  • Ancient (Score:3, Informative)

    by Salamander (33735) <jeffNO@SPAMpl.atyp.us> on Monday August 21, 2006 @12:00PM (#15949108) Homepage Journal
    This is so not-new it's not even funny. I've already seen FreeNet and MNet mentioned as precursors, which is appropriate. Dozens of other P2P "filesystems" (in quotes because I don't believe it's truly a filesystem unless it's fully integrated into the OS) and block-level data stores have done this. Probably the one that most thoroughly examined the inherent tradeoffs, and that's most directly based on Shamir's IDA work, is PASIS [cmu.edu] at CMU. Presenting Cleversafe as the first to move in this direction is an insult to those who have gone before.
    • Probably the one that most thoroughly examined the inherent tradeoffs, and that's most directly based on Shamir's IDA work, is PASIS at CMU.

PASIS is not based on IDA. PASIS is not a mechanism for data dispersal. It is a family of storage protocols that make efficient use of data dispersal mechanisms. Any mechanism that satisfies the m-of-n condition (where of n data fragments, m are necessary to reconstruct the original data item) can be used. This can be mirroring, striping, erasure coding, IDA, and anyth

      • by Salamander (33735)

The motivation behind PASIS is to expose all options and possibilities.

        ...which is exactly what I would want from a research project, and why I cited it. When I saw it at a PDL open house as a representative from EMC I was very impressed with how thoroughly some of those tradeoffs had been examined, and couldn't have cared less that it was nowhere near being a completely functional system. Producing a completely functional system is what I was doing, in a commercial context, though it turned out that my

  • While secret sharing is cool, one of its primary drawbacks is that it's usually built using asymmetric crypto (as in, based on number theoretic assumptions and the like). That means it's potentially quite slow. Ross Anderson [cam.ac.uk] wrote a paper on a cool alternative [cam.ac.uk] which uses only symmetric primitives to achieve the same result. (In fact, he's able to build a lot of different things by combining symmetric primitives in the right way.)
  • I'm a little reminded of the Judge from Buffy. Pieces scattered around the world. For security. This seems like a better application of the technique.
  • by davidwr (791652) on Monday August 21, 2006 @01:13PM (#15949594) Homepage Journal
    A friend taught me this. The secret in his case was a proprietary industrial process.

    You take the secret and divide it into 3 pieces. You have a team of 3 people to each carry or memorize two of the 3 pieces.

    Amy carries pieces 1 and 2
    Bob carries pieces 2 and 3
    Charlie carries pieces 3 and 1

    If any one of them is compromised by bribery or other means, 1) the information is not lost and 2) the enemy has only an incomplete picture of what is going on.

    This can be extended to more people to achieve greater redundancy or less exposure:

More redundancy: 4 people with 4 pieces, each person knows 3 elements. Any 2 of 4 people needed to put the pieces together.

    Less exposure: 4 people with 4 pieces, each knows 2 elements. Any 3 of 4 people needed to put the pieces together. Loss of 1 person exposes 1/2 of the total secret.

    There's no reason to stop with 4 people and 4 pieces.

    Think of this as RAID for human-knowledge.
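The hand-rolled scheme above is easy to sanity-check mechanically. A small Python sketch, where the rotating-subset assignment is my own generalization of the Amy/Bob/Charlie layout rather than anything from the comment itself:

```python
# Check the "human RAID" scheme: n people, each carrying a rotating
# window of pieces_per_person consecutive pieces (wrapping around).
from itertools import combinations

def assign(n, pieces_per_person):
    # Person i carries pieces i, i+1, ..., i+pieces_per_person-1 (mod n).
    return {i: {(i + j) % n for j in range(pieces_per_person)}
            for i in range(n)}

def recoverable(holders, assignment, n):
    # Can this group of people pool all n pieces?
    return set().union(*(assignment[h] for h in holders)) == set(range(n))

# Amy/Bob/Charlie: 3 people, 2 of 3 pieces each.
a = assign(3, 2)
# Any 2 of the 3 can reconstruct the secret...
assert all(recoverable(g, a, 3) for g in combinations(range(3), 2))
# ...and no single person holds everything.
assert not any(recoverable([p], a, 3) for p in range(3))
```

The same check confirms the two 4-person variants: with 3 pieces each, any 2 of 4 people suffice; with 2 pieces each, it takes any 3 of 4.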
  • This concept just adds another layer Of complexity, but also adds Another layer of inefficiency and an increased risk of loss of data..
    Juu b33n h4x3d bY el1t35, f001

    ~Teh Def1c4t05S~
  • for...our...overloards
  • by mengland (78217) on Monday August 21, 2006 @04:09PM (#15950913)
    (Fyi: this link to the New York Times article [nytimes.com] bypasses any need to login/register with the nytimes.com website.)

    I'm the Cleversafe Dispersed Storage software-development project leader. I work with Chris Gladwin (mentioned in the New York Times article) as a fellow manager at Cleversafe.

    I offer some comments below to help outline some of the unique aspects of the Cleversafe technology.

Encryption is not dispersal. Cleversafe provides both, and then some. The Cleversafe Dispersed Storage software disperses any "datasource" (typically a file) into several slices (our current software uses 11 slices in an 11-lose-any-5 scheme; future versions may use additional schemes with "wider" slice sets). Additionally, our software also encrypts, compresses, scrambles, and signs the datasource content, but we are not trying to reinvent the wheel: other software technologies exist to do these things, and we leverage them extensively.

We found that a bigger challenge than creating or managing dispersal algorithms was building the entire storage system [cleversafe.org], regardless of the dispersal algorithm used (and we designed the system to be dispersal-scheme agnostic). The meta-data management system and many other things took us far longer to implement than the Cleversafe IDA [cleversafe.org]. It's not hard to use Reed-Solomon or some other algorithm on a single file or a small set of files and disperse the slices by hand onto several different systems (or use variants of this, like the 3-piece secret story with Amy, Bob, and Charlie mentioned above). It's much harder to manage this across an entire file system (with hundreds of thousands of files, or many more depending on the file system), for an unlimited number of file systems from all the various users, stored on a heterogeneous set of an unlimited number of geographically-dispersed, commodity-storage nodes, in a completely decentralized way with no dependence on the original source of the data (e.g., you could sledgehammer your laptop and not lose any data that's stored on our grid/storage service).

Further, dispersed-storage systems do not require replication. (Dispersal systems may replicate data for performance purposes, if at all, depending on the application/configuration/installation/context.) If a system replicates entire copies of the data (be they encrypted or not) then, by (our) definition, it is not a dispersed-storage system. So a continual question I have when evaluating other systems: do they replicate the data in whole or not? Most systems replicate.
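Back-of-the-envelope, the storage-efficiency argument for dispersal over replication is simple arithmetic. The numbers below are my own illustration of the 6-of-11 scheme described in this thread, not Cleversafe's published figures:

```python
# Storage overhead: m-of-n dispersal vs whole-copy replication.

def dispersal_overhead(m, n):
    # Each slice holds roughly 1/m of the data; n slices are stored.
    return n / m

def replication_overhead(copies):
    return float(copies)

# A 6-of-11 grid stores about 1.83x the original data yet tolerates
# the loss of any 5 nodes. Tolerating 5 losses with whole copies
# would require 6 full replicas, i.e. 6x storage.
assert dispersal_overhead(6, 11) < replication_overhead(6)
assert abs(dispersal_overhead(6, 11) - 11 / 6) < 1e-9
```

That gap between ~1.83x and 6x for the same failure tolerance is the core efficiency claim of m-of-n dispersal.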

Cleversafe is not the first to present a dispersal system, but we like to think we are the first to make it broadly usable by people and inter-operable with other systems. See our cmdline client [cleversafe.org] (which will soon have continuous-backup and XML-programmable policy management), our Dispersed Storage API [cleversafe.org], our dsgfs file system [cleversafe.org], a soon-to-be released GUI client, and future "connectors" (what we call the applications that leverage our technology) to come, all available at http://www.cleversafe.org [cleversafe.org].

    A side note: "revision management" is built into the Cleversafe system to address what I call "soft" failures (accidental deletes, application failures, etc) vs. "hard" failures (hard disk crashes) as well as archival requirements.

    I believe that the concept of "dispersed storage" will eventually change how the world thinks about storage systems--regardless of whether or not these are Cleversafe-based systems (I think Cleversafe presents the best such system, but I of course am biased).
  • There is prior art:

    "Blondie, what did he tell you? I know which graveyard the money is buried in. Don't die on me Blondie. What did he tell you?"

    "A name... a name on a gravestone..."

    "Ah! We are partners! I know the graveyard, you know the name! Partners just like good old times, eh?!"

  • (Didn't bother to RTFA but..)
    Why not just get K random sequences and XOR them together to get a 1 time pad. Then encrypt the data and store it in public view. You will need ALL the pads to unlock it.
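The XOR construction suggested here is the degenerate all-or-nothing case of secret sharing: k pieces, all k required. A minimal sketch (the sample plaintext and parameters are arbitrary):

```python
# All-or-nothing XOR split: generate k-1 uniformly random pads and XOR
# them into the data; the data plus pads form k pieces, and ALL k are
# needed -- any k-1 of them look like uniform random noise.
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_xor(data, k):
    pads = [os.urandom(len(data)) for _ in range(k - 1)]
    last = reduce(xor_bytes, pads, data)   # data ^ pad_0 ^ ... ^ pad_{k-2}
    return pads + [last]

def join_xor(pieces):
    return reduce(xor_bytes, pieces)       # XOR of all k pieces = data

pieces = split_xor(b"attack at dawn", 4)
assert join_xor(pieces) == b"attack at dawn"
```

Unlike a threshold scheme, this tolerates no losses: misplace any single piece and the data is gone, which is exactly the trade-off the 6-of-11 designs avoid.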
  • So this is why there were so many of those "scattered items" type quests in console RPGs.

    Elder: "We need the sacred information of Pr0n!"
    Elder: "Unfortunately, the dastardly Cleversafe has scattered this information into 12 parts."
    Elder: "You must go to each of the 12 ancient ruins and collect the sacred information for us!"
    Player: "This quest sucks."

    Makes sense now....
  • by WoTG (610710) on Monday August 21, 2006 @10:31PM (#15953085) Homepage Journal
I've often wondered when someone would get around to perfecting a dispersed backup system for LANs. With the average workstation toting 100GB drives, and the average use of a handful of GBs, there seems to be a surplus of cheap disk space on the LAN... at least compared to backup tapes or other media. Though, in hindsight, I guess a single fire or building disaster would still be catastrophic...
