New 25x Data Compression?
modapi writes "StorageMojo is reporting that a company at Storage Networking World in San Diego has made a startling claim of 25x data compression for digital data storage. A combination of de-duplication and calculating and storing only the changes between similar byte streams is apparently the key. Imagine storing a terabyte of data on a single disk, and it all runs on Linux." Obviously nothing concrete or released yet, so take it with the requisite grain of salt.
What kind of data? (Score:4, Insightful)
Re:What kind of data? (Score:5, Insightful)
it can compress anything: email, databases, archives, MP3s, encrypted data, or whatever weird data format your favorite program uses.
In other words, they're full of crap.
Re:What kind of data? (Score:4, Insightful)
Re:What kind of data? (Score:5, Informative)
Re:What kind of data? (Score:5, Funny)
I can compress anything you give me by a factor of at least 1 (inclusive of my own output).
"-1 pedantic", I know.
-nB
Re:What kind of data? (Score:3, Insightful)
I can compress anything you give me by a factor of at least 1 (inclusive of my own output).
"-1 pedantic", I know.
It would be more pedantic if it were accurate...
Re:What kind of data? (Score:4, Funny)
A single byte that is all other data compressed together, and from which all knowledge flows! The universal black hole of data!
Don't tell me
Actually, I once tried that. (Score:5, Interesting)
So then I tried it with LZW compression, and it still eventually grew in size.
The neat thing about doing this, though, is that it taught me something about the mathematical basis for entropy. You see, I couldn't believe that I was getting diminishing returns, so I wrote some algorithms to output the histogram curves.
What I saw was that the best Huffman compression came when the histogram was farthest from what I'll call a "perfect bell curve". I don't know if that is the same curve or not, but it looks a lot like one half of a perfect bell, or maybe like the radiation output of a blackbody in physics.
Anyhow, as I successively compressed the data, the histogram moved towards a tighter bell curve in general, and always towards that perfect bell in particular (so long as the data would compress, that is). I didn't do the calculation, but it would be interesting to work out the closest bell curve, take the deviation of the histogram from it, and correlate that with the achievable compression.
So then I thought "well, I'll compress only a portion of the data, the part that is compressible". But any typical portion of the data still seemed to follow that pesky bell curve. So then I thought to intercept the data, and see if I could visually spot any patterns.
Indeed, I could. Wow -- look at that string of zeros here; and that repeated series 1001001001001, *four times*, there. Surely I could get compression out of that. Funny thing, though. Every time I tried, I could get compression for that data set, but then lousy compression for anything else. When I tried to generalize the compression to include every possibility, I again couldn't get compression. In other words, truly entropic data does have repetition. It does have some item that shows up more commonly than others. It does have patterns. But the patterns are no more than what you would expect, (or actually, if you want to be correct but confusing, only an expectable percentage of the patterns are more than what you would expect, by any given amount.) And when you include all the patterns of length n, including patterns of length n=1, then there just isn't any more entropy possible for the data.
And just as it takes an increase in entropy to drive a heat engine (2nd law of thermo), it also takes an increase in data entropy to get compression.
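If you want to reproduce the experiment, something along these lines (a rough Python sketch; the file name is made up) measures the same thing the histograms were showing, as bits of entropy per byte:

    import math, zlib
    from collections import Counter

    def bits_per_byte(data: bytes) -> float:
        # Shannon entropy of the byte histogram: the lower bound for any
        # coder (like Huffman) that treats each byte independently.
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    raw = open("sample.bin", "rb").read()          # hypothetical test file
    packed = zlib.compress(raw, 9)
    print("raw:    %.2f bits/byte" % bits_per_byte(raw))
    print("packed: %.2f bits/byte" % bits_per_byte(packed))
    # The packed stream's histogram flattens out toward 8 bits/byte, which is
    # why feeding it through the compressor again gains little or nothing.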
You geek! (Score:3, Funny)
Re:Compression hoax number 3 (Score:3, Insightful)
Re:What kind of data? (Score:5, Funny)
Re:What kind of data? (Score:3, Funny)
Re:What kind of data? (Score:5, Funny)
But the Slashdot post says that it all runs on Linux. And knowing the infinite power of Linux, I believe them.
In addition to being the best OS in the world, Linux is also the most secure, does everything better than every other OS, and if given the right developers it is the ONLY OS that could do something as impressive as compress data past the limits of possibility.
I'm sure with the right developer, Linux could also be used to harness zero point energy, create wormholes for travel in your basement, and possibly cure most diseases...
Well that's not surprising. (Score:5, Informative)
Systems like this bank on the fact that most enterprise backup systems (that is... Veritas) can't tell when a file has changed slightly between backups. They use a coarser-grained whole-file approach (which is very reliable, though, and already stores only one copy of each file). But people who know about the magic of rsync understand the speedups that can be obtained by using rolling checksums and other tricks to compute binary deltas of large files and transmit only the changes.
Given a large enough set of backups and enough time, the potential size savings is enormous.
Veritas should really be implementing this themselves, though.
And I have a feeling this is what's behind the 25x claims of the article. The key is the mention of "enterprise"... large data sets... lots of potential redundancy to exploit.
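To make the rsync point concrete, here's a toy version of the rolling-checksum idea (not rsync's actual code, just an Adler-32-style weak sum updated in O(1) as the window slides):

    def rolling_checksums(data: bytes, block: int = 4096):
        # Weak checksum for every block-sized window, updated in O(1) per step
        # instead of being recomputed from scratch.
        if len(data) < block:
            return
        a = sum(data[:block])
        b = sum((block - i) * data[i] for i in range(block))
        yield 0, (a, b)
        for i in range(1, len(data) - block + 1):
            old, new = data[i - 1], data[i + block - 1]
            a = a - old + new
            b = b - block * old + a
            yield i, (a, b)

A real implementation pairs this weak sum with a strong hash to confirm a block match before sending a "reuse block N" reference instead of the literal bytes.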
Re:What kind of data? (Score:2, Insightful)
/dev/zero ? (Score:5, Funny)
gives about three kilobytes for a terabyte of data.
Re:/dev/zero ? (Score:2)
Re:What kind of data? (Score:5, Insightful)
20 email accounts subscribed to the same mailing list? Store the bodies of those e-mails only once, and you save a big chunk of disk space. A bunch of people downloaded the same MP3 file? We only need one copy in the archive. As long as there are multiple copies of the same data, it can compress any type of data.
The difference here is that they are taking advantage of the redundancy of files across an entire filesystem (and a HUGE one), rather than the redundancies within an individual file. (I would assume they also do the latter type of compression with a conventional algorithm.) 25x compression seems extreme, but I am sure they can achieve some extra compression here.
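The whole-file flavour of that is only a few lines; a rough sketch (the directory is hypothetical, and a real product would store pointers rather than just report the savings):

    import hashlib, os

    def dedup_report(root):
        seen = {}                # content digest -> first path that had it
        recoverable = 0
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest in seen:
                    recoverable += os.path.getsize(path)   # duplicate: could be a pointer
                else:
                    seen[digest] = path
        print("bytes recoverable by single-instancing:", recoverable)

    dedup_report("/var/mail")    # hypothetical spool full of identical list traffic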
Re:What kind of data? (Score:2)
What is old is new again.
Linux has something related.. RZIP (Score:3, Interesting)
Although it's not for every file, sometimes this can be a huge win. In my case, backing up 60 versions of a 700 kB XML file, I get 500:1 compression
Re:What kind of data? (Score:5, Informative)
So they're not exactly lying about the compression ratios, they're just redefining the term to describe compression not of data-sets but of data-sets-over-time.
Breaking news! (Score:3, Insightful)
Seriously though. Gzip can squeeze a file down by 98%... if your data is mostly redundant. The chance that they're doing this on the random data they claim in the article is nil.
Re:Breaking news! (Score:2)
dd if=/dev/urandom of=file bs=10MB count=1 (Score:2)
Re:Breaking news! (Score:5, Funny)
Re:Breaking news! (Score:3, Funny)
Just like the "our intelligence wasn't wrong about Saddam having WMDs, the satellite images just come to us as lossy JPEGs"
(the point of this post lost due to compression)
Re:Breaking news! (Score:2)
Re:Breaking news! (Score:5, Interesting)
Lossy compression and compression of particular data sets do not have to obey this. With lossy compression you can compress down as far as you can tolerate.
Coding particular sets gets some extra compression by coding some of the data in the compress/decompress utility. For example if all your files have a 1MB standard header and 1KB of data, you can omit the 1MB of header because it's always there, and just send the 1KB of data! Truly amazing compression! Of course it only works under those conditions.
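As a toy illustration of that "bake the shared part into the codec" trick (the file names and the 1 MB header are invented):

    # Both ends agree on the fixed 1 MB header out of band, so it is never
    # stored or transmitted per file.
    SHARED_HEADER = open("standard_header.bin", "rb").read()

    def encode(path):
        data = open(path, "rb").read()
        assert data.startswith(SHARED_HEADER), "only works for conforming files"
        return data[len(SHARED_HEADER):]      # ship just the ~1 KB of real payload

    def decode(payload):
        return SHARED_HEADER + payload        # decoder glues the header back on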
Re:Breaking news! (Score:3, Insightful)
Re:Breaking news! (Score:3, Interesting)
Despite the obvious answer (he's simply wrong), 7zip is somewhat "cheating" in this 3-way comparison, as it uses a much, much, much larger dictionary (memory). You can set it to use hundreds of MBs of RAM, whereas gzip's window is only 32 KB and bzip2's blocks top out at 900 KB.
Off-topic Rant:
I was actually quite impressed with 7zip and its LZMA/PPMd compression methods when I first saw it compressing better than bzip2. However, once the novelty
Re:Breaking news! (Score:3, Insightful)
*sniff* (Score:5, Insightful)
I smell
Re:*sniff* (Score:3)
Limited application (Score:5, Funny)
Re:Limited application (Score:5, Funny)
Re:Limited application (Score:3, Funny)
Re:Limited application (Score:3, Funny)
Re:Limited application (Score:5, Funny)
Re:Limited application (Score:2)
Re:Limited application (Score:4, Funny)
Re:Limited application (Score:4, Funny)
hate to break it to you this way
-nB
Re:Limited application (Score:5, Funny)
Re:Limited application (Score:2)
Re:Limited application (Score:2)
Re:Limited application (Score:3, Insightful)
Heard this before (Score:5, Interesting)
Re:Heard this before (Score:2)
Re:Heard this before (Score:2)
Re:Heard this before (Score:2)
Re:Heard this before (Score:4, Interesting)
Back in the day, I figured out what was going on when I took a disk to another machine and couldn't restore the file. I then tested the disk in the machine I had made the archive on, and it worked fine. It was a good hoax. We all got a good laugh out of it.
Re:Heard this before - OWS (Score:2, Informative)
The proof... (Score:5, Funny)
Re:The proof... (Score:4, Funny)
*
MOD PARENT DOWN (Score:5, Funny)
Grain of salt (Score:2)
Or at least with 1/25th of a grain of salt.
Re:Grain of salt (Score:2)
right. sure. (Score:3, Interesting)
Number of them which were anything other than complete bullshit: 0
I'm not holding my breath.
This post is sooo full of BS (Score:2)
You can do better than that. (Score:2)
Without putting much thought into it, I can even do that. Two gigs of straight 0's with a real-world algorithm pretty easily compresses down to 12 bytes, far less than the kilobyte you quote. You could store it as just: 2000000000x0
Use an abbreviation for 2 billion or other byte-saving tricks and you could compress it down even more.
I suspect such smoke and mirrors is something similar to
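For anyone playing along at home, the generic trick being described is plain run-length encoding; a quick sketch (not the vendor's algorithm):

    def rle_encode(data: bytes):
        # Collapse runs of identical bytes into (count, value) pairs.
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            runs.append((j - i, data[i]))
            i = j
        return runs

    print(rle_encode(b"\x00" * 1024))   # -> [(1024, 0)]
    # Two gigabytes of zeros collapses the same way: one pair, a handful of bytes.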
Re:You can do better than that. (Score:2)
Currently.. (Score:2)
Incomplete Article Summary (Score:5, Funny)
StorageMojo is reporting that a company named Practical Nano Cold Fusion Duke Nukem Forever at Storage Networking World in San Diego has made a startling claim of 25x data compression for digital...
Dubious (Score:5, Insightful)
Re:Dubious (Score:2)
Re:Dubious (Score:2)
The Burrows Wheeler Transform is very cool indeed. Brian Ewins used it to make the PMD duplicate code detector [sourceforge.net] much much faster.
sounds like a O(n^n^n) problem. (Score:5, Interesting)
Re:sounds like a O(n^n^n) problem. (Score:2)
If your logs are on the same partition (let alone _disk_) as your database files, you deserve this kind of fate.
Shame on you, ScuttleMonkey! (Score:4, Funny)
from the make-sure-to-give-it-to-more-than-just-the-corporate-monkies dept.
You would think that an editor called Scuttle Monkey would know that the correct plural of "Monkey" is "Monkeys", not "Monkies".
"Monkies" would be the plural of "Monkie", which I guess is what you'd call a baby Monk Seal [wheelock.edu], or if you knew him really well, a resident of a Monastery [wikipedia.org]. "Hey, Monkie, nice robe!"
Of course, if you were talking to Michael Nesmith [wikipedia.org], the singular form would be "Monkee". But that's neither here nor there.
Re:Shame on you, ScuttleMonkey! (Score:2)
No, really, it's true! (Score:2)
A grain of salt? (Score:2)
Actually, I'd say take the news of this "breakthrough" with a Salt Lick. [wikipedia.org]
I hope it's true, but I'm not holding my breath.
Re:A grain of salt? (Score:2)
Calgary / Canterbury corpus? (Score:4, Interesting)
Sad truths about data compression. (Score:5, Informative)
2. Achievable compression depends on the nature of the input material. Big files (music, movies) these days are already compressed by their respective codecs, so they compress really badly.
3. While there are algorithms that, on average, compress better than others, usually this is paid for by running slower, often much, much slower.
Mmmmmmh, salt.
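Point 2 is easy to check at home with Python's built-in zlib (the file name is hypothetical):

    import zlib

    original = open("song.mp3", "rb").read()   # already compressed by its codec
    once = zlib.compress(original, 9)
    twice = zlib.compress(once, 9)
    print(len(original), len(once), len(twice))
    # Typically len(once) is barely below len(original) (or even above it),
    # and len(twice) is larger than len(once): high-entropy input doesn't shrink.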
OSHI! (Score:2)
Maybe you want to take a gander at RLE [wikipedia.org]
25x compression for something repeated 25 times (Score:2, Insightful)
Where have we heard this one before? (Score:4, Insightful)
A cow-orker asked if it could be used on its own output.
Re:Where have we heard this one before? (Score:2)
Yeah, it's called rm, isn't it? You can even use the flags '-r' for recursive (compress the compression for even more savings) and '-f' for flatten (makes the result occupy even less space than before). Run rm -rf from the root directory and just watch how much disk space frees up. Amazing!
Re:Where have we heard this one before? (Score:2)
rm -f
Re:Where have we heard this one before? (Score:2)
A cow-orker asked if it could be used on its own output.
Answer: Sure! But decompressing the data is still under development.
I've always imagined this conversation (Score:5, Funny)
Marketing: How much in the best conceivable case?
Developers: Oh, I dunno, maybe 25x.
Marketing: 25x? Is that good?
Developers: Yeah, I suppose, but the cool stuff is...
Marketing: Wow! 25x! That's a really big number!
Developers: Actually, please don't quote me on that. They'll make fun of me on Slashdot if you do. Promise me.
Marketing: We promise.
Developers: Thanks. Now, let me show you where the good stuff is...
Marketing (on phone): Larry? It's me. How big can you print me up a poster that says "25x"?
Re:I've always imagined this conversation (Score:3)
Comedian Bill Hicks had the most insightful proposal for marketing types:
"By the way if anyone here is in advertising or marketing... kill yourself. No, no, no it's just a little thought. I'm just trying to plant seeds. Maybe one day, they'll take root - I don't know. You try, you do what you can. Kill yourself. Seriously though, if you are, do. Aaah, no really, there's no
damn people! (Score:2)
Of course you can do this. Look at datadomain.com.
They expect 20-80x compression because they're marketing themselves as backup to disk (doing repetitive full backups). You get the same patterns over and over again.
And whoever posted the RLE Wikipedia article, thank you for understanding the solution.
And no, not everything is going to compress 25x, but everything will compress some. There are repeated bitstreams in everything. A 64-bit string has a finite number of patterns. I don't know how small th
Re:damn people! (Score:2)
Hmm, I don't like the thought of all my backups utilizing a single copy of a pattern that happens a million times. Imagine: you have 30 days of backups, and a single pattern occurs 25,000 times across all 30 backups. You get block errors where that single pattern exists on the disk, thereby destroying all 30 backups. Now, I can understand ke
What's that smell in the air? Oh yeah, Bullshit. (Score:2)
It can compress anything!1111 Even already-compressed MP3s and encrypted data, both of which have a high degree of entropy and are essentially incompressible!
Magical compression for everyone!!
This definitely works (Score:5, Funny)
Great job Slashdot... (Score:2)
Visit the Diligent website and learn.... (Score:5, Informative)
Re:Visit the Diligent website and learn.... (Score:3, Insightful)
That blog entry smells artificial, though. Very calculated. Right about here, I become wary:
"The way Diligent achie
Results of Search in 1976-present db for: (Score:2)
TFA (Score:4, Insightful)
To those who're wondering... (Score:3, Insightful)
Lossless compression is nothing more than an algorithmic lookup table. It's a substitution cipher like what you find in famous quote puzzles.
Take two different messages. Compress each. When you decompress them, you have to get two different messages back, right? So you need two different messages in compressed form. If your compressed message uses the same symbolic representation as the uncompressed message--and, since we're talking ones and zeros here with computers, that's exactly the case--then it should quickly be apparent that for any given message length there are only so many possible messages, and you need exactly that many distinct compressed forms to be able to re-create any possible one of them.
Compression is handy because we tend to restrict ourselves to a tiny subset of the possible number of messages. If you have a huge library but only ever touch a small handful of books, you only need to carry around the first drawer of the first card cabinet. You can even pretend that the other umpteen hundred drawers don't even exist.
It's the same with text. You only need six bits to store most of the frequently-used characters in text, but we sometimes use more than just the standard characters, so they get written to disk using eight bits each. English doesn't even use every permutation of two-letter words, let alone twenty-letter ones, so there's a lot of wasted space there. You only need about eighteen bits to store enough positions for every word in the dictionary. A good compression algorithm for text will make that kind of a look-up table optimized for written English at the expense of other kinds of data. ``The'' would be in the first drawer of the cabinet, but ``uyazxavzfnnzranghrrt'' wouldn't be listed at all. If you actually wrote ``uyazxavzfnnzranghrrt'' in your document, the compression algorithm would fall back to storing it in its uncompressed form.
Also, don't overlook the overhead of the data of the algorithm itself. If you've got a program that could compress a 100 Mbyte file down to 1 Mbyte...but the compression software itself took several gigabytes of space, that ain't gonna do you much good. It's sometimes helpful to think of it in terms of the smallest self-contained program that could create the desired output. An infinite number of threes is easy; just divide 1 by three. Pi is a bit more complex, but only just. The complete works of Shakespeare is going to have a lot more overhead for a pretty short message. And ``uyazxavzfnnzranghrrt'' might even have so much overhead for such a short message that ``compression'' just makes it bigger.
Cheers,
b&
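The counting argument above can be checked mechanically for small sizes (a brute-force sketch, nothing more):

    # Pigeonhole check: there are 2**n distinct messages of exactly n bits,
    # but only 2**n - 1 messages of length 0..n-1 bits, so no lossless scheme
    # can map every n-bit input to something strictly shorter.
    def shorter_messages(n: int) -> int:
        return sum(2 ** k for k in range(n))   # equals 2**n - 1

    for n in range(1, 9):
        print(n, 2 ** n, shorter_messages(n))  # inputs always outnumber shorter outputs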
Reminds me of "fractal compression." (Score:2)
The description of the process sounds pretty good, but then again, so too does the medi
4000:1 compression (Score:2)
OK, OK, you still have to store a full version of each file (or a traditionally compressed version). So for a single PC it doesn't make sense. But for an enterprise there are thousands of copies of those Windows OS files, tens or hundreds of those Powerpoint presentations, scatter-gun emails, etc - so why not just store them j
For Christ's sake, Slashdot editors (Score:2)
Lossless and Reliable? (Score:2)
The best current compression algorithms for English text come close to 10:1 lossless compression, so there is hope that their system could do that well.
Even simple run-length encoding will manage spectacular compression ratios well over 100:1 on images that are diag
Might work for typical back-up (Score:3, Informative)
See http://en.wikipedia.org/wiki/Venti [wikipedia.org] for similar ideas in a system that easily achieves 25x compression for typical archival storage. When a file has been changed, only those 512 kbyte blocks that are really new are saved; other blocks are just mapped by their SHA1 hashes to existing blocks. So files with small changes, very similar files and files sharing common parts will all compress very nicely. In a multi-user system the files of different users also tend to have lots of similar parts: the same emails, the same office documents with perhaps minor changes, the same reference material / tools / libraries as personal copies, etc.
My guess is TFA refers to a re-invention of this wheel, most likely in an inferior way.
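A stripped-down sketch of that Venti-style scheme (block size taken from the description above, everything else invented for illustration): split files into fixed-size blocks, key each block by its hash, and store each distinct block exactly once.

    import hashlib

    BLOCK = 512 * 1024          # 512 kB blocks, as described above
    store = {}                  # digest -> block bytes (stand-in for the archive)

    def archive(path):
        # Return the file as a list of block digests, writing only unseen blocks.
        recipe = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK)
                if not block:
                    break
                digest = hashlib.sha1(block).hexdigest()
                store.setdefault(digest, block)   # duplicate blocks cost nothing extra
                recipe.append(digest)
        return recipe

    def restore(recipe):
        return b"".join(store[d] for d in recipe)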
Entirely possible (Score:3, Informative)
The idea is based on "de-duplication" of data and is only really practical for backups (where most data from backup to backup is identical) or central repositories of data for a large organization that has multiple similar data sets, for example, many installations of Windows that are often similar.
In my experience, 25x is a bold claim for general data. I've seen small-scale tests that showed 30x compression over backup sets, but those implementations had performance issues.
From the description in their white paper, despite their claims, it appears they are by definition performing some kind of hashing (i.e. mapping a space onto a smaller space).
it's a CVS!! (Score:4, Informative)
Basically it's a CVS: if you're backing up multiple computers or user directories, you're going to see tons of repeated files; heck, they'll even have the same names. Saving the diffs is a good idea. And not at all difficult to duplicate.
For instance, what if you were doing backup for a team of animators? Their files are HUGE, but 90% of the frames will be identical between the individual systems (indeed, the frames will likely be very similar to one another as well). You could get far more than 25x compression that way. The big downside of this idea is the memory & CPU vs. speed trade-off. You can't use this kind of system to back up to a tape or DVD system; it needs random-access media.
You could probably get nearly the same results by hacking rsync and diffing identical file names in different directories. Possible bonus for diffing files of similar file type.
It's a clever idea, not a radical new technology.
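For text files, the "save the diffs" part of this is a few lines with the standard library (file names are made up; binary formats would need an rsync/xdelta-style binary diff instead):

    import difflib

    old = open("scene_v1.xml").read().splitlines(keepends=True)
    new = open("scene_v2.xml").read().splitlines(keepends=True)

    delta = difflib.unified_diff(old, new, "scene_v1.xml", "scene_v2.xml")
    open("scene.delta", "w").writelines(delta)   # usually far smaller than the full file
    # Restoring v2 means keeping v1 once and replaying the delta (e.g. with patch(1)).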
Re:100X - 1000X (Score:4, Informative)