ZeoSync Makes Claim of Compression Breakthrough

dsb42 writes: "Reuters is reporting that ZeoSync has announced a breakthrough in data compression that allows for 100:1 lossless compression of random data. If this is true, our bandwidth problems just got a lot smaller (or our streaming video just became a lot clearer)..." This story has been submitted many times due to its astounding claims - ZeoSync explicitly claims to have superseded Claude Shannon's work. The "technical description" on their website is less than impressive. I think the odds of this being true are slim to none, but here you go, math majors and EEs - something to liven up your drab, dull existence today. Update: 01/08 13:18 GMT by M: I should include a link to their press release.
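For the math majors and EEs, the core objection is a counting argument. Below is a minimal Python sketch (ours, not from the article or ZeoSync) of the pigeonhole bound: no lossless compressor can shrink every n-bit input, never mind shrink every random input 100:1.

# Pigeonhole sketch: a lossless compressor that shortens every n-bit
# input cannot exist, because there are strictly fewer shorter outputs
# than inputs that would have to map onto them.

def count_shorter_strings(n: int) -> int:
    """Number of distinct bit strings of length 0 through n - 1."""
    return 2 ** n - 1  # 1 + 2 + 4 + ... + 2^(n-1)

n = 100
inputs = 2 ** n                     # all n-bit strings
outputs = count_shorter_strings(n)  # all strictly shorter strings

print(f"{n}-bit inputs:   {inputs}")
print(f"shorter outputs: {outputs}")
assert outputs < inputs  # two inputs must share an output: not lossless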
  • by Atzanteol ( 99067 ) on Tuesday January 08, 2002 @09:14AM (#2803130) Homepage
    Maybe they just needed more bandwidth for their terrible site?
  • by Anonymous Coward on Tuesday January 08, 2002 @09:15AM (#2803134)
The odds against a compression claim turning out to be true are always identical to the compression ratio claimed.
  • Maybe they'll be able to compress their debt to $1 when they go under.
  • by neo ( 4625 ) on Tuesday January 08, 2002 @09:19AM (#2803150)
    ZeoSync said its scientific team had succeeded on a small scale in compressing random information sequences in such a way as to allow the same data to be compressed more than 100 times over -- with no data loss. That would be at least an order of magnitude beyond current known algorithms for compacting data.

ZeoSync announced today that the "random data" they were referencing is a string of all zeros. Technically this could be produced randomly, and our algorithm reduces it to just a couple of characters. A 100-times compression!!
  • by Anonymous Coward on Tuesday January 08, 2002 @09:20AM (#2803154)
    Looks like Wired has a start to their top 10 list for 2002.
  • by Sobrique ( 543255 ) on Tuesday January 08, 2002 @09:21AM (#2803158) Homepage
    100 to 1? Bah, that's only 99%.
    The _real_ trick is getting 100% compression. It's actually really easy, there's a module built in to do it on your average unix.
Simply run all your backups to the New Universal Logical Loader and perfect compression is achieved. The device driver is, of course, loaded as /dev/null.
  • by Rentar ( 168939 ) on Tuesday January 08, 2002 @09:26AM (#2803185)
I'm going to agree with you here. If there's no pattern in the data, how can you find one and compress it? The reason things like gzip work well on C files (for instance) is that C code is far from random. How many times do you use void or int in a C file? A lot :)

So a Perl program can't be compressed?
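    A quick sketch of the point above, using Python's standard gzip module (illustrative only, not anyone's benchmark): repetitive C-like text compresses enormously, while random bytes do not compress at all.

    import gzip
    import os

    structured = b"int main(void) { return 0; }\n" * 1000  # C-like, repetitive
    random_data = os.urandom(len(structured))              # no pattern to find

    for name, data in (("structured", structured), ("random", random_data)):
        packed = gzip.compress(data)
        print(f"{name}: {len(data)} -> {len(packed)} bytes")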

  • Blah! (Score:2, Funny)

    by jsse ( 254124 ) on Tuesday January 08, 2002 @09:32AM (#2803233) Homepage Journal
We already have lzip [slashdot.org] to compress files down to 0% of their original size. ZeoSync isn't keeping up with the latest technology on /., it seems.
  • by Anonymous Coward on Tuesday January 08, 2002 @09:33AM (#2803239)
Screw ZeoSync, I've built a compression algorithm that is 1000:1 and completely lossless. I have yet to demonstrate it in public, but please give me venture capital anyway. Thank you.
  • by Mr Thinly Sliced ( 73041 ) on Tuesday January 08, 2002 @09:34AM (#2803248) Journal
Not only that, but I just hacked their site and downloaded the entire source tree. Here it is:

    01101011

Pop that baby into an executable shell script. It's a self-extracting
./configure
make
make install

    Shh. Don't tell anyone.

    Mr Thinly Sliced
  • by the bluebrain ( 443451 ) on Tuesday January 08, 2002 @09:35AM (#2803251)
    ... by compressing some VC's bank account, by a factor of greater than 100!

    "It was just data, you know," the sobbing wretch was reportedly told, "just ones and zeros. And hey - you can look at it as a proof of principle. We'll have the general application out ... real soon now, real soon".
  • Egads... (Score:5, Funny)

    by RareHeintz ( 244414 ) on Tuesday January 08, 2002 @09:36AM (#2803261) Homepage Journal
    ZeoSync said its scientific team had succeeded on a small scale...

    The company's claims, which are yet to be demonstrated in any public forum...

    ...if ZeoSync's formulae succeed in scaling up...

    Call the editors at Wired... I think we have an early nominee for the 2k2 vaporware list.

    ZeoSync expects to overcome the existing temporal restraints of its technology

    Ah... So even if it's not outright bullshit, it's too slow to use?

    "Either this research is the next 'Cold Fusion' scam that dies away or it's the foundation for a Nobel Prize," said David Hill...

    Somehow I think this is going to turn out more Pons-and-Fleischmann than Watson-and-Crick. Almost anytime there's a press release with such startling claims but no peer review or public demonstration, someone has forgotten to stir the jar.

When they become laughingstocks, and their careers are forever wrecked, I hope they realize they deserve it. And I hope their investors sue them.

    I should really post after I've had my coffee... I sound mean...

    OK,
    - B

  • by sprag ( 38460 ) on Tuesday January 08, 2002 @09:42AM (#2803303)
    A thought just occurred to me: If you can do 100:1 compression and compress something down to, say, 2 bytes, what would 'ab' expand to? My thought is "ZeoSync Rulz, Suckas"
  • by harlows_monkeys ( 106428 ) on Tuesday January 08, 2002 @09:44AM (#2803313) Homepage
    From one of the things on their site: Although currently demonstrating its technology on very small bit strings, ZeoSync expects to overcome the existing temporal restraints of its technology and optimize its algorithms to lead to significant changes in how data is stored and transmitted (emphasis added).

    Using time travel, high compression of arbitrary data is trivial. Simply record the location (in both space and time) of the computer with the data, and the name of the file, and then replace the file with a note saying when and where it existed. To decompress, you just pop back in time and space to before the time of the deletion and copy the file.

  • by friscolr ( 124774 ) on Tuesday January 08, 2002 @09:44AM (#2803316) Homepage
    But this is no joke.

    Please note they claim to be able to compress data 100:1, but do not say they can decompress the resultant data back to the original.

By the way, so can I.
    Give me your data, of any sort, of any size, and I will make it take up zero space.

    Just don't ask for it back.

  • by HalfFlat ( 121672 ) on Tuesday January 08, 2002 @09:47AM (#2803334)
    They're looking for investment money?

Just think of it as an innumeracy tax on venture capitalists.
  • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday January 08, 2002 @09:48AM (#2803350) Journal

    So everything compresses into 1 byte.

    Duh, are you like an idiot or something?

    When you send me a one-byte copy of, say, The Matrix, you also have to tell me how many times it was compressed so I know how many times to run the decompressor!

    So everything compresses to *two* bytes. Maybe even three bytes if something is compressed more than 256 times. That's only required for files whose initial size is more than 100^256, though, so two bytes should do it for most applications.

    Jeez, the quality of math and CS education has really gone down the tubes.

Simply make the bit big enough. Let's say you're using one of those old-fashioned binary computers and want to compress everything to 1/Nth the size. No problem: you simply need a bit with 2^N states. Everything then fits on that single bit.


    (Of course, this DOES create all sorts of other problems, but I'm going to ignore those, because they'd go and spoil things.)

  • by Sobrique ( 543255 ) on Tuesday January 08, 2002 @10:09AM (#2803464) Homepage
Don't bother compressing it, just delete it, and then get an infinite number of monkeys on an infinite number of typewriters to reproduce the original.
  • by pmc ( 40532 ) on Tuesday January 08, 2002 @10:12AM (#2803479) Homepage
    Duh, are you like an idiot or something?

You're the moron, moron. When you get the one-byte compressed file, you run the decompressor once to get the number of additional times to run the decompressor.

    What are they teaching the kids today? Shannon-shmannon nonsense, no doubt. They should be doing useful things, like Marketing and Management Science. There's no point in being able to count if you don't have any money.
  • by Bandman ( 86149 ) <bandman.gmail@com> on Tuesday January 08, 2002 @10:12AM (#2803480) Homepage
I get the idea that this part of the algorithm is perfected by them... it's the decompressor that's giving them fits...

    Step 1: Steal Underpants
    Step 3: Profit!

    We're still working on step 2
  • Re:Egads... (Score:2, Funny)

    by shic ( 309152 ) on Tuesday January 08, 2002 @10:19AM (#2803521)
    > > ZeoSync expects to overcome the existing temporal restraints of its technology
    > Ah... So even if it's not outright bullshit, it's too slow to use?

No, my friend - you are missing the whole point. ZeoSync HAVE succeeded (in a limited sense). You see, in order to achieve implausible compression rates on random data, all you need to do is overcome a few temporal issues - follow this line of thinking...

    1) Each implementation of the compression algorithm will only be applied to (a relatively small finite number of) finite sequences of bits.
    2) Encode exactly these sequences in the compression tool.
    3) Astonishing compression is achieved - only a small ordinal need be stored to represent each compressed result.

    So your data will always be small, but your compression program will grow rather quickly!

    Puzzle solved.
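    A minimal Python sketch of the scheme just described (the corpus here is hypothetical): bake the finite set of expected inputs into the tool and store only a small index. The ratio is astonishing; the catch is that the "compressor" grows with every file it learns, and anything outside the list is rejected outright.

    # The entire "corpus" is encoded in the tool itself.
    KNOWN_INPUTS = [
        b"\x00" * 10_000,
        b"the quick brown fox jumps over the lazy dog",
    ]

    def compress(data: bytes) -> bytes:
        # Raises ValueError for any input the tool wasn't built for.
        return KNOWN_INPUTS.index(data).to_bytes(1, "big")

    def decompress(packed: bytes) -> bytes:
        return KNOWN_INPUTS[packed[0]]

    original = KNOWN_INPUTS[0]
    packed = compress(original)
    print(len(original), "->", len(packed), "byte")  # 10000 -> 1
    assert decompress(packed) == original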
  • by BubbaFett ( 47115 ) on Tuesday January 08, 2002 @10:19AM (#2803523)
    ZeoSync said its scientific team had succeeded on a small scale in compressing random information sequences in such a way as to allow the same data to be compressed more than 100 times over -- with no data loss.

    Ok, say I want to compress "foo" 100 times over:

    bash$ for i in $(seq 1 100); do gzip foo; mv foo.gz foo; done
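    For what it's worth, a Python equivalent of the loop above shows why "compressing 100 times over" stops paying off immediately: after the first pass the data looks random to gzip, so every further pass makes the file slightly bigger.

    import gzip

    data = b"foo bar baz " * 1000
    for i in range(5):
        data = gzip.compress(data)
        print(f"pass {i + 1}: {len(data)} bytes")  # shrinks once, then grows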

  • by Erik Hensema ( 12898 ) on Tuesday January 08, 2002 @10:22AM (#2803539) Homepage
    Perl source is as close to truly random data as possible.
  • Re:Egads... (Score:5, Funny)

    by RareHeintz ( 244414 ) on Tuesday January 08, 2002 @10:24AM (#2803546) Homepage Journal
    Of course! What was I thinking? Why not just use a table lookup of every possible sequence of bytes of any length?

    See you all later - I have some coding to do!

    OK,
    - B

  • by radish ( 98371 ) on Tuesday January 08, 2002 @10:50AM (#2803714) Homepage

    *Reads FAQ* *Blushes*

OK, so I went the "negligible housekeeping" route. Maybe I should get a job at the patent office. ;-)
  • by kpayson ( 221071 ) on Tuesday January 08, 2002 @10:59AM (#2803751)
The pigeonhole principle says that you can't stick more than one pigeon in your hole. In fact, even trying to stick one pigeon in your hole is probably a bad idea.
  • by daniel_howell ( 457947 ) on Tuesday January 08, 2002 @11:30AM (#2803888)
    Maybe they just write all the 1s and 0s *really small*?
  • by kitts ( 545683 ) on Tuesday January 08, 2002 @11:32AM (#2803901) Homepage
    Beautiful flash animation, though. I particularly like the fact that clicking the 'skip intro' button does absolutely nothing -- you get the flash garbage anyway.

Actually, no. What you're seeing is their new compression methodology in action, applied to their website. By clicking on Skip Intro, you're actually hurtled through a registration process at lightning speed and signed up for several of their services, but for security purposes, in order to validate those services, you're redirected to the main page. However, in order to expedite the service, the exact location and time of your click on the Skip Intro button are kept in a data file in your cookies folder (you might not see it there because, you guessed it, it's compressed to a single byte), and when redirected, the cookie is read to get the exact location of your click in the Flash intro so that the intro fast-forwards to the point in time when you clicked, giving the impression of seamless, uninterrupted animation.

Go on, give it a try. Try clicking the Skip Intro button multiple times, and you'll notice that once you click, it'll look like nothing's changing, with no trace in a cookie file of where that spot is. Now THAT'S impressive. And they've got all of your personal information from that registration (which you didn't even know you completed) compressed to a single byte on the server, just waiting to be uncompressed so they can start sending you more information (they just need to work the decompression kinks out).

    Cool, huh? I'm giving them all my money.
  • by delta407 ( 518868 ) <slashdot@nosPAm.lerfjhax.com> on Tuesday January 08, 2002 @11:44AM (#2803967) Homepage
    No, see, it's 100:1 in binary.
  • by Archanagor ( 303653 ) on Tuesday January 08, 2002 @11:58AM (#2804041) Homepage Journal
    You know,

If you just remove the flashy buzzwords, their press release compresses ~100:1.

    Here's the result:

    Bullshit.
  • by Graspee_Leemoor ( 302316 ) on Tuesday January 08, 2002 @11:58AM (#2804045) Homepage Journal
Heheh, I always wanted to write a "gainy compression" routine. It would probably have a special marker in there, like the ASCII string:

    "The next three bytes are compressed!"

    graspee
  • by FlatEarther ( 549227 ) on Tuesday January 08, 2002 @12:41PM (#2804236)
It is possible, despite the many (uninformed) negative comments that have appeared concerning this truly amazing breakthrough in compression technology. I myself, using my own patented compression technology - the Shannon-Transmogrificator (TM) - have managed to compress the entire Reuters article to a mere 4 ASCII characters (!), with essentially no loss in meaning: 'C', 'R', 'A', 'P'. I wonder if anyone can improve on this?
  • by hackerhue ( 182083 ) on Tuesday January 08, 2002 @01:36PM (#2804490) Homepage
    The output from a pseudo-random number generator is usually considered "random enough for practical purposes." So if you define "practically random data" as "data that is random enough for practical purposes," you can compress it by storing the random seed and the string length. ;-)

    I think I can beat their 100:1 compression ratio with this scheme.
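    A minimal Python sketch of the seed trick described above (illustrative only): store the seed and length instead of the bytes, and regenerate them on "decompression". Sixteen bytes stand in for a megabyte, comfortably beating 100:1, provided you only ever compress your own PRNG's output.

    import random

    def generate(seed: int, length: int) -> bytes:
        rng = random.Random(seed)
        return bytes(rng.randrange(256) for _ in range(length))

    def compress(seed: int, length: int) -> bytes:
        # 8 bytes of seed + 8 bytes of length, regardless of the data size.
        return seed.to_bytes(8, "big") + length.to_bytes(8, "big")

    def decompress(packed: bytes) -> bytes:
        seed = int.from_bytes(packed[:8], "big")
        length = int.from_bytes(packed[8:], "big")
        return generate(seed, length)

    packed = compress(seed=42, length=1_000_000)
    assert decompress(packed) == generate(42, 1_000_000)
    print(f"1000000 bytes -> {len(packed)} bytes")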
  • by grytpype ( 53367 ) on Tuesday January 08, 2002 @01:47PM (#2804560) Homepage
I just ran another compression pass on that, and I got:

    BS
  • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday January 08, 2002 @02:05PM (#2804681) Journal

I don't need to encode the number of compressions; every decompression consists of decompressing 256 times.

I think you mean at most 256 times. Suppose I had to perform 10 compressions to get to a single byte. After you had decompressed 10 times, you'd have the data; the next decompression would make some other file 100 times larger than The Matrix. So if you could recognize the correct file when you saw it, I could avoid transmitting the decompression count.

    So, I just have to prepend a string saying "This is it!" before compressing!

    Also, it occurred to me after my previous posting (and to another poster, I saw) that if we can compress to a single byte, why not to a single bit? This is a great advance, which I believe I shall patent quickly before that other poster does, because now I can give you my copy of The Matrix over the phone! I can just tell you if it's a 1 or 0. For that matter, I don't even have to tell you -- you can just try both possibilities!

So my question now is, does the decompressor only produce strings of bits that exist somewhere and were once compressed, or can it produce anything? Can I just think "I want a great term paper..." and then try decompressing both 1 and 0 until I get it (in no more than eight or ten iterations of the decompressor, 'cause I want a paper, not a novel)?

  • by zhensel ( 228891 ) on Tuesday January 08, 2002 @02:28PM (#2804795) Homepage Journal
Quantum theory has everything to do with compression. Inside sources have revealed that this compression scheme works on the uncertainty principles key to quantum physics. You see, any string of 100 bits has a distinct probability of being compressible to a single bit. Of course, this means that this compression scheme will produce bogus results 99.999999% of the time, but think of the wonder of compression realized the other .000001% of the time! Furthermore, the system requirements for their technology are as follows: an x86 PC running Windows XP (to take advantage of DirectX in wickedly rendering the fractals necessary for the compression), a particle accelerator, and a heavy dose of optimism combined with a complete lack of skepticism.
  • by curunir ( 98273 ) on Tuesday January 08, 2002 @02:44PM (#2804929) Homepage Journal
    Therefore, I should easily be able to compress The Matrix into a single byte with 256 passes.

    I'm not so sure about that...It takes a lot of bytes to represent our entire society (in 1999, at least). The AI for Hugo Weaving's character must have been a couple of gigs of code at least.

    However, if you want to compress the movie "The Matrix" into a single byte...here goes:
<breathy_keanu_voice>Whoah...</breathy_keanu_voice> (soundByte® compression... far from lossless compression, but this is as close as anyone will ever come to one-byte compression).
