ZeoSync Makes Claim of Compression Breakthrough

dsb42 writes: "Reuters is reporting that ZeoSync has announced a breakthrough in data compression that allows for 100:1 lossless compression of random data. If this is true, our bandwidth problems just got a lot smaller (or our streaming video just became a lot clearer)..." This story has been submitted many times due to the astounding claims - ZeoSync explicitly claims to have superseded Claude Shannon's work. The "technical description" on their website is less than impressive. I think the odds of this being true are slim to none, but here you go, math majors and EEs - something to liven up your drab, dull existence today. Update: 01/08 13:18 GMT by M : I should include a link to their press release [zeosync.com].
  • how can this be? (Score:3, Informative)

    by posmon ( 516207 ) on Tuesday January 08, 2002 @09:14AM (#2803128) Homepage
    Even lossless compression still relies on redundancy within the data, normally repeating patterns of data. Surely 100:1 on TRUE random data is impossible?
  • by bleeeeck ( 190906 ) on Tuesday January 08, 2002 @09:15AM (#2803135)
    ZeoSynch's Technical Process: The Pigeonhole Principle and Data Encoding

    Dr. Claude Shannon's dissertation on Information Theory in 1948 and his following work on run-length encoding confidently established the understanding that compression technologies are "all" predisposed to limitation. With this foundation behind us we can conclude that the effort to accelerate the transmission of information past the permutation load capacity of the binary system, and past the naturally occurring singular-bit-variances of nature can not be accomplished through compression. Rather, this problem can only be successfully resolved through the solution of what is commonly understood within the mathematical community as the "Pigeonhole Principle."

    Given a number of pigeons within a sealed room that has a single hole, and which allows only one pigeon at a time to escape the room, how many unique markers are required to individually mark all of the pigeons as each escapes, one pigeon at a time?

    After some time a person will reasonably conclude that:
    "One unique marker is required for each pigeon that flies through the hole, if there are one hundred pigeons in the group then the answer is one hundred markers". In our three dimensional world we can visualize an example. If we were to take a three-dimensional cube and collapse it into a two-dimensional edge, and then again reduce it into a one-dimensional point, and believe that we are going to successfully recover either the square or cube from the single edge, we would be sorely mistaken.

    This three-dimensional world limitation can however be resolved in higher dimensional space. In higher, multi-dimensional projective theory, it is possible to create string nodes that describe significant components of simultaneously identically yet different mathematical entities. Within this space it is possible and is not a theoretical impossibility to create a point that is simultaneously a square and also a cube. In our example all three substantially exist as unique entities yet are linked together. This simultaneous yet differentiated occurrence is the foundation of ZeoSync's Relational Differentiation Encoding(TM) (RDE(TM)) technology. This proprietary methodology is capable of intentionally introducing a multi-dimensional patterning so that the nodes of a target binary string simultaneously and/or substantially occupy the space of a Low Kolmogorov Complexity construct. The difference between these occurrences is so small that we will have for all intents and purposes successfully encoded lossley universal compression. The limitation to this Pigeonhole Principle circumvention is that the multi-dimensional space can never be super saturated, and that all of the pigeons can not be simultaneously present at which point our multi-dimensional circumvention of the pigeonhole problem breaks down.

  • Is this April 1st? (Score:3, Informative)

    by tshoppa ( 513863 ) on Tuesday January 08, 2002 @09:16AM (#2803136)
    This has *long* been an April 1st joke published in such hallowed rags as BYTE and Datamation for at least as long as I've been reading them (20 years).

    The punchline to the joke was always along the lines of

    Of course, since this compression works on random data, you can repeatedly apply it to previously compressed data. So if you get 100:1 on the first compression, you get 10000:1 on the second and 1000000:1 on the third.
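    The joke's logic is easy to check empirically. Here is a minimal sketch in Python, with zlib standing in for any real lossless compressor (the input size is arbitrary, not anything from the article): applying the compressor repeatedly to random data gets you nowhere, because the output of the first pass is already incompressible.

    import os
    import zlib

    # 1 MB of data from the OS entropy source.
    data = os.urandom(1_000_000)

    # Apply the compressor over and over, as the "recursive compression" joke suggests.
    for i in range(5):
        compressed = zlib.compress(data, 9)
        print(f"pass {i + 1}: {len(data):,} -> {len(compressed):,} bytes")
        data = compressed

    In practice even the very first pass slightly *grows* the random input (the compressed stream carries header and bookkeeping overhead), and every later pass does the same - which is exactly why the 100:1 / 10000:1 / 1000000:1 progression is a punchline.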
  • Press Release here (Score:2, Informative)

    by thing12 ( 45050 ) on Tuesday January 08, 2002 @09:17AM (#2803139) Homepage
    If you don't want to wade through the flash animations...

    http://www.zeosync.com/flash/pressrelease.htm [zeosync.com]

  • by Xentax ( 201517 ) on Tuesday January 08, 2002 @09:20AM (#2803151)
    No...the compressed data is almost certainly NOT random, so it couldn't be compressed the same way. It's also highly unlikely any other compression scheme could reduce it either.

    I'm very, very skeptical of 100:1 claims on "random" data -- either the data must be large enough that, even being random, it contains lots of repeated sequences, or the test data is rigged.

    Or, of course, it could all be a big pile of BS designed to encourage some funding/publicity.

    Xentax
  • The pressrelease (Score:4, Informative)

    by grazzy ( 56382 ) <(ten.ews.ekauq) (ta) (yzzarg)> on Tuesday January 08, 2002 @09:20AM (#2803153) Homepage Journal
    ZEOSYNC'S MATHEMATICAL BREAKTHROUGH OVERCOMES LIMITATIONS OF DATA COMPRESSION THEORY

    International Team of Scientists Have Discovered
    How to Reduce the Expression of Practically Random Information Sequences

    WEST PALM BEACH, Fla. - January 7, 2001 - ZeoSync Corp., a Florida-based scientific research company, today announced that it has succeeded in reducing the expression of practically random information sequences. Although currently demonstrating its technology on very small bit strings, ZeoSync expects to overcome the existing temporal restraints of its technology and optimize its algorithms to lead to significant changes in how data is stored and transmitted.

    Existing compression technologies are currently dependent upon the mapping and encoding of redundantly occurring mathematical structures, which are limited in application to single or several pass reduction. ZeoSync's approach to the encoding of practically random sequences is expected to evolve into the reduction of already reduced information across many reduction iterations, producing a previously unattainable reduction capability. ZeoSync intentionally randomizes naturally occurring patterns to form entropy-like random sequences through its patent pending technology known as Zero Space Tuner(TM). Once randomized, ZeoSync's BinaryAccelerator(TM) encodes these singular-bit-variance strings within complex combinatorial series to result in massively reduced BitPerfect(TM) equivalents. The combined TunerAccelerator(TM) is expected to be commercially available during 2003.

    According to Peter St. George, founder and CEO of ZeoSync and lead developer of the technology: "What we've developed is a new plateau in communications theory. Through the manipulation of binary information and translation to complex multidimensional mathematical entities, we are expecting to produce the enormous capacity of analogue signaling, with the benefit of the noise free integrity of digital communications. We perceive this advancement as a significant breakthrough to the historical limitations of digital communications as it was originally detailed by Dr. Claude Shannon in his treatise on Information Theory." [C.E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27:379-423, 623-656, 1948]

    "There are potentially fantastic ramifications of this new approach in both communications and storage," St. George continued. "By significantly reducing the size of data strings, we can envision products that will reduce the cost of communications and, more importantly, improve the quality of life for people around the world regardless of where they live."

    Current technologies that enable the compression of data for transmission and storage are generally limited to compression ratios of ten-to-one. ZeoSync's Zero Space Tuner(TM) and BinaryAccelerator(TM) solutions, once fully developed, will offer compression ratios that are anticipated to approach the hundreds-to-one range.

    Many types of digital communications channels and computing systems could benefit from this discovery. The technology could enable the telecommunications industry to massively reduce huge amounts of information for delivery over limited bandwidth channels while preserving perfect quality of information.

    ZeoSync has developed the TunerAccelerator(TM) in conjunction with some traditional state-of-the-art compression methodologies. This work includes the advancement of Fractals, Wavelets, DCT, FFT, Subband Coding, and Acoustic Compression that utilizes synthetic instruments. These are methods that are derived from classical physics and statistical mechanics and quantum theory, and at the highest level, this mathematical breakthrough has enabled two classical scientific methods to be improved, Huffman Compression and Arithmetic Compression, both industry standards for the past fifty years.

    All of these traditional methods are being enhanced by ZeoSync through collaboration with top experts from Harvard University, MIT, University of California at Berkley, Stanford University, University of Florida, University of Michigan, Florida Atlantic University, Warsaw Polytechnic, Moscow State University and Nankin and Peking Universities in China, Johannes Kepler University in Lintz Austria, and the University of Arkansas, among others.

    Dr. Piotr Blass, chief technology advisor at ZeoSync, said "Our recent accomplishment is so significant that highly randomized information sequences, which were once considered non-reducible by the scientific community, are now massively reducible using advanced single-bit- variance encoding and supporting technologies."

    "The technologies that are being developed at ZeoSync are anticipated to ultimately provide a means to perform multi-pass data encoding and compression on practically random data sets with applicability to nearly every industry," said Jim Slemp, president of Radical Systems, Inc. "The evaluation of the complex algorithms is currently being performed with small practically random data sets due to the analysis times on standard computers. Based on our internally validated test results of these components, we have demonstrated a single-point-variance when encoding random data into a smaller data set. The ability to encode single-point-variance data is expected to yield multi-pass capable systems after temporal issues are addressed."

    "We would like to invite additional members of the scientific community to join us in our efforts to revolutionize digital technology," said St. George. "There is a lot of exciting work to be done."

    About ZeoSync

    Headquartered in West Palm Beach, Florida, ZeoSync is a scientific research company dedicated to advancements in communications theory and application. Additional information can be found on the company's Web site at www.ZeoSync.com or can be obtained from the company at +1 (561) 640-8464.

    This press release may contain forward-looking statements. Investors are cautioned that such forward-looking statements involve risks and uncertainties, including, without limitation, financing, completion of technology development, product demand, competition, and other risks and uncertainties.
  • Re:Current ratio? (Score:3, Informative)

    by CaseyB ( 1105 ) on Tuesday January 08, 2002 @09:30AM (#2803213)
    but what's the current ratio?

    For truly random data? 1:1 at the absolute best.

  • Re:how can this be? (Score:2, Informative)

    by mccalli ( 323026 ) on Tuesday January 08, 2002 @09:30AM (#2803214) Homepage
    Even lossless compression still relies on...normally repeating patterns of data. Surely 100:1 on TRUE random data is impossible?

    However, in truly random data such patterns will exist from time to time. For example, I'm going to randomly type on my keyboard now (promise this isn't fixed...):

    oqierg qjn.amdn vpaoef oqleafv z

    Look at the data. No patterns. Again....

    oejgkjnfv,cm v;aslek [p'wk/v,c

    Now look - two occurrences of 'v,c'. Patterns have occurred in truly random data.

    Personally, I'd tend to agree with you and consider this not possible. But I can see how patterns might crop up in random data, given a sufficiently large amount of source data to work with.

    Cheers,
    Ian

  • Re:Current ratio? (Score:5, Informative)

    by radish ( 98371 ) on Tuesday January 08, 2002 @09:30AM (#2803216) Homepage

    For lossless (e.g. zip, not jpg, mpg, divx, mp3, etc.) you are looking at about 2:1 for 8-bit random data, much better (50:1?) for ASCII text (i.e. 7-bit, non-random).

    If you're willing to accept loss, then the sky's the limit, mp3 @ 128kbps is about 12:1 compared to a 44k 16bit wave.
  • by color of static ( 16129 ) <smasters&ieee,org> on Tuesday January 08, 2002 @09:31AM (#2803226) Homepage Journal
    There seems to be a company claiming to exceed, go around, or obliterate Shannon every few years. In the early '90s there was a company called WEB (a year or so before the WWW was really around). They made claims of compressing any data, even data that had already been compressed. It's a sad story that you should be able to find in either the comp.compression FAQ or the revived Deja archives. It basically boils down to this: as they got closer to market, they found some problems... you can guess the rest.
    This isn't limited to the field of compression, of course. There are people who come up with "unbreakable" encryption, infinite-gain amplifiers (is that gain in V and I?), and all sorts of perpetual motion machines. The sad fact is that compression and encryption are not well enough understood for these ideas to be killed before a company is started or staked on the claims.
  • Re:how can this be? (Score:3, Informative)

    by s20451 ( 410424 ) on Tuesday January 08, 2002 @09:46AM (#2803329) Journal
    Of course patterns occur in random data. For example, if you toss a fair coin for a long time, you will get runs of three, four, or five heads recurring from time to time. The point is that in random, incompressible data, any given pattern is exactly as likely to occur as any other pattern of the same length - so there is nothing for a compressor to exploit.
  • Re:how can this be? (Score:2, Informative)

    by Catiline ( 186878 ) <akrumbach@gmail.com> on Tuesday January 08, 2002 @09:47AM (#2803339) Homepage Journal
    Simple. You're doing binary counting. To decompress using this algorithm you need to know the number of cycles performed, and the smallest (uncompressed) form of that count is the original input data.
  • by richieb ( 3277 ) <richieb@gmai l . com> on Tuesday January 08, 2002 @09:52AM (#2803371) Homepage Journal
    If I recall my set theory properly the "Pigeon Hole Principle" simply states that if you have 100 holes and 101 pigeons, when you distribute all the pigeons into all holes, there will be at least one hole with at least two pigeons.

    I don't recall any of this crap about pigeons flying out of boxes. Or am I getting old?

  • Re:how can this be? (Score:5, Informative)

    by tjansen ( 2845 ) on Tuesday January 08, 2002 @09:57AM (#2803393) Homepage
  • Re:Current ratio? (Score:5, Informative)

    by markmoss ( 301064 ) on Tuesday January 08, 2002 @10:01AM (#2803418)
    what's the current ratio? I would take the *zip algorithms as a standard. (I've seen commercial backup software that takes twice as long to compress the data as Winzip but leaves it 1/3 larger.) Zip will compress text files (ASCII such as source code, not MS Word) at least 50% (2:1) if the files are long enough for the most efficient algorithms to work. Some highly repetitive text formats will compress by over 90% (10:1). Executable code compresses by 30 to 50%. AutoCAD .DWG (vector graphics, binary format) compresses around 30%. Back when it was practical to use PKzip to compress my whole hard drive for backup, I expected about 50% average compression. This was before I had much bit-mapped graphics on it.

    Bit-mapped graphic files (BMP) vary widely in compressibility depending on the complexity of the graphics, and whether you are willing to lose more-or-less invisible details. A BMP of black text on white paper is likely to zip (losslessly) by close to 100:1 -- and fax machines perform a very simple compression algorithm (sending white*number of pixels, black*number of pixels, etc.) that also approaches 100:1 ratios for typical memos. Photographs (where every pixel is colored a little differently) don't compress nearly as well; the JPEG format exceeds 10:1 compression, but I think it loses a little fine detail. And JPEGs compress by less than 10% when zipped.

    IMHO, 100:1 as an average (compressing your whole harddrive, for example), is far beyond "pretty damn good" and well into "unbelievable". I know of only two situations where I'd expect 100:1. One is the case of a bit-map of black and white text (e.g., faxes), the other is with lossy compression of video when you apply enough CPU power to use every trick known.
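    As a rough reality check on numbers like these, here is a small Python sketch using the standard zlib module (DEFLATE, the same family of algorithm zip uses). The inputs are toy data, so treat the exact ratios as illustrative only:

    import os
    import zlib

    def ratio(data: bytes) -> float:
        """Original size divided by zlib-compressed size (maximum effort)."""
        return len(data) / len(zlib.compress(data, 9))

    # Toy "text": repeated sentences, so it compresses far better than real
    # prose would (real English text typically lands in the 2:1 to 4:1 range).
    text = (b"The quick brown fox jumps over the lazy dog. "
            b"Pack my box with five dozen liquor jugs.\n") * 10_000

    # Truly random bytes: no lossless compressor does better than about 1:1 here.
    random_bytes = os.urandom(1_000_000)

    print(f"repetitive text: {ratio(text):7.1f}:1")
    print(f"random bytes:    {ratio(random_bytes):7.3f}:1")

    The random input typically comes out a shade below 1:1 (the compressed stream carries some framing overhead), matching the point above that truly random data is 1:1 at the absolute best.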
  • Not possible (Score:5, Informative)

    by Eivind ( 15695 ) <eivindorama@gmail.com> on Tuesday January 08, 2002 @10:12AM (#2803477) Homepage
    Someone already pointed out that repeated compression would give infinite compression with this method. But there's another easy way to show that no compressor can ever manage to shrink all messages.

    The proof goes like this:

    • Assume someone claims a compressor that will compress any X-byte message to Y bytes where Y<X
    • There are 2^(8*X) possible messages X bytes long.
    • There are 2^(8*Y) possible messages Y bytes long.
    • Since Y is smaller than X, this means that no 1 to 1 mapping between the two sets can exist, because they're not equally large.
    You see this simply if I claim a compressor that can compress any 2-byte message to 1 byte.

    There are then 65536 possible input messages, but only 256 possible outputs. So it is mathematically certain that at least 99.6% of the messages cannot be represented in 1 byte (regardless of how I choose to encode them).

    These claims surface every so often. They're bullshit every time. It's even a FAQ entry on comp.compression.
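    The 2-byte example can even be checked by brute force. A minimal Python sketch (fake_compressor below is a made-up stand-in for any claimed 2-byte-to-1-byte compressor, not anything ZeoSync describes):

    def fake_compressor(msg: bytes) -> bytes:
        """Stand-in for any claimed 2-byte -> 1-byte 'compressor' (here: XOR the bytes)."""
        return bytes([msg[0] ^ msg[1]])

    seen = {}          # output -> first input that produced it
    collisions = 0
    for a in range(256):
        for b in range(256):
            msg = bytes([a, b])
            out = fake_compressor(msg)
            if out in seen:
                collisions += 1      # a second input mapped to an already-used output
            else:
                seen[out] = msg

    print(f"possible inputs:      {256 * 256}")   # 65536 two-byte messages
    print(f"distinct outputs:     {len(seen)}")   # at most 256 one-byte messages
    print(f"unrecoverable inputs: {collisions}")  # 65280, i.e. about 99.6%

    No matter what function you substitute for fake_compressor, the pigeonhole count forces at least 65280 collisions, so unique decompression is impossible.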

  • Re:how can this be? (Score:5, Informative)

    by ergo98 ( 9391 ) on Tuesday January 08, 2002 @10:17AM (#2803505) Homepage Journal

    Well, firstly I'd say the press release gives a pretty clear picture of the reality of their technology: it has such an overuse of supposedly TM'd "technoterms" (anyone want to double-check the filings? I'm going to guess that there are none) like "TunerAccelerator" and "BinaryAccelerator" that it just screams hoax (or creative deception), not to mention a use of Flash that makes you want to punch something. Note that they give themselves huge openings, such as always saying "practically random" data: what the hell does that mean?

    I think one way to understand it (because all of us at some point or another have thought up some half-assed, ridiculous way of compressing any data down to 1/10th -> "Maybe I'll find a denominator and store that with a floating point representation of..."), and I'm saying this as neither a mathematician nor a compression expert: let's say for instance that the compression ratio is 10 to 1 on random data, and I have every possible random document 100 bytes long -> that means I have about 6.668e+240 different random documents (256^100). So I compress them all into 10-byte documents, but the maximum number of variations of a 10-byte document is 1208925819614629174706176 (256^10): there isn't the entropy in a 10-byte document to store 6.668e+240 different possibilities (it is simply impossible, no matter how many QuantumStreamTM HyperTechTM TechnoBabbleTM TermsTM). You end up needing, tada, 100 bytes to have the entropy to store all variants of a 100-byte document, and of course most compression routines put in various bookkeeping codes and actually increase the size of the document. In the case of the ZeoSync claim, though, they're apparently claiming a ratio where you'd represent those 6.668e+240 different variations in a single byte: so somehow the value 64 tells you "Oh yeah, that's variation 5.596e+236!". Maybe they're using SubSpatialQuantumBitsTM.
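    The big numbers in that argument are easy to verify exactly with Python's arbitrary-precision integers (just a sanity check of the arithmetic above, nothing more):

    # How many distinct documents exist at each size?
    docs_100_bytes = 256 ** 100   # every possible 100-byte document
    docs_10_bytes = 256 ** 10     # every possible 10-byte document

    print(f"100-byte documents: {float(docs_100_bytes):.3e}")   # ~6.668e+240
    print(f"10-byte documents:  {docs_10_bytes}")               # 1208925819614629174706176

    # A lossless 10:1 scheme would need a distinct 10-byte output for every
    # 100-byte input; the supply of outputs falls short by this factor:
    print(f"shortfall factor:   {float(docs_100_bytes // docs_10_bytes):.3e}")   # 256**90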

  • by kzinti ( 9651 ) on Tuesday January 08, 2002 @10:29AM (#2803569) Homepage Journal
    Seriously though, the comp.compression FAQ [faqs.org] is really worth a read, especially question #9 [faqs.org]

    YES! Ditto. Seconded. Somebody mod this guy up.

    Here's a bit to whet your appetite:

    9.1 Introduction

    It is mathematically impossible to create a program compressing without loss
    *all* files by at least one bit (see below and also item 73 in part 2 of this
    FAQ). Yet from time to time some people claim to have invented a new algorithm
    for doing so. Such algorithms are claimed to compress random data and to be
    applicable recursively, that is, applying the compressor to the compressed
    output of the previous run, possibly multiple times. Fantastic compression
    ratios of over 100:1 on random data are claimed to be actually obtained.

    Such claims inevitably generate a lot of activity on comp.compression, which
    can last for several months. Large bursts of activity were generated by WEB
    Technologies and by Jules Gilbert. Premier Research Corporation (with a
    compressor called MINC) made only a brief appearance but came back later with a
    Web page at http://www.pacminc.com. The Hyper Space method invented by David
    C. James is another contender with a patent obtained in July 96. Another large
    burst occured in Dec 97 and Jan 98: Matthew Burch applied
    for a patent in Dec 97, but publicly admitted a few days later that his method
    was flawed; he then posted several dozen messages in a few days about another
    magic method based on primes, and again ended up admitting that his new method
    was flawed. (Usually people disappear from comp.compression and appear again 6
    months or a year later, rather than admitting their error.)

    Other people have also claimed incredible compression ratios, but the programs
    (OWS, WIC) were quickly shown to be fake (not compressing at all). This topic
    is covered in item 10 of this FAQ.
  • by jcasey ( 264935 ) on Tuesday January 08, 2002 @10:41AM (#2803657)
    The flaw here is simple:

    When you reorganize the string of data and sort it by value, you must retain information on how to restore the string to its original order. There is no efficient way to save this "undo" information without negating the benefit gained from compression.

    For example:

    Given a series of random numbers: 34, 8, 244, 127

    If you reorganize them by value: 8,34,127,244

    You can create redundancy if the string is large enough - for 8-bit values, a string of 25,600 values should produce a lot of repetition - in this example, there would be an average of 100 repetitions per value (100*256=25,600).

    This is nice until you try to decompress the file. Without a record of how to reorganize the values, you are left with junk.

    Even if you keep a record with info for reorganizing the data, the overhead needed to store the undo info outweighs the compression benefit.

    If you did find an efficient way to store the undo information, it would be more effective to simply apply this algorithm directly to the random data!
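    The cost of that "undo" record can be put in numbers. Here is a sketch of the accounting for the 25,600-value example above, in Python with exact integer arithmetic (the sizes printed assume a uniformly random input, and the variable names are invented for illustration):

    import math
    import os
    from collections import Counter

    n = 25_600                       # the example above: 25,600 random 8-bit values
    data = list(os.urandom(n))
    original_bits = n * 8            # 204,800 bits

    # After sorting, the sequence is fully described by a histogram:
    # how many times each of the 256 byte values occurs.
    counts = Counter(data)

    # The "undo" record must single out one ordering among all orderings
    # consistent with that histogram (a multinomial coefficient).
    orderings = math.factorial(n)
    for c in counts.values():
        orderings //= math.factorial(c)
    undo_bits = orderings.bit_length()        # ~= log2(number of orderings)

    # Generous allowance for storing the histogram itself: 256 counts.
    histogram_bits = 256 * (n + 1).bit_length()

    print(f"original data:        {original_bits} bits")
    print(f"undo record alone:    {undo_bits} bits")
    print(f"undo record + counts: {undo_bits + histogram_bits} bits")

    For random input the undo record plus the histogram comes out at or slightly above the size of the original data, which is exactly the parent post's point.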

  • by Nakoruru ( 199332 ) on Tuesday January 08, 2002 @10:56AM (#2803734)
    I believe in this example you HAVE TO mark a pigeon with something. There is no such thing as a pigeon without a marker (or, a pigeon without a marker is just one of the 100 ways to mark a pigeon). You only have 100 different types of markers, so two pigeons would have to share one if you had 101 pigeons. If you leave a marker off a pigeon, that would be the same as having a 101st type of marker. In other words, if you can tell two pigeons apart, then they have been marked. You could just as easily have said "well, some pigeons have different spots on them, some are big, some are small." But that's kind of beside the point.

    It's just a silly way of saying that if you have fewer categories than things to put into categories, then some categories have to have more than one thing in them. For instance, you could say there are 6 different races of people on Earth, and there are 6 billion people. So, obviously at least one of the categories has more than one person in it. It is a very simple principle, but it can be used as part of a proof to show less obvious things (sorry, no examples spring to mind).

    Don't get blinded by all this pigeon crap ^_^

  • by Quaternion ( 34622 ) on Tuesday January 08, 2002 @11:19AM (#2803839) Homepage
    Do you mean the Steve Smale from Berkeley who won a Fields Medal?

    smale bio [st-andrews.ac.uk]

    I heard him speak at MIT, and read a paper of his that was published in the Bulletin of the American Mathematical Society... On the Mathematical Foundations of Machine Learning, with Felipe Cucker I think. That was published in Oct. 2001, which qualifies as within the last 5 years, right?
  • by Anonymous Coward on Tuesday January 08, 2002 @11:29AM (#2803879)
    The following is a proof that a perfect compression algorithm has an average compression rate of ONE. Yes, ONE. That translates into NO COMPRESSION WHATSOEVER. A short aside on why compression is used if it "doesn't do anything" follows. I'm not doing this rigorously because I don't remember the rigorous proof, before anyone asks. This is sometimes referred to as "the enumeration proof."

    Assume any given data N to be compressed can be viewed as a binary number. Assuming the algorithm works on any given data (sure I can say that your file is "1" compressed and refuse to compress anything else, but do you need my help for that?), it must be able to compress all numbers from 0 to N. It must also give UNIQUE compressed answers (I can say that ALL files are "1"...decompression's tricky, though). Therefore, if the algorithm is used to compress all numbers from 0 to N, it will return the numbers 0 to N in a different order, in the BEST CASE. If the algorithm isn't perfect, it will return numbers GREATER than N as well.

    So why do we use compression? Because we don't compress the numbers from 0 to N. We compress things that have patterns. Lots of them. Because of that, algorithms can make additional assumptions (some quantity of repeated data being present in the data set is the usual one). Because of this, the average compression of an algorithm, when used on typical files rather than enumerated numbers, is usually less than one (i.e. it usually makes the file smaller). If you custom-write a binary file that contains little to no patterning, you'll find that most compression algorithms will either make it larger or leave it the same size. The last thing I'll mention is an example of where compression works really well: text documents. Since most characters in a document fall within a narrow ASCII range, the document can be reduced in size. For example, if your document is plain 7-bit ASCII (and has no header - shh, it's an example), the first bit of every byte in the file will be 0. The compression algorithm can see this and drop all those constant bits. It will have to add a couple of bytes at the end describing how the file was compressed, but you'll be getting rid of 1/8th of the file for the cost of a couple of bytes. That's pretty good.

    I'm sure no one will mod this up, because no one likes anonymous cowards, but it might as well be here for posterity's sake.
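    That last trick - dropping the constant high bit of plain ASCII text - is real, and it's about the simplest "compressor" you can write. A minimal Python sketch (assuming pure 7-bit ASCII input, as the example above does; pack7/unpack7 are made-up names, not a standard API):

    def pack7(text: bytes) -> bytes:
        """Pack 7-bit ASCII bytes into a bit stream 7/8 the size (plus a 4-byte length header)."""
        bits = 0
        nbits = 0
        out = bytearray(len(text).to_bytes(4, "big"))   # remember the original length
        for ch in text:
            assert ch < 128, "only works for 7-bit ASCII"
            bits = (bits << 7) | ch
            nbits += 7
            while nbits >= 8:
                nbits -= 8
                out.append((bits >> nbits) & 0xFF)
        if nbits:
            out.append((bits << (8 - nbits)) & 0xFF)    # flush the leftover bits, zero-padded
        return bytes(out)

    def unpack7(packed: bytes) -> bytes:
        """Inverse of pack7."""
        length = int.from_bytes(packed[:4], "big")
        bits = 0
        nbits = 0
        out = bytearray()
        for byte in packed[4:]:
            bits = (bits << 8) | byte
            nbits += 8
            while nbits >= 7 and len(out) < length:
                nbits -= 7
                out.append((bits >> nbits) & 0x7F)
        return bytes(out)

    msg = b"It is mathematically impossible to compress all files without loss. " * 100
    packed = pack7(msg)
    assert unpack7(packed) == msg
    print(f"{len(msg)} bytes -> {len(packed)} bytes ({len(packed) / len(msg):.3f} of original)")

    It only saves 1/8th, and only on input that is guaranteed to be 7-bit - which is the AC's point: compressors win by exploiting known structure, not by shrinking arbitrary data.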
  • by Happy Monkey ( 183927 ) on Tuesday January 08, 2002 @11:43AM (#2803960) Homepage
    You then need to add one bit of data to tell whether you've compressed it or not.
  • by King Babar ( 19862 ) on Tuesday January 08, 2002 @12:11PM (#2804101) Homepage
    Okay, the mysterious Dr. Wlodzimierz Holtzinski doesn't get a single hit on Google.

    Well, that's because they mis-spelled his name. Seriously, I bet they are really trying to refer to Wlodzimierz Holsztynski, who posts to Polish newsgroups from the address "sennajawa@yahoo.com". His last contribution to the one Usenet thread that mentions "zeosync" and his name uses the word "nonsens" ("nonsense") a lot [google.com], also the phrase "nie autoryzowalem" ("I did not authorize"), and the sentence "Bylem ich konsultantem, moze znowu bede, a moze nie, z nimi nie wiadom." (roughly, "I was their consultant, maybe I will be again, maybe not - with them you never know."). Somebody who really knows Polish could probably have a field day with this and other posts...

    I'm getting the idea that some people on the scientific team might be better termed "random people we sent email to who actually responded once or twice".

  • An obvious fake (Score:2, Informative)

    by arvindn ( 542080 ) on Tuesday January 08, 2002 @01:42PM (#2804527) Homepage Journal

    100:1 ratio? On random data?
    Considerations far more elementary than Shannon's limits rule out compression of statistically random data by even a single bit. Here's why:
    There are 2^n bit strings of length n. Any compression method purporting to compress random strings (by even a single bit) must produce output of length at most n-1 for these 2^n inputs. But in that case the mapping is not unique, since there are only (2^n)-1 bit strings of length n-1 or less. (So decoding is not possible.)
    Once every so often some "researchers" claim to have attained the holy grail of compression. Too bad we never hear of them again :(

    From the comp.compression faq [faqs.org]
    this topic has generated and is still generating the greatest volume of news in the history of comp.compression
    ...
    The advertized revolutionary methods have all in common their supposed ability to compress random or already compressed data. I will keep this item in the FAQ to encourage people to take such claims with great precautions
  • by Evacuator ( 63990 ) on Tuesday January 08, 2002 @02:06PM (#2804686)
    With my limited understanding of Polish I can add that he talks about the nonsense of him being on the scientific team. He also states that his name was used without any authorization, and he points out that the whole affair is only about hustling money from investors.
  • by Ewann ( 209481 ) on Tuesday January 08, 2002 @03:00PM (#2805033)
    We have three native Polish speakers in my office. I asked one of them to translate the professor's reply. She said the gist of it is that he was upset they released his name, he didn't authorize any information release, etc. Apparently he didn't deny or confirm the truth of the information, but said something about having "more important things in my career", or words to that effect (not a verbatim quote).
  • by Kythorn ( 52358 ) on Tuesday January 08, 2002 @05:20PM (#2806021)
    This may not appear immediately relevant, but bear with me.

    I'm not agreeing or disagreeing with ZeoSync's claims, but if you can impose a semblance of order on something that only appears chaotic, you can do some pretty cool stuff.

    Take for example this little demo at this website in Germany [theproduct.de]. (I realize what the domain looks like; there's nothing for sale or license, trust me.) The actual download link is about halfway down the page.

    This isn't "compression" in the conventional sense, but they still manage to fit a demo that contains hundreds of megs of textures and samples, in addition to the engine itself, into *64kb*. Now that's a hell of a ratio.

    They do this not by storing the raw data, but instead storing the instructions needed to reconstruct the data as it is needed.

    Granted, I realize that they only accomplished this with their own data, but I don't think taking this a step further to an arbitrary set of textures and sounds is impossible. Granted, this idea won't work for all types of data, and it also cannot be considered "lossless" (hell, it's not even strictly compression), but I still think it's incredible that you can get such high-quality results out of something this small.

    (Disclaimer: The above link is to a demo that requires directx 8.1 and I sincerely doubt will run under wine. It also doesn't work with every video card out there. I've scanned the binary, and it doesn't appear to have any viruses or trojans, but I won't guarantee it. If you can't accept the risk, don't download the binary.)
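    For a feel of the "store the recipe, not the data" idea, here is a toy Python sketch (it has nothing to do with the linked demo's actual engine; generate_texture and the seed are invented for illustration): a few bytes of seed plus a deterministic generator stand in for a megabyte of texture.

    import hashlib

    def generate_texture(seed: bytes, width: int, height: int) -> bytes:
        """Deterministically expand a tiny seed into width*height bytes of 'texture'.

        Only the seed and this function need to be stored or shipped; the
        generated bytes are reproduced on demand.
        """
        out = bytearray()
        counter = 0
        while len(out) < width * height:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[: width * height])

    seed = b"demo-scene"                              # a handful of bytes...
    texture = generate_texture(seed, 1024, 1024)      # ...expands to 1 MB, reproducibly
    print(len(seed), "bytes of 'source' ->", len(texture), "bytes of generated data")

    As the parent says, this is not lossless compression of arbitrary data - it only works when the data was produced from a recipe in the first place.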
