
First Hutter Prize Awarded

Posted by kdawson
from the compress-this dept.
stefanb writes, "The Hutter Prize for Lossless Compression of Human Knowledge, an ongoing challenge to compress a 100-MB excerpt of Wikipedia, has been awarded for the first time. Alexander Ratushnyak managed to improve the compression factor to 5.86 and will receive a 3,416-Euro award. Being able to compress knowledge well is believed to be related to acting intelligently." The Usenet announcement notes that at Ratushnyak's request, part of the prize will go to Przemyslaw Skibinski of the University of Wroclaw Institute of Computer Science, for his early contributions to the PAQ compression algorithm.
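For scale, the quoted compression factor pins down the archive size by simple arithmetic (a back-of-the-envelope sketch; the exact submission size isn't stated here):

```python
# enwik8 is exactly 10^8 bytes; a compression factor of 5.86
# implies an archive of roughly 17 million bytes (~17 MB).
original = 100_000_000          # bytes
factor = 5.86
approx_size = original / factor
```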
  • by Salvance (1014001) * on Sunday October 29, 2006 @10:17PM (#16637656) Homepage Journal
    It turns out that Alexander Ratushnyak was the only person to even meet the challenge's guidelines, and one of only 3 people who submitted entries. Seems like a strange thing to award up to 50,000 Euro for ... his compression was only 20% smaller than what I just quickly did with gzip. I'm rather surprised that more people didn't try though ... it's not like people are throwing money at data compression researchers.
    • Re: (Score:3, Informative)

      by Barny (103770)
      20% of a few petabytes is.... a lot :)

      And yes, offering cash in this way is a great incentive for programmers.

      Also, if it's a CPU-friendly (read: reasonable CPU usage, or even able to be offloaded to a processing card) method, it could potentially add 20% onto your bandwidth :)
      • by Salvance (1014001) * on Sunday October 29, 2006 @10:35PM (#16637822) Homepage Journal
        20% is a lot, when the compression/decompression is fast. gzip/WinRAR/WinZip all compress this file to the 22-24MB mark in a minute or two on my desktop (which is comparable to their test machine). The winner's algorithm compressed the data to 16.9MB, but spent 5 hours and 900MB of RAM doing so. The contest would be far more interesting if it added a reasonable time/RAM restriction (e.g. 10 minutes and 500MB of RAM).
        • by Skuto (171945)
          Well, there *is* a restriction on the amount of time/CPU. Just read the website.

          Whatever is "reasonable" is just personal preference.
        • by vidarh (309115) <vidar@hokstad.com> on Monday October 30, 2006 @04:50AM (#16639693) Homepage Journal
          You miss the point. The goal isn't to achieve better usable compression, but to encourage research they believe will lead to advances in AI.
        • by aug24 (38229)
          The point of the exercise is not really to worry about how much time/memory it took, but to improve mechanisms for finding and then understanding structure within data.

          Perhaps two classes would be interesting: one with, and one without, time/space limitations.

          Justin.
        • The contest would be far more interesting if it added a reasonable time/RAM restriction (e.g. 10 minutes and 500MB of RAM).

          Maybe it would be more interesting, but it would also be a totally different contest. It isn't a contest for generic file compressors, goddammit! It's supposed to drive research in the field of knowledge representation, based on the supposition that in order to compress knowledge well, you have to find good representation for it. The compression factor is supposed to be a kind of benchmark

          • It's supposed to drive research in the field of knowledge representation, based on the supposition that in order to compress knowledge well, you have to find good representation for it.

            The problem that I see here is that we're precisely not compressing knowledge, but a certain, precisely-drawn-but-arbitrary slice of it. It might be possible to represent, say, all knowledge of classical electrodynamics in the form of a bunch of equations, but how do you represent the subset of that knowledge contained in an a

              Since you have to choose at least one representation from which you derive your knowledge, English text is a pretty good place to start. Wikipedia consists of more than English text, of course, and that means those representations will need to be compressed too.

              The result of this is pretty much what you need for epistemology, software and legal disciplines: Optimal language models telling you precisely how language maps to knowledge.

              There was some debate about using the multilingual versions of Wikipedia

        • The contest would be far more interesting if it added a reasonable time/RAM restriction (e.g. 10 minutes and 500MB of RAM).

          What's the point of adding processing and/or memory requirements if the sole point of this prize is to squeeze the most information into the smallest possible package? After all in this case it's the end product that matters, not what it takes to get there.

        • by gurps_npc (621217)
          Next year, his algorithm, running on a standard desktop, will do it in 3 hours. By 2015, a standard computer using his algorithm will do it in that same minute or two.
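The projection above is just Moore's-law arithmetic; a sketch, assuming one speed doubling every 18 months (a rule of thumb, not a guarantee):

```python
# hypothetical scaling of the winner's 5-hour compression run
hours_2006 = 5.0
years = 2015 - 2006
doublings = years / 1.5            # ~one hardware doubling per 18 months
hours_2015 = hours_2006 / 2 ** doublings
# 2**6 = 64x speedup, i.e. under five minutes by 2015
```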
    • Re: (Score:3, Interesting)

      by Raul654 (453029)
      There aren't many Wikipedia related things I don't know about (I'm a Wikipedia administrator, arbitrator, and bureaucrat, featured article director, and I'm on the Wikimedia Foundation's Press Committee), and this is the first time I've ever heard of this contest. I think it's fair to say it's been relatively low-profile.
    • It turns out that Alexander Ratushnyak was the only person to even meet the challenge's guidelines

      Wow, that makes me really wish I had entered. There are some great multipass lossless text compression systems that would work well for Wikipedia...
    • by Shados (741919)
      Keep in mind that the rule was that the file had to be a self-extracting executable. If you just did default gzipping, you'd have to include the overhead of a self-extracting stub in the total.
      • by gameforge (965493)
        The overhead for both compression and decompression on my box anyway looks to be about 50k (the size of my /bin/gzip).

        No clue whether it requires more code to compress than to decompress, but I would guess your overhead would be less than 30k. On the order of maybe 20MB of compressed data, that's not much.

        GZIP is a bad example because it also has to handle arbitrary binary data; that generality is overkill here. If you only have to worry about encountering 128 possible values, compression can get really interesting, and there are a lot mor
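The stub overhead being estimated here can be made concrete with a toy self-extracting archive (a sketch using a Python stub rather than a native executable; the `#DATA` marker and stub code are made up for illustration):

```python
import gzip

# A tiny "decompressor stub" followed by the compressed payload.
# The archive's overhead over the raw compressed data is just the
# stub's size -- ~120 bytes here, vs. ~50k for a copy of /bin/gzip.
STUB = (b"import gzip,sys;"
        b"sys.stdout.buffer.write(gzip.decompress("
        b"open(sys.argv[0],'rb').read().split(b'#DATA\\n',1)[1]))\n"
        b"#DATA\n")

payload = b"[[Hutter Prize]] sample wiki text " * 1000
archive = STUB + gzip.compress(payload)
overhead = len(archive) - len(gzip.compress(payload))  # == len(STUB)
```

Running the archive with `python archive.py` would write the original payload to stdout; the point is only that the self-extraction overhead is small relative to ~17 MB of data.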
        • by Firehed (942385)
          The only problem is that if your heuristics are designed for this specific chunk of data, it'll end up being largely useless for a full database crunching. Mind you, they could still be useful, but you don't want them to be sample-specific when you're talking about orders of magnitude more data that will need to get compressed.

          I really don't know jack about how compression is done, but I can toss out an idea. Scan article text for all of the used characters... expect to find your A-Z a-z 0-9 and some char
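The character-scan idea sketched above amounts to shrinking the symbol alphabet; a minimal illustration of the saving (function name is made up):

```python
import math

def packed_size_bits(text: str) -> int:
    """Bits needed if each symbol gets a fixed-width code sized to the
    text's actual alphabet, instead of a full 8 bits per character."""
    k = len(set(text))                      # distinct symbols actually used
    width = max(1, math.ceil(math.log2(k))) # bits per symbol
    return len(text) * width
```

For text drawn from a 64-symbol alphabet this gives 6 bits/char, a 25% saving over 8, before any real modeling even starts.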
    • Finding a way to compress just a little bit more, becomes an increasingly harder problem. You begin to hit entropy limits, and have to use methods that are only relevant to the type of data being compressed. For example, a good lossless audio compressor won't handle text data very well, and vice versa.
      Most compressors in this space use a prediction algorithm to generate a signal, and then only compress the differences between the generated signal and the actual data. To get the best possible compression, yo
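The predict-then-encode-residuals scheme described above can be sketched minimally: here the "predictor" is just the previous byte, and on smooth inputs the residual stream compresses far better than the raw data (a toy illustration, not the contest algorithm):

```python
import zlib

def delta_encode(data: bytes) -> bytes:
    # predict each byte as the previous one; store only the differences
    prev = 0
    out = bytearray()
    for b in data:
        out.append((b - prev) % 256)
        prev = b
    return bytes(out)

# a "smooth" signal: a slow ramp, loosely like audio samples
signal = bytes((i // 50) % 256 for i in range(50_000))
raw_size = len(zlib.compress(signal, 9))
delta_size = len(zlib.compress(delta_encode(signal), 9))
# the residuals are almost all zeros, so they compress much better
```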
    • Re: (Score:2, Informative)

      by CryoPenguin (242131)
      Since the criteria for entry say that any new submission must beat the current record, it's no surprise that only 3 people are listed. You're not seeing any of the people who didn't win.
    • I tried. It's hard.
  • by Anonymous Coward
    "The Hutter Prize for Lossless Compression of Human Knowledge, an ongoing challenge to compress a 100-MB excerpt of the Wikipedia"

    Wikipedia? Knowledge? Isn't that already a lossy compression mechanism?
  • Hmm... (Score:4, Funny)

    by Cinder6 (894572) on Sunday October 29, 2006 @10:25PM (#16637732)
    Am I the only one who finds it slightly ironic that (as of this writing), there is no entry for the Hutter Prize on Wikipedia?
  • For comparison .... (Score:4, Informative)

    by Ignorant Aardvark (632408) <cydeweys AT gmail DOT com> on Sunday October 29, 2006 @10:26PM (#16637738) Homepage Journal
    For comparison purposes, WinRAR on its best setting only gets this down to 24MB. Doubtless 7zip could get even lower, but I don't think either could crack the 17MB mark. And certainly neither of those would be self-extracting, which this contest requires.
      • 1. Itemize the articles in the Wikipedia extract.
      • 2. Edit them down to stubs.
      • 3. Compress File
      • 4. ...
      • 5. Profit!
      • 6. ...
      • 7. Other Wikipedia authors replace and augment the stubs.
    • by thue (121682)
      I tried compressing it with gzip, bzip2, and lzma programs. I tried posting the results, but they do not fit within the lameness filter :(.

      The best result was with lzma, the algorithm used by 7zip, which got it down to 25,188,131 bytes. So the 17MB achieved in this contest is pretty impressive.
How slow this code is. Usually when you are trying to squeeze out another 1% of space in a file, your algorithm triples in size and quadruples in runtime to get that improvement. I wonder how practical this new algorithm really is. Probably not so much an issue nowadays, but I also wonder how big the compression program is. This used to be a really big deal many years ago, when the compression programs could easily be larger than the data they were compressing.
    • by CastrTroy (595695)
      According to somebody who posted above (he probably read it in the article, but I'm not about to look there), the program took 5 hours and 900MB of RAM. I find it interesting that it only got down to 17MB while consuming all those resources. It makes it seem like it isn't even worth it. I'm sure you could find some pretty small program that would compute pi to some very large number of digits and find the Wikipedia excerpt in the results, but it would take a very long time to run.
      • Re: (Score:3, Informative)

        by AxelBoldt (1490)

        I'm sure you could find some pretty small program that would compute pi to some very large number of digits, and find the wikipedia excerpt in the results, but It would take a very long time to run.

        In all likelihood, that trick won't work: your program would need to know the start index of the Wikipedia text in the digit sequence of pi, and that index would be so astronomically huge that writing it down would be about as long as writing down Wikipedia itself. As a little example of the effect, think about
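The counting argument above can be put in numbers (a sketch, assuming the digits of pi behave like a uniform random stream, which, as noted below, is unproven):

```python
import math

# In a random digit stream, an n-digit pattern first appears around
# index 10**n on average, so just *writing down* the index costs about
# n * log2(10) ~= 3.32n bits -- the same order as the pattern itself.
def index_cost_bits(pattern_digits: int) -> float:
    expected_index = 10.0 ** pattern_digits
    return math.log2(expected_index)

def pattern_cost_bits(pattern_digits: int) -> float:
    return pattern_digits * math.log2(10)
```

The two costs coincide, which is exactly why "compress to an index into pi" yields no savings on average.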

        • by Henry V .009 (518000) on Sunday October 29, 2006 @11:58PM (#16638307) Journal
          Assuming a random distribution of digits in pi (and why does everyone assume this? -- there is certainly no proof), the odds are that you'll see one sequence of 4 zeros every 10,000 (decimal) digits. On the other hand, you'd expect to see 4 zero bits once every 16 binary digits. Wikipedia, at 100MB, would be expected to be found once every 2^800,000,000 binary digits of pi.

          Then again, maybe we're lucky, and this universe is God's way of storing the universal encyclopedia in the digits of pi. Wikipedia might be up near the front somewhere.
          • by Fyz (581804)
            But suppose it's somewhere near the front, except there's a speling mistake every place "speling" occurs. Would that be proof that God isn't infallible?
        • by arrrrg (902404)
          Tangentially related and mildly amusing trivia:

          Feynman Point [wikipedia.org]. From the Wikipedia article: "The Feynman point comprises the 762nd through 767th decimal places of pi ... The name refers to a remark made by the physicist Richard Feynman, expressing a wish to memorise the digits of pi as far as that point so that when reciting them, he would be able to end with '... nine, nine, nine, nine, nine, nine, and so on.'"

          So, I dunno about "0000," but for some interesting sequences we may get lucky :).
        • by dido (9125)

          But again, this assumes that Pi is a normal number [everything2.com]. So far, nobody knows for sure whether or not this is true. I don't know why everyone seems to make this unfounded assumption [everything2.com] when dealing with Pi. Attempts to establish the truth of this matter are one of the main reasons why mathematicians are engaged in calculating Pi to billions and billions of digits; there is no other practical use for that many decimal places of accuracy; 39 digits of the number [everything2.com] would be enough to calculate the circumference of the observable universe

        • by x2A (858210)
          How about storing how long it would take a room full of monkeys to type it?

        • by CastrTroy (595695)
          I was just pointing out how resource intensive this program really was. I mean, sure you could use this program for compression (I'm interested in how it would perform on other pieces of data, or whether or not there were specific hacks put in for the data set), but it consumes a ton of resources and doesn't really give you something that much better than using gzip. At least not good enough that it's worth spending 5 hours decompressing 100 MB worth of data. Oh, and you could fit very large numbers in a very small space, as long as you use the correct notation.
          • by AxelBoldt (1490)
            Oh, and you could fit very large numbers in a very small space, as long as you use the correct notation.
            That works only for very few numbers; you would have to be extraordinarily lucky for this trick to work with the starting index of Wikipedia in the digit sequence of pi.
    • by mh101 (620659)
      FTFA:
      Restrictions: Must run in less than 10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD.
      If someone's program takes close to 10 hours to compress 100MB it better get considerably more than just an additional 1% out of it!

      • by arth1 (260657)

        If someone's program takes close to 10 hours to compress 100MB it better get considerably more than just an additional 1% out of it!

        Indeed. Who in their right mind would spend 10 hours compressing instead of simply moving the entire dataset in a fraction of the time?

        Oh, and for a good compression scheme:

        uuencode filename filename | mail external@stora.ge

        Then all the decompresser needs to do is fetch the mail and uudecode it.

        Regards,
        --
        *Art

    • Re: (Score:2, Insightful)

      If you'd RTFA you'd find the running times ranged from 30 minutes to 5 hours. They have a whole table and everything.



      The whole point of the challenge was to create a self-executing compression program that made a perfect copy of their 100MB file. Final file sizes were in the 16MB range. Geeze, seriously RTFA.

  • by Profound (50789) on Sunday October 29, 2006 @10:56PM (#16637998) Homepage
    Being able to compress knowledge well is believed to be related to acting intelligently. - IHNPTTT (I Have Not Passed The Turing Test), but while my brain is good at remembering the gist of knowledge, it's really bad at losslessly recalling it.
    • intelligence != acting like a human
      • by Kris_J (10111) *
        intelligence != acting like a human
        Truer words were never typed.
      • by qbwiz (87077) *
        Intelligence != being able to store Wikipedia in a small space. In fact, according to Wikipedia, programs are already able to predict what comes next in English text [wikipedia.org] approximately as well as humans, but they aren't as intelligent as humans.

        If we want to make something less intelligent more intelligent, it seems likely that at some point, when its intelligence is in some sense the same as the average human's, its behavior would be more similar to a human's than it is now. Of course, that's hardly a certainty
        • A more precise Wikipedia link is to the erroneous section on Entropy as information content [wikipedia.org] which reads:

          Experiments with human predictors show an information rate of between 1.1 and 1.6 bits per character, depending on the experimental setup; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character.

          The range for Shannon's experiments is actually between 0.6 and 1.3 bits per character.

          This error even caught Dr. Dobb's compression expert, Mark Nelson [marknelson.us], so I guess it isn't too
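For reference, the order-0 ("memoryless") entropy of text is easy to compute, and it sits well above Shannon's 0.6-1.3 bits/char figure precisely because his human subjects exploited context; a sketch:

```python
import math
from collections import Counter

def order0_entropy(text: str) -> float:
    """Bits per character ignoring all context -- a loose upper
    bound compared with context models and human predictors."""
    n = len(text)
    counts = Counter(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

English prose typically lands around 4.1-4.4 bits/char by this measure; the gap down to ~1 bit/char is what context modeling (PPM, PAQ, humans) buys you.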

        • If we want to make something less intelligent more intelligent, it seems likely that at some point, when its intelligence is in some sense the same as the average human's, that its behavior would be more similar to a human's than it is now.

          Why would it?

          The first flying machines we built weren't very good. Today we have flying machines that fly much better than birds. At no point in the intervening time did we ever have a machine that behaved anything like a bird.

          A point could be made that airp

  • ... by Douglas Adams. I wonder what compression ratio you get by putting the meaning of life, the universe and everything into just the number 42.
  • is what you get when you run the new compression backwards!

    This reminds me of the cryptographer paid to crypto-compress datastreams into musical notation. The work laid the foundation for real-time packet sniffing of the Internet

    Cryptographer? Got atta boys
  • Done on 9/25/06? (Score:3, Interesting)

    by SillySnake (727102) on Monday October 30, 2006 @01:02AM (#16638629)
    According to http://prize.hutter1.net/ [hutter1.net] this happened on Sept 25 of 2006.

    The site also gives some of the requirements..
    Create a compressed version (self-extracting archive) of the 100MB file enwik8 of less than 17MB. More precisely:

            * Create a Linux or Windows executable of size S < L := 17'073'018 = previous record.
            * If run, it produces (without input from other sources) a 10^8 byte file that is identical to enwik8.
            * If we can verify your claim, you are eligible for a prize of 50'000€×(1-S/L). Minimum claim is 500€.
            * Restrictions: Must run in <10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD.
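Plugging a hypothetical submission size into the payout formula makes the stakes concrete (the size S below is made up; L is the threshold from the rules, and the actual first award of 3,416 Euro corresponds to a somewhat smaller S):

```python
L = 17_073_018     # previous record size in bytes, from the rules
FUND = 50_000      # total prize fund in euros
S = 16_000_000     # a hypothetical new archive size

award = FUND * (1 - S / L)   # roughly 3,142 euros for this S
```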
  • http://www.cs.fit.edu/~mmahoney/compression/text.html [fit.edu]

    Info about contenders and results of common compression programs on the testset. (All the "just use gzip/rar/winrk/..." fools can stop jabbering now...)
  • Being able to compress knowledge well is believed to be related to acting intelligently.

    Bah! Knowing what to compress is more intelligenter...

  • Being able to compress knowledge well is believed to be related to acting intelligently.

    There's actually a little more to it than that. The creators of the Hutter prize believe that intelligence is the ability to compress knowledge. That's why they're offering this prize -- to solve the artificial intelligence problem. I'm not saying I buy into it, but that's their claim. That's why they call it "The most crucial technology prize of all." Here [geocities.com] is the page describing the origin of the prize.

  • I have tested some of the standard compression tools, here are the compressed sizes:

    100000000 | original size
    36445241 | gzip --best
    29008758 | bzip2
    28860178 | rzip -9
    24864529 | 7zr -mx=9
    24753293 | rar a -m5 -mdG

    7z does not do so well, I think because it is not particularly tuned for text compression; it is much better at compressing binaries. For text compression, PPMD and its variants are quite good, so I guess you will see good results with WinRK, Compressia, PAQ and the like.
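The same kind of comparison can be reproduced with Python's standard-library codecs on any text sample (toy sample below; absolute sizes will of course differ from the enwik8 table above):

```python
import bz2, gzip, lzma

# highly repetitive toy text stands in for the wiki dump
sample = ("'''Compression''' is the art of finding structure. " * 2000).encode()

sizes = {
    "gzip -9":   len(gzip.compress(sample, 9)),
    "bzip2 -9":  len(bz2.compress(sample, 9)),
    "lzma (7z)": len(lzma.compress(sample, preset=9)),
}
# all three should land far below len(sample) on repetitive text
```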
  • I really misread this as a Hitler Prize.
  • For all the people laughing at this contest: Read this [fit.edu] and this [hutter1.net] before making any more posts about how ridiculous this contest seems, how stupid it is to compress wikipedia, how compression is such a dumb process, and how bad the compressor was for taking a lot of time to process the text.
    • by Viol8 (599362)
      These sorts of algorithms might not be ridiculous, but they have limited scope. To compress text in this way you have to understand the language and its idioms. So the algorithm that did this would, for example, be useless at compressing French, and possibly even English written in some colloquial form. It might have uses in some specific instances, but as a general-purpose text compression algorithm it leaves a lot to be desired.
      • by rbarreira (836272)
        We can't know that until the algorithms are invented. And I guess the important thing here is not the algorithms, but the theories behind them (which hopefully will be possible to apply in a general way to all the languages).
  • First Hustler Prize Awarded ?

    Maybe I am the only Slashdot reader that enjoys pr0n ...

  • ...the prize was compressed by a factor of 5.86 as well.
  • See I have a reference file Zipper. That means that I take long strings and make a tag outta them. Right now it has 1 tag and using one huge 100 meg string reference file. I will update this later to include more. I think I should get the money! It also takes 20 seconds to run!
