Compress Wikipedia and Win AI Prize

Baldrson writes "If you think you can compress a 100M sample of Wikipedia better than paq8f, then you might want to try to win some of a (at present) 50,000 Euro purse. Marcus Hutter has announced the Hutter Prize for Lossless Compression of Human Knowledge, the intent of which is to incentivize the advancement of AI through the exploitation of Hutter's theory of optimal universal artificial intelligence. The basic theory, for which Hutter provides a proof, is that after any set of observations the optimal move by an AI is to find the smallest program that predicts those observations and then assume its environment is controlled by that program. Think of it as Ockham's Razor on steroids. Matt Mahoney provides a writeup of the rationale for the prize, including a description of the equivalence of compression and general intelligence."
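To make "find the smallest program that predicts those observations" concrete, here is a minimal Python sketch. It is a toy stand-in, not Hutter's actual AIXI construction: the tiny expression language (ATOMS/OPS), the observation list OBS, and the shortest-first enumeration are all invented for illustration.

    from itertools import count, product

    OBS = [1, 4, 9, 16, 25]        # observations so far
    ATOMS = ["n", "1", "2", "3"]   # toy program language: atoms...
    OPS = ["+", "*", "-"]          # ...joined by binary operators

    def programs(size):
        # every expression built from `size` atoms, e.g. "n*n" or "n+2*3"
        for atoms in product(ATOMS, repeat=size):
            for ops in product(OPS, repeat=size - 1):
                expr = atoms[0]
                for op, atom in zip(ops, atoms[1:]):
                    expr += op + atom
                yield expr

    def fits(expr):
        return all(eval(expr, {"n": i + 1}) == v for i, v in enumerate(OBS))

    for size in count(1):          # enumerate shortest programs first
        winner = next((e for e in programs(size) if fits(e)), None)
        if winner:
            print("model:", winner, "predicts:", eval(winner, {"n": len(OBS) + 1}))
            break                  # prints: model: n*n predicts: 36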
This discussion has been archived. No new comments can be posted.
  • WikiPedia on iPod! (Score:2, Interesting)

    by network23 ( 802733 ) *

    I'd love to be able to have the whole WikiPedia available on my iPod (or cell phone), but without destroying [sourceforge.net]

    info.edu.org [edu.org] - Speedy information and news from the Top 10 educational organisations.

  • But captain (Score:5, Funny)

    by Anonymous Coward on Sunday August 13, 2006 @06:52PM (#15899787)
    Marcus Hutter has announced the Hutter Prize for Lossless Compression of Human Knowledge, the intent of which is to incentivize the advancement of AI through the exploitation of Hutter's theory of optimal universal artificial intelligence.

    But captain, if we reverse the tachyon inverter drives then we will have insufficient dilithium crystals to traverse the neutrino warp.
  • Painful to read (Score:4, Insightful)

    by CuriHP ( 741480 ) on Sunday August 13, 2006 @06:54PM (#15899793)
    For the love of god, proofread!

  • lossy compression (Score:5, Insightful)

    by RenoRelife ( 884299 ) on Sunday August 13, 2006 @06:55PM (#15899798)
    Using the same data lossy compressed, with an algorithm that was able to permute data in a similar way to the human mind, seems like it would come closer to real intelligence than the lossless compression would
    • by Anonymous Coward
      Funny? That's the most intelligent and insightful remark I've seen here in months, albeit rather naively stated.
      The human brain is a fuzzy clustering algorithm; that's what neural networks do: they reduce the space of a large
      data set by eliminating redundancy and mapping only its salient features onto a smaller data set, which in bio systems
      is the weights for triggering sodium/potassium spikes at a given junction. If such a thing existed, a neural compression algorithm would be capable of immense data red
    • by Vo0k ( 760020 )
      that's one piece, but not necessarily - the "lossy" nature of human-mind compression can be overcome by "additional checks".

      Lossy relational/dictionary-based compression is the base. You hardly ever remember text by the order of its letters or the sound of a voice reading it. You remember the meaning of a sentence, plus optionally some rough pattern (like voice rhythm) so you can reproduce the exact text by rewording the meaning. So you understand the meaning of a sentence, store it as relations to known meanings (pointers to other entri
      • by swillden ( 191260 ) * <shawn-ds@willden.org> on Sunday August 13, 2006 @11:14PM (#15900595) Journal

        You just need to re-create a file that matches the md5sum and still follows the rules of a Linux kernel. It is extremely unlikely that any other file can both be recognized as some kind of Linux kernel and match. Of course there are countless blocks of data that still match, but very few will follow the ruleset of "ELF kernel executable" structure, which can be deduced numerically.

        Mmmm, no. You were fine up until you said "very few will follow the ruleset". That's not true. To see that it's not true, take your kernel, which consists of around 10 million bits. Now find, say, 512 of those bits that can be changed, independently, while still producing a valid-looking kernel executable. The result doesn't even have to be a valid, runnable kernel, but it wouldn't be too hard to do it even with that restriction.

        So you now have 2^512 variants of the Linux kernel, all of which look like a valid kernel. But there are only 2^128 possible hashes, so, on average, there will be 2^384 kernels for each hash value, and the odds are very, very good that your "real" kernel's hash is also matched by at least one of them. If by some chance it isn't, I can always generate a whole bunch more kernel variants. How about 2^2^10 of them?

        A hash plus a filter ruleset does not constitute a lossless compression of a large file, even if computation power/time is unbounded.
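        A minimal sketch of this counting argument in Python. The "variants" and the 16-bit truncation of MD5 are toy stand-ins for 2^512 tweaked kernels and a real 128-bit hash, but the pigeonhole effect is the same:

        import hashlib

        # The counting argument: 2**512 variants but only 2**128 digests
        # means 2**(512 - 128) = 2**384 kernels per hash value on average.
        print("average kernels per hash value: 2**%d" % (512 - 128))

        # The same pigeonhole effect at toy scale: truncate MD5 to 16 bits
        # and two different "kernels" collide within a few hundred tries.
        seen = {}
        for i in range(1 << 20):
            blob = b"kernel-variant-%d" % i    # stand-in for a tweaked kernel
            digest = hashlib.md5(blob).digest()[:2]    # keep only 16 bits
            if digest in seen:
                print("variants", seen[digest], "and", i, "share a digest")
                break
            seen[digest] = i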

  • by gatkinso ( 15975 ) on Sunday August 13, 2006 @06:56PM (#15899799)


    There. All of wiki, in 31 bytes.
  • by blueadept1 ( 844312 ) on Sunday August 13, 2006 @06:59PM (#15899809)
    Man, WinRAR is taking its bloody time. But oh god, when it's done, I'll be rich!
    • Damn you and your WinRAR! When is the deadline? WinRK says it needs 3 days 14 hours, you might be finished before then, but I'll surely take the prize... when it's done®
  • Easy! (Score:2, Funny)

    arj
  • by Millenniumman ( 924859 ) on Sunday August 13, 2006 @07:06PM (#15899826)
    Convert it to AOL! tis wikpedia, teh fri enpedia . teh bst in da wrld.
  • Comparison (Score:2, Informative)

    by ronkronk ( 992828 )
    There are some amazing compression programs out there; the trouble is they tend to take a while and consume lots of memory. PAQ [fit.edu] gives some impressive results, but the latest benchmark figures [maximumcompression.com] are regularly improving. Let's not forget that compression is no good unless it is integrated into a usable tool. 7-zip [7-zip.org] seems to be the new archiver on the block at the moment. A closely related, but different, set of tools are the archivers [freebsd.org], of which there are lots, with many older formats still not supported by open s
  • by Harmonious Botch ( 921977 ) on Sunday August 13, 2006 @07:09PM (#15899835) Homepage Journal
    "The basic theory...is that after any set of observations the optimal move by an AI is find the smallest program that predicts those observations and then assume its environment is controlled by that program." In a finite discrete environment ( like Shurdlu: put the red cylinder on top of the blue box ) that may be possible. But in the real world the problem is knowing that one's observations are all - or even a significant percentage - of the possible observations.
    This - in humans, at least - can lead to the cyclic reinforcement of one's belief system. The belief system that explains observations initially is used to filter observations later.

    TFA is a neat idea theoreretically, but it's progeny will never be able to leave the lab.

    --
    I figured out how to get a second 120-byte sig! Mod me up and I'll tell you how you can have one too.
    • But in the real world the problem is knowing that one's observations are all - or even a significant percentage - of the possible observations.

      This is precisely the assumption of Hutter's theory.

      Chapter 2 of his book "Simplicity & Uncertainty" deals with this in more detail, but the link provided does an adequate job of stating:

      The universal algorithmic agent AIXI. AIXI is a universal theory of sequential decision making akin to Solomonoff's celebrated universal theory of induction. Solomonoff d

    • This - in humans, at least - can lead to the cyclic reinforcement of one's belief system. The belief system that explains observations initially is used to filter observations later.

      There is no allowance for lossy compression. The requirement of lossless compression is there for precisely the reason you state.

    • ... but it's progeny will never be able to leave the lab.

      "it is progeny"? Damn, I thought we'd fixed that bug. Back to the lab with you!
       
    • But in the real world the problem is knowing that one's observations are all - or even a significant percentage - of the possible observations.

      No, that's just inductive science. I (or we, as a society) haven't tested that every cup, glass and plate in my kitchen (or the world) is affected by gravity, but I'm preeeeeeetty sure they are.

      The problem - and the really hard AI problem - is that there is no single "program"; there are in fact several billion independent "programs" running. These "programs" operate i
    • by gardyloo ( 512791 ) on Sunday August 13, 2006 @07:54PM (#15899973)
      TFA is a neat idea theoreretically, but it's progeny will never be able to leave the lab.

            Your use of "TFA" is a good compressional technique, but you could change "it's" to "its" and actually GAIN in meaning while losing a character! You're well on your way...
    • by DrJimbo ( 594231 ) on Sunday August 13, 2006 @08:13PM (#15900036)
      Harmonious Botch said:
      This - in humans, at least - can lead to the cyclic reinforcement of one's belief system. The belief system that explains observations initially is used to filter observations later.
      I encourage you to read E. T. Jaynes' book: Probability Theory: The Logic of Science [amazon.com]. It used to be available on the Web in pdf form before a published version became available.

      In it, Jaynes shows that an optimal decision maker shares this same tendency of reinforcing existing belief systems. He even gives examples where new information reinforces the beliefs of optimal observers who have reached opposite conclusions (due to differing initial sets of data). Each observer believes the new data further supports their own view.

      Since even an optimal decision maker has this undesirable trait, I don't think the existence of this trait is a good criterion for rejecting decision making models.

      • In it, Jaynes shows that an optimal decision maker shares this same tendency of reinforcing existing belief systems. He even gives examples where new information reinforces the beliefs of optimal observers who have reached opposite conclusions (due to differing initial sets of data). Each observer believes the new data further supports their own view.

        I think what Hutter has shown is that there is a solution which unifies the new data with the old within a new optimum, which is most likely unique. I think

          • Be that as it may (and I highly doubt there will always be unique solutions), it is a separate issue entirely from what the original poster and I were talking about. We were talking about the problem of deciders becoming prejudiced over time, falling into a rut where new information tends to confirm and reinforce existing "beliefs".

          This effect is often seen clearly when, as I said before and you quoted:

          ... optimal observers who have reached opposite conclusions (due to differing initial sets of data)

            • The fact that there are local optima in utility functions in which you can get stuck is a problem for all learning systems, but to varying degrees depending on the details of how they look around the space of "programs". The space of programs measured for Kolmogorov complexity has a lot of discontinuity.
    • I think the original premise is wrong. Real world intelligence is not lossless. The algorithms only have to be right most of the time to be effective. And our intelligence is incredibly redundant. If you want robust AI, you're going to have to accept redundancy and imperfection. Same goes for data transmission. Sure, you compress, but then you also add in error-correcting codes with a level of redundancy based on the known reliability of the network.
      • > Real world intelligence is not lossless. The algorithms only have to be right most of the time to be effective.

        Right - then all you need to do is run the data through the AI system and make a list of the few times it is wrong. This would be a small list. Then add this list to the end of the compressed data. If the AI is any good, then you should still have fantastic compression.

        ...so now you've taken intelligence that is "lossy" and made it "lossless" in a highly efficient manner (see the sketch after this comment).

        ...I can't see
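        A minimal sketch of the lossy-to-lossless trick in Python, assuming a deliberately dumb predictor (repeat the previous byte) in place of a real AI model, and storing the corrections as plain (position, byte) pairs:

        def predict(prev):        # deliberately dumb model: repeat last byte
            return prev

        def compress(data):
            corrections, prev = [], 0
            for i, b in enumerate(data):
                if predict(prev) != b:          # model guessed wrong here,
                    corrections.append((i, b))  # so record a correction
                prev = b
            return corrections    # short list if the model predicts well

        def decompress(length, corrections):
            out, fixes, prev = [], dict(corrections), 0
            for i in range(length):
                b = fixes.get(i, predict(prev))  # correction beats prediction
                out.append(b)
                prev = b
            return bytes(out)

        data = b"aaaaaaabbbbbbbbccccc"
        fixes = compress(data)
        assert decompress(len(data), fixes) == data   # exactly lossless
        print(len(fixes), "corrections for", len(data), "bytes")  # prints 3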
      • by kognate ( 322256 ) on Sunday August 13, 2006 @09:45PM (#15900333)
        Yeah, but you can use Turbo codes to achieve near-Shannon-limit performance, and you don't have to worry too much about the addition of the ECC. Remember kids: study that math; you never know when information theory will suddenly pay off.

        Just to help (and so you don't think I made Turbo Codes up - it sounds like I did, 'cause it's such a bad name):
        http://en.wikipedia.org/wiki/Turbo_code [wikipedia.org]
        • I hope the prize fund becomes very large and someone like you comes up with an algorithm and gets rich enough to retire.
    • "The belief system that explains observations initially is used to filter observations later. "

      Hoo-ray for empiricism! Which, as Hume pointed out, is circular reasoning.
      It does keep one from getting hit by buses, and is the driving force behind the alarm clock industry.
      So it's not a total waste, never mind the nasty philosophical bit.
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Sunday August 13, 2006 @07:15PM (#15899857)
    Comment removed based on user account deletion
    • Wikipedia is a representation of accumulated human knowledge (experience) presented primarily in natural language. The smallest self-extracting archive will necessarily contain rules that imply that knowledge, and do so in such a way that the rules of natural language are represented as well.

      The distinction between compressed experience and rules is an illusion. Rules _are_ compressed experience, in the same sense that "x+y=z" is a compressed representation of the table of all ordered pairs (x,y) of numbers and th
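      The point in runnable miniature (the 256x256 bound is an arbitrary toy choice):

      # The 9-character rule regenerates all 65,536 table entries on demand.
      table = {(x, y): x + y for x in range(256) for y in range(256)}
      rule = "z = x + y"
      print(len(table), "table entries vs.", len(rule), "characters of rule")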

  • Solution. (Score:5, Funny)

    by Funkcikle ( 630170 ) on Sunday August 13, 2006 @07:34PM (#15899916)
    Removing all the incorrect and inaccurate data from the Wikipedia sample should "compress" it down to 20 MB, at least.

    Then just apply your personal favourite compression utility.

    I like lharc, which according to Wikipedia was invented in 1904 as a result of bombarding President Lincoln, who plays Commander Tucker in Star Trek: Enterprise, with neutrinos.
  • That's easy: all you have to do is run your program on a computer that uses 32-bit bytes. That way you can fit more bits in your bytes, and automatically beat the record by a factor of four.
    • Bytes are always eight bits. Nibbles are always four bits. Kilobytes are always 1024 bytes. Are you seeing a pattern?

      The word you should be using is, well, "word." That's what we used to call the bus width of the system. Today, though, the world is much more complicated. Lots of different buses, lots of different bus widths. CISC instruction sets are more CISC than they used to be (outside of mainframes).

      So really, that doesn't apply very well even in the way you intended it.
  • Doesn't the dictionary in PAQ8A/B/C/D result in smaller file sizes if you're talking about a file of 100 MB or more?

  • by noidentity ( 188756 ) on Sunday August 13, 2006 @07:47PM (#15899952)
    the intent of which is to incentivize the advancement of AI

    Sorry, anything which uses the word "incentivize" does not involve intelligence, natural or artificial.

    • Please join me in my personal crusade to eliminate the words "incent" and "incentivize" from our culture.

      The root word is "incentive", and it wasn't until the last few decades that people came up with "incent", "incentivize", and -- god forbid -- "disincent".

      May I suggest these alternate words:
      - encourage
      - give incentive
      - influence
      - motivate
      - stimulate

      Hell, even "prod" is a better word. Now let us raise our torches and pitch forks and put these rogue words to rest.
  • I'll try: (Score:5, Funny)

    by dcapel ( 913969 ) on Sunday August 13, 2006 @07:59PM (#15899986) Homepage
    echo "!#/bin/sh\nwget en.wikipedia.org/enwiki/" > archive

    Mine wins as it is roughly 40 bytes total. To get your results, you simply need to run the self-extracting archive and wait. Be warned, it will take a while, but that is the cost of such a great compression scheme!
  • Not to be conceited, but I thought of that over a decade ago. I labeled it Transformation Theory. The theory essentially says that, given an input and a desired output, an explicit mapping can be drawn between the two, and algorithms (and hence AI) derive from applying reductions to the mapping (e.g. compression). I later dubbed it Reductionary Transformation Theory, so as not to confuse it with another meme by the same name.
  • by Anonymous Coward
    I would argue that lossless compression really is not the best measure of intelligence. Humans are inherently lossy in nature. Everything we see, hear, feel, smell, and taste is pared down to its essentials when we understand it. It is this process of discarding irrelevant details and making generalizations that is truly intelligence. If our minds had lossless compression we could regurgitate textbooks, but never be able to apply the knowledge contained within. If we really understand, we could reproduce wh
    • Absolutely! The "right answer" for best *intelligent* compression would only store a *minimal* set of pertinent data points and then would use intellect to flesh out the details on decompression. Although you could end up with http://www.reducedshakespeare.com/ [reducedshakespeare.com], I'll worry that when some AI researcher starts down this path

      Minor disagreement with what you said... The details are relevant; they just aren't important enough to store in and of themselves. Sort of like mathematics where you only need to lear
  • Sounds quite similar to Solomonoff's universal prediction.
    • Hutter's theory is about an active AI agent seeking to maximize some utility function rather than engaging in passive prediction. However, prizes for human knowledge compression should probably have been in place a long time ago -- perhaps as long ago as William of Ockham. IMHO guys like Solomonoff provided sufficient formal rigor to justify massive government funding of a prize competition for compression of human knowledge decades ago, but it seems to be the case that people who control money ge
  • Since it's wikipedia, am I allowed to edit the entries before I compress them? ;-)
  • C++ (Score:3, Funny)

    by The Bungi ( 221687 ) <thebungi@gmail.com> on Sunday August 13, 2006 @08:32PM (#15900103) Homepage
    Interestingly enough, the source code [fit.edu] for the compressor is C++. One would expect the thing to be written in pure C.

    A (good) sign of the times, I guess.

  • echo I'mright!NoI'mright!You'reanasshole!So'syourmother!! >distilled-wiki.txt
  • # tar cvf wikipedia.tar /wikipediapath
    # bzip2 wikipedia.tar
  • Start with: "the" "and" Then remove periods commas semicolons Speling mistkes ok if readable If other words are commonly used those could be removed It would be a first a fill in the blank-as-you-read encyclopedia Sales staff for door to door enclopedias can convince you new edition saves trees

    The above text is a demonstration. No punctuation. Left out "and" and "the". Since some entries are hard to read, the fill-in-the-blank interpretation adds to the experience! ;)

    If this new compression is added to the
  • ...be a program that creates an "encyclopedia" website and invites humans to contribute to it?
  • I've got it!

    while (result != wikipedia)
    {
    result = [randomGenerator randomStringOfLength:[wikipedia length]];
    }
  • that the entire knowledge of the world could simply be compressed without loss to

    yeah, you guessed it..

    42...
  • for any given dataset there is a pseudo-random number generator that, with the proper parameters, will produce that dataset of bytes in sequence.

    The problem is that it is non-trivial to find the proper algorithm and parameters. However, if trial-and-error computation is not a problem, a large library of potential generators can be tried with all possible parameters.
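    A toy sketch of that trial-and-error search in Python. Note the circularity: the 8-byte "dataset" is built from a short seed to begin with, which is the only reason a short generator exists to be found; for typical data the parameters end up as long as the data itself:

    import random

    # Build an 8-byte "dataset" from a known seed, then pretend we don't
    # know the seed and rediscover it by exhaustive trial and error.
    target = bytes(random.Random(123456).randrange(256) for _ in range(8))

    for seed in range(1 << 20):        # try candidate generators one by one
        rng = random.Random(seed)
        if bytes(rng.randrange(256) for _ in range(8)) == target:
            print("seed", seed, "regenerates the dataset")   # finds 123456
            break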
  • The way I see it, there's mainly one good way to compress text. (Disclaimer: I do not know if this idea has ever been used before, but if it has I'd be glad to know about it.) Let's say you index the values of the most frequently used characters, keeping a 'joker' value that lets you insert a custom character after it; say it sums up to about 45 indexed characters. You replace each character by its indexed value, and instead of having it in base 256, you calculate it all in base
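    A rough sketch of that base-conversion idea in Python. The 45-symbol ALPHABET is an invented stand-in for the indexed characters; the 'joker' escape is omitted, and decoding (repeated divmod by 45, given the text length) is left out for brevity:

    import math
    import string

    # 26 letters + 19 punctuation/digit characters = 45 indexed symbols.
    ALPHABET = string.ascii_lowercase + " .,;:!?'\"()-0123456"
    assert len(ALPHABET) == 45
    INDEX = {c: i for i, c in enumerate(ALPHABET)}

    def pack(text):
        n = 0
        for c in text:
            n = n * 45 + INDEX[c]      # one base-45 digit per character
        return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

    text = "wikipedia is the free encyclopedia"
    print(len(text), "chars ->", len(pack(text)), "bytes")      # 34 -> 24
    print("limit:", round(math.log2(45) / 8, 3), "bytes/char")  # 0.686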

  • douginadress [douginadress.com]
  • by giafly ( 926567 ) on Sunday August 13, 2006 @10:57PM (#15900538)
    The basic theory, for which Hutter provides a proof, is that after any set of observations the optimal move by an AI is to find the smallest program that predicts those observations and then assume its environment is controlled by that program. Think of it as Ockham's Razor on steroids.
    A "Hutter AI" will be at a disadvantage when competing against an opponent which knows it's acting as above and can do the same calculations. Under these circumstances, the opponent will be one step ahead. The Hutter AI is predictable and so can be outmanoeuvered. Hence the Hutter AI's moves are not optimal.

    Human poker players address this issue by deliberately introducing slight randomness into their play. I think a "Hutter AI" will make better real-world decisions if it does the same (see Game Theory).

    Occam's razor (also spelled Ockham's razor) is a principle attributed to the 14th-century English logician and Franciscan friar William of Ockham. Originally a tenet of the reductionist philosophy of nominalism, it is more often taken today as a heuristic maxim that advises economy, parsimony, or simplicity in scientific theories. Occam's razor states that the explanation of any phenomenon should make as few assumptions as possible. - Wikipedia [wikipedia.org]
    Occam's razor is also highly suspect. There's the issue of cultural bias when counting assumptions. And all programmers will be aware of how they fixed "the bug" that caused all the problems in an application, only to find there were other bugs that caused identical symptoms.
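    A toy illustration of the exploitability point, using matching pennies rather than anything AIXI-specific: an opponent that can simulate a deterministic player beats it every round, while a randomizing player breaks even:

    import random

    # Matching pennies: the opponent scores by matching your move. It can
    # simulate the deterministic rule exactly, so that player loses every
    # round; the randomizing player at least breaks even on average.
    def deterministic(history):        # predictable rule: repeat last move
        return history[-1] if history else 0

    history, det_score, rnd_score = [], 0, 0
    for _ in range(10000):
        guess = deterministic(history)         # opponent simulates the rule
        det_score += 1 if deterministic(history) != guess else -1
        rnd_score += 1 if random.randint(0, 1) != guess else -1
        history.append(guess)
    print("deterministic:", det_score)   # -10000, exploited every round
    print("randomized:", rnd_score)      # near 0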
  • by Dachannien ( 617929 ) on Sunday August 13, 2006 @10:59PM (#15900551)
    Can't I just punch the monkey for $20 instead?
