Comment Re: "Now"? (Score 2) 47
He mentions why it looks old on the home page, point number 12.
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
kvezach writes: To put it differently: suppose that the text was 10 bytes long.
A better way of thinking about data scaling is to ask: "How many digits of Pi would it take before, say, John Tromp's 401-bit binary lambda calculus program that generates Pi becomes a better model than the literal string of those apparently-random digits?" (And by "better" I mean not only that it would be shorter than those digits, but that it would extrapolate to (ie: "predict") the next digit of Pi.)
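For a rough sense of where that crossover lands, here is a back-of-envelope sketch (my own illustration, not kvezach's), assuming the only costs involved are the flat 401-bit program versus log2(10) bits per literal decimal digit:

```python
import math

PROGRAM_BITS = 401              # Tromp's pi-generating binary lambda term
BITS_PER_DIGIT = math.log2(10)  # ~3.32 bits to store one decimal digit literally

# Literal encoding of n digits costs n * log2(10) bits; the program costs
# a flat 401 bits no matter how many digits you run it for.
crossover = math.ceil(PROGRAM_BITS / BITS_PER_DIGIT)
print(f"literal digits lose past ~{crossover} digits")  # ~121 digits
```

And unlike the literal string, the program extrapolates: digit n+1 falls out of the same 401 bits, i.e. the "prediction" comes for free.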
In terms of how much data humans require: this is, as I said, something about which everyone has an opinion (including, obviously, you, to which you are of course entitled) but on which there is no settled science. Hence the legitimacy of the 1GB limit on a wide range of human knowledge for research purposes.
Concerns about the bias implied by "The Hardware Lottery" are not particularly relevant for engineering/business decisions, but path dependencies implicit in the economics of the world are always suspect as biasing research directions away from more viable models and, in the present instance, meta-models.
There is only one Hutter Prize contest and it's for 1GB. 100MB was the original size for the Hutter Prize starting in 2006, but it was increased to 1GB in 2020, along with a factor of 10 increase in the payout per incremental improvement. See the "Hutter Prize History".
Insofar as the size is concerned: the purpose of the Hutter Prize is research into radically better means of automated, data-driven model creation, not biased by what Sara Hooker has called "The Hardware Lottery". One of the primary limitations of current machine learning techniques is that their data efficiency is low compared to what some theories speculate natural intelligence attains. Everyone has their opinion, of course, but it is far from "settled science".

In particular, the use of ReLU activations suggests that machine learning currently relies heavily on piecewise-linear interpolation in constructing its world model from language. Any attempt to model causality has to identify system dynamics (including cognitive dynamics) in order to extrapolate from past observations (ie: "the data in evidence") to future observations (ie: predictions). Although there is reason to believe Transformers can do something like dynamics within their context windows despite using ReLU (and that this is what gives them their true potential for "emergence at scale"), it wasn't until people started moving to State Space Models that they started returning to dynamical systems identification (under another name, as academics are wont to gratuitously impose on their fields).
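To make the piecewise-linear point concrete, here is a minimal sketch (my own, assuming nothing beyond NumPy; none of it comes from the original comment): a one-hidden-layer ReLU network is exactly linear between a handful of breakpoints, which is what "piecewise-linear interpolation" means here.

```python
import numpy as np

# Tiny scalar-in, scalar-out MLP with 8 ReLU units (weights are arbitrary).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def relu_net(x):
    h = np.maximum(0.0, W1 @ x + b1)  # each ReLU kink creates one breakpoint
    return (W2 @ h + b2)[0]

xs = np.linspace(-3, 3, 2001)
ys = np.array([relu_net(np.array([x])) for x in xs])

# Second finite differences vanish everywhere except near the (at most 8)
# breakpoints where a hidden unit switches on or off: piecewise linear.
d2 = np.abs(np.diff(ys, n=2))
print("nonzero 2nd differences:", int((d2 > 1e-6).sum()), "of", len(d2))
```

Between breakpoints such a model can only interpolate linearly; it carries no dynamics to extrapolate with, which is the contrast with systems identification drawn above.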
When they load you into the trains, I'll be taking photos to feed my AI so folks never forget.
Until the next generation of you calls for censorship and forced speech.
Misinformation is free expression, comrade.
Yes, rampant inflation creates jobs.
Fuck off with those data points.
I voted for no one and am glad.
Good public transportation is always empty in Chicago, and it is always a city bus clogging up streets for my Uber.
Public transportation, even if on time, takes 150-300% longer than a car.
It keeps people poor by allowing them to live 90 minutes each way from work instead of making their bosses have to pay them better to afford to live closer.
Probably the best introduction would be Hutter's 2010 paper, "A Complete Theory of Everything (Will Be Subjective)". From there you can follow its citations on Google Scholar.
You didn't submit an executable archive that expands into a file matching enwik9 bit for bit. While you also failed to qualify in some other ways, that is the first bar you must clear before any further investment by the judging committee.
Or are you Yann LeCun out to pull my leg again?
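For what it's worth, that first bar amounts to something like the following sketch (the file names are my illustrative assumptions, not the committee's actual harness):

```python
import hashlib
import subprocess

# Run the self-extracting entry; assume it writes enwik9_restored.
subprocess.run(["./archive9"], check=True)

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Bit-for-bit reproduction is pass/fail: any mismatch and the entry is
# rejected before size, time, or memory limits are even considered.
assert sha256("enwik9_restored") == sha256("enwik9"), "not bit-for-bit"
print("bit-for-bit match")
```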
One big mistake they made early on with the Hutter Prize was not insisting that all contestants make their entries Open Source.
IIRC, only one entry was closed source. You may be thinking of Matt Mahoney's Large Text Compression Benchmark where the top contender is frequently closed source.
That the machine learning world has yet to recognize lossless compression as the most principled loss function is a tragedy, but it is due to a lot more than that entry. This failure stretches back to when Solomonoff's proof was overshadowed by Popper's falsification dogma in the popularization of the philosophy of science:
When a model's prediction is wrong under Popper's falsification dogma, the model is "falsified"; under Solomonoff, the model is penalized not only by a measurement of the error (such as LSE), but by literally encoding the error within the context of the model. The significance of this subtle difference is hard for people to grasp, and this lack of understanding derailed the principled application of Moore's Law to science. Instead we got an explosion of statistical "information criteria for model selection", all of which are less principled than the Algorithmic Information Criterion, and now we have ChatGPT hallucinating us into genuinely treacherous territory.
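To illustrate the difference, here is a toy sketch (mine; the numbers, the Gaussian error code, and the 5000-bit model size are all made-up assumptions) of the two-part-code scoring that the Algorithmic Information Criterion idealizes: total description length = bits for the model + bits to encode its errors.

```python
import math

def description_length(model_bits, residuals, sigma=1.0):
    """Two-part code: model size plus (idealized, undiscretized) bits to
    encode each residual under a Gaussian error model."""
    error_bits = sum(
        -math.log2(math.exp(-r * r / (2 * sigma**2))
                   / math.sqrt(2 * math.pi * sigma**2))
        for r in residuals
    )
    return model_bits + error_bits

# A wrong prediction does not "falsify" the simple model; it just costs
# bits. It can still beat a huge model that fits the data perfectly.
simple = description_length(model_bits=100, residuals=[0.4, -0.9, 1.3])
complex_fit = description_length(model_bits=5000, residuals=[0.0, 0.0, 0.0])
print(f"simple: {simple:.1f} bits, complex: {complex_fit:.1f} bits")
```

Under Popper the simple model above is "falsified" three times over; under the two-part code it wins by a wide margin.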
Saurabh Kumar has just raised the bar 1.04% on the Hutter Prize for Lossless Compression of Human Knowledge with his "fast cmix" entry. If you would like to supplement Marcus's monetary award of €5187, one way is to send BTC to Saurabh at bc1qr9t26degxjc8kvx8a66pem70ye5sgdw7u4tyjy or contact Marcus Hutter directly.
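As a sanity check on the numbers (a sketch on my part, assuming the payout rule published on the Hutter Prize page: 500,000€ times the relative improvement), the €5187 award backs out to the 1.04% figure above:

```python
TOTAL_FUND_EUR = 500_000                  # published Hutter Prize fund
AWARD_EUR = 5187                          # Saurabh's award, per the summary

improvement = AWARD_EUR / TOTAL_FUND_EUR  # relative improvement (L - S) / L
print(f"{improvement:.2%}")               # ~1.04%, matching the headline figure
```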
Before describing Saurabh's contribution, there are two salient facts required to understand the importance of this competition:
1) It is more important than a language modeling competition. It is knowledge comprehension. To quote Gregory Chaitin, "Compression is comprehension."
Are you going to argue that comprehension of all that knowledge is insufficient to generatively speak the truth consistent with all that knowledge — and that this notion of "truth" will not be at least comparable to that generatively spoken by large language models such as ChatGPT?
2) The above also applies to Matt Mahoney's Large Text Compression Benchmark, which, unlike the Hutter Prize, allows unlimited computing resources. The Hutter Prize, however, is geared toward research in that it restricts computation to the most general-purpose hardware that is widely available.
Why?
As described in Sara Hooker's seminal paper "The Hardware Lottery", AI research is biased toward algorithms optimized for existing hardware infrastructure. While this hardware bias is justified for engineering (applying existing scientific understanding to the "utility function" of making money), it "can delay research progress by casting successful ideas as failures", to quote Hooker.
Saurabh Kumar's Contribution
Saurabh's fast-cmix README describes how he went about substantially increasing the speed of the prior Hutter Prize algorithms, most recently Artemiy Margaritov's SorTing ARticLes by sImilariTy (STARLIT).
The complaint that this is "mere" optimization ignores the fact that it was done on general-purpose computing hardware, and is therefore in line with the spirit of Sara Hooker's admonition to researchers in "The Hardware Lottery". By showing how to optimize within the constraint of general-purpose computation, Saurabh's contribution may help point the way toward future directions in hardware architecture.
How I spend my money IS expression and government in the US isn't allowed to censor me.
My freedom of speech = freedom to spend how I want to spend my savings.
You are aware that Facebook, and previously Twitter, took orders from the sitting administration, correct?