
Bill Gates' Climate Group Lays Off US, Europe Policy Teams 110

Breakthrough Energy, the climate group founded by Bill Gates, has laid off dozens of employees in the U.S. and Europe, eliminating its public policy and partnerships teams as it shifts away from advocacy work. Its investment and grantmaking divisions will remain unaffected. The Detroit News reports: Breakthrough Energy is an umbrella organization founded by Gates that houses various initiatives aimed at accelerating the clean energy transition. It also encompasses Breakthrough Energy Ventures, one of the biggest investors in early-stage climate technologies with stakes in more than 120 companies, as well as a grantmaking program for early-seed stage company founders and Breakthrough Catalyst, a funding platform focused on emergent climate technologies. None of those divisions of the group were impacted by cuts, which were reported earlier by the New York Times.

[...] "In the United States especially, the conversation about climate has been sidetracked by politics," Gates wrote in the introduction to his 2021 book. "Some days, it can seem as if we have little hope of getting anything done." The climate pullback is happening at the same time as the US cuts foreign aid, a field where Gates is also a major donor. His nonprofit, the Gates Foundation, operates with a budget of billions and has a strong focus on overseas development.
"Bill Gates remains as committed as ever to advancing the clean energy innovations needed to address climate change," a Breakthrough Energy spokesperson said in an emailed statement. "His work in this area will continue and is focused on helping drive reliable affordable, clean energy solutions that will enable people everywhere to thrive."

On Wednesday, the EPA announced the agency will "undertake 31 historic actions in the greatest and most consequential day of deregulation in U.S. history..."

Submission + - Kaido Orav and Byron Knoll's fx2-cmix Wins €7950 Hutter Prize Award!

Baldrson writes: Kaido Orav and Byron Knoll just beat the Nuclear Code Golf Course Record!

What's Nuclear Code Golf?

Some of you may have heard that "next token prediction" is the basis of how large language models generalize. Well, there is just one contest that pays cash prizes in proportion to how much you beat the best prior benchmark on the most rigorous measure of next-token prediction: lossless compression length, including the length of the decompressor. The catch is that, to keep it relevant regardless of The Hardware Lottery's hysterics*, you are restricted to a single general-purpose CPU. This contest is not for the faint of heart. Think of it as Nuclear Code Golf.
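To make the scoring concrete, here is a minimal sketch (my own illustration, not the official scoring script; the file names are hypothetical): the benchmark quantity is simply the total number of bytes you must ship to reproduce the data exactly.

    import os

    def hutter_style_score(compressed_path: str, decompressor_path: str) -> int:
        """Total cost of a submission: the compressed data plus the program
        needed to decompress it. Lower is better; counting the decompressor
        keeps you from hiding the data inside the 'decompressor' itself."""
        return os.path.getsize(compressed_path) + os.path.getsize(decompressor_path)

    # Hypothetical usage, comparing two entries on the same corpus:
    # score_a = hutter_style_score("enwik9.cmix", "cmix_decomp")
    # score_b = hutter_style_score("enwik9.fx2", "fx2_decomp")
    # The smaller total wins, provided both reproduce the corpus bit for bit.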

Kaido Orav and Byron Knoll are the team to beat now.

*The global economy is starting to look like a GPU-maximizer AGI.

Comment Re:Errrm, .... no, not really. (Score 1) 94

That was 12 years ago. A 12-year-out-of-date critique of a web technology that has had ongoing language updates and two entire rewrites in that interval should be viewed with some suspicion. Also, are you really just citing the title of the article and none of the content?

I'm not even defending PHP here, just questioning lazy, kneejerk, "but it sucked once, so now I hate it forever" thinking.

Comment Re:A Voyager 4? (Score 1) 80

I'll disagree a little bit: we have heavy-lift rockets bringing mass to orbit at a greater rate than at any time in history, new, larger, and more efficient rockets on the cusp of entering service, and next generations planned for the future. Space launch technology -- the actual raw launching of mass to orbit, where it can be useful -- has advanced. And more mass to orbit means more fuel -- if we really wanted to get something out there faster.

And that's where our statements arrive at the same conclusion: there's little need to do anything but super-efficient deep-space probes. While I can quibble with your implied assertion that newer technology hasn't made a difference in capability, in a practical sense, given our funding of deep space research, the big tech upgrade has been to data collection devices and communication. We'll need far cheaper lift capability before spending extra fuel to cut time off a project makes any kind of sense. But it is now at least plausible as an option.

(Also, this appears to be the only thread that isn't making Trek or Aliens jokes)

Comment Re:What's the size again? (Score 2) 22

kvezach writes: To put it differently: suppose that the text was 10 bytes long.

A better way of thinking about data scaling is to ask "How many digits of Pi would it take before, say, John Tromp's 401-bit binary lambda calculus program that generates Pi becomes a better model than the literal string of those apparently random digits?" (And by "better" I mean not only that it would be shorter than those digits, but that it would extrapolate to (ie: "predict") the next digit of Pi.)
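The break-even point is simple arithmetic (a back-of-the-envelope sketch; the 401-bit figure is the one quoted above, and log2(10) bits per decimal digit is an idealized literal encoding):

    import math

    PROGRAM_BITS = 401              # the pi-generating lambda term mentioned above
    BITS_PER_DIGIT = math.log2(10)  # ~3.32 bits to store one decimal digit literally

    # The literal string costs n * log2(10) bits; the program costs 401 bits no
    # matter how many digits you want. The crossover is:
    break_even = math.ceil(PROGRAM_BITS / BITS_PER_DIGIT)
    print(break_even)  # ~121 digits

Beyond roughly 121 digits, the generative program is the shorter description, and unlike the literal digits it also "predicts" every digit that follows.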

In terms of how much data humans require: this is, as I said, something about which everyone has an opinion (obviously including you, to which you are of course entitled) but on which there is no settled science. Hence the legitimacy of the 1GB limit on a wide range of human knowledge for research purposes.

Concerns about the bias implied by "The Hardware Lottery" are not particularly relevant for engineering/business decisions, but path dependencies implicit in the economics of the world are always suspect as biasing research directions away from more viable models and, in the present instance, meta-models.

Comment Re:What's the size again? (Score 2) 22

There is only one Hutter Prize contest and it's for 1GB. 100MB was the original size for the Hutter Prize starting in 2006, but it was increased to 1GB in 2020, along with a factor of 10 increase in the payout per incremental improvement. See the "Hutter Prize History".
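For those wondering how the payouts scale, here is a rough sketch reverse-engineered from the award amounts quoted elsewhere on this page (so treat the €500,000 pool and the rounding as my assumptions rather than the official rules):

    def hutter_prize_payout(prev_record_bytes: int, new_record_bytes: int,
                            prize_pool_eur: int = 500_000) -> float:
        """Payout proportional to the relative improvement over the previous record.
        A 1.04% improvement yields roughly 0.0104 * 500,000 ~= EUR 5,200, which is
        consistent with the EUR 5187 and EUR 6911 awards mentioned in these threads."""
        improvement = (prev_record_bytes - new_record_bytes) / prev_record_bytes
        return prize_pool_eur * improvement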

Insofar as the size is concerned: the purpose of the Hutter Prize is research into radically better means of automated, data-driven model creation, not biased by what Sara Hooker has called "The Hardware Lottery". One of the primary limitations of current machine learning techniques is that their data efficiency is low compared to what some theories speculate natural intelligence attains. Everyone has an opinion on this, of course, but it is far from "settled science". In particular, the use of ReLU activations suggests that machine learning currently relies heavily on piece-wise linear interpolation in constructing its world model from language. Any attempt to model causality has to identify system dynamics (including cognitive dynamics) in order to extrapolate to future observations (ie: predictions) from past observations (ie: "the data in evidence"). Although there is reason to believe Transformers can do something like dynamics within their context windows despite using ReLU (and that this is what gives them their true potential for "emergence at scale"), it wasn't until people started turning to State Space Models that the field returned to dynamical systems identification (under another name, as academics are wont to gratuitously impose on their fields).
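For readers unfamiliar with the "dynamical systems identification" framing, a discrete-time linear state-space update is just the recurrence below (a generic textbook sketch, not any particular SSM paper's parameterization; x and u are NumPy vectors, A, B, C, D matrices of compatible shapes):

    import numpy as np

    def ssm_step(x, u, A, B, C, D):
        """One step of a discrete-time linear state-space model:
            x_next = A x + B u   (latent dynamics)
            y      = C x + D u   (observation / prediction)
        Identifying A, B, C, D from observed sequences is classical system
        identification; recent SSM language models learn structured versions
        of this same recurrence."""
        x_next = A @ x + B @ u
        y = C @ x + D @ u
        return x_next, y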

Submission + - Kaido Orav's fx-cmix Wins €6911 Hutter Prize Award! (google.com)

Baldrson writes: Kaido Orav has just improved 1.38% on the Hutter Prize for Lossless Compression of Human Knowledge with his “fx-cmix” entry.

The competition seems to be heating up, with this winner coming a mere 6 months since the prior winner. This is all the more impressive since each improvement in the benchmark approaches the (unknown) minimum size called the Kolmogorov Complexity of the data.

Comment Re:No, that's not 114 megabytes (Score 1) 64

You didn't submit an executable archive purported to expand into a file that matches enwik9 bit for bit. While you also failed to qualify in some other ways, that is the first bar you must clear before any further investment by the judging committee.
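For what that first bar amounts to in practice, a minimal verification sketch (the file names and the use of SHA-256 are my illustration, not the committee's actual procedure):

    import hashlib

    def same_bits(path_a: str, path_b: str, chunk: int = 1 << 20) -> bool:
        """True iff the two files hash identically (byte-for-byte equality,
        up to SHA-256 collision resistance). Hashing in chunks avoids
        loading a ~1 GB file into memory at once."""
        digests = []
        for path in (path_a, path_b):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            digests.append(h.digest())
        return digests[0] == digests[1]

    # Hypothetical usage after running a submitted self-extracting archive:
    # assert same_bits("enwik9", "enwik9.reconstructed")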

Or are you Yann LeCun out to pull my leg again?

https://twitter.com/ylecun/sta...

Comment Re:Silicon Valley anyone? (Score 1) 64

One big mistake they made early on with the Hutter Prize was not insisting that all contestants make their entries Open Source.

IIRC, only one entry was closed source. You may be thinking of Matt Mahoney's Large Text Compression Benchmark where the top contender is frequently closed source.

That the machine learning world has yet to recognize lossless compression as the most principled loss function is a tragedy, but it is due to a lot more than that entry. This failure stretches back to when Solomonoff's proof was overshadowed by Popper's falsification dogma in his popularization of the philosophy of science:

When a model's prediction is wrong, under Popper's falsification dogma the model is "falsified", whereas under Solomonoff the model is penalized not only by a measurement of the error (such as LSE), but by literally encoding the error within the context of the model. The significance of this subtle difference is hard for people to understand, and this lack of understanding derailed the principled application of Moore's Law to science. Instead we got an explosion of statistical "information criteria for model selection", all of which are less principled than the Algorithmic Information Criterion, and now we have ChatGPT hallucinating us into genuinely treacherous territory.
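The difference can be made concrete as a two-part description length (a toy sketch of the Algorithmic Information Criterion idea; the byte counts are stand-ins for proper codelengths):

    def two_part_codelength(model_bytes: int, residual_bytes: int) -> int:
        """Solomonoff/MDL-style score: the model is never 'falsified' outright;
        every prediction error is paid for, literally, as the extra bytes needed
        to encode the residual on top of the model. A model that explains more
        of the data shrinks the residual term and wins, even if imperfect."""
        return model_bytes + residual_bytes

    # Toy comparison: a small model with large residuals vs. a larger model
    # whose residuals nearly vanish.
    # two_part_codelength(10_000, 900_000)  -> 910_000
    # two_part_codelength(50_000, 200_000)  -> 250_000 (better despite the bigger model)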

Submission + - Saurabh Kumar's fast-cmix wins €5187 Hutter Prize Award! 1

Baldrson writes: Marcus Hutter's tweet makes it official:

Saurabh Kumar has just raised the bar 1.04% on the Hutter Prize for Lossless Compression of Human Knowledge with his "fast-cmix" entry. If you would like to supplement Marcus's monetary award of €5187, one way is to send BTC to Saurabh at bc1qr9t26degxjc8kvx8a66pem70ye5sgdw7u4tyjy or contact Marcus Hutter directly.

Before describing Saurabh's contribution, there are two salient facts required to understand the importance of this competition:

1) It is more important than a language modeling competition. It is knowledge comprehension. To quote Gregory Chaitin, "Compression is comprehension."

  • Every programming language is described in Wikipedia.
  • Every scientific concept is described in Wikipedia.
  • Every mathematical concept is described in Wikipedia.
  • Every historic event is described in Wikipedia.
  • Every technology is described in Wikipedia.
  • Every work of art is described in Wikipedia — with examples.
  • There is even the Wikidata project that provides Wikipedia a substantial amount of digested statistics about the real world.

Are you going to argue that comprehension of all that knowledge is insufficient to generatively speak the truth consistent with all that knowledge — and that this notion of "truth" will not be at least comparable to that generatively spoken by large language models such as ChatGPT?

2) The above also applies to Matt Mahoney's Large Text Compression Benchmark, which, unlike the Hutter Prize, allows unlimited computer resources. However, the Hutter Prize is geared toward research in that it restricts computation resources to the most general-purpose hardware that is widely available.

Why?

As described in the seminal paper "The Hardware Lottery" by Sara Hooker, AI research is biased toward algorithms optimized for existing hardware infrastructure. While this hardware bias is justified for engineering (applying existing scientific understanding to the "utility function" of making money), to quote Sara Hooker, it "can delay research progress by casting successful ideas as failures".

Saurabh Kumar's Contribution


Saurabh's fast-cmix README describes how he went about substantially increasing the speed of the prior Hutter Prize algorithms, most recently Artemiy Margaritov's SorTing ARticLes by sImilariTy (STARLIT).

The complaint that this is "mere" optimization ignores the fact that this was done on general-purpose computation hardware, and is therefore in line with the spirit of Sara Hooker's admonition to researchers in "The Hardware Lottery". By showing how to optimize within the constraint of general-purpose computation, Saurabh's contribution may help point the way toward future directions in hardware architecture.

Submission + - Artemiy Margaritov Wins €9000 In the First 10x Hutter Prize Award

Baldrson writes: The Hutter Prize for Lossless Compression of Human Knowledge has now awarded €9000 to Artemiy Margaritov as the first winner of the 10x expansion of the HKCP, first announced over a year ago in conjunction with a Lex Fridman podcast!

Artemiy Margaritov's STARLIT algorithm's 1.13% improvement cleared the 1% hurdle required to beat the previous benchmark, set by Alexander Rhatushnyak. He receives a bonus in proportion to the time elapsed since that benchmark was set, raising his award by 60% to €9000.

Congratulations to Artemiy Margaritov for his winning submission!
