
Comment Re:US Picked Officials In Ukraine After 2014 Coup (Score 1) 49

Cool, except they've had multiple governments ever since, some anti-Russian, some less so, all because of democracy.

For reference, Zelensky was pro-Russian. He was also pro-European and pro-NATO, but he wanted to repair ties with the Russians. Then, right after NATO voted to reject Ukraine's application, the Russians invaded.

The Americans are irrelevant; this war had nothing to do with them.

Comment Re:AI has many uses (Score 1) 31

This is why I think a severe AI market crash might actually be good for AI. We've proven LLMs can be impressive, and occasionally even useful. Now we just need the marketing people and the C-suite to fuck off and send it back into the labs for another decade or two to work on the more impressive stuff. And let the ethicists and policy wonks have a decade or so to get us ready for it, so it doesn't dismantle civil society, the economy, and politics while insane Silicon Valley loons torch the forests and redirect half the planet's wealth into building a premature, stupid product nobody wants.

Comment This RAM thing is the AI bubble inflection point (Score 1) 17

We've all seen for a while how the AI bubble has led to increasingly irrational market behavior. Nvidia priced higher than the entire pharma industry combined, OpenAI churning through insane amounts of money while ranting incoherently about trillion-dollar data centers, Microsoft rolling out unprecedented data centers, all for a product that the public by and large appears to resent and that businesses struggle to extract any sort of productivity from.

But it's when the abstract market signals start reifying into real-world failures that the bed officially shits itself. In the last major crash, that was when people started defaulting on their mortgages, toppling the subprime house of cards. In the dot-com crash, it was when a number of billion-dollar companies just failed stupidly (Pets.com, etc.).

I think the tipping point is gonna come down to RAM. Think about it. You now have huge demand for RAM to build these budgeted super-datacenters, but the budget just got blown to hell and back. Microsoft has also pivoted hard to drumming up new demand for these shitty "Copilot PCs", but the PC market is about to seize up because computers are about to get real friggin' expensive (Google the price of 64 GB of DDR5 and weep). There's a whole ecosystem of "dumb shit as a service" companies about to discover their high-memory GPU instances are getting real freaking expensive.

Something has to give, and I think that might be Microsoft, and possibly Amazon. Oh, they won't die; Amazon and Microsoft have insane capital war chests. But both are incredibly exposed to RAM prices as major datacenter providers. Add onto that Amazon's retail business taking a massive hit from tariffs on the cheap Chinese shit they sell, and finally rock-bottom consumer confidence hammering market behavior. This shit's about to blow.

All because Sam "fucking" Altman decided to buy 40% of the world's RAM supply for his overblown spellchecker.

Comment Re:WTF (Score 1) 17

I have indeed thought it through. I've dealt with machine learning models for 30 years, I've seen multiple generations of broken ideas recycled, and I'm seeing them recycled again now.

However, open source is not about giving out a model for cheap/free to whoever asks. It is about giving away the foundations that allow complete duplication, so that other members of humanity, smarter or more informed, can contribute and/or branch away from the work.

The cost of training is irrelevant. It merely reflects the low quality of the processes and ideas used by the companies that currently build these models. It's by sharing the raw materials and letting others solve the same problems better that efficiency and progress are made.

The current paradigms of pretraining, fine-tuning, transfer learning, etc. lead to an enforced conceptual modularity that is just a way to embed a middleman economy into the science: some provider takes care of data for others, builds a foundation model for others, and they get to tinker on top of that. It is counterproductive and scientifically a dead end, while giving you the feeling of progress that comes from taking psychological ownership of the full system when all you've done is tinker at the edges by specializing an existing model.
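
For concreteness, here's a minimal PyTorch sketch of the division of labor I'm describing; the "foundation" and the data are toy stand-ins, not any real model. Everything below the head is frozen, which is exactly the locked-in modularity I mean:

import torch
import torch.nn as nn

# Stand-in for someone else's pretrained foundation model (weights fixed).
foundation = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
for p in foundation.parameters():
    p.requires_grad = False  # the bulk of the system is untouchable

# The "epsilon variation": a small task head, the only part you get to change.
head = nn.Linear(32, 2)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 128)         # toy batch of "your" data
y = torch.randint(0, 2, (16,))   # toy labels

for _ in range(100):
    with torch.no_grad():        # features come out of the frozen base as-is
        feats = foundation(x)
    loss = loss_fn(head(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

No matter how long you train that loop, the representation you build on is whatever the provider decided it would be.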

You don't get anything new that way, only epsilon variations on an existing body of work. It's a dead end, because successful intelligences in the real world all around us do not need anywhere near the resources expended on AI and intelligent biological systems do not function anywhere near the way these AI systems do. For example, nobody reads the whole internet just to be able to talk about a topic, and no animal brain works like a deep network.

If you want (scientific) progress, you must break out of the tinkerer mindset. Take the full set of components that make up the state-of-the-art system, and be prepared to do radical surgery at any level that makes sense, because the current architectures are simply bad. You can't do that with existing "open" systems that lock you into these architectural paradigms and choices.

Your example of Olmo talks about openness, but I had a look at their website and I don't see a link to raw data archives. There are instructions for how to train a model, and they discuss a tokenized data collection called Dolma 3. But tokens are not raw data; most of the implied information is already lost once you've tokenized (see the toy sketch below). They do a good job of describing their dataset curation process in detail on their GitHub page, though, which deserves credit. It's worth reading, because it shows how their models are being locked into patterns that limit them from the get-go, long before the first weight is even trained.
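
To make that concrete with a deliberately crude toy (this is not Dolma 3's actual pipeline, just an illustration of the principle): even a simple normalizing tokenizer throws away casing and layout, so the raw source can never be reconstructed from the token stream:

# Toy word-level tokenizer that lowercases and splits on whitespace.
def tokenize(text: str) -> list[str]:
    return text.lower().split()

def detokenize(tokens: list[str]) -> str:
    return " ".join(tokens)

raw = "Dr. Smith said:\n\t'RAM prices DOUBLED.'"
tokens = tokenize(raw)

print(tokens)                     # ['dr.', 'smith', 'said:', "'ram", 'prices', "doubled.'"]
print(detokenize(tokens) == raw)  # False: casing, newlines, and tabs are gone for good

Real BPE-style tokenizers are less destructive than this toy, but the curation and normalization applied before tokenization does the same kind of one-way filtering at scale.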
