Who gives an actual fuck if China gets fancy AI first?
Seriously. As long as I can buy cool shit, I don't care where it's made.
It's not like ANY of this shit will actually create jobs once they automate us all into personal bankruptcy and perpetual unemployment.
"You can keep the whole town under surveillance as we jack their power bills through the roof. Think of the profits!"
Probably.
This is why I think a severe AI market crash might actually be good for AI. We've proven LLMs can be impressive, and occasionally even useful. Now we just need the marketing people and the C-suite to fuck off and send it back into the labs for another decade or two to work on the more impressive stuff. And let the ethicists and policy wonks have a decade or so to get us ready for it, so it doesn't dismantle civil society, the economy, and politics while insane Silicon Valley loons torch the forests and redirect half the planet's wealth into building a premature, stupid product nobody wants.
We've all seen for a while how the AI bubble has led to increasingly irrational market behavior. Nvidia priced higher than the entire pharma industry combined, OpenAI churning through insane amounts of money while ranting incoherently about trillion-dollar data centers, Microsoft rolling out unprecedented data centers, all for a product that the public by and large appears to resent and that businesses struggle to extract any sort of productivity from.
But it's when the abstract market signals start reifying into real-world failures that the bed officially shits itself. In the last major crash, that was when people started defaulting on their mortgages, toppling the subprime house of cards. In the dot-com crash, it was when a number of billion-dollar companies just failed stupidly (Pets.com, etc.).
I think the tipping point's gonna come down to RAM. Think about it. You now have huge demand for RAM to build these super datacenters, but the budgets just got blown to hell and back. Microsoft has also pivoted hard to drumming up new demand for these shitty "Copilot PCs", but the PC market is about to seize up because computers are about to get real friggin expensive. (Google the price of 64 GB of DDR5 and weep.) There's a whole ecosystem of "dumb shit as a service" companies about to discover their high-memory GPU instances get real freaking expensive.
Something has to give, and I think that might be Microsoft, and possibly Amazon. Oh, they won't die; Amazon and Microsoft have insane capital war chests. But both are incredibly exposed to RAM prices as major datacenter providers. Add to that Amazon's brick-and-mortar business taking a massive hit from tariffs on the cheap Chinese shit they sell, and finally the rock-bottom consumer confidence hammering market behavior. This shit's about to blow.
All because Sam "fucking" Altman decided to buy 40% of the world's RAM supply for his overblown spellchecker.
AI may have found its niche as entertainment in its own right
Some of us have no sense of humour, you insensitive clod!
Let's not conflate speculation about the mechanism an AI implicitly uses to produce summaries with the correctness requirements for those summaries implied by a legal service agreement.
However, open source is not about giving out a model for cheap/free to whoever asks. It is about giving away the foundations that allow complete duplication, so that other members of humanity, smarter or more informed, can contribute and/or branch away from the work.
The cost of training is irrelevant. It merely reflects the low quality of the processes and ideas being used by the companies that currently build them. It's by sharing the raw materials and allowing others to solve the same problems better that efficiency and progress are made.
The current paradigms of pretraining, fine-tuning, transfer learning, etc. lead to an enforced conceptual modularity that is just a way to embed a middleman economy into the science: some provider takes care of data for others, builds a foundation model for others, and they can tinker on top of that. It is counterproductive and scientifically a dead end, while giving you the feeling of progress that comes from taking psychological ownership of the full system when all you've done is tinker at the edges by specializing an existing model.
You don't get anything new that way, only epsilon variations on an existing body of work. It's a dead end, because successful intelligences in the real world all around us do not need anywhere near the resources expended on AI and intelligent biological systems do not function anywhere near the way these AI systems do. For example, nobody reads the whole internet just to be able to talk about a topic, and no animal brain works like a deep network.
If you want (scientific) progress, you must break out of the tinkerer mindset. Take the full set of preferred elements that make up the full state-of-the-art system, and be prepared to do radical surgery at any level that makes sense, because the current architectures are simply bad. You can't do that with existing "open" systems that lock you into these architectural paradigms and choices.
Your example of Olmo talks about openness, but I had a look at their website and I don't see a link to raw data archives. There are instructions on how to train a model, and they discuss a token data collection called Dolma 3. But tokens are not raw data; most of the implied information is already lost once you've tokenized. They do a good job of describing their dataset curation process in detail on their GitHub page, though, which deserves credit. It's worth reading, because it shows how their models are being locked into patterns that limit them from the get-go, long before the first weight is even trained.
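The lossiness point can be sketched with a toy example: typical dataset pipelines normalize text (lowercasing, collapsing whitespace, etc.) before tokenizing, and that normalization is not invertible, so no decoder can recover the raw data from the tokens alone. This is a hypothetical illustration, not Dolma's actual pipeline.

```python
# Toy normalize-then-tokenize pipeline (illustrative only, not
# Dolma's real curation code). The point: once normalization has
# run, the original casing and layout are unrecoverable.

def normalize(text: str) -> str:
    # Common curation steps: lowercase and collapse all whitespace.
    return " ".join(text.lower().split())

def tokenize(text: str) -> list[str]:
    # Crude whitespace tokenizer standing in for BPE and friends.
    return normalize(text).split(" ")

def detokenize(tokens: list[str]) -> str:
    return " ".join(tokens)

original = "The  Raw\tData:\nCASE and  layout matter"
tokens = tokenize(original)
roundtrip = detokenize(tokens)

print(roundtrip)              # casing, tabs, newlines, double spaces: gone
print(roundtrip == original)  # False: the raw data is not recoverable
```

Real subword tokenizers are reversible over their *normalized* input, but the normalization and filtering steps upstream of them are where the raw-data loss the comment describes actually happens.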
That's odd. I need large fonts, but I find dark mode unreadable. Black on cream or light beige is about ideal.
When you say "STEM vs pretend degrees", you clearly don't know what you're talking about. There is a near continuum of "hardness" of subjects, and even that's not well defined; the question of whether EE is harder than pure math doesn't have a clear answer, but which way you answer definitely affects what its opposite is.
E.g., "German" is not a STEM major, but it's also not a pretend degree. OTOH, Philosophy is often a fluff major, but some philosophers attempt to be as rigorous as any experimental physicist. (Most don't succeed, because it's a really difficult thing to do.)
Outlawing homeschooling is too dangerous. Also, MOST homeschooling is destructive, but some is the exact opposite.
I'll agree that homeschooling is destructive to society, even when making accommodations for geniuses and other "special needs" students, but its destructiveness isn't even the same order of magnitude as that of "social media". (I'll agree that social media needn't be destructive, but just about all of it is.)
Silicon wafers are the wrong thing to compare against. CZT semiconductors are like photodiodes (but for other radiation), not what the logic is made out of. It would make more sense to compare with InGaN (used for blue LEDs), which plays a similarly specific role in common devices.
"We Americans, we're a simple people... but piss us off, and we'll bomb your cities." -- Robin Williams, _Good Morning Vietnam_