But it should be illegal to feed trolls (Score 1)
But by propagating the sock puppet's vacuous FP Subject you got me to look at the...
Did it include some historical details? Like the box scores Gemini told me about for World Series games that had actually ended with the other team winning. A simple "I don't know" would have sufficed.
Your (apparently ignored) FP reminded me of my own bad experience with 25.10. I think the root cause was a lack of regression testing, which then led to more bad experiences with LLMs trying to help me fix the problem.
The problem I found was lost OSes. It turned out that the new 25.10 default is to disable the os-prober function in GRUB. I started with no idea of where the other partitions had gone, but eventually, with lots of so-called "help" from a couple of so-called "AI" helpers, I came to believe that some fool (or AI tool) decided the default should be to disable the probe for other OSes. It's almost as though the purpose of GRUB has been forgotten?
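(In case it saves someone else a few hours: the usual fix, assuming 25.10 ships the stock GRUB 2.06+ defaults and an Ubuntu-style update-grub wrapper, is a one-line config change.)

    # /etc/default/grub -- re-enable scanning for other installed OSes
    GRUB_DISABLE_OS_PROBER=false

    # then rebuild the boot menu so the lost OSes reappear:
    sudo update-grub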
Hey, but the AI helped me make a really stupid mistake or two along the way, so maybe we're even? All's well that only wasted a few hours of my time? Lots of room for Funny on the story...
Oh, and also, the resources needed to do a finetune (to update knowledge) - or heck, even just a LoRA - are vastly less than the resources needed to train a foundation. And in any "AI-crash scenario", renting server time becomes cheap.
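For a sense of scale: a LoRA freezes the base weights and trains only small low-rank adapter matrices bolted onto them. A minimal sketch using Hugging Face's peft library - the model name and hyperparameters here are purely illustrative:

    # LoRA sketch: freeze the base model, train only small adapters.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("some-base-model")  # illustrative name

    config = LoraConfig(
        r=8,                                  # rank of the adapter matrices
        lora_alpha=16,                        # scaling factor
        target_modules=["q_proj", "v_proj"],  # adapt the attention projections
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)

    # Typically well under 1% of the parameters end up trainable.
    model.print_trainable_parameters()

That's the whole trick: you're training megabytes of adapter, not the gigabytes of foundation underneath.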
Installing Ollama is easy, and now there are even WebAssembly inference servers that load models straight in your browser but run them on your own computer. You literally just have to browse to the page and click.
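And once Ollama is running, talking to it is just one REST call to the local server (the model name is only an example; you'd fetch it first with "ollama pull llama3"):

    # Query a locally running Ollama server (default port 11434).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
    )
    print(resp.json()["response"])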
RAG does not require any meaningful amount of maintenance.
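To be concrete about what that maintenance is: you embed your documents once, and keeping the knowledge current is just embedding new documents as they arrive. A toy sketch of the retrieval half, using sentence-transformers (the model name is illustrative):

    # Toy RAG retrieval: embed documents, embed the query, take the best match.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    docs = [
        "GRUB 2.06 disables os-prober by default.",
        "Ollama serves local models over a REST API on port 11434.",
    ]
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    query_vec = model.encode(["why did my other OS vanish from the boot menu?"],
                             normalize_embeddings=True)[0]
    scores = doc_vecs @ query_vec            # cosine similarity (vectors are normalized)
    print(docs[int(np.argmax(scores))])      # hand the best match to the LLM as context

    # "Maintenance" is one more encode() call per new document.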
What if perchance the track should bend?
The Trump administration has been broadly declaring anything related to DEI illegal and putting immense pressure on major organizations with DEI policies, when, again, something like 99% of policies, including those they've gone after, look like the above.
They literally had to turn down a government grant because they had a DEI policy. They're not lying about the liability risk they faced. And this is what the overwhelming majority (like 99%) of real-world DEI policies look like.
Imagine if those failed and we'd have to take the Shinkansen to our destination instead of the monorail to the airport...
I hear those things are awfully loud.
Way beyond golems - tons of old religions have notions of "craftsmen deities" making mechanical beings (like Hephaestus making Talos, the Keledones, the Kourai Khryseai, etc) or self-controlled artifacts (such as Vishvakarma making an automated flying chariot, Hephaestus making self-moving tripods to serve the gods at banquets, etc), or even things that (mythological) humans created, such as the robots that guarded the relics of the Buddha, or a whole city of wooden robots made by a carpenter mentioned in the Naravahanadatta. And then you have actual early human attempts at automatons, such as robotic orchestras and singers in ancient China, a robotic orchestra and mechanical peacocks in the Islamic world, etc (China also had some mythological ones in addition to actual ones, such as Yan Shi's automaton, who enraged King Mu by winking at his concubines).
Humans have been thinking about robots and "thinking machines" since time immemorial.
AI isn't going to disappear just because the stock prices of these companies crash, or even if the companies themselves fold. It's too late. The models already exist, inference is dirt cheap to run (and can even be run on your own computer), and vast numbers of people demonstrably find it useful (regardless of whether you, reader, do).
It's funny, when you see "The AI bubble will collapse", you get two entirely different groups of people agreeing - one thinking, "AI is going to go away!", and the other thinking "Inference costs are going to zero!". Namely, because all the investors who spent their money building datacentres are going to lose their shirts, but those datacentres will still exist - and with much less demand for YOLOing new leading-edge foundations, it'll mainly be "inference for the lowest bidder, so long as they can bid more than the power cost at that point in time". Mean power costs for an AI datacentre are like a third of the total amortized cost of a datacentre, but with datacentres in broad geographic regions, spot prices can go well under that due to local price fluctuations. And any drop in prices triggers Jevons' Paradox (cheaper inference just induces more demand for inference).
The question for investors is really the correction timing, not whether it will happen. IMHO, as weird as it sounds, it likely has to do with highly visible inflation (groceries, fuel, etc). Inflation leads to voter rage; voter rage leads politicians to pursue anti-inflation strategies; those strategies dry up capital in the market; and capital-hungry growth fields (like AI) starve. Once investors catch wind that their previous growth field is no longer going to be in growth mode, they bail, causing a collapse in stock prices.
It was rate hikes that caused the internet bubble to pop.
Right now, Trump seems obsessed with rate cuts to juice the stock market, but at some point, the administration's chaotic, pro-inflation policies (tariffs, hits to the ag and construction labour supply, the war on wind and solar, etc) will catch up with them.
Define what you mean by "well". Vs. a database? No (but that's what RAG is for). Vs. humans? Absolutely yes. They achieve a much denser data representation than we do (albeit with a slower learning rate).
It's an apt description of what it's doing: allowing each given layer to pay attention to a small subset of its input at any given time instead of drowning in the noise of trying to process the whole thing at once.
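For anyone who wants that intuition as math: the scaled dot-product attention at the core of it is tiny. A numpy sketch, with illustrative shapes:

    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    import numpy as np

    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)        # how strongly each query attends to each key
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)   # softmax: each row is a probability dist.
        return w @ V                         # weighted mix of the values

    # 4 tokens, 8-dim embeddings: each output row is dominated by the few inputs
    # with the largest weights - the "small subset" described above.
    rng = np.random.default_rng(0)
    Q = K = V = rng.standard_normal((4, 8))
    print(attention(Q, K, V).shape)          # (4, 8)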
Pretty weak FP. I think it was some kind of pre-loaded rant against fiat currency. Or maybe it was an intended recursive joke about futures on futures? Insurance ^n as n approaches infinity? The Subject was certainly unhelpful. Maybe you'd care to clarify?
But I'm going to jump in a different direction: how do we tell if AI is failing? I think we are using the wrong metrics, so I would like to suggest a few candidates:
Best apologies: So far I think that one goes to Microsoft's Copilot for some stuff it said about the recent USB fiasco.
Most sycophantic: All of them try, but DeepSeek is noteworthy for that tone. Always wearing the rosiest of rose-colored glasses. (But its apologies are lame and insincere.)
Most verbose: Oh, this is a tough category, but perhaps OpenAI deserves the award here?
Most infuriating: Again the competition is tough, but I think the AI-based "support" chatbot Rakuten Mobile is using should win on consistency. It's been around for almost a year now, and I have yet to find a question simple enough for it to give a useful answer, but it usually elevates me to a towering rage within three or four responses, without ever clarifying anything...
Best hallucinations: I think this one may belong to Gemini for World Series coverage. A couple of times I asked it about games that were apparently too recent for its universe, so it just made up results. The summary of the scoring was especially impressive for a game that had actually been won by the other team around the time Gemini was offering its answer.
Most profitable: No award in this category. Not this year and perhaps not ever. After the AI takes over it ain't goin' to waste no time gambling with quatloos.
Money is the root of all evil, and man needs roots.