Comment Re:Dumb (Score 1) 179

Yes, that's the story you and the article author probably learned in grade school. It's not true, of course, but it certainly appeals to the lone genius personality cult cognitive bias.

Comment Re:Dumb (Score 1) 179

Yeah, that's not much better.

Einstein, for example, conceived of relativity before any empirical evidence confirmed it.

WTF does that mean? How can you confirm something before anybody conceives of it? If we assume it's just clumsy language, then it's simply not true. Maxwell's electrodynamics was known to be in conflict with Galilean relativity (among other things), and the physics community had spent decades working on the problem, with Lorentz, Larmor, Poincaré and others working out the necessary transform to replace the Galilean one. Einstein, who was a PhD student at the time, wrote a very nice paper tying it all together.

As for metaphors, or vocabularies or whatever, Einstein was notably not a fan of the metaphors that are currently most associated with general relativity.

There's also a nice little paper by some physicists where they train a small neural network (much, MUCH smaller than any LLM) on various types of observations and show that it learns symmetries of the physical system. One of their examples is learning the invariance of the spacetime interval.
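The invariance in that example is easy to check numerically. Just as a toy illustration (my own sketch in plain Python, nothing to do with the paper's code or its network): apply a Lorentz boost and confirm the interval doesn't change.

import math

def boost(t, x, v):
    # Lorentz boost along x with velocity v, in units where c = 1
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(t, x):
    # spacetime interval s^2 = t^2 - x^2 (1+1 dimensions, c = 1)
    return t * t - x * x

t, x = 3.0, 1.0
for v in (0.1, 0.5, 0.9):
    tb, xb = boost(t, x, v)
    print(v, interval(t, x), interval(tb, xb))  # both intervals agree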

Comment Re:Dumb (Score 1) 179

Perhaps drunk too?

I doubt it. The summary (I didn't read the article, I meant it when I said "stop reading") sounds like a typical opinion piece from someone with lots of opinions and not much concern for accuracy. To pick the same example again, the Einstein myth is pretty overwhelming, but this guy takes it to the next level with the "didn't like the metaphor" stuff. Is that the 2025 version of "the narrative"?

Comment Re:Ah, well. (Score 1) 45

The Arduino bootloader does indeed make using ATMegas nicer, but the ESP32 uses its own bootloader, which is burned in and not modifiable. The RP2040 does too, and I expect most or all of the RISC-V chips do as well.

The Arduino toolchain also contributes a lot, but, except for the ATMega, it too is created and maintained by not-Arduino.

Comment Re:PR article (Score 1) 179

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.

Comment Re:Really? (Score 1) 179

The movie analogy is old and outdated.

I'd compare it to a computer game. In any open-world game, it seems that there are people living a life - going to work, doing chores, going home, etc. - but it's a carefully crafted illusion. "Carefully crafted" insofar as the developers have put exactly what is needed into the game to suspend your disbelief and let you think, at least while playing, that these are real people. But behind the facade, they are not. They simply disappear when entering their homes; they have no actual desires, just a few numbers and conditional statements to switch between different pre-programmed behaviour patterns.
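
In code terms, one of those "people" boils down to something like this (a toy sketch of my own, obviously not any actual game's code; the names and numbers are made up):

import random

class Villager:
    # A handful of numbers plus conditional switches between
    # pre-programmed behaviour patterns. No desires anywhere.
    def __init__(self):
        self.energy = 1.0
        self.hour = 8

    def tick(self):
        self.hour = (self.hour + 1) % 24
        self.energy -= 0.05
        if self.energy < 0.2 or self.hour >= 22:
            self.energy = 1.0
            return "go home and despawn"  # vanishes behind the front door
        if 9 <= self.hour < 17:
            return "walk to work, play 'working' animation"
        return random.choice(["wander the market", "idle chatter loop"])

npc = Villager()
for _ in range(12):
    print(npc.tick())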

If done well, it can be a very, very convincing illusion. I'm sure that someone who hasn't seen a computer game before might think that they are actual people, but anyone with a bit of background knowledge knows they are not.

For AI, most people simply don't (yet?) have that bit of background knowledge.

Comment Re:PR article (Score 1) 179

And yet, when asked if the world is flat, they correctly say that it's not.

Despite hundreds of flat-earthers who are quite active online.

And it doesn't even budge on the point if you argue with it. So for whatever it's worth, it has learned more from scraping the Internet than at least some humans.

Comment Re:Wrong Name (Score 2) 179

It's almost as if we shouldn't have included "intelligence" in the actual fucking name.

We didn't. The media and the PR departments did. In the tech and academic worlds that seriously work with it, the terms are LLMs, machine learning, etc. - the actual terms describing what the thing does. "AI" is the marketing term used by marketing people. You know, the people who professionally lie about everything in order to sell things.

Comment Re:What is thinking? (Score 1) 179

professions that most certainly require a lot of critical thinking. While I would say that that is ludicrous

It is not just ludicrous, it is irrationally dangerous.

For any (current) LLM, whenever you interact with it you need to remember one rule of thumb (not my invention; I read it somewhere and agree): the LLM was trained to generate "expected output". So always assume that your prompt implicitly starts with "give me the answer you think I want to read on the following question".

Giving an EXPECTED answer instead of the most likely to be true answer is literally life-threatening in a medical context.
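
In code terms, the rule of thumb is simply this (a sketch; ask_llm is a hypothetical stand-in for whatever chat API you actually call):

def ask_llm(prompt):
    # hypothetical stand-in for a real chat-completion call
    raise NotImplementedError

def what_the_model_effectively_answers(question):
    # The training objective rewards expected-sounding text, so treat
    # every question as if it carried this implicit prefix.
    implicit = "Give me the answer you think I want to read on the following question:\n"
    return ask_llm(implicit + question)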

Comment Difference in fundamental rights. (Score 1) 56

Jokes aside about Thanksgiving...

Thanksgiving dinner costs a little more this year, govt can I has a few thousand in free money? What's the difference between those examples and texas buying btc?

The difference is that food is part of(*) the right to an adequate standard of living as per the Universal Declaration of Human Rights.
Not dying of starvation is a fundamental human right.

So yeah, I get that you're joking about somebody throwing an excessively opulent Thanksgiving party and then complaining that it costs a bit much.

But making sure that every single person has access to sufficient food is a core job that government has to do(**). You can make jokes about what constitutes "sufficient", but you can't deny that nobody should die of starvation.
On the other hand, making sure that your Ponzi scheme doesn't implode before you've had time to make it to the bank isn't the government's job. At best, the government's job would be to regulate in order to make it less likely that unsuspecting idiots get caught up in such scams.

(*): along with "clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control."

(**): Yes, I understand that from the US' point of view, I am an evil Euro-communist and my country is some socialist hell-hole.

There isn't excess public money, its all deficit trailing back to the black hole $37t...38 whatever it is now since states are dependent on federal money

You do understand that government budgets don't work like balancing your home expenses, right?

Comment Re:PR article (Score 1) 179

The congenitally blind have never seen colours. Yet in practice, they're nearly as good at answering questions about colours, and reasoning about them, as the sighted are.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about the sound of "a guitar string" shortly after my first experience of one, when I don't yet have a good associative memory of it, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off-topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there is certainly much in our life experiences that we don't write much about (if at all) online, so that someone who learned purely from the internet might have a weaker understanding of those things, by and large our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 2) 179

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
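
Back-of-the-envelope, since that sounds hyperbolic (my own arithmetic; the 50,000-token vocabulary and the rough physical constants are my assumptions):

vocab = 50_000                 # typical LLM-scale vocabulary (assumed)
tokens = 60                    # roughly half the (very long) opening sentence
states = vocab ** tokens       # distinct contexts a pure n-gram model must index

planck_length = 1.6e-35        # metres
planck_time = 5.4e-44          # seconds
universe_radius = 4.4e26       # metres, observable universe
universe_age = 4.4e17          # seconds

slots = (universe_radius / planck_length) ** 3 * (universe_age / planck_time)

print(f"contexts needed: ~1e{len(str(states)) - 1}")   # ~1e281
print(f"Planck-volume-times available: {slots:.1e}")   # ~1.7e+245

Even with those generous assumptions, the transition table outgrows the available "storage" somewhere around fifty-odd tokens of context.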

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
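
That final step looks roughly like this (a minimal sketch of temperature sampling over logits; my own illustration, not any particular model's code):

import numpy as np

def sample_token(logits, temperature=0.8, seed=None):
    # The logits measure how "near" each token is to the model's point in
    # latent space; softmax turns them into probabilities, and the sampling
    # itself is the only place randomness enters.
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Two tokens sit "nearby" (similar logits), so either may be picked.
print(sample_token([4.1, 3.9, 0.2, -1.0]))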

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it right. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim made by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process, and it's a lower-dimensional space than the reasoning itself (nothing controversial there with regard to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately...." and it is to continue with a percentage, it's not "surface logic" that the model needs in order to perform well at the task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.
