
Comment Re:PR article (Score 1) 163

The congenitally blind have never seen colours. Yet in practice, they're nearly as good at answering questions about colours, and reasoning about them, as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice or injustice done; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about the sound of a guitar string shortly after my first experience of hearing one, when I don't yet have a good associative memory of it, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off-topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there is certainly much in our life experiences that we write little (if anything) about online, so someone who learned purely from the internet might have a weaker understanding of those things, by and large our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 2) 163

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state-transition probability in every Planck volume of space, a different one at every Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state-transition probabilities for the first half of the first sentence of A Tale of Two Cities.
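To put rough numbers on that (the vocabulary size, context length, and cosmological figures below are my own illustrative assumptions, not exact values), a quick back-of-envelope in Python:

```python
import math

# Illustrative assumptions, not exact figures:
VOCAB = 50_000    # tokens in the vocabulary
CONTEXT = 60      # tokens of context: roughly half of the ~120-word
                  # opening sentence of A Tale of Two Cities

# A pure lookup-table (Markov) model needs one transition distribution
# per possible context, i.e. VOCAB**CONTEXT entries:
log10_contexts = CONTEXT * math.log10(VOCAB)
print(f"distinct contexts: ~10^{log10_contexts:.0f}")   # ~10^282

# Generous bound on storage: one entry per Planck volume of the
# observable universe (~10^185), per Planck time since the Big Bang (~10^61):
print(f"storage slots:     ~10^{185 + 61}")              # ~10^246
```

Even with that absurdly generous storage bound, the table is dozens of orders of magnitude too big; a model has to compress, which means building internal structure rather than memorizing transitions.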

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
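A minimal sketch of that final step (the function and the toy numbers are mine, purely to illustrate; real samplers add top-k/top-p filtering and more):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
    """Turn final-layer scores into one discrete token choice."""
    scaled = logits / temperature          # temperature controls the noise
    scaled = scaled - scaled.max()         # shift for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # the softmax itself
    # "Round" to a nearby token, stochastically rather than via argmax:
    return int(np.random.choice(len(probs), p=probs))

# Toy scores for four candidate tokens near the model's latent position;
# several are close together, so any of them may be picked.
print(sample_next_token(np.array([2.1, 2.0, 1.8, -5.0])))
```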

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it accurately. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim made by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process; it's a lower-dimensional space than the reasoning itself (nothing controversial there with regards to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic, only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately..." and it is to continue with a percentage, "surface logic" is not what the model needs to perform well at that task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).
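For contrast, this is what *actual* surface statistics looks like: a toy bigram model (the corpus and names are my own invention) whose entire "reasoning" really is "the most likely word after the previous one":

```python
from collections import Counter, defaultdict

# Toy corpus of my own; the "model" is a literal successor-frequency table.
corpus = ("the odds are approximately five percent . "
          "the odds are approximately ten percent .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev: str) -> str:
    # The entire "reasoning": the most frequent word seen after `prev`.
    return bigrams[prev].most_common(1)[0][0]

# Same continuation no matter what country, timeframe, or base rates the
# prompt actually asked about - that is what surface logic looks like.
print(predict("approximately"))   # -> "five"
```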

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's actual argument is that LLMs can substitute for humans in many things - just not everything.

Comment Re:just squeeze more juice from your customers (Score 2) 55

Sooner or later, we'll end up at the point where trying to maintain the ways of the past is a fruitless fight. Teachers' jobs are no longer going to be "to teach" - that's inevitably getting taken over by AI (for economic reasons, but also because it's a one-on-one interaction with the student, who has no fear of asking questions, and because, at least at a pre-university level, the AI probably knows the material a lot better than the average teacher, who these days is often an ignorant gym coach or whatnot). Their job will be *to evaluate frequently* (how well does the student know things when they don't have access to AI tools?). The future of teachers - nostalgia aside - is as daily exam administrators, making sure that students are actually doing their studies. Even if said exams are written and graded by AI.

Comment Re: Doesn't matter (Score -1, Troll) 138

Yes. Ukraine had a chance in 2022 because Putin sent a very small force, assuming that he could convince Ukraine to agree to the peace deal where they'd stop killing Ukrainians in Donbass. And it would have worked if NATO hadn't offered to send all the money and weapons Ukraine wanted to keep the war going.

Ukraine then pushed the Russians back because the Ukrainian military outnumbered the Russian forces and suddenly had Starlink for communications, US intelligence data telling them where the Russians were, and all the weapons they could eat. But once they failed to push the Russians out of Ukraine, it was just a matter of time for Putin to build up the available forces and ramp up weapons production. There was no way to win after that other than for NATO to send in troops, and no NATO government wants to do that.

Now either NATO will send in troops, or Ukraine is going to get a much, much worse deal than it was offered in 2022, and it may no longer even exist as a viable country. Particularly if Putin takes Odessa as well and cuts rump Ukraine off from the Black Sea entirely.

Comment Re:Wanna stop layoffs? (Score 0) 62

> And that means you vote for politicians who'll do it. If you're American that means a Democrat.

You mean, like the Microsoft anti-trust case which was filed in 2001 when both President and House were Republicans and the Senate was 50:50?

Has there been any anti-trust case against big business since then? Maybe the Democrats did something but I can't remember anything like that offhand.

At this point, expecting elections to do anything just makes you look incredibly naive. It's clear that the only thing the vast majority of populations care about is grift.

Comment Re: Way to protect the artists (Score 1) 46

Because billionaires are totally going to hand out vast amounts of money so people who don't make money for them can sit around at home watching pr0n.

UBI cultists love to talk about how evil the rich are while also claiming that the rich will pay them to do nothing productive. Because billionaires are such lovely, caring people.

Comment Re: Musicians deserve what they demand (Score 1) 46

There are some pretty good AI songs out there. Lots of really bad ones, but people who know what they're doing can now easily make the songs they want to hear.

I know someone who was a moderately successful musician in the 90s (some Top 40 songs at the time) and now makes his own songs with AI after being out of music completely for twenty years. You could kind of tell the early ones were AI-generated, but not so much the later ones.

Comment Re:Musicians deserve what they demand (Score 3, Insightful) 46

Historically, musicians signed bad deals because the music labels were gatekeepers and if they wanted to have the success of a big popular band they had to sell their souls to get it.

Now musicians can make a decent living without having to sign their soul away. But thirty years ago there wasn't a lot of choice for most people.

Comment Re:Windows today (Score 1) 63

I just set one up for my mother-in-law. Setup worked great until Windows Update couldn't install an update, but also wouldn't stop trying to install it, so I had to reinstall Windows to fix it. Then I copied over all the old files and went to back the machine up with Clonezilla, only to discover that Microsoft had turned on disk encryption by default. So now I have to figure out how to turn that off, because I don't want her to lose all her data at some point in the future when something breaks and she expects me to figure out a way to decrypt it.

I thought Windows was malware, but it seems to be progressing to ransomware.
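For anyone in the same boat, here's a minimal sketch of what I expect the fix to look like, assuming an elevated prompt and Windows' built-in manage-bde tool (decrypting a full disk can take hours):

```python
import subprocess

def bitlocker_status(drive: str = "C:") -> str:
    """Show the drive's encryption state ("Percentage Encrypted", etc.)."""
    return subprocess.run(
        ["manage-bde", "-status", drive],
        capture_output=True, text=True, check=True,
    ).stdout

def bitlocker_off(drive: str = "C:") -> None:
    """Start decrypting the drive; it continues in the background."""
    subprocess.run(["manage-bde", "-off", drive], check=True)

if __name__ == "__main__":
    print(bitlocker_status())  # check first, then run bitlocker_off()
```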
