
Comment Re:Who would dare opt in? (Score 1) 28

Who would opt in to this? No matter how well the company tries to police this, there will be AI-generated slop of artists singing terrible lyrics that they would never perform in real life. Does it matter that the company can issue take-down requests after the fact, when your new hit single "Adolf's Solution", featuring your likeness adorned with a silly mustache, has already gone viral? Maybe that's on the nose enough for an LLM to shut down, but there are plenty of other terrible things that can be made with this, and 4chan will try to make them all.

It's a license between an AI music generator and WMG. Presumably someone can ask for a song to be generated, and it probably gives you a 30-second sample before you have to pay for it. At that point the artist will likely have the ability to veto the creation, or to take it as their own.

And I suppose it's a way for smaller artists to make some money, because obviously the AI maker is going to have to pay WMG for the license to do it.

If the artist approves, then whoever created the song presumably just has to pay up and they get the download. And chances are it's non-exclusive, so WMG and/or the artist get the ability to use that song themselves as well.

And there are likely to be logs too, so if someone did do a deepfake, you'd have their billing address, you'd know who actually created it, and you'd know it was AI generated. The fact that it's not anonymous is likely a huge guardrail on what can be produced.

Comment Re:PR article (Score 1) 157

The congenitally blind have never seen colours. Yet in practice, they're nearly as good at answering questions about colours, and reasoning about them, as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string" shortly after my first experience with one, when I don't yet have a good associative memory of it sounding, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten offtopic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there is certainly much about life experiences that we don't write much about (if at all) online - so someone who learned purely from the internet might have a weaker understanding of those things - by and large, our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 2) 157

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
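
To put a rough, back-of-the-envelope number on that explosion (the vocabulary size and context length below are my own illustrative assumptions, not figures from anywhere):

    # Rough sketch: how many transition probabilities a word-level Markov
    # (n-gram) model would need to store. Vocab size and context length are
    # illustrative assumptions.
    vocab_size = 50_000       # order of magnitude for a typical LLM tokenizer
    context_length = 20       # a modest context window for a Markov model

    # Conditioning on `context_length` previous tokens needs a probability for
    # every (context, next-token) pair:
    transitions = vocab_size ** (context_length + 1)

    print(f"{transitions:.3e} transition probabilities")  # ~4.8e98 here
    # For comparison, the observable universe is estimated to contain on the
    # order of 1e80 atoms - and real LLM contexts run to thousands of tokens,
    # not 20.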

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your rounding (indeed, biological neural networks are *extremely* noisy as well).
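
If you want a toy picture of that final "rounding with noise" step, here's a loose sketch - the logits, token names, and temperature are made-up illustrative values, not anything from a real model:

    import numpy as np

    # Toy illustration of the final "rounding" step: made-up scores over a tiny
    # vocabulary, converted to probabilities by softmax and sampled with a
    # temperature that controls how much "noise" goes into the rounding.
    logits = np.array([2.1, 1.9, 0.3, -1.0])   # scores for 4 candidate tokens
    tokens = ["cat", "kitten", "dog", "banana"]

    def sample(logits, temperature=0.8):
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for stability
        probs /= probs.sum()                   # softmax
        return np.random.choice(len(probs), p=probs)

    # temperature -> 0 would always pick the nearest token ("cat");
    # higher temperatures spread probability over the other nearby tokens.
    print(tokens[sample(logits)])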

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it correctly. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim made by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process - it's a lower-dimensional space than the reasoning itself (nothing controversial there with regard to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic, only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately..." and it is to continue with a percentage, what the model needs in order to perform well at that task is not "surface logic". It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.

Comment Re:First hand knowledge (Score 1) 88

Funny you should mention badge removal: Chinese manufacturers will send out response teams to remove logos and badges from EVs that catch fire in mainland China.

Which is fine, because we have non-Chinese influenced EV data as well. China exports a lot of their EVs to Europe and Australia, so if they had a habit of catching fire, we'd know about it.

And starting January 1, 2026, China will require an export permit, intended to stop the export of low-quality EVs whose sale has dragged down their reputation (see a recent French crash test of a Chinese EV, which was miserable compared to other EVs).

So if they're keeping the crap for themselves and exporting the good cars, we all benefit.

The BYD Dolphin, which is among the smallest and cheapest EVs you can buy (except in North America), reviews quite favorably - https://arstechnica.com/cars/2...

It's not flashy, it's not fun, it's an EV that'll get you around town like any other car.

Comment Re:It WILL Replace Them (Score 4, Insightful) 43

The illusion of intelligence evaporates if you use these systems for more than a few minutes.

Using AI effectively requires, ironically, advanced thinking skills and abilities. It's not going to make stupid people as smart as smart people; it's going to make smart people smarter and stupid people stupider. If you can't outthink the AI, there's no place for you.

Comment Re:Ah, well. (Score 1) 44

It might not even be necessary to fork much. Genuine Arduino hardware is so expensive that most people use clones, lots of people use PlatformIO instead of the Arduino IDE, and the Arduino core for the newer microcontrollers isn't made by Arduino anyway.

The "magic" Arduino bit is the Arduino bootloader. That is also open source and anything that can speak the protocol can upload new firmware.

That's why Arduinos encompass more architectures than just AVRs - you can get ARM-based Arduino-compatible boards, I believe there are a few RISC-V ones, and at least one ESP32-based one.

The fact it's just a bootloader is why clone boards exist - there's nothing special about the official Arduino boards. It's easy to make your hardware "Arduino compatible", which often makes it much easier to develop for, since you can update the firmware without needing a dedicated AVR programmer.
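
For a concrete (if assumption-laden) picture of what "speaking the protocol" looks like in practice, here's a minimal sketch of pushing firmware over the serial bootloader with avrdude; the MCU, port, baud rate, and hex filename are all assumptions you'd adjust for your board:

    # Minimal sketch: upload firmware to an "Arduino compatible" board over its
    # serial bootloader via avrdude - no dedicated AVR programmer needed.
    # MCU, port, baud rate, and hex file are assumptions; adjust to your board.
    import subprocess

    subprocess.run(
        [
            "avrdude",
            "-c", "arduino",                 # Arduino/Optiboot (STK500v1-style) protocol
            "-p", "atmega328p",              # target MCU on a typical Uno-class clone
            "-P", "/dev/ttyUSB0",            # the board's USB-serial port
            "-b", "115200",                  # bootloader baud rate (57600 on some older boards)
            "-U", "flash:w:firmware.hex:i",  # write firmware.hex (Intel HEX) to flash
        ],
        check=True,
    )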

Comment Re:CO2 is a virus? (Score 1) 45

That said, VOCs are a better proxy. With VOCs you can approximate CO2 as well, but also pick up other things such as someone's farts, though I suspect you don't need electronics to tell you to open the window then.

The problem is VOCs are a poor proxy for ventilation. By VOCs, most people mean benzene-based substances (6-carbon rings) - things like paints, plastics and polymers, and in smaller quantities perfumes and such. Flatus is mostly stuff like methane and hydrogen sulfide, which doesn't usually show up as a VOC - methane is classified as a hydrocarbon and hydrogen sulfide as an acid gas. VOCs also dissipate, which is why "new car smell" is named such - after a car is manufactured, the seats, fabric treatment, plastic, etc., all offgas into the enclosed cabin, causing that scent. But once the offgassing is done, it dissipates.

CO2 is a better proxy because it means there are living things in the space, and thus it can be used to determine how well the ventilation is working. If the CO2 is rising, it means there are more people than the ventilation system can handle - it can't replace the air fast enough.
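
As a toy sketch of the idea, assuming you already have a stream of CO2 readings in ppm from whatever sensor you like (the 1000 ppm threshold and the sample readings are illustrative assumptions, not taken from any standard):

    # Toy sketch: judge ventilation from CO2 readings (ppm).
    # Outdoor air is roughly 400-ish ppm; the threshold and sample readings
    # below are illustrative assumptions, not from any standard.
    STUFFY_PPM = 1000

    def ventilation_status(readings):
        """readings: CO2 samples in ppm, oldest first."""
        latest = readings[-1]
        rising = len(readings) >= 2 and readings[-1] > readings[0]
        if latest > STUFFY_PPM and rising:
            return "occupancy exceeds what the ventilation can replace"
        if latest > STUFFY_PPM:
            return "stuffy, but holding steady"
        return "ventilation keeping up"

    print(ventilation_status([650, 780, 940, 1150]))  # -> occupancy exceeds ...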

The problem is that many older buildings are designed to maintain temperature more than to circulate air, because air quality is a more recent concern - made all the more relevant by recent events that raised awareness of it.
