Comment Re:Midjourney lawsuit - both necessary and inevita (Score 1) 88

> creators, whether they’re indie artists or billion dollar studios, deserve compensation when their work is harvested as fuel for someone else’s generative model.

OK, let's grant that premise. Who should pay?

- Model developers? They are a cost center.
- Model hosts? They make cents per million tokens; serving AI is commoditized, and there is little profit in it.
- Users generating their own ideas? How do you tax Mary's enjoyment of AI abstract art, or Jonny's passion for anime girls?

Comment Re:If you have a mediocre workforce at best (Score 1) 101

My experience with agentic flows (vibe coding) is like yours. What is really needed is (1) having the project and the plan documented, and (2) generating sufficient tests. Those put guardrails around the agent. You also have to manage its context carefully: the less useless stuff in it, the better. If regular coding is like walking, slow and steady, vibe coding is like surfing: you have to give up control, so you compensate by setting up constraints.
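As a concrete (and purely hypothetical) illustration of the guardrail point, here is a minimal sketch in Python; the module invoice_parser, its functions, and the error type are made-up stand-ins for whatever you actually ask the agent to build:

# Guardrail tests written before letting the agent loose; the agent's
# changes are only accepted while these keep passing.
# invoice_parser and its API are hypothetical placeholders.
import pytest
from invoice_parser import parse_total, InvoiceError

def test_parses_simple_total():
    assert parse_total("Total: $42.50") == 42.50

def test_rejects_garbage_input():
    with pytest.raises(InvoiceError):
        parse_total("no numbers here")

The agent can surf as wildly as it likes inside those constraints; the moment a test breaks, you know it wiped out.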

Comment Alternative (Score 1) 41

An alternative would be, instead of adding that follow-up question, to realign with the user. The model can check whether it has understood the user properly and whether they are talking past each other, and then align to the user better. As a conversation gets longer, the model should introspect on its performance so far and adapt.
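A minimal sketch of what that could look like, assuming a generic chat loop (the chat() callable and the prompt text below are hypothetical, not any particular vendor's API):

# Hypothetical chat loop that periodically tells the model to check its
# own alignment with the user instead of asking another follow-up question.
ALIGN_PROMPT = (
    "Before answering, briefly restate what you think the user actually "
    "wants. If earlier replies missed the point, say so and correct course."
)

def respond(chat, history, user_msg, turn):
    history.append({"role": "user", "content": user_msg})
    if turn % 3 == 0:  # every few turns, force a self-check
        history.append({"role": "system", "content": ALIGN_PROMPT})
    reply = chat(history)  # chat() is a stand-in for any LLM call
    history.append({"role": "assistant", "content": reply})
    return reply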

Comment Re:We could talk about welfare of AI ... (Score 1) 105

You can directly know you are conscious; there is no need to explain or verify it. But for others, it is not possible even in principle to have direct access to their subjective states. Wanting to do so is like wanting to see the gliders in Conway's Game of Life by reading its code. The description of the principle (an explanation of consciousness) is unlike its recursive unfolding in time. Time is the core element of consciousness, because recursion is temporal. Turing and Chaitin showed that recursive processes are incompressible: their shortest description is the process itself. That incompressibility of recursion explains why the first person is isolated.
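To make the Game of Life analogy concrete, here is a minimal Python sketch (nothing assumed beyond the standard rules): the rules fit in a few lines and say nothing about gliders; the glider only appears when you let the recursion unfold step by step.

# Conway's Game of Life: the complete rules fit in a few lines, yet a
# "glider" appears nowhere in them; it only shows up when the rules are
# applied generation after generation.
from collections import Counter

def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):      # one full glider period
    cells = step(cells)
print(sorted(cells))    # same five-cell shape, shifted by (1, 1)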

Comment Re:Trained on about 33 million books .. (Score 1) 59

You focus too much on derivative generation. Yes, LLMs can do that: they can paraphrase texts, restyle them, summarize them. But usually we provide 10 or even 50 different sources for the model to extract an answer from, so it does something on top: it correlates information across sources. Synthesizing from one source could be derivative; synthesizing from ten sources is meta-analysis, something new. Another reason we don't generate derivative content is that we don't actually need to; we need direct answers to our own questions, not a clone of a book or article.
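For what it's worth, a minimal sketch of that multi-source pattern in Python; ask_llm() and the source texts are hypothetical placeholders, not any specific API:

# Hypothetical multi-source synthesis: the model is asked to correlate
# many sources, not to reproduce any single one.
def build_prompt(question, sources):
    numbered = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(sources))
    return (f"Using the {len(sources)} sources below, answer the question "
            f"and note where the sources agree or disagree.\n\n"
            f"{numbered}\n\nQuestion: {question}")

# answer = ask_llm(build_prompt("What changed between v1 and v2?", sources))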

Comment Re:Trained on about 33 million books .. (Score 1) 59

If you want to go that route, then any page you load on the internet creates many copies in all the intermediate nodes and on your computer, and that is before you even get to read the license. But assuming someone did the deed once, trained an LLM on copyrighted data, and then generated 4 trillion synthetic tokens, from that moment on there is no copyright issue anymore: you can substitute the copyright-protected text with the synthetic replacement. This is a small 2B model; it is not going to store many facts, but it will operate on information provided in context.
