Comment Re:What is thinking? (Score 4, Insightful) 57

As much as I agree with the statement that contemporary LLMs certainly differ a lot from what we experience as "thinking" from other human beings, the problem with this line of argument remains that there is no consensus on what exactly manifests "thinking"...

The problem with this line of thinking is that you are ignoring the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit.

It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.

Comment Re:It is NOT autoconplete the way you think it is (Score 1) 210

That the statistical model for word prediction is far more complicated than the autocorrect in my text editor is not in any way a refutation of what I said. The more complicated algorithm IS the steroids part of "autocomplete on steroids".

You are doing a fine job of stressing the profoundness of the difference. But it is a difference that is immaterial to the point I was making. The algorithm underlying an LLM is not intelligent, despite being able to create a convincing simulacrum of intelligence.

Intelligence has to do with being able to learn and understand new topics and situations. No LLM can do that. When you hold a conversation with an LLM, the API sends all your previous correspondence (your prompts + its own responses) as a prelude to your next prompt. It is a clever hack (by the LLM designers) to create the impression of having a conversation where one is not actually occurring.
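The "clever hack" can be sketched in a few lines. This is a minimal illustration with a stand-in function, not any real vendor's API: the point is that the model call itself is stateless, and the client resends the entire transcript on every turn.

```python
# Sketch of a stateless "chat": the illusion of memory comes entirely
# from the client concatenating every prior turn into the next prompt.

def fake_llm_complete(prompt: str) -> str:
    # Stand-in for a real model call; just reports how much context it saw.
    return f"[reply based on {len(prompt)} chars of context]"

history = []  # (role, text) pairs kept by the CLIENT, not the model

def send(user_text: str) -> str:
    history.append(("user", user_text))
    # Every request replays the whole conversation so far.
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = fake_llm_complete(prompt)
    history.append(("assistant", reply))
    return reply

first = send("What is a monad?")
second = send("Why did you give me that answer?")
# The second request's prompt contains the first question AND the
# first answer -- the model itself remembers nothing between calls.
```

Each call to `fake_llm_complete` is independent; delete `history` and the "conversation" is gone, which is exactly the point being made above.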

Comment Re: Case in point (Score 1) 210

The problem there is you believe what the AI tells you about its own reasoning. It doesn't "reason" when it answers your query. It predicts the next word, until it is done, based on information in the training set. When you ask it "Why did you give me that answer?" it does the exact same thing again: predicts the next word that would appear if you asked a person to explain that answer, until it is done, based on information in its training set.
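The "predict the next word until done" loop has the same shape whatever the question is. Here is a toy sketch with an assumed bigram table standing in for billions of learned weights; a real transformer is vastly more sophisticated, but the generation loop is structurally this:

```python
import random

# Toy next-word predictor: a hand-made bigram table (assumed data) in
# place of learned weights. Generation is the same loop an LLM uses:
# sample a plausible next token, append it, repeat until a stop token.
BIGRAMS = {
    "<start>": ["the"],
    "the": ["answer", "model"],
    "answer": ["is"],
    "model": ["predicts"],
    "is": ["<end>"],
    "predicts": ["<end>"],
}

def generate(seed: int = 0) -> str:
    rng = random.Random(seed)
    token, out = "<start>", []
    while True:
        token = rng.choice(BIGRAMS[token])
        if token == "<end>":
            return " ".join(out)
        out.append(token)
```

Note there is nothing in the loop that checks truth, consults a world model, or distinguishes an answer from an explanation of an answer: it emits whichever continuation the table makes likely.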

One of the AI devs over at Kagi posted something recently: AI is not a liar, but a bullshitter. A liar knows the truth and wants to deceive you. A bullshitter does not know, or care, what the truth is; it just wants to convince you. LLMs have been engineered to use a convincing tone because that gets you to use them again.

There is no reasoning, only bullshit.

Comment Re: Case in point (Score 4, Informative) 210

Precisely. LLM systems are, ultimately, autocomplete on steroids. That they can present a reasonable simulacrum of intelligence does not change the fact that there is no actual intelligence involved. No reasoning, no knowledge. Just probability-based word assembly.

That is why we are not sufficiently impressed for this douche. We see the limitations, and the harms that come from ignoring those limitations, and end up underwhelmed. They are promising something they are not actually delivering.

Comment Re:Stay off my PC! (Score 1) 41

Gaming exclusively on modern consoles, on the grounds that games for Linux or Windows are presumed malware, means you'll probably get indie games years late or never. This is because it takes time for an indie developer to build enough of a reputation in the industry to become eligible to buy a devkit for a modern console.

Unless by consoles, you mean things like the NES and Genesis, which are still getting brand-new indie games decades after Nintendo and Sega stopped supporting them.

Comment Re:AI code = Public Domain (Score 1) 45

That is how it's been. Those AI tools were trained on open source/public domain content, so any contribution by AI tools must be considered released into the public domain. It does not get simpler than that, and current US copyright law has already indicated that AI-created works are not eligible for copyright.

That's not the question.

The question is whether the AI-produced code is a derivative of existing code, and the answer is still not resolved.

In some cases, the answer is a clear YES, because the code is a direct copy of something written by someone else. If something like that ends up in the kernel, it will have to be removed when someone notices.
