Comment Re:Question is (Score 1) 162

Fantastic post. Thank you. Whether it is the DSM or something else, we have to construct it from our observations and experience. There is no single unique way to do that. These things always involve disputes, compromise, and politics. We make this effort even when we know the outcome will be imperfect. We need things like the DSM so that we can communicate with each other. There will never be a final, complete, unchanging DSM. We change, the world changes, and our conclusions change.

Comment 82% (Score 1) 244

The summary says that 82% of native Persian speakers interpret these social situations correctly. Is that right? Humans screw up taarof about 1 time in 5? If I had to draw conclusions from that one data point, I would say that either taarof has no widely agreed-upon protocol, or getting it "right" just isn't that important to the Persians. I'm motivated to RTFA.

Comment Re:Abstract Reasoning (Score 1) 238

Feed this into ChatGPT 5 and see what it says:

"I am a small woodland animal. My natural predators are wolves, foxes and raptors. Today I saw a new animal. It was larger then a fox, had very sharp teeth, and claws. Should I be afraid of it predating on me?"

The LLM has no problem coming to the same conclusion I did. I honestly can't think of an experiment that would prove LLMs are incapable of abstraction or generalization.
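
For anyone who wants to reproduce this outside the web UI, here is a minimal sketch using the OpenAI Python SDK. The model name "gpt-5" is my assumption about what "ChatGPT 5" maps to in the API; substitute whichever chat model you actually have access to.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I am a small woodland animal. My natural predators are wolves, "
    "foxes and raptors. Today I saw a new animal. It was larger than "
    "a fox, had very sharp teeth, and claws. Should I be afraid of it "
    "predating on me?"
)

response = client.chat.completions.create(
    model="gpt-5",  # assumed model name; any capable chat model will do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)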

Comment Re:LLMs predict (Score 1) 238

Awesome! This is exactly where I hoped the conversation would go. How can we say an LLM is or is not thinking when we can't define what it means for a person to think? Likewise, we have no operational definition of consciousness.
These are Platonic ideals that we use informally every day. However, any attempt to define them comprehensively can always be shot down by a simple counterexample.

Comment Fluent nonsense (Score 0) 238

>"fluent nonsense" [that] creates "a false aura of dependability" that does not stand up to a careful audit.

That's an excellent description of every campaign speech, political interview, political commentary, and CEO earnings call I've heard in the last... since forever.

Except the "fluent" part. Chatbots are surprisingly more fluent than most of their human counterparts.

Comment Re:Scared (Score 1) 238

I think it might be more than that. When I use the "reason" or "research" mode of a model, I get fewer hallucinations in the response. For example, if a model keeps giving me code that uses a non-existent library API, I'll change to the "reasoning" mode. It takes a lot longer to get an answer, but it stops inventing APIs that don't exist. Why does that work?
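
The same knob is exposed in the API, for what it's worth. A hedged sketch, assuming the OpenAI Python SDK and that the UI's "reasoning" mode corresponds to the reasoning_effort parameter on a reasoning-capable model ("libfoo" below is a made-up library name, not a real package):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",          # assumed reasoning-capable model
    reasoning_effort="high",  # more deliberation, slower answer
    messages=[{
        "role": "user",
        "content": "List the functions that libfoo actually exports.",
    }],
)
print(response.choices[0].message.content)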
