
Comment Re:It is NOT autocomplete the way you think it is (Score 1) 210

That the statistical model for word prediction is far more complicated than the autocorrect in my text editor is not in any way a refutation of what I said. The more complicated algorithm IS the steroids part of "autocomplete on steroids".

You are doing a fine job of stressing the profoundness of the difference. But it is a difference that is immaterial to the point I was making. The algorithm underlying an LLM is not intelligent, despite being able to create a convincing simulacrum of intelligence.

Intelligence has to do with being able to learn and understand new topics and situations. No LLM can do that. When you hold a conversation with an LLM, the API sends all your previous correspondence (your prompts + its own responses) as a prelude to your next prompt. It is a clever hack (by the LLM designers) to create the impression of having a conversation where one is not actually occurring.
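Here is a minimal sketch of what that hack looks like from the client side, assuming an OpenAI-style chat completions endpoint; the URL, model name, and response shape here are placeholders, not any particular vendor's API:

    // The model keeps no memory between calls; the "conversation" lives
    // entirely in this growing array that gets re-sent on every turn.
    type ChatMessage = { role: "user" | "assistant"; content: string };

    const history: ChatMessage[] = [];

    async function sendPrompt(prompt: string): Promise<string> {
      history.push({ role: "user", content: prompt });

      // Every prior prompt and response goes out alongside the new prompt.
      const res = await fetch("https://api.example.com/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "some-llm", messages: history }),
      });

      const reply: string = (await res.json()).choices[0].message.content;
      history.push({ role: "assistant", content: reply });
      return reply;
    }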

Comment Re: Case in point (Score 1) 210

The problem there is that you believe what the AI tells you about its own reasoning. It doesn't "reason" when it answers your query. It predicts the next word, until it is done, based on information in the training set. When you ask it "Why did you give me that answer?" it does the exact same thing again: it predicts the next word that would appear if you asked a person to explain that answer, until it is done, based on information in its training set.

One of the AI devs over at Kagi posted something recently: AI is not a liar, but a bullshitter. A liar knows the truth and wants to deceive you. A bullshitter does not know, or care, what the truth is; it just wants to convince you. LLMs have been engineered to use a convincing tone because that gets you to use them again.

There is no reasoning, only bullshit.

Comment Re:To what degree is the statement wrong? (Score 2) 306

It's thought to be 80% hereditary. It's also thought there are from 2 to 4 distinct types of autism, like how ADHD has 2 types.

Lots of scientists and inventors throughout history had autistic traits and would probably meet the diagnostic criteria today. The book Neurotribes by Steve Silberman goes into great depth about the history. Good read.

Comment Re:Great (Score 1) 62

Normalize an app store that sets up guardrails on what you can do on your computer, and that brings Microsoft revenue just like the iOS App Store does for Apple.

I bought a new Windows 11 laptop for my kid and it came locked in "S Mode", so it would only install software from the Microsoft Store. I had to jump through flaming hoops to get it turned off. How do they get away with that?

Comment Re:It's over. (Score 1) 256

I'm having a hard time helping my kids with their math homework because every concept has some weird trick to get to the answer and they have to repeat that trick on the tests to get credit. So first I have to go and learn the trick myself and then help them.

I do remember my own Dad having the same issue from time to time when I was a kid, but it's gone to an extreme today.

Comment Re:Obvious answer (Score 2) 210

To be honest, the first thing I did was go to ChatGPT and ask it to list alternatives to the Google Maps heatmap. It gave a list of several and a chart comparing all of them. I picked one and had Sonnet implement it. It's like having a team of junior developers working for me who never complain about anything.

What AI can't do is to take a whole feature off the backlog and implement it. Yet.

Comment Re: Case in point (Score 4, Informative) 210

Precisely. LLM systems are, ultimately, autocomplete on steroids. That they can present a reasonable simulacrum of intelligence does not change the fact that there is nothing like intelligence involved. No reasoning, no knowledge. Just probability-based word assemblies.

That is why we are not sufficiently impressed for this douche's liking. We see the limitations, and the harms that come from ignoring those limitations, and end up underwhelmed. They are promising something they are not actually delivering.
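To make the "autocomplete on steroids" point concrete, here is a toy sketch with made-up word probabilities; a real LLM swaps the lookup table for a neural net scoring tens of thousands of tokens over a long context, but the outer loop is the same idea:

    // Toy next-word prediction: score candidates, pick one, append, repeat.
    const nextWordProbs: Record<string, Record<string, number>> = {
      "the cat": { sat: 0.6, ran: 0.3, is: 0.1 },
      "cat sat": { on: 0.8, down: 0.2 },
      "sat on": { the: 0.9, a: 0.1 },
      "on the": { mat: 0.7, roof: 0.3 },
    };

    function complete(prompt: string, steps: number): string {
      const words = prompt.split(" ");
      for (let i = 0; i < steps; i++) {
        const context = words.slice(-2).join(" ");   // last two words as context
        const candidates = nextWordProbs[context];
        if (!candidates) break;                      // nothing left to predict
        const next = Object.entries(candidates)
          .sort((a, b) => b[1] - a[1])[0][0];        // greedy: most likely word
        words.push(next);
      }
      return words.join(" ");
    }

    console.log(complete("the cat", 4)); // "the cat sat on the mat"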

Comment Re:Obvious answer (Score 4, Interesting) 210

This is an actual prompt I sent through VS Code Github Copilot to Claude Sonnet 4.5:

"This Angular component uses Google Maps API heatmap to render data. Google's heatmap has been deprecated. Change this component to use deck.gl heatmap instead."

IT DID IT. First time. No errors. No bugs. ~45 seconds. It even installed the packages.

How can you not find that amazing?!
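For anyone curious what that migration ends up looking like, here is a rough sketch of a deck.gl replacement rendered through a Google Maps overlay; this is not the code Sonnet produced, and the component shape, data fields, and inputs are assumptions:

    import { Component, Input, OnChanges } from '@angular/core';
    import { GoogleMapsOverlay } from '@deck.gl/google-maps';
    import { HeatmapLayer } from '@deck.gl/aggregation-layers';

    interface HeatPoint { lng: number; lat: number; weight: number; }

    @Component({ selector: 'app-heatmap', template: '' })
    export class HeatmapComponent implements OnChanges {
      // Existing Google Map instance (typed via @types/google.maps).
      @Input() map!: google.maps.Map;
      @Input() points: HeatPoint[] = [];

      private overlay = new GoogleMapsOverlay({});

      ngOnChanges(): void {
        // Deprecated google.maps.visualization.HeatmapLayer replaced by a
        // deck.gl HeatmapLayer drawn on top of the same Google map.
        this.overlay.setProps({
          layers: [
            new HeatmapLayer({
              id: 'heatmap',
              data: this.points,
              getPosition: (d: HeatPoint) => [d.lng, d.lat],
              getWeight: (d: HeatPoint) => d.weight,
              radiusPixels: 40,
            }),
          ],
        });
        this.overlay.setMap(this.map);
      }
    }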

Comment Re:Finally… (Score 1) 126

Track me without my consent! I don't give a fuck!

I don't consider myself to be an important enough person for any corporation to care what I do online or IRL. I don't use any ad blockers or block any cookies, in the hopes that they'll harvest enough data to start serving me ads for things I want to buy. (They don't.)

Comment Re:I am terrible at faces (Score 1) 30

I'm almost totally face blind. After long enough, I can figure it out, but context matters. I wouldn't recognize a co-worker at the grocery store. And ask me to describe what someone looks like? Nope.

When there are people in my dreams, they don't have faces. I know who the person is and can hear their voice, but the face just isn't part of the dream.

Comment Re:GCSE computer science was absolutely not rigoro (Score 2) 64

I think that comment was very relevant.

High school is a place to learn work skills or to discover a subject that you might want to go to college for and turn into a career. They should teach the kids how to make a Roblox game. If the logic and reasoning makes their brain tingle, then go deeper. P vs. NP? That's just going to turn kids off.

The only thing I learned from high school chemistry class is that I didn't want to be a chemist.
