Comment Need state-approved toilet paper to wipe your own (Score 1) 102

California's version "adds a certification bureaucracy on top: state-approved algorithms, state-approved software control processes, state-approved printer models, quarterly list updates".

This is the most California thing I've ever read. Unconstitutional, unenforceable, and a massive increase in costs and bureaucracy; they hit the trifecta! I wonder if printer manufacturers that bake their own bread will be exempt once their checks to the governor's presidential campaign clear.

Incidentally, this is the kind of stupid shit that helps Trump and people like him get elected over and over.

Comment Re:LLM had a head start (Score 1) 113

Don't ask some LLMs how many "r"s are in strawberry.

That was definitely a problem two years ago. I just checked ChatGPT, Claude, and Gemini, and all three reported 3 correctly. The problem with people throwing out these sorts of criticisms isn't that they're all wrong; it's that they're ignorant of the leaps in progress being made. These models are improving rapidly, and it's getting harder to find serious gotchas with them. They're still weak in some areas (e.g., spatial reasoning), but for serious power users who know how to prompt them well? They've become insanely powerful tools.

Not gods; tools. But really, really strong tools for a huge variety of tasks.

Comment Re:Too late (Score 1) 65

I've used ChatGPT to write code and Gemini to debug it. If you pass the feedback back and forth, it takes a couple of iterations, but they'll eventually agree that it's all good, and I find that gets me about 90-95% of the way to where I need it to be. Earlier today I took a 6 KB script that had been used as something fast and dirty for years - written by someone long gone from the company - and completely revamped it into something much more powerful, robust, and polished in both its code and its output. The script grew to about 20 KB, but it's 10x better and I only had to make minor tweaks. Between the two, they found all sorts of hidden bugs and problems with it.
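
For the curious, the loop I'm describing looks roughly like this. The model names, the "LGTM" convention, and the iteration cap are placeholders, and it assumes the openai and google-generativeai Python packages with API keys in the environment; treat it as a sketch of the workflow, not a definitive implementation:

```python
# Rough sketch of the write/review loop described above. The model names,
# the "LGTM" convention, and the iteration cap are placeholders; assumes
# the openai and google-generativeai packages with OPENAI_API_KEY and
# GOOGLE_API_KEY set in the environment.
import os

import google.generativeai as genai
from openai import OpenAI

writer = OpenAI()  # picks up OPENAI_API_KEY
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
reviewer = genai.GenerativeModel("gemini-1.5-pro")

def write_code(task: str, feedback: str = "") -> str:
    """Ask ChatGPT to (re)write the script, folding in any reviewer feedback."""
    prompt = task if not feedback else f"{task}\n\nReviewer feedback to address:\n{feedback}"
    resp = writer.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def review_code(code: str) -> str:
    """Ask Gemini to hunt for bugs; it replies 'LGTM' when satisfied."""
    resp = reviewer.generate_content(
        "Review this script for bugs and robustness problems. "
        "If it looks correct, reply with exactly 'LGTM'.\n\n" + code
    )
    return resp.text

task = "Rewrite this quick-and-dirty report script to be robust: <script here>"
code = write_code(task)
for _ in range(5):  # a couple of iterations is usually enough
    feedback = review_code(code)
    if feedback.strip() == "LGTM":
        break
    code = write_code(task, feedback)
print(code)
```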

Comment Re:I donno... (Score 1) 186

Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.

LK

Don't agree at all, and I think that's a morally dangerous approach. We're looking for a scientific definition of "desire" and "want". That's almost certainly part of being "conscious" and "self-aware". Philosophy can help, but in the end, to know whether you are right or not, you need the experimental results.

Experiments can be crafted in such a way as to exclude certain human beings from consciousness.

One day, it's extremely likely that a machine will say to us "I am alive. I am awake. I want..." and whether or not it's true is going to be increasingly hard to determine.

LK

Comment Re:I donno... (Score 2) 186

An LLM can't suddenly decide to do something else which isn't programmed into it.

Can we?

It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.

Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.

LK

Comment News Flash: Farrier Very Concerned About Automobile (Score 3, Insightful) 92

Wikipedia is an interesting concept, and it works decently well as a place to go read a bunch of general information and find decent sources. But LLMs are feeding that information to people in a customized, granular format that meets their exact individual needs and desires. So yeah, people probably aren't as interested in reading your giant wall of text when they want 6 specific lines out of it.

Remember when Encyclopædia Britannica was crying about you stealing their customers, Wikipedia? Yeah, this is what they experienced.

Comment Re:"easily deducible" (Score 1) 60

If you spend time with the higher-tier (paid) reasoning models, you'll see they already operate in ways that are effectively deductive (i.e., behaviorally indistinguishable) within the bounds of where they operate well. Not novel theorem proving, but give them scheduling constraints, warranty/return policies, travel planning, or system troubleshooting, and they'll parse the conditions, decompose the problem, and run through intermediate steps until they land on the right conclusion. That's not "just chained prediction"; it's structured reasoning that, in practice, outperforms what a lot of humans can do.

When the domain is checkable (e.g., dates, constraints, algebraic rewrites, SAT-style logic), the outputs are effectively indistinguishable from human deduction. Outside those domains, yes, it drifts into probabilistic inference or "reading between the lines." But dismissing it all as "not deduction at all" ignores how far beyond surface-level token prediction the good models already are. If you wave all that away with "but it's just prediction," you're basically saying deduction doesn't count unless a human does it. That's just redefining words to try to win an Internet argument.
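
To make "checkable" concrete, here's a toy sketch; the constraint set is invented for illustration, but it shows why a wrong answer in this kind of domain can't hide:

```python
# Toy illustration of "checkable": when the model proposes a meeting slot,
# the constraints themselves verify the answer mechanically, so a wrong
# output gets caught. The constraint set is invented for the example.
from datetime import time

def satisfies(slot: time) -> bool:
    """Hard constraints: within 09:00-16:15 and not in the 12:00-13:00 lunch block."""
    in_hours = time(9, 0) <= slot <= time(16, 15)
    in_lunch = time(12, 0) <= slot < time(13, 0)
    return in_hours and not in_lunch

model_answer = time(13, 30)  # slot the model proposed
assert satisfies(model_answer), "model's proposed slot violates the constraints"
print(f"{model_answer} checks out against every constraint")
```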

Comment Re:"easily deducible" (Score 1) 60

They do quite a bit more than that. There's a good bit of reasoning that comes into play, and newer models (really beginning with o3 on the ChatGPT side) can do multi-step reasoning: they'll first determine what the user is actually seeking, then determine what they need to provide that, then begin generating the response based on all of it.
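
You can approximate that flow by hand with an explicit prompt chain, something like this sketch (the decomposition prompts and model name are placeholders, not a claim about how the models implement it internally):

```python
# Hand-rolled approximation of that multi-step flow: one call to restate
# what the user is actually after, one to list what's needed to answer,
# then the final response conditioned on both. Reasoning models do this
# internally; the prompts and model name here are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Why does my script work in an interactive shell but fail under cron?"
goal = ask(f"In one sentence, what is this user actually trying to find out?\n\n{question}")
needs = ask(f"Question: {question}\nGoal: {goal}\n\nList the facts needed to answer it.")
answer = ask(f"Question: {question}\nGoal: {goal}\nRelevant facts:\n{needs}\n\nNow answer the question.")
print(answer)
```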
