Comment Re: Why? (Score 1) 57

That has not been my experience at all. I'm entirely against the concept of what they're doing (giving me a reason not to visit the websites that ultimately pay for the production and publication of information), but the AI summaries and links to related articles tend to be spot-on for what I'm looking for. Perhaps you can give me a (non-contrived) search to try that demonstrates your claim?

Comment Re:For those getting pitchforks ready (Score 2) 153

The issue with health concerns like this is that the harm isn't sudden and obvious - it doesn't explode and kill you. There's really no way to say, "It was the molecule on March 13, 2026 that started the cancer in your body."

You can't even do that with cigarettes - you can only draw a conclusion about cause that's well supported by circumstantial evidence.

And I'm not saying you're arguing against it, but broadly speaking ... arguing *against* more information - unless the argument is that the information itself is inaccurate - seems particularly anti-free-market to me. (Obviously that's why companies fight against the burden of regulation designed to increase market transparency.)

Comment Re:So many things that contribute to this (Score 1) 215

The irony of your sarcasm is it actually *is* horrible.

Water is good - necessary even - but too much water will kill you. Choice is the exact same way - it's entirely possible to have too much of it, as much as that contradicts an ethos buried deeply in the American id.

Comment Re:"easily deducible" (Score 1) 60

If you spend time with the higher-tier (paid) reasoning models, you'll see they already operate in ways that are effectively deductive (i.e., behaviorally indistinguishable) within the bounds of where they work well. Not novel theorem proving, granted. But give them scheduling constraints, warranty/return policies, travel planning, or system troubleshooting, and they'll parse the conditions, decompose the problem, and run through intermediate steps until they land on the right conclusion. That's not "just chained prediction". It's structured reasoning that, in practice, outperforms what a lot of humans can do.

When the domain is checkable (e.g., dates, constraints, algebraic rewrites, SAT-style logic), the outputs are effectively indistinguishable from human deduction. Outside those domains, yes, it drifts into probabilistic inference or "reading between the lines". But to dismiss it all as "not deduction at all" ignores how far beyond surface-level token prediction the good models already are. If you want to wave all that away by saying "but it's just prediction", you're basically saying deduction doesn't count unless a human does it. That's just redefining words to try to win an Internet argument.
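To make "checkable domain" concrete: the point is that a proposed answer can be verified mechanically, independent of how it was produced. Here's a minimal, hypothetical sketch (the constraint values are invented for illustration, not from any real model output) that checks a proposed meeting slot against hard scheduling constraints:

```python
def satisfies(slot, busy, day_start=9.0, day_end=12.0, length=0.5):
    """Check a proposed start time (in hours) against hard constraints.

    busy: list of (start, end) blocks that the slot must not overlap.
    """
    end = slot + length
    # the slot must fit inside the working window
    if slot < day_start or end > day_end:
        return False
    # the slot must not overlap any busy block
    return all(end <= b_start or slot >= b_end for b_start, b_end in busy)

busy = [(10.0, 10.5), (11.0, 11.5)]
print(satisfies(10.5, busy))   # True: 10:30-11:00 is free
print(satisfies(10.25, busy))  # False: overlaps the 10:00-10:30 block
```

Whether the slot came from a reasoning model or a human, the check is the same - that's what makes the output in such domains directly comparable to human deduction.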

Comment Re:"easily deducible" (Score 1) 60

They do quite a bit more than that. There's a good bit of reasoning that comes into play, and newer models (really beginning with o3 on the ChatGPT side) can do multi-step reasoning: they'll first determine what the user is actually seeking, then determine what they need in order to provide it, then generate the response based on all of that.

Comment Re:LLMs Bad At Math (Score 3, Insightful) 60

This is not a surprise, just one more data point that LLMs fundamentally suck and cannot be trusted.

Huh? LLMs are not perfect and are not expert-level in every single thing ever. But that doesn't mean they suck. Nothing does everything. A great LLM can fail to produce a perfect original proof but still be excellent at helping people adjust the tone of their writing, understand interactions with others, develop communication and coping skills, or learn new subjects quickly. I've used ChatGPT successfully for everything from landscaping to plumbing. Right now it's helping to guide my diet, tracking my macros and suggesting strategies and recipes to stay on target.

LLMs are a tool with use cases where they work well and use cases where they don't - and the set of use cases where they work well is very wide. A hammer doesn't suck just because I can't use it to cut my grass; that's not a use case where it excels. But a hammer is a perfect tool for driving nails into wood, and it's pretty decent at putting holes in drywall. Let's not throw out LLMs just because they don't do everything everywhere perfectly at all times. They're a brand-new tool that's suddenly been put into millions of people's hands, and it's been massively improved over the past few years to expand its usefulness. But it's still just a tool.
