Comment Re: smart! (Score 0) 231
Another Canadian dunce who's drunk the warm cup of PP
That has not been my experience at all. I'm entirely against the concept of what they're doing (giving me a reason not to visit the websites that ultimately pay for the production and publication of information), but the AI summaries and links to related articles tend to be spot-on for what I'm looking for. Perhaps you can give me a (non-contrived) search to try that demonstrates your claim?
Texas and Florida prohibit local governments from mandating rest and water breaks.
The issue with health concerns like this is that it's not like it explodes and kills you - there's really no way to say, "It was the molecule on March 13, 2026 that started the cancer in your body."
You can't even do that with cigarettes - you can only draw a conclusion about cause that's well supported by circumstantial evidence.
And I'm not saying you're arguing against it; I'm just speaking broadly.
The irony of your sarcasm is it actually *is* horrible.
Water is good - necessary even - but too much water will kill you. Choice is the exact same way - it's entirely possible to have too much of it, as much as that contradicts an ethos buried deeply in the American id.
If you spend time with the higher-tier (paid) reasoning models, you'll see they already operate in ways that are effectively deductive (i.e., behaviorally indistinguishable) within the bounds of where they operate well. Not novel theorem proving, sure. But give them scheduling constraints, warranty/return policies, travel planning, or system troubleshooting, and they'll parse the conditions, decompose the problem, and run through intermediate steps until they land on the right conclusion. That's not "just chained prediction"; it's structured reasoning that, in practice, outperforms what a lot of humans can do.
When the domain is checkable (e.g., dates, constraints, algebraic rewrites, SAT-style logic), the outputs are effectively indistinguishable from human deduction. Outside those domains, yes, it drifts into probabilistic inference or "reading between the lines." But to dismiss it all as "not deduction at all" ignores how far beyond surface-level token prediction the good models already are. If you want to dismiss all that by saying "but it's just prediction," you're basically saying deduction doesn't count unless it's done by a human. That's just redefining words to try to win an Internet argument.
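To be concrete about what "checkable" means here: in these domains you don't have to trust the model's answer at all, because you can verify it mechanically. A minimal sketch, using a hypothetical return-policy date question (the purchase date, window, and claimed deadline are made up for illustration):

```python
from datetime import date, timedelta

# Hypothetical claim from a model: "a 30-day return window on a
# purchase made 2024-03-01 closes on 2024-03-31."
purchase = date(2024, 3, 1)
window = timedelta(days=30)
claimed_deadline = date(2024, 3, 31)

# The claim is checkable: recompute it and compare.
actual_deadline = purchase + window
print(actual_deadline == claimed_deadline)  # -> True
```

The same pattern (recompute or constraint-check the answer instead of trusting it) applies to scheduling conflicts, algebraic rewrites, and SAT-style logic, which is why output quality in those domains can be measured rather than argued about.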
They do quite a bit more than that. There's a good bit of reasoning that comes into play, and newer models (really beginning with o3 on the ChatGPT side) can do multi-step reasoning: first determine what the user is actually seeking, then determine what's needed to provide that, then generate the response based on all of it.
This is not a surprise, just one more data point that LLMs fundamentally suck and cannot be trusted.
Huh? LLMs are not perfect and are not expert-level in every single thing ever, but that doesn't mean they suck. Nothing does everything. A great LLM can fail to produce a perfect original proof but still be excellent at helping people adjust the tone of their writing, understand their interactions with others, develop communication and coping skills, or learn new subjects quickly. I've used ChatGPT successfully for everything from landscaping to plumbing. Right now it's helping to guide my diet, tracking macros and suggesting strategies and recipes to stay on target.
LLMs are a tool, with use cases where they work well and use cases where they don't - and the set where they work well is actually very wide. A hammer doesn't suck just because I can't use it to cut my grass; that's not a use case where it excels. But a hammer is a perfect tool for driving nails into wood, and it's pretty decent at putting holes in drywall. Let's not throw out LLMs just because they don't do everything everywhere perfectly at all times. They're a novel tool that's suddenly been put into millions of people's hands, and it's been massively improved over the past few years to expand its usefulness. But it's still just a tool.
Lol, so many of the posts here reek of bitter developers who have clearly never worked in the halls of quality software engineering.
Yeah, well, everything sounds simple to people who don't quite understand the nature of the actual problem, but here's a hint:
The day after your "simple solution": "FTC sues Google, claiming that it is identifying their non-political party emails as political party emails"
"It's as if the people reporting the news don't care about being consistent"
That's just an astoundingly stupid thing to say. I don't even know where to begin with that.
Why wouldn't you just turn OneDrive off? I mean, I get it, stubborn master of your computer stuff, but honestly, you're just making shit difficult for yourself. Just turn it off.
"Usually to my desktop."
Oh my.
It's adorable you seem to think the experience of home/prosumer users is that of enterprise deployments/users.
I would argue that cruelty is the point.
I like work; it fascinates me; I can sit and look at it for hours.