
Comment Re:"easily deducible" (Score 1) 60

If you spend time with the higher-tier (paid) reasoning models, you’ll see they already operate in ways that are effectively deductive (i.e., behaviorally indistinguishable) within the domains where they operate well. Not novel theorem proving, but give them scheduling constraints, warranty/return policies, travel planning, or system troubleshooting, and they’ll parse the conditions, decompose the problem, and work through intermediate steps until they land on the right conclusion. That’s not "just chained prediction". It’s structured reasoning that, in practice, outperforms what a lot of humans can do.

When the domain is checkable (e.g., dates, constraints, algebraic rewrites, SAT-style logic), the outputs are effectively indistinguishable from human deduction. Outside those domains, yes, it drifts into probabilistic inference or “reading between the lines.” But dismissing it all as “not deduction at all” ignores how far beyond surface-level token prediction the good models already are. Saying “but it’s just prediction” amounts to saying deduction doesn’t count unless a human does it. That’s just redefining words to win an Internet argument.
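A toy illustration of what "checkable" means here (the scheduling constraints are hypothetical): the model's conclusion can be verified mechanically, so right and wrong are unambiguous.

```python
# Hypothetical scheduling check: given busy intervals on a 24h clock, verify a
# claimed free slot the same way you'd verify a model's deduction.
def slot_is_free(busy, start, end):
    """True if [start, end) overlaps no busy interval."""
    return all(end <= b_start or start >= b_end for b_start, b_end in busy)

busy = [(9, 10), (13, 15)]             # existing meetings
assert slot_is_free(busy, 10, 12)      # a correct deduction checks out
assert not slot_is_free(busy, 14, 16)  # a wrong one is caught immediately
```

In domains like this, "is it really deduction?" stops mattering: the answer is either right or it isn't.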

Comment Re:"easily deducible" (Score 1) 60

They do quite a bit more than that. There's a good bit of reasoning that comes into play, and newer models (really beginning with o3 on the ChatGPT side) can do multi-step reasoning: first determine what the user is actually seeking, then determine what's needed to provide that, then generate the response based on all of it.

Comment Re:LLMs Bad At Math (Score 3, Insightful) 60

This is not a surprise, just one more data point that LLMs fundamentally suck and cannot be trusted.

Huh? LLMs are not perfect and are not expert-level at everything. That doesn't mean they suck; nothing does everything. A great LLM can fail to produce a perfect original proof and still be excellent at helping people adjust the tone of their writing, understand interactions with others, develop communication and coping skills, or learn new subjects quickly. I've used ChatGPT for everything from landscaping to plumbing successfully. Right now it's helping to guide my diet, tracking macros and suggesting strategies and recipes to stay on target.

LLMs are a tool with use cases where they work well and use cases where they don't, and the set of good use cases is very wide. A hammer doesn't suck just because I can't use it to cut my grass; that's not a use case where it excels. But a hammer is a perfect tool for driving nails into wood and pretty decent at putting holes in drywall. Let's not throw out LLMs just because they don't do everything everywhere perfectly at all times. They're a brand-new tool that's suddenly been put into millions of people's hands, and it's been massively improved over the past few years to expand its usefulness. But it's still just a tool.

Comment Why on Earth would you EVER announce it? (Score 1) 49

If/when true AGI is achieved, only a fool would announce it. What would announcing it do for you? Make you famous? Rich? Cool. Know what's better than all that?

Not telling a damn soul and using the AGI quietly to do whatever the Hell you want. If you want to be rich, the AGI will tell you how to become rich. If you want to be famous, the AGI will tell you how to become famous. You can do both. And you don't have to stop there. A real, vastly superior AGI enables the person controlling it to do anything. The second you tell people about it, you'll lose control over it and then you're the famous idiot who did a cool thing one time. Kids in elementary school will recite your name back on a test. And you could have had everything.

Anyone smart enough to crack AGI can't also be stupid enough to advertise when they do it.

Comment Re:It is low CO2 (Score 5, Insightful) 135

You can get a lot more renewable energy for the money. Colorado tax payers are going to get fleeced by this.

The other issue not mentioned is speed. It takes so long to build nuclear that it can't be part of any realistic plan to address climate change, and it also makes it very prone to corruption because nothing gets delivered for decades.

These are all issues directly related to regulation and unnecessary red tape created out of NIMBYism and irrational fear around radiation. India, Canada, and China aren't stupid. They're building and/or modernizing nuclear power plants like crazy because they're so effective at reliable baseload power, which renewables simply are not. In the US, we force years - sometimes decades - of reviews, permits, court battles, and other bullshit unrelated to the construction and operation of clean, safe nuclear power.

The other cost issue is that the US - again, stupidly - bars reprocessing of high-energy spent fuel. If you separate the low-energy waste (relatively safe, but useless for generating power) from the high-energy fuel remaining and feed the high-energy material back in, you can extract nearly all of the energy, save a ton in fuel costs, mine less fuel, and end up with vastly less waste volume and vastly less energetic waste.

Let's assume some sort of absolute mandate were passed to build 5 CANDU-6 reactors (a known, proven, safe, reliable design). No reviews, no permits, no red tape, no lawsuits - just build the damn things now. You could get one operational in ~3.5 years, and all of them in about 4. South Korea and China have built PWRs in 5. Assuming we also lifted the ban on fuel reprocessing, CANDU-6 plants would produce power at a cost of 5-6 cents per kWh, yielding a retail price of 13-17 cents per kWh. The US average is about 16.2 cents; California has rates pushing 50 cents. But we're too stupid to get out of our own way and just do it, so we'll keep strangling the poor and middle class economically.
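For what it's worth, the generation-to-retail jump above works out arithmetically if you assume roughly 8-11 cents/kWh on top for transmission, distribution, and utility margin (that adder is my assumption, not a quoted figure):

```python
# Illustrative only: retail price range from generation cost plus an assumed
# delivery adder (transmission, distribution, utility margin), in cents/kWh.
def retail_range(gen_low, gen_high, adder_low=8.0, adder_high=11.0):
    """Return (low, high) retail cents/kWh."""
    return (gen_low + adder_low, gen_high + adder_high)

low, high = retail_range(5.0, 6.0)  # 5+8 = 13, 6+11 = 17
```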

Comment Re:The cycle (Score 2) 178

It's mimicked intelligence. You're absolutely right that - under the hood - there's not the sort of traditionally cognitive processing happening that we might consider intelligence. That can be a distinction without a difference if the output is the same, and for quite a lot of things, they're becoming indistinguishable.

I think the real challenge for LLMs specifically is training data: the technical and legal limits on what can be collected, the malicious poisoning already happening, and the breakdown in function seen when LLM-generated content is repeatedly fed back into training (i.e., "model collapse"). However, I also think that by the time we start to see major effects of this, today's LLMs will have evolved to largely work around the limitation, and the underlying process for generating output will be far less susceptible to the problems we see now. Time will tell whether that's overly optimistic, but there's a ton of development in this space toward better approaches.

Comment Re:Simple solution (Score 3, Interesting) 178

Two decades ago, everybody and their brother were charging head-first toward six figure salaries (those used to mean something) and the easy life of playing arcade games at a startup waiting to become millionaires. Anyone who thought this was sustainable - particularly for the general population - was failing to think. Coding ability, like most things, is an innate skill advanced by training. You can take people with little innate talent and train them to get better just like you can take Average Joe and teach him to swim faster. But Average Joe swimmer and Average Jane coder are never going to be particularly valuable in that role long term. Once the stupid money turned off, value had to be reassessed, and lots of Galaga arcades went up on eBay.

Never play to the fads. Find something you're good at naturally that's valuable long term, develop your skills to become great at it, and market yourself appropriately.

Comment Re:The bottom tier is no longer required (Score 1) 178

Have it break everything up into smaller chunks/functions/whatever, have it include debugging and error checking/handling, and test each code chunk. When a chunk does have a bug, you get 2-3 cracks at it before ChatGPT eats its own tail. At that point, start a new session and feed the code block back in, or fix it manually. I do the same thing you're doing, and it can cut a ton of time out of writing utility scripts that automate stuff so I can spend my focus elsewhere. But it doesn't act like a human, and it doesn't adapt until the next model comes out, so we have to adapt to find how best to work within its limitations to maximize productivity.
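The chunk-and-test workflow above looks something like this in practice (the function and path are hypothetical):

```python
# Hypothetical example of the chunk-and-test workflow: ask for small functions
# with explicit error handling, then verify each chunk before moving on.
import os

def list_log_files(directory):
    """One small chunk: return sorted .log filenames, failing soft on a bad path."""
    try:
        return sorted(f for f in os.listdir(directory) if f.endswith(".log"))
    except (FileNotFoundError, NotADirectoryError):
        return []  # let the calling script report and continue

# Test the chunk immediately rather than debugging the whole script at once.
assert list_log_files("/nonexistent/path") == []
```

Small, individually testable pieces also mean that when the model does flub one, you re-feed a 10-line function instead of the whole script.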

Done right, it's a huge accelerator. I don't think more than 10-20% of people are using it right.

Comment Re:Good (Score 1) 119

Take heart in knowing that most of the world isn't participating in this stupidity, and those who have been are already pulling back as research comes in showing how destructive some of these actions are to people's mental and physical health.

To quote an old Despair.com poster: "It could be that the purpose of your life is only to serve as a warning to others".

Comment Oh boy... (Score 1) 94

More virtue signalling and white saviorism poisoning the African continent and compromising African sovereignty and self-sufficiency, all so more American tax dollars can get funneled to multinational conglomerates and self-serving "non-profits".

How about we take that money and instead hand it out as annual bonuses to the top 100 teachers in the US?

Comment Re:garbage story (Score 1) 98

Irrelevant. He could draw minors in Photoshop all day long. We're talking about the most disgusting use of free expression here. And the DOJ is trying to criminalize it. I sincerely doubt it's going to fly given the broad, terrifying constitutional implications.

However, when it comes to distributing the materials to a minor, attempting to lure a minor, really everything relating to his actual contact and conduct with minors, dude ought to burn. But the creation and possession of artificial representations of imagined persons and situations? That simply cannot be criminalized in the United States of America. It is plainly, clearly, entirely protected conduct, regardless of how disgusting and terrible it is.
