
Comment Re:To use scientific terminology (Score 1) 248

Possibly. But the theorems they are referring to are the foundation of modern cryptography.
If they're wrong about those, then cryptography would no longer be "safe".

(Assuming the paper isn't totally flawed; I can't say my knowledge of the Theory of Computation is good enough to have a valid opinion on that.)

Comment Re:Either I'm confused or the summary is incomplet (Score 1) 248

Whatever storage device you have, you'll never beat Turing's infinite tape: by definition it holds more than anything in the Universe, and yet the Turing machine still has known limitations. So storage isn't the key.

As for randomness: Chaitin defines something as random if it cannot be compressed, i.e. cannot be expressed by anything shorter than itself.
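
A rough way to see the idea (a sketch of mine, using zlib compression as a crude, computable stand-in for Kolmogorov complexity, which is itself uncomputable):

    import os
    import zlib

    # Structured data compresses well; random bytes essentially don't.
    structured = b"abcabcabcabc" * 1000  # a short program could print this
    random_ish = os.urandom(12000)       # incompressible with overwhelming probability

    for name, data in [("structured", structured), ("random", random_ish)]:
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{name}: compresses to {ratio:.0%} of original size")

The structured string shrinks to a tiny fraction of its size; the random one stays about the same size or even grows slightly.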

The current assumption is that no computer can do more than the Turing machine in terms of computational capacity. It is a universal computer; it represents what is possible, and anything outside of it isn't possible. (That's the Church-Turing thesis; as far as I recall, it's widely accepted rather than proven.)
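
To make the model concrete, here's a toy Turing machine simulator (my own illustration, not anything from the article). This particular machine increments a binary number:

    def run_tm(tape, transitions, state="start", accept="halt"):
        # Sparse tape: unwritten cells read as the blank symbol "_".
        cells = dict(enumerate(tape))
        head = 0
        while state != accept:
            symbol = cells.get(head, "_")
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells.get(i, "_")
                       for i in range(min(cells), max(cells) + 1)).strip("_")

    # Binary increment: scan right to the end, then carry 1s to 0s leftward.
    trans = {
        ("start", "0"): ("start", "0", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("carry", "_", "L"),
        ("carry", "1"): ("carry", "0", "L"),
        ("carry", "0"): ("halt",  "1", "R"),
        ("carry", "_"): ("halt",  "1", "R"),
    }

    print(run_tm("1011", trans))  # prints "1100"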

Comment Re:Lack of imagination (Score 1) 248

Chaitin would say that your "theorem" or "understanding" or "algorithm" of something should be shorter than the thing you are trying to model. However, he proved that there are cases in which no such shorter description exists. Some things are just not compressible into a "theory".

Goedel would say that you cannot derive all of math from some base axioms. This is a consequence of the nature of formal systems and algorithms.

So whatever "understanding" you have that transcends these models (like the Turing machine) is by definition "non-algorithmic".

These are the sorts of wonderful things you'd study in a Theory of Computation class. Really gory stuff. But it's why CS academics worship Goedel and Turing.

Comment Re:Short sighted (Score 1) 248

Well, the way "algorithms" are defined includes their being step-by-step instructions of finite length.

Based on this definition, Goedel proved that there are true statements that formal systems cannot derive. Turing said pretty much the same thing about computation, proving that there are problems no algorithm can decide, and provided a model of computation that has held up against improvements such as parallelism and quantum computing.
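
Turing's argument can be sketched in a few lines of Python; 'halts()' here is the hypothetical oracle that the proof shows cannot exist:

    def halts(f, x):
        """Hypothetical oracle: True iff f(x) eventually halts.
        The point of the proof is that no such total, correct
        function can be written."""
        raise NotImplementedError

    def troublemaker(f):
        if halts(f, f):            # if f would halt on its own source...
            while True:            # ...then loop forever
                pass
        return                     # ...otherwise, halt immediately

    # Does troublemaker(troublemaker) halt? If halts() says yes, it loops
    # forever; if halts() says no, it halts. Either way halts() is wrong,
    # so no perfect halts() can exist.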

So based on the definition of what an algorithm is, then yeah, it will have limitations. That's what the Incompleteness Theorem and the halting problem are about.
It then becomes obvious that some things the popular press thinks of as computable may in fact not be. Simulation theory is one of them. Another is the idea of a super-fast singularity AI that can compute beyond the limitations of a Turing machine. Not going to happen.

But I thought these things were obvious. I guess someone had to lay out the argument formally.

P.S. Math deals with ideals and perfection; in the real world things don't need to be complete or perfect or work in all cases. So even if something is "undecidable" it can still be pretty darn good. An LLM can find bugs in your code, but there are probably no guarantees that it could find all bugs in all code, because that runs into undecidability (Rice's theorem territory, a close cousin of incompleteness).
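
A sketch of why (my own reduction, with a hypothetical 'finds_a_bug' analyzer): a perfect bug finder would let you solve the halting problem.

    def finds_a_bug(source: str) -> bool:
        """Hypothetical perfect analyzer: True iff the program can
        ever reach a bug (here, a failing assert)."""
        raise NotImplementedError

    def halts(f_source: str, x: str) -> bool:
        # Wrap f(x) so that its only "bug" is reachable exactly
        # when f(x) finishes running.
        gadget = f_source + f"\nf({x!r})\nassert False  # reached iff f(x) halts\n"
        # A perfect bug finder on the gadget would decide whether f(x)
        # halts -- which Turing proved is impossible. Contradiction.
        return finds_a_bug(gadget)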

Comment Ad revenue (Score 1) 31

Much of the internet runs on ad revenue, and people visit fewer and fewer websites. Gone are the days when people just browsed around; social media now feeds you what you want.
So yeah, it's time for a business-model disruption.

Big AI companies can now own a new window onto all apps, making ad revenue hard to share.

Comment ClosedAI (Score 2) 60

OpenAI was founded as a non-profit to develop open AI tech for everyone, so that companies like Google wouldn't monopolize the field.

Instead it closed the door. Other companies followed suit.

Except for this little company in China that keeps delivering bombshells and sharing its tricks with the world.
Good for DeepSeek. Open-source lovers around the world should be appreciative.

Comment Re:Generative vs Factual (Score 1) 90

Did you read the paper?

The paper is about whether we reinforce the model to always attempt a "correct" answer (whatever that means) versus letting it simply say "I don't know".

Most LLMs are rewarded only when they give a correct answer: say, 1 for correct and 0 for incorrect.
So they are incentivized to always attempt an answer, even when they aren't certain. The paper suggests rewarding "I don't know" as well. For example, you could give 0.2 points for saying "I don't know", 0 for incorrect, and 1 for correct. That way the model will attempt an answer only when its confidence exceeds a certain threshold.
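
Back-of-the-envelope version (using the example numbers above, not the paper's): a reward-maximizing model should answer only when its estimated probability of being correct exceeds the "I don't know" reward.

    def best_action(p_correct: float, r_idk: float = 0.2) -> str:
        # Expected reward for answering: p_correct * 1 + (1 - p_correct) * 0,
        # versus a guaranteed r_idk for abstaining.
        return "answer" if p_correct > r_idk else "say 'I don't know'"

    for p in (0.1, 0.2, 0.5, 0.9):
        print(f"p(correct)={p}: {best_action(p)}")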

All generative models are, well, generative: what they generate is the most probable text. What counts as the most probable text depends on several factors (especially the data set). But during the reinforcement-learning phase they can be taught to say "I don't know", so they play nicer with humans who expect answers that are more than just grammatically correct.
