
Comment Re:End to End with a back door? (Score 1) 103

Even simpler: end-to-end encryption, but with copies of the key stored on their servers. MS does this with BitLocker (unless you are careful). Obviously, no actual expert would call that "secure" end-to-end encryption (or "secure" disk encryption), but most people are not experts in this area, and lying by misdirection has no legal consequences. It is time to change that.
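
To make that concrete, here is a minimal sketch in Python (using the real pyca/cryptography Fernet API; the "server" dict is a made-up stand-in for the provider's backend) of why key escrow defeats end-to-end encryption:

from cryptography.fernet import Fernet

server = {}  # hypothetical provider-side storage

# Client side: generate a key and encrypt a message with it.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"meet at noon")

# The "helpful" part: the client app silently escrows the key.
server["escrowed_key"] = key  # this one line is all it takes

# Provider side: with the escrowed key, everything is readable.
assert Fernet(server["escrowed_key"]).decrypt(ciphertext) == b"meet at noon"

The traffic is still "end-to-end encrypted" in a narrow technical sense; it is just not secure against the party holding the escrowed key.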

Comment May still be end-to-end (Score 1) 103

Just not "secure" end-to-end. There are quite a few ways traffic can be end-to-end encrypted while they can still read it. One is storing copies of the keys on their servers (like MS does with BitLocker if the user is not careful, but that is a different discussion).

The real problem here is the lack of clear definitions and of legal liability. Obviously, any competent security expert will only call something end-to-end encrypted when nobody but the endpoints has access to the encryption keys, directly or indirectly. Just as obviously, unless and until there is a standardized, legally enforceable definition, all kinds of lies-by-misdirection are possible, and that seems to be what Meta is doing here.

The bottom line is to not trust the Big Data companies on anything. All of them have a long history of lying and incompetence. Encrypt it yourself or expect that they can read your stuff.
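
"Encrypt it yourself" can be as simple as this sketch (again Python with pyca/cryptography; "notes.txt" is a placeholder file name):

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep this somewhere safe, offline
with open("notes.txt", "rb") as f:
    blob = Fernet(key).encrypt(f.read())
with open("notes.txt.enc", "wb") as f:
    f.write(blob)  # upload only the .enc file; the key never leaves you

As long as the key never touches their servers, it does not matter what they claim about their own encryption.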

Comment Re:Nope (Score 4, Interesting) 40

Yep. Obviously. LLMs have that little problem that each step they take toward an answer has some probability of failing, unlike actual deduction. Those probabilities compound, and at some depth it is all noise.

Good to see somebody took the time to look into this. From briefly skimming the paper, the upper bounds seem not to depend on the LLM, but solely on the complexity of the question asked. The limit they examine seems to be of the form "all queries above the complexity limit will end in hallucination".
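
Back-of-the-envelope version of the compounding (my toy numbers, not the paper's): if each reasoning step succeeds with probability p, a d-step chain succeeds with probability p**d, which dies off fast:

p = 0.95  # assumed per-step success probability (invented for illustration)
for d in (1, 10, 50, 100):
    print(f"depth {d:3}: P(all steps correct) = {p**d:.4f}")

# depth   1: P(all steps correct) = 0.9500
# depth  10: P(all steps correct) = 0.5987
# depth  50: P(all steps correct) = 0.0769
# depth 100: P(all steps correct) = 0.0059

At depth 100, even 95% per-step accuracy leaves you with essentially pure noise.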

Comment Re:A moment of honesty (Score 3, Informative) 40

Among so much hype, I almost can't believe he said the quiet part out loud: LLMs are not thinking creatures.

Any bets on if he keeps his job?

As people generally fail to apply the "general" in general intelligence, I expect not many people will even notice.

But yes, he essentially said that LLMs cannot do novel things. The "yet" is an obvious lie by misdirection.

Comment Re:LLM's are prediction machines (Score 2) 40

Indeed. What is hilarious is that many people apparently suffer from similar issues and cannot put the "general" into general intelligence in whatever thinking they are capable of. And hence the hype continues, despite very clear evidence that it cannot deliver.

As to "AGI", that is a "never" for LLMs. The approach cannot do it. We still have no credible practical mathematical models how AGI could be done.

I would submit that automated theorem proving, or automated deduction (basically the same thing), is a theoretical model that would qualify as AGI. But it is not practical, because it gets bogged down in state-space explosion on even simple things. Scaling it up to what a really smart mathematician can do would probably take more computing power than we can ever have in this universe, as the effort grows exponentially in reasoning depth, with a large base. BTW, this was explored extensively around the 1990s. What came out of it are proof-assist tools (very useful!), where a smart human takes the system in baby steps through a proof and the system verifies the reasoning chain.
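
A toy way to see the blow-up (the branching factor b = 20 is invented; real provers face messier state spaces): exhaustive proof search to depth d touches on the order of b**d states:

b = 20  # assumed number of applicable inference rules per proof state
for d in (5, 10, 20, 40):
    print(f"depth {d:2}: ~{b**d:.2e} states")

# depth  5: ~3.20e+06 states
# depth 10: ~1.02e+13 states
# depth 20: ~1.05e+26 states  (already beyond any datacenter)
# depth 40: ~1.10e+52 states  (beyond any physical computer)

Deep, interesting proofs live at exactly the depths where this becomes hopeless, which is why the field retreated to human-guided proof assistants.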

But besides automated deduction? No mathematical or algorithmic approaches that could create AGI are known. They all fail at the "general" aspect of things. Just like so many (but not all) humans do.
