Comment Re:This is a lie by misdirection. And he knows it. (Score 1) 42
Physicalism is belief, not Science. It is unknown whether throwing enough (simulated or real) neurons together can make a human or not.
Physicalism is belief, not Science. You are arguing a quasi-religious stance. The actual Science says that we have no clue what humans do to show intelligence.
Also, editing tables in LaTeX is tedious and tricky, and it is a task at which AI should excel (generating templates, reformatting).
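For instance, a minimal template of the sort I would expect it to generate (a hand-written sketch with placeholder values, not output from the actual product):

    % Plain-LaTeX table skeleton; swap in real values for the placeholders.
    \begin{table}[htbp]
      \centering
      \caption{Placeholder caption}
      \begin{tabular}{lrr}
        \hline
        Method   & Accuracy & Runtime (s) \\
        \hline
        Baseline & 0.00     & 0.0 \\
        Proposed & 0.00     & 0.0 \\
        \hline
      \end{tabular}
    \end{table}

Generating and reformatting that kind of boilerplate is exactly the tedious part.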
Not that tricky, really. If this thing is essentially just a LaTeX editor addon, it will not do much.
Because that is what it essentially did for any real programming besides simplistic boilerplate: https://mikelovesrobots.substa...
Yes. Run it locally on some OS that does not spy on you and generally prioritizes security and the effort to get into your messages goes through the roof. Unless you are really wanted by somebody with pretty deep pockets, you will be secure.
While Zuck-the-Fuck is wrong on many things, on this one he is spot-on.
I would have been surprised if they did not have that capability. None of the "Big IT" companies are trustworthy in any way. They are also time and again doing really incompetent stuff.
It is only criminal if they assured in a legally binding manner that they would not do it. There is no legal definition of "end-to-end" encryption, even though most experts will agree that it means only the endpoints have the keys and that the keys are carefully secured against anybody else.
Even simpler: End-to-end with copies of the key stored on their servers. MS does this with BitLocker (unless you are careful). Obviously, no actual expert would call that "secure" end-to-end (or "secure" disk encryption), but most people are not experts in this area and lying by misdirection has no legal consequences. It is time to change that.
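To make the key-copy trick concrete, here is a minimal Python sketch (it assumes the third-party "cryptography" package; escrow_db, escrow_key and provider_decrypt are hypothetical names for illustration, not any vendor's actual API):

    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    escrow_db = {}  # stands in for the provider's server-side key store

    def escrow_key(user: str, key: bytes) -> None:
        # The client "helpfully" uploads a copy of its key, much like
        # BitLocker recovery keys land in the cloud unless you opt out.
        escrow_db[user] = key

    # Client side: generate a key, escrow it, encrypt a message.
    key = Fernet.generate_key()
    escrow_key("alice", key)
    ciphertext = Fernet(key).encrypt(b"private message")

    # The traffic really is encrypted end-to-end...
    def provider_decrypt(user: str, ct: bytes) -> bytes:
        # ...but the provider can read it anyway with the escrowed copy.
        return Fernet(escrow_db[user]).decrypt(ct)

    print(provider_decrypt("alice", ciphertext))  # b'private message'

The encryption is technically real; the security claim is the lie.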
Just not "secure" end-to-end. There are quite a few possibilities where traffic is end-to-end encrypted, but they can still read it. One is storing copies of the keys on their servers (like MS does with Bitlocker if the users are not careful, but that is a different discussion).
The real problem we have here is a lack of clear definitions and of legal liability. Obviously, any competent security expert will only call things end-to-end encrypted where nobody but the endpoints has access to the encryption keys, directly or indirectly. Also obviously, unless and until there is a standardized, legally enforceable definition, all kinds of lies-by-misdirection are possible, and that seems to be what Meta is doing here.
The bottom line is to not trust the Big Data companies on anything. All of them have a long history of lying and incompetence. Encrypt it yourself or expect that they can read your stuff.
This is clearly targeted at people that will just buy several and do not care about the cost.
I am sure the likes of Musk will love this thing!
Yep. Obviously. They have that little problem that each step they take to get to something has a probability of failing, unlike actual deduction. Those failure probabilities compound, and at some depth it is all noise.
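The arithmetic is simple to illustrate (the 98% per-step success rate below is an assumption for the example, not a measured figure):

    # Per-step reliability compounds multiplicatively over a chain.
    p = 0.98  # assumed probability that any single step is correct
    for depth in (1, 10, 50, 100, 500):
        print(f"{depth:4d} steps: P(all correct) = {p ** depth:.5f}")
    # Roughly: 1 step -> 0.98, 50 steps -> 0.36, 500 steps -> 0.00004.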
Good to see somebody took the time to look into things. From briefly skimming the paper, the upper bounds seem to not depend on the LLM, but solely on the complexity of the question asked. The bound they examine seems to be of the form "all queries above a certain complexity threshold will end up in hallucination".
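If I am reading that skim right, the shape of the claim could be written roughly as follows (my paraphrase in notation, not the paper's actual statement):

    \exists\, c^\ast :\quad \mathrm{complexity}(q) > c^\ast \;\Longrightarrow\; \text{the model hallucinates on } q

with c^\ast independent of which LLM is used.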
Among so much hype, I almost can't believe he said the quiet part out loud: LLMs are not thinking creatures.
Any bets on if he keeps his job?
As people generally fail to put the "general" in general intelligence, I expect not many people will even notice.
But yes, he essentially said that LLMs cannot do novel things. The "yet" is an obvious lie by misdirection.
Indeed. What is hilarious is that, apparently, many people are suffering from similar issues and cannot actually put the "general" in general intelligence in whatever thinking they are capable of. And hence the hype continues, despite very clear evidence that it cannot deliver.
As to "AGI", that is a "never" for LLMs. The approach cannot do it. We still have no credible practical mathematical models how AGI could be done.
I would submit that automated theorem proving or automated deduction (basically the same thing) is a theoretical model that amounts to AGI. But that one is not practical because it gets bogged down in state-space explosion on simple things already. Scaling it up to what a really smart Mathematician can do would probably take more computing power than we can have in this universe, as the effort grows exponentially in reasoning depth with a high base. BTW, this was explored extensively around 1990. What came out of it are proof-assist tools (very useful!), where a smart human takes the system in baby-steps through a proof and the system verifies the reasoning chain.
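As a toy illustration of that division of labor (a minimal Lean 4 sketch, not one of the 1990s-era systems), the human names every step and the machine only checks the chain:

    -- The human supplies the exact lemma; Lean verifies, it does not search.
    example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A slightly longer chain, still guided in baby-steps:
    example (a b c : Nat) : (a + b) + c = c + (b + a) := by
      rw [Nat.add_comm a b]        -- step 1: rewrite a + b to b + a
      rw [Nat.add_comm (b + a) c]  -- step 2: rewrite (b + a) + c to c + (b + a)

Searching for such steps automatically is where the exponential blow-up hits; merely checking them is cheap.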
But besides that one? No mathematical / algorithmic approaches that can create AGI are known. They all fail at the "general" aspect of things. Just like so many (but not all) humans do.