Comment Re:The Imitation Game (Score 1) 78

And all non-free operating systems can push updates to users. With the free ones, you can at least hope that you install them yourself.

Microsoft: Proven since XP that they can install updates on users' PCs without consent (they once pushed a fix for a broken Windows Update even with automatic updates turned off). Can be targeted at single users.
iOS: App Store/services activity in the background. Can be targeted at single users.
Google: Play Store/services activity in the background. Can be targeted at single users.
Red Hat/Canonical: Probably harder to target a single user, depending on which mirror they use and whether the mirror operator cooperates. But the Patriot Act should be enough to force them to sign and upload a package, possibly for all users.

There are also more stores that could be used to target users: Steam, Epic, possibly snap and Flatpak. Unlike the large package repos, these make it easier to target individual users (and to keep the malware out of security researchers' hands).

I'd trust LUKS on some distribution with high ideals to be harder to infiltrate, but the large Linux corporations are probably not protecting you.

Comment Re:Question (Score 1) 19

There are two points. The first is the reductionist view: technically an LLM predicts the next token, but before it does, it does a lot of things that influence which token that is. The "just" in "it is just autocomplete" does a lot of lifting here. The second point is more technical: the "autocomplete" is really only the last layer. The LLM builds up a lot of state before the layer computing token probabilities discards most of that state to output a probability distribution.
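A minimal sketch of that second point, in PyTorch (the sizes are made up, not taken from any particular model): everything before the final projection is rich hidden state, and only the last step collapses it into a token distribution.

    import torch
    import torch.nn as nn

    hidden_dim, vocab_size = 4096, 32000        # hypothetical sizes
    lm_head = nn.Linear(hidden_dim, vocab_size, bias=False)

    hidden_state = torch.randn(1, hidden_dim)   # stands in for the output of the deep stack
    logits = lm_head(hidden_state)              # shape (1, vocab_size)
    probs = torch.softmax(logits, dim=-1)       # the "autocomplete" distribution
    next_token = torch.argmax(probs, dim=-1)    # most of the hidden state's information is discarded here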

Otherwise I think you're right, without needing to examine every minor point, but the problem of forgetting prior context mostly applies to older RNNs like LSTMs; the transformer architecture mitigates it a lot. On the other hand, it introduces "lost in the middle", which seems even harder to explain to ordinary users. And the whole "needle in the haystack" testing seems flawed to me. Don't ask for a password hidden in the middle of a story. Ask for the motivations of a minor character that appeared only once in the text. That would prove that an LLM can handle long contexts.
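A hypothetical sketch of such a test (ask_model is a placeholder, not a real API; the story text is invented):

    # Plant a minor character's motivation mid-story instead of a password, then ask about it.
    filler = "The caravan crossed the dunes while the merchants argued about water. " * 1500
    planted = ("A stable hand named Rolen appeared once, tightening a strap in silence; "
               "he worked only to pay off his brother's gambling debt. ")
    half = len(filler) // 2
    story = filler[:half] + planted + filler[half:]
    prompt = story + "\n\nQuestion: What motivated the stable hand Rolen?"
    # answer = ask_model(prompt)   # placeholder call; expected answer: his brother's debt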

Comment Re:Question (Score 1) 19

"Autocomplete" is dishonest reasoning. You can do autocomplete with a single hidden layer. Now go count the hidden layers of your favorite LLM. The point is not if it is trained on "just" text, but that it has emergent properties. And most people who use terms like "glorified autocomplete" know that.

Comment Re:Question (Score 1) 19

Maybe they expect models that are more than autocomplete. I mean, they extended a better autocomplete into a model that you can ask questions and get "PhD level" answers from, so one shouldn't rule out that they reach AGI. I don't believe it, but I won't bet against it.

Comment Re:When you forget to make backups for two years (Score 1) 111

One would have been enough here. Or RTFM on the function. The last time I looked, there was text near the button along the lines of "Opting out of training disables your chat history and deletes stored chats". It is basically a general "Privacy" button that (at least as documented) deletes all chat data.

Comment Re:"Regulate" (Score 2) 7

The EU has defined a number of FLOPs above which a model needs to be regulated.

"GPAI models present systemic risks when the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs). Providers must notify the Commission if their model meets this criterion within 2 weeks."

Comment Re:That's not really a surprise (Score 1) 39

In theory you're right, but in practice I doubt that they gave the AI a thousand tries. And your optimization approach would need a few orders of magnitude more. If you have an AI that can do self-play, it will find all kinds of exploits as shortcuts, but it also needs something like a million tries for that.

Comment Re:The took a western open-source AI model, (Score 1) 52

You can look at the code, because it is open source. No need for conspiracy theories.
You can also compare their model with others. Don't forget that Chinese companies pioneered thinking models (i.e. chain of thought baked into the model). We have to credit Meta for starting the open-source movement with Llama 1 and providing good models until Llama 3, but the Chinese labs have since continued to provide the best open-source models (with Mistral being close, but releasing less often).
