Comment Re:NIST algorithms (Score 1) 20
You have an irrational trust in an agency that has published intentionally compromised algorithms before. Well, there are tons of fools around. You fit right in.
You have an irrational trust in an agency that has published intentionally compromised algorithms before. Well, there are tons of fools around. You fit right in.
I guess you think Peter Gutmann has no clue as well. You are a fool.
Attackers like that!
In other news, competent cloud account system administration is _harder_ than for local installations, due to all the extra functionality, reachability, complexity, tooling. All of that is a KISS violation and the enemy of security.
Same. I have had hard freezes on Win11 with hardware that never have any trouble under Win10.
I remember my last Linux crash. It was ca. 2010 and I told the kernel via parameter I had way more memory than was in the machine. Oh, you mean crash without gross user error? Hmmm. I had a few (not a lot) with some specific defective hardware. And I have been using Linux since 1995.
True. And it does not look like they even have a snowflake's chance in hell to ever get to profitability without some major breakthrough. And even with that, they will have collapsed long before. The numbers for the competition do not look that much better though, it is just way more obvious for OpenAI.
The whole idea of general LLMs is massively overhyped and cannot deliver on the hype. Large players (Google, Microsoft, potentially Nvidia) may survive because they have enough reserves and other revenue, but not even that is assured.
And fail. How clueless can you be? CERN does a lot more and, in particular, a lot of applied CS research due to the massive amount of data they need to be able to handle. Even if they partially fail their core mission (they cannot fully fail anymore), the money invested was already recovered countless times over.
No idea. But what we have in "post quantum" crypto is all laughably weak against conventional attacks and laughably unverified. We have had finalists of competitions broken with low effort (one laptop) and the like. Moving to these algorithms is an excessively bad idea.
Quantum hardware may never be up to the task. They cannot even factorize 35 at this time (https://eprint.iacr.org/2025/1237). The whole thing is a mirage and a bad idea that refuses to die.
Incidentally, even if they ever become able to do tasks of meaningful size, QCs are completely unsuitable for reversing hashes and that is what cracking passwords needs.
They are hallucinating hard. The current actual actual quantum factorization is not even 35 (that attempt failed, overview in https://eprint.iacr.org/2025/1...).
While crypto-agility is a good idea, there is no threat from Quantum "Computing" and there may never be one.
No argument.
Obviously. Until you add external input and command injection becomes a thing.
Agreed. General LLM tech is obviously a dead end, at least without some fundamental breakthrough. Specialist models may or may not fix hallucinations and command injection, but at least there seems to be a reasonable chance that they will or that other safeguards can be put in place.
Most people are not smart and cannot assess reality adequately. Hence I would say this is a fundamental product defect and should make the providers liable for any and all damage done. This is, after all, a product marketed to the general population, when it clearly should be experts-only.
While not surprising (LLMs are not reliable instruction followers and cannot be), this pretty much kills the idea of LLM-Agents in most usage scenarios. And it is even worse: As LLMs do not have a separation between data and instructions, this means that command-injection attacks seem to be getting even easier. Another reason that LLM-Agents are a very bad idea.
It's been a business doing pleasure with you.