Comment Re:NIST algorithms (Score 1) 44
No idea. But what we have in "post quantum" crypto is all laughably weak against conventional attacks and laughably unverified.
This isn't true.
Yes, one of the finalists was broken, utterly. But there are no successful attacks against ML-DSA, ML-KEM or SLH-DSA, and they have good security proofs. Note that "successful attack" and "security proof" both have precise technical meanings to cryptographers. A successful attack is one that reduces the security even a little from what it theoretically should be, even if the reduction still leaves the algorithm completely unbreakable in practice. A security proof is a proof that the construction is secure if the underlying primitives satisfy their security assumptions; no cryptographic algorithm in existence has an unconditional proof that a mathematician would consider complete -- we just don't know how to do that. In the case of ML-DSA and ML-KEM, the underlying assumptions are about the hardness of the underlying mathematical problems, Module-LWE and Module-SIS. In the case of SLH-DSA, the underlying assumptions are about the security of hash functions.
Module-LWE and Module-SIS are fairly new problems, and have only been studied for a little over a decade. The whole field of mathematics they're based on is less than 30 years old, so it's more likely that some mathematical breakthrough will destroy their security than that some breakthrough will wipe out ECC, which has been studied for about 40 years and builds on 150 years of algebraic geometry. Still, a mathematical breakthrough could destroy ECC or RSA, too.
In contrast, SLH-DSA is rock solid from a security perspective. We've been studying hash functions for a long time, and our entire cryptographic security infrastructure already rests on the assumption that our hash functions are good. If that turns out not to be the case, quantum computers will be the least of our problems, because to a first approximation every cryptographic protocol in existence relies on secure hashing. It's far more likely that ECC or RSA will be broken than that SLH-DSA will be broken. Unfortunately, SLH-DSA is orders of magnitude slower than what we're used to.
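That reliance on hashing is easy to see in miniature: hash-based signatures need nothing but a good hash function. Below is a toy Lamport one-time signature in Python -- not SLH-DSA itself (which stacks WOTS+ chains and Merkle trees on the same idea to get many-time keys), and every name here is illustrative:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets: one pair per bit of the message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # The public key is the hash of every secret.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    # Reveal one preimage per digest bit; each key must be used only once.
    return [sk[i][bits[i]] for i in range(256)]

def verify(pk, msg, sig):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    # Check each revealed preimage hashes to the committed public value.
    return all(H(sig[i]) == pk[i][bits[i]] for i in range(256))
```

Note that each Lamport key signs exactly one message -- revealing preimages for two different digests leaks enough to forge -- which is exactly the limitation SLH-DSA's tree structure removes. The security rests entirely on preimage resistance of the hash.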
It's worth noting that SIKE (the NIST PQC finalist that was broken) also had a security proof. The problem was that the proof showed that SIKE was secure if the supersingular isogeny problem was hard -- but what SIKE actually used wasn't that problem, exactly. SIKE required additional data to be published, and that additional information reduced the hardness of the problem. This is why the break was so total, and was found immediately when researchers began scrutinizing SIKE. All it took was the observation that SIKE relied on a less-hard problem, then a mathematical solution to the less-hard problem.
NIST chose these three algorithms for good reasons. ML-KEM and ML-DSA have larger keys than we're used to with RSA and especially ECC, but they're not that much larger, not so large that they simply can't be used in existing protocols. And they're fast, with performance on par with what we're used to. So they are feasible drop-in replacements in most cases.
SLH-DSA is not a drop-in replacement. The keys are very small (on par with ECC, a bit smaller, even), but the signatures it produces are enormous: the smallest is about 8 KB and the biggest about 50 KB, depending on parameter choices. Signing is also 50-2000 times slower than EC-DSA and verification is 10-30 times slower, again depending on parameters.
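For concreteness, here are the two extreme FIPS 205 parameter sets next to ECDSA P-256, with sizes in bytes taken from the published parameter tables (the comparison script itself is just illustrative):

```python
# Key/signature sizes in bytes (FIPS 205 for SLH-DSA; raw r||s and
# uncompressed point for ECDSA P-256).
sizes = {
    "ECDSA-P256":        {"pk": 64, "sig": 64},
    "SLH-DSA-SHA2-128s": {"pk": 32, "sig": 7856},   # smallest signatures
    "SLH-DSA-SHA2-256f": {"pk": 64, "sig": 49856},  # fastest signing, largest sigs
}

for name, s in sizes.items():
    print(f"{name:>18}: pk={s['pk']:>3} B, sig={s['sig']:>6} B")

# Public keys are ECC-sized; signatures are roughly 120x-780x larger.
assert sizes["SLH-DSA-SHA2-128s"]["pk"] <= sizes["ECDSA-P256"]["pk"]
```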
So, what NIST did was choose a pair of quite-usable and probably-secure algorithms (ML-KEM and ML-DSA) that cover all cryptographic needs and are very close to being drop-in replacements, plus a less-usable but absolutely-secure algorithm as a backstop. I don't know that they ever explicitly stated the strategy they were suggesting, but it's obvious: use ML-KEM and ML-DSA as your everyday algorithms for operational security and for firmware signing, but for firmware signing specifically, also burn an SLH-DSA public key into your devices, so that you can verify new firmware -- and new public keys for new algorithms -- in the event the ML- algorithms are ever broken.
Moving to these algorithms is an excessively bad idea.
I don't think so, and neither does Google -- which employs a lot of professional academic cryptographers (which I'm not).
Whether you should move to these algorithms depends on what you're doing, and what your service lifetimes are. If the data you're encrypting or signing only needs to be secure for a decade, don't bother. Existing ECC-based constructions will be fine.
If the data needs to be secure for more than that, if you're really concerned about harvest-now-decrypt-later attacks that could be performed 20-30 years from now, you should move to ML-KEM, and do it soon. There actually isn't that much data that really needs to be secure for that long... but if yours is in that category it's more likely that it will still be secure in 2050 if it's encrypted with ML-KEM/AES than if it's encrypted with ECDH/AES. Both options are a gamble, of course. ML-KEM is more likely to fall to a cryptographic attack than ECDH, but ECDH is at risk from quantum computing.
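One practical hedge on that gamble is not to choose at all: hybrid key exchange combines the ECDH shared secret and the ML-KEM shared secret through a KDF, so an attacker has to break both. Here's a sketch of just the combiner step (HKDF per RFC 5869; the two "shared secrets" are placeholder bytes standing in for real X25519 and ML-KEM outputs, and all names are illustrative):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF (RFC 5869): extract a PRK from the input keying material,
    # then expand it into the requested number of output bytes.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

# Stand-ins: a real protocol would obtain these from an X25519/ECDH
# exchange and an ML-KEM encapsulation respectively.
ss_ecdh = b"\x01" * 32
ss_mlkem = b"\x02" * 32

# Concatenate-and-KDF combiner: the session key stays secret
# as long as EITHER input secret does.
session_key = hkdf_sha256(ss_ecdh + ss_mlkem, salt=b"\x00" * 32,
                          info=b"hybrid-kem-demo")
```

This concatenate-and-KDF shape is how deployed hybrids (e.g. the X25519+ML-KEM groups in TLS 1.3) work: breaking ECDH alone, or ML-KEM alone, tells the attacker nothing about the derived key.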
Firmware signing is a very interesting case. Firmware security is foundational to system security. Phones today are expected to have an ~8-year lifespan, so a phone launched in 2029 needs to remain secure until 2037... and that is getting into the range where there's a non-trivial probability that quantum computers will be large enough, reliable enough and cheap enough to be a threat. That probability is only in the 1-5% range (IMO), but in the cryptographic security world 1-5% is utterly unacceptable. I work on automotive firmware these days (I left Google six months ago) and we have ~5 year development timelines, followed by 20-year operational timelines, so a project we start today needs to be secure until 2051. The probability of large, reliable, cheap quantum computers by 2050 approaches 100%.
On the other hand, can your hardware really accept a ~20x longer firmware verification time from using SLH-DSA? That's not a question with a universal answer: some contexts can absorb it, some can't. ML-DSA is more computationally practical, but there's a risk that it will be broken. I think the clearly appropriate strategy for now is: ship your hardware with ML-DSA-verified firmware, but also burn an SLH-DSA public key into the ROM (or OTP fuses) and arrange things so you can use that SLH-DSA key to verify and install a new firmware verification scheme in the future, should ML-DSA be compromised. Alternatively, stick with EC-DSA or Ed25519 for now, but include that same SLH-DSA-based migration infrastructure. If your hardware lifetime is long enough, you will almost certainly have to use it to migrate to some PQC algorithm eventually, so if feasible, it's better to start with ML-DSA now.
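The two-anchor scheme described above can be sketched as a toy boot policy. Real public-key verification is replaced by an HMAC stand-in so the sketch runs anywhere; the structure (everyday ML-DSA anchor, one-way revocation, SLH-DSA recovery anchor) is the point, and every name here is illustrative:

```python
import hashlib
import hmac

# ROM/OTP contents fixed at manufacture: two trust anchors.
ROM = {
    "primary_algo": "ML-DSA-65",       # everyday verification algorithm
    "primary_key":  b"mldsa-pubkey",   # stand-in for the ML-DSA public key
    "recovery_key": b"slhdsa-pubkey",  # SLH-DSA anchor, used only for migration
    "primary_revoked": False,          # a one-way OTP fuse in real hardware
}

def stand_in_verify(key: bytes, blob: bytes, sig: bytes) -> bool:
    # HMAC stands in for real signature verification in this sketch.
    expected = hmac.new(key, blob, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

def verify_firmware(blob: bytes, sig: bytes, recovery_sig: bytes = b"") -> bool:
    if not ROM["primary_revoked"]:
        # Normal path: fast, everyday algorithm.
        return stand_in_verify(ROM["primary_key"], blob, sig)
    # Primary algorithm compromised: only the SLH-DSA anchor is trusted,
    # and its first job is to authenticate a replacement verification scheme.
    return bool(recovery_sig) and stand_in_verify(ROM["recovery_key"], blob, recovery_sig)
```

The design choice being modeled: the recovery anchor is never exercised in normal operation (so its slow verification doesn't matter), but once the revocation fuse is blown, nothing short of the SLH-DSA key can authorize firmware again.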