
Comment Re:double standards (Score 1) 78

they're all 100% letting the Epstein saga slide.

Almost makes you want a Putin like strong man to sort them all out. right haruchai

If Putin had been around, he'd have been in the Epstein files, too. It's vanishingly-unlikely that any strongman like that wouldn't also be a sexual abuser. It's all part of the same disrespect for others.

Comment Re:Does no one remember? (Score 1) 160

Not as remarkable as Linux, which somehow has become so despite (virtually) no paid developers.

Linux has a large number of highly-paid developers. If you look at the kernel, specifically, there are basically no unpaid volunteers contributing significantly to it, and there haven't been for a long time. The right way to understand kernel development is as a collaboration among a large number of corporations, each of which contributes the paid work of skilled engineers, and most of which also contribute cash to a foundation that employs the highly-paid engineers who coordinate all of the work (notably Linus, who makes a seven-figure salary -- honestly, it ought to be eight figures, but he's certainly not hurting).

If you look beyond the kernel to the other tools and desktop environments, the volunteer participation rises significantly, but there's also a lot of paid work.

Comment Re:Why do we trust the big ones? (Score 1) 51

We are not going to get AGI this century.

You cannot possibly know that.

AGI is not a question of throwing more computing power at the problem. Something fundamental is missing and we have no idea what.

This seems plausible, but it implies that you cannot possibly know whether we're going to get AGI this century. If it's true, it means that we'll get AGI when we discover that as-yet-missing knowledge, and there's no way to predict when that might happen. It might have happened yesterday and we just don't know it yet. What is certain is that (a) the knowledge exists and (b) we're looking for it, really hard.

Comment Re:Why do we trust the big ones? (Score 1) 51

Y2K is a better example. Y2K could have been a catastrophe. A decade before it happened we started working to fix all the systems. Hundreds of millions of dollars (maybe billions) were spent on Y2K remediation. Then Y2K came and... nothing much happened. Lots of people pointed and said "Haha! All that money spent fixing the problem was a waste!", but they were wrong. All of the money spent fixing the problem fixed the problem.

This is what we have to do with cryptography and quantum computers. If we wait until practical QCs arrive, we'll be in big trouble. It will take years to replace all of the classical crypto infrastructure, so the work has to be done before the QCs arrive, and in some cases there will be no possibility of after-the-fact remediation. There are two major categories:

1. Harvest-now-decrypt-later (HNDL). Any case where data needs to be protected for decades is subject to attacks that involve storing the data now and holding it until quantum computers can decrypt it. We undoubtedly have a lot of data that is already stored for later decryption, but we want to avoid increasing that risk further.

2. Hardware trust. Secure hardware requires trusted firmware, which requires burning public keys and verification algorithms into ROM, and many of these devices will be in service for decades. So we need to be able to deliver secure firmware updates for decades, using keys and algorithms we burn into ROM now. This is particularly relevant to me, because I'm working on firmware for automobiles, which have a 5-year development window, and a ~20-year (or more!) service life. So I'm working on systems that need to be securable through 2051, and it's pretty important because these vehicles have some degree of self-driving capability. A vulnerability that enables mass compromise and takeover could be used to mount a horrific terror attack.

So, yes, this matters. Probably. It's possible that practical quantum computing will never emerge, but given the tremendous progress over the last few years, that seems like a bad bet. Google's 2029 target is wise.

Comment Re: Mac OS has already started to pester me (Score 1) 51

AES-256 will remain quantum resistant forever. QCs only get you a halving of the bits for block-ciphers.

These statements are too strong -- in both directions!

First, although Grover's algorithm is proven to be the optimal quantum algorithm for generalized search, you don't necessarily need a generalized search algorithm to break a block cipher. Block ciphers have internal structure that may be exploitable by quantum algorithms. Indeed, researchers have made some progress in using Simon's algorithm to attack Feistel network-based ciphers (which AES is not, but the previous standard cipher, DES, is). That work is not a practical way to break Feistel network ciphers, but further research may improve it. So it's certainly possible that researchers could identify AES substructure that can be attacked with quantum computers, and that this could result in a quantum algorithm that breaks AES. We have no hint of anything like that, and no one really considers it likely, but "quantum resistant forever" is too strong.

Second, the claim that QCs get you a halving of the bits for block ciphers using Grover's algorithm is technically correct, but overstates the practical reality. Even assuming we had large, reliable and cheap quantum computers, the way Grover's algorithm would be applied to breaking AES requires 2^(n/2) sequential operations, each of which is a non-trivial quantum circuit. Moreover, other practical considerations, which are way too complicated to get into here -- in large part because I don't really understand them; I'm repeating what more-knowledgeable colleagues say -- mean that AES-128 will probably retain ~90 bits of security, which means it will probably remain secure forever, assuming no better-than-Grover algorithm exists.
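
To put rough numbers on "2^(n/2) sequential operations", here's a back-of-the-envelope calculation (mine, purely illustrative; it ignores error-correction and circuit-depth overheads, which make the quantum side far worse in practice):

```python
# Classical brute force vs. naive Grover's bound against an n-bit key.
# Illustrative only: real Grover iterations are deep quantum circuits,
# and error correction adds enormous additional overhead.

def brute_force_ops(n_bits: int) -> int:
    """Classical exhaustive key search: ~2^n trial decryptions."""
    return 2 ** n_bits

def grover_ops(n_bits: int) -> int:
    """Grover's algorithm: ~2^(n/2) *sequential* oracle iterations."""
    return 2 ** (n_bits // 2)

SECONDS_PER_YEAR = 365 * 24 * 3600

for n in (128, 256):
    g = grover_ops(n)
    # Even at an absurdly optimistic 10^9 iterations/second, the
    # iterations can't be parallelized the way classical search can.
    years = g / 1e9 / SECONDS_PER_YEAR
    print(f"AES-{n}: ~2^{n // 2} sequential Grover iterations "
          f"(~{years:.1e} years at 10^9 iterations/s)")
```

Even under those wildly generous assumptions, AES-128 takes centuries and AES-256 is hopeless.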

Comment Re:NIST algorithms (Score 1) 51

Wasn't NIST shown to have been compromised by the NSA? Is this still the case?

No.

What was shown is that one random number generation algorithm, Dual_EC_DRBG, was backdoored at the NSA's request. There is no evidence that this has ever happened with any of the other NIST-standardized algorithms, and it's also known that the NSA has stepped in to strengthen other NIST algorithms. Notably, with DES the NSA both strengthened the cipher by improving the S-boxes and weakened it by asking for a smaller key size -- though that wasn't a secret weakening; everyone understands the implications of smaller key sizes, and where necessary there are easy workarounds, hence triple-DES, which has never been practically broken (though it's now deprecated, mainly because of its small block size).

All of this was in the past when the NSA held a considerable lead in cryptographic knowledge over academic cryptographers. It seems very unlikely that this is the case any more, and in fact at this point basically all novel cryptographic knowledge seems to be flowing in the other direction.

The only other case that people wonder about a little is the choice of the elliptic curves used for ECDSA and ECDH. NIST never published any rationale for those curve choices, or any publicly-verifiable information about the selection process. Most likely this is because there was no systematic process: they chose curves at random, verified their security properties, and went with them. But it's possible that these specific curves have some internal structure that the NSA knows about and can exploit while the rest of the world doesn't. After decades of fruitless scrutiny that isn't likely, but it's possible, which is why Daniel J. Bernstein ensured that his Curve25519 had a clear, rational and publicly-verifiable construction process. That's part of why many systems prefer Ed25519 and X25519 over ECDSA and ECDH with the NIST curves (Curve25519 is also faster and has smaller public keys).

In the case of the PQC algorithms, all of them were created by academic cryptographers, and there are no suspicious modifications. So if the NSA has backdoored them, they've done it really, really subtly. IMO, it's not a risk worth worrying about unless you're specifically defending against the NSA, and probably not even then. For commercial work, like I do, it's sufficient to trust that the NSA is smart enough to have learned from the Dual_EC_DRBG debacle. In that case, not only did the backdoor eventually come out fully, along with evidence that the NSA had paid at least one company to use that algorithm, but academic cryptographers suspected its existence even before the standard was published, and spoke publicly about it. That was probably what motivated the NSA to pay for its use, since it was under a cloud of suspicion from the beginning. So it was a very foolish move by the NSA, driven by extreme and obviously unjustified overconfidence in their lead in cryptographic knowledge.

Comment Re:NIST algorithms (Score 1) 51

No idea. But what we have in "post quantum" crypto is all laughably weak against conventional attacks and laughably unverified.

This isn't true.

Yes, one of the finalists (SIKE) was broken, utterly. But there are no successful attacks against ML-DSA, ML-KEM or SLH-DSA, and they have good security proofs. Note that "successful attack" and "security proof" both mean something more specific to cryptographers than in everyday usage. A successful attack is one that reduces the security even a little from what it theoretically should be, even if the reduction still leaves the algorithm completely unbreakable in practice. A security proof is a proof that the construction is secure if the underlying primitives satisfy certain security assumptions. No cryptographic algorithm in existence has a security proof that a mathematician would consider a proof of unconditional security; we just don't know how to do that. In the case of ML-DSA and ML-KEM, the underlying assumptions are about the hardness of the underlying mathematical problems, Module-LWE and Module-SIS. In the case of SLH-DSA, the underlying assumptions are about the security of hash algorithms.

Module-LWE and Module-SIS are fairly new problems, and have only been studied for a little over a decade. The whole field of mathematics they're based on is less than 30 years old, so it's more likely that some mathematical breakthrough will destroy their security than it is that some breakthrough will wipe out ECC, which has been studied for about 50 years, and which builds on 150 years of algebraic geometry. Still, a mathematical breakthrough could destroy ECC or RSA, too.

In contrast, SLH-DSA is rock solid from a security perspective. We've been studying hash functions for a long time, and, really, our entire cryptographic security infrastructure is based on the assumption that our hash functions are good. If that turns out not to be the case, then quantum computers will be the least of our problems, because to a first approximation every cryptographic protocol in existence relies on secure hashing. It's far more likely that ECC or RSA will be broken than that SLH-DSA will be broken. Unfortunately, SLH-DSA is orders of magnitude slower than what we're used to.

It's worth noting that SIKE (the NIST PQC finalist that was broken) also had a security proof. The problem was that the proof showed that SIKE was secure if the supersingular isogeny problem was hard -- but what SIKE actually used wasn't that problem, exactly. SIKE required additional data to be published, and that additional information reduced the hardness of the problem. This is why the break was so total, and was found immediately when researchers began scrutinizing SIKE. All it took was the observation that SIKE relied on a less-hard problem, then a mathematical solution to the less-hard problem.

NIST chose these three algorithms for good reasons. ML-KEM and ML-DSA have larger keys than we're used to with RSA and especially ECC, but they're not that much larger, not so large that they simply can't be used in existing protocols. And they're fast, with performance on par with what we're used to. So they are feasible drop-in replacements in most cases.

SLH-DSA is not a drop-in replacement. The keys are very small (on par with ECC, a bit smaller, even), but the signatures it produces are enormous: the smallest is 8k, the biggest is 50k (depending on parameter choices). Also, signing is 50-2000 times slower than EC-DSA (depending on parameter choices) and verification is 10-30 times slower.
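
For concreteness, here are approximate key and signature sizes as I remember them from the FIPS 204/205 parameter sets and Ed25519 -- treat the exact byte counts as approximate, not authoritative:

```python
# Approximate public-key and signature sizes (bytes) for the signature
# schemes discussed above. Numbers are from memory; check FIPS 204/205
# for the exact parameter-set values.
SIG_SCHEMES = {
    #                 (public key, signature)
    "Ed25519":        (32,    64),
    "ML-DSA-65":      (1952,  3309),
    "SLH-DSA-128s":   (32,    7856),   # smallest SLH-DSA signatures
    "SLH-DSA-256f":   (64,    49856),  # largest, but fastest signing
}

for name, (pk, sig) in SIG_SCHEMES.items():
    print(f"{name:14} pk={pk:6} B  sig={sig:6} B")
```

The pattern is clear: ML-DSA keys and signatures are a couple of orders of magnitude bigger than Ed25519's but still protocol-friendly, while SLH-DSA signatures are enormous even at the smallest parameter set.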

So, what NIST did was choose a pair of quite-usable and probably-secure algorithms (ML-KEM and ML-DSA) that cover all cryptographic needs and are very close to being drop-in replacements, plus a less-usable but absolutely-secure algorithm as a backstop. I don't know that they ever explicitly stated the strategy they were suggesting, but it's obvious: Use ML-KEM and ML-DSA as your everyday algorithms for operational security and for firmware signing, but for firmware signing specifically, burn an SLH-DSA public key into your devices that you can use to verify new firmware and new public keys that use new algorithms in the event the ML- algorithms are ever broken.

Moving to these algorithms is an excessively bad idea.

I don't think so, and neither does Google -- which employs a lot of professional academic cryptographers (which I'm not).

Whether you should move to these algorithms depends on what you're doing, and what your service lifetimes are. If the data you're encrypting or signing only needs to be secure for a decade, don't bother. Existing ECC-based constructions will be fine.

If the data needs to be secure for more than that, if you're really concerned about harvest-now-decrypt-later attacks that could be performed 20-30 years from now, you should move to ML-KEM, and do it soon. There actually isn't that much data that really needs to be secure for that long... but if yours is in that category it's more likely that it will still be secure in 2050 if it's encrypted with ML-KEM/AES than if it's encrypted with ECDH/AES. Both options are a gamble, of course. ML-KEM is more likely to fall to a cryptographic attack than ECDH, but ECDH is at risk from quantum computing.

Firmware signing is a very interesting case. Firmware security is foundational to system security. Phones today are expected to have an ~8-year lifespan, so a phone launched in 2029 needs to remain secure until 2037... and that is getting into the range where there's a non-trivial probability that quantum computers will be large enough, reliable enough and cheap enough to be a threat. That probability is only in the 1-5% range (IMO), but in the cryptographic security world 1-5% is utterly unacceptable. I work on automotive firmware these days (I left Google six months ago) and we have ~5 year development timelines, followed by 20-year operational timelines, so a project we start today needs to be secure until 2051. The probability of large, reliable, cheap quantum computers by 2050 approaches 100%.

On the other hand, can your hardware really accept a ~20X longer firmware verification time from using SLH-DSA? That's not a question with a universal answer. Some contexts can, some can't. ML-DSA is more computationally-practical, but there's a risk that it will be broken. I think the clearly-appropriate strategy for now is: Ship your hardware with ML-DSA verified firmware, but also burn an SLH-DSA public key into the ROM (or OTP fuses) and arrange things so you can use that SLH-DSA public key to verify and install a new firmware verification scheme in the future, should ML-DSA be compromised. Or, alternatively, stick with EC-DSA or Ed25519 for now, but include that same SLH-DSA-based infrastructure for migrating to something else. If your hardware lifetime is long enough, you almost certainly will have to actually use that to migrate to some PQC algorithm. If feasible, it would be better to start with ML-DSA now.

Comment Re:All copper is "oxygen-free" (Score 1) 69

The only thing stopping you from calling the water pipes in your house "copper-phosphorus pipes" is laziness and poor attention to detail.

A truly non-lazy person, then, would have to conduct a detailed spectrographic assay of all of the pipes (or at least sufficient samples from each lot) to accurately determine the precise composition of each, because all of them contain impurities and aren't merely copper and phosphorus.

In general, getting a truly pure sample of almost any element is incredibly hard, and outside of laboratories (and even in laboratories, most of the time) it just doesn't matter. In the case of transporting antiprotons, standard "pure" copper is apparently inadequate, because it's not pure enough.

Comment Re:Water is what scares me (Score 1) 48

After decades of decreasing water supplies coupled with irresponsible explosive growth in the Great Basin, Front Range, and SW in particular.its just asking for trouble.

Even with the reduced precipitation there's still plenty of water for residential and commercial use. The problem, at least where I live (Utah), is agriculture. 80% of our water goes to agriculture. It would be one thing if we were growing regionally-appropriate crops for local consumption, but nearly all of that agriculture is to grow alfalfa (a water-hungry crop that isn't appropriate for the high desert climate), and nearly all of that alfalfa is shipped out of state, much of it out of the country, to feed cattle elsewhere. China is one of the biggest buyers. Essentially, our farmers are selling the contents of our aquifers to the world.

If we had plenty of water, letting our farmers buy it at a deep discount and sell it to willing buyers elsewhere would be fine, just another commercial use of a local resource, which is what trade is all about. But we definitely don't have plenty of water.

The solution is simple and straightforward (though legally complicated): no discounts. Set the same price for water across the board: residential, commercial and agricultural. There can and should be minor differences in delivery cost, and surcharges for purification, but the base cost of the water should be set through a single government-managed market, probably at the state level, probably divided up by drainages (drainages with more abundant water will have cheaper water; if this creates an arbitrage opportunity for someone to pipe water between drainages, great!).

Yes, this would probably put the alfalfa farmers out of business, but that's good because growing alfalfa in the desert is a bad idea. It might also raise the price of local produce, but that's as it should be, putting agricultural water use directly in competition with other water use. If prices go up, people will find ways to be more efficient. Farmers may switch to drip irrigation. If you build too many houses for the available water supply, well, those houses are going to have very expensive water and residents are going to want to find ways to conserve -- and maybe the high cost of water will disincentivize new move-ins.

The bottom line is that efficiently allocating scarce resources is what markets are good at. The problem with water isn't that there are too many people or not enough water, the problem is that we don't properly allocate the water or encourage conservation in the right places. Trying to fix this through regulation rather than market pricing will always be subject to regulatory capture and will never be as efficient or as effective as just enabling a competitive market and letting it work.
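
To make the mechanism concrete, here's a toy uniform-price auction in Python (all the numbers are invented for illustration; a real water market would be vastly more complicated):

```python
# Toy single-price water market: all users -- residential, commercial,
# agricultural -- bid into one pool per drainage, and the clearing price
# is set where demand meets supply.

def clearing_price(bids, supply):
    """bids: list of (price_per_acre_foot, quantity) from all user types.
    Returns (price, allocations) for a uniform-price auction."""
    allocations = []
    remaining = supply
    price = 0
    for p, qty in sorted(bids, reverse=True):  # highest bidders first
        if remaining <= 0:
            break
        take = min(qty, remaining)
        allocations.append((p, take))
        remaining -= take
        price = p  # last (lowest) accepted bid sets the uniform price
    return price, allocations

# Hypothetical example: residential/commercial users bid high,
# alfalfa growers bid low.
bids = [(500, 40), (300, 30), (50, 100)]  # ($/acre-ft, thousand acre-ft)
price, alloc = clearing_price(bids, supply=80)
print(price, alloc)  # most of the low-value agricultural demand is priced out
```

High-value uses are served first, and water-hungry low-value crops only get whatever is left over, at the market price, which is exactly the allocation you want when the resource is scarce.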

Comment Re:I use Claude Code from my phone all the time (Score 1) 42

The Pixel 10 Fold looks pretty cool, but it takes me back to, geez, late '80s / early '90s?, when Casio came out with a folding "B.O.S.S" data bank, a precursor of the PDA. I still have it floating around somewhere, and I'd have used it for much longer, except the ribbon cable between the screen half, and the keyboard half split, at some point, from the frequent flexing. How do you feel the Pixel's gonna hold up?

No idea. It's fine so far, but I've only had it for a few months. Honestly, I'm pretty brutal on devices. Odds are high that I'll break it in some other way before the flexing causes a problem.

Comment Re:Just me? (Score 1) 42

Just wait until you hear someone talking to Claude on their phone, then interject with, "Hey Claude, order 5 tons of surströmming at highest available price, same day delivery."

Either Claude fails and the person realizes it doesn't necessarily do as told, or it succeeds and the person realizes it's a really really bad idea.

In a case like that I think Claude is "smart" enough to push back. Claude often catches my mistakes. It's also pretty easy to add rules like "Request confirmation for any purchase requests that are unusually large or otherwise out of the ordinary for the user. Review past purchases to determine user purchasing patterns." to make this explicit.

Claude is far, far smarter than Alexa.

OTOH, it sometimes does do stupid things. On balance, I think I screw up more often than it does, but you can't just assume it will make the right decisions, so adding rules that require doublechecking with the user is a good idea.

Comment Re:I use Claude Code from my phone all the time (Score 1) 42

A tablet would be better... but if I'm going to lug a tablet around, my Macbook is better yet, since it's not that much bigger than a tablet and has a keyboard.

I did exactly this for a while as an on-call admin, and found the iPad to be a better fit. It was slimmer and easier to pack, if only by degrees, and if I couldn't use a keyboard because of the location - like literally standing in the foyer of a Broadway play house fixing a problem before heading in to see the show - I could at least peck at the on-screen keys with my thumbs while holding the iPad. Of course, ymmv, but for remote work, the iPad was the better option for me.

Without a foldable phone, I'd agree. With the foldable, I can unfold it and have a reasonably large on-screen keyboard, which I can type on with both thumbs. And of course my phone is always with me, while a tablet would be an extra device to carry -- and if I'm carrying an additional device, the laptop is more functional.

Comment Re:I use Claude Code from my phone all the time (Score 1) 42

I'm surprised Anthropic doesn't have an app that let's you hook up from your phone to your development environment and cause all that to happen without the intermediary. Coming up soon I guess.

Me too. I looked! Termius + tmux works reasonably well, but an app specifically for this purpose would be nicer.

Comment I use Claude Code from my phone all the time (Score 3, Informative) 42

I use the Termius app on my phone, SSH to my workstation, run tmux attach -d to attach to the tmux session in which I'm running Claude, then tell it to do stuff. It can only do stuff that can be done via the command prompt, HTTP requests or MCP integrations (Gmail, Drive, Confluence, Jira, etc.), but that covers a lot of ground. "Only what I can do from the command prompt" is not much of a limitation.

I've told Claude to write a design doc in Confluence (which I reviewed and shared with others to get feedback); then implement the feature, including tests; build and run the code and tests on two hardware platforms (the host and an attached embedded QNX board); commit the code to a feature branch and push the branch upstream (where I reviewed it and told Claude what to fix); create a pull request; respond to reviewer comments; and merge the PR, all from my phone while a thousand miles from the workstation. I've only done the complete cycle from the phone once, but I've done pieces of it many times.

To make this work well, it helps to have a phone with a big screen. I have a Pixel 10 Fold, unfolded for Termius use. A tablet would be better... but if I'm going to lug a tablet around, my Macbook is better yet, since it's not that much bigger than a tablet and has a keyboard. And, obviously, I do reach for the laptop rather than the phone if I have it. But I can get a lot done from the phone.

This new feature is basically "Let poor GUI users do what command-line jockeys have been doing for a while".
