Comment Re:Reference? (Score 5, Informative) 366

Sorry, I could have provided a link for that too. It was in the major Snowden story of last week that revealed the NSA was undermining public standards. The New York Times said this:

Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.

Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology and later by the International Organization for Standardization, which has 163 countries as members.

Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”

“Eventually, N.S.A. became the sole editor,” the memo says.

Although the NYT didn't explicitly name the bad standard, there's only one that fits the criteria given: Dual_EC_DRBG.

Comment Re:you have the source (Score 1) 566

How would they detect any shared properties? The point is that they are providing a random number generator (not a stream of random numbers) which is supposedly "secure". Secure means that no one, including the person providing the RNG, can predict the stream of numbers coming from it. If the RNG coming from the US source is not honest, that presumably means the NSA can predict the stream of numbers coming out of it. But the NSA (assuming that it distrusts the KGB and the MSS) wouldn't want the KGB and the MSS to be able to carry out the same feat. The same is true for each of the other devices. So there's no way that any one of the actors should be able to detect any shared properties --- that's the point of the proposal.

Now, if the NSA is able to gimmick the RNG coming from China, then that's a different story. And to the extent that much electronics is designed in the US and then manufactured in China, that's certainly a concern. In order for a scheme like this to work, the parts would have to be designed and built in such a way that an outsider would believe the NSA couldn't possibly have gimmicked an RNG, even if it could have been gimmicked by another spy agency. Then combine this with a device that you're sure couldn't have been gimmicked by the MSS, but may have been subject to pressure from the NSA, and so on.

Submission + - Are the NIST standard elliptic curves back-doored? 2

IamTheRealMike writes: In the wake of Bruce Schneier's statements that he no longer trusts the constants selected for elliptic curve cryptography, people have started trying to reproduce the process that led to those constants being selected ... and found it cannot be done. As background, the most basic standard elliptic curves used for digital signatures and other cryptography are called the SEC random curves (SEC is "Standards for Efficient Cryptography"), a good example being secp256r1. The random numbers in these curve parameters were supposed to be selected via a "verifiably random" process (the output of SHA-1 on some seed), which is a reasonable way to obtain a nothing-up-my-sleeve number if the input to the hash function is trustworthy, like a small counter or the digits of pi. Unfortunately, it turns out the actual inputs used were opaque 256-bit numbers, chosen ad hoc with no justification provided. Worse, the curve parameters for SEC were generated by the head of elliptic curve research at the NSA — opening the possibility that they were found via a brute-force search for a publicly unknown class of weak curves. Although no attack against the selected values is currently known, it's common practice never to use unexplainable magic numbers in cryptography standards, especially when those numbers are being chosen by intelligence agencies. Now that the world has received strong confirmation that the much more obscure and less widely used standard Dual_EC_DRBG was in fact an NSA undercover operation, NIST has re-opened the confirmed-bad standards for public comment. Unless NIST/the NSA can explain why the random curve seed values are trustworthy, it might be time to re-evaluate all NIST-based elliptic curve crypto in general.
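
To make "verifiably random" concrete, here's a minimal sketch in Python (illustrative only --- this is not the exact ANSI X9.62 derivation routine, and the seeds below are placeholders rather than the real SEC values):

```python
# A minimal sketch (illustrative only, not the exact ANSI X9.62 routine)
# of "verifiably random" parameter derivation. Seeds are placeholders.
import hashlib

P256_PRIME = 2**256 - 2**224 + 2**192 + 2**96 - 1  # the public P-256 field prime

def candidate_coefficient(seed: bytes) -> int:
    """Interpret SHA-1(seed) as a candidate curve coefficient mod p."""
    return int.from_bytes(hashlib.sha1(seed).digest(), "big") % P256_PRIME

# Trustworthy: a transparent seed leaves no room for cherry-picking,
# since anyone can re-run the hash and get the same coefficient.
print(hex(candidate_coefficient(b"counter-0001")))

# Suspicious: an opaque seed with no stated origin could be the survivor
# of a brute-force search over many candidates for a weak curve.
print(hex(candidate_coefficient(bytes.fromhex("deadbeef" * 10))))
```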

Comment Re:most like 100,000 years (Score 3, Informative) 63

Link: carbon dating can't be trusted beyond 150 million years.

Conclusion: The date of 100,000 years given here is wrong.

If you'd taken time to scan the paper, you'd easily find the section on dating (2.2): "A chronological model was developed using a combination of radiocarbon, optically stimulated luminescence (OSL), and relative palaeomagnetic intensity dating. [...] OSL measurements suggested that material incorporated into the basal sediments might date to 93,000 ± 9000 years ago."

I.e., the 100,000-year date is independent of carbon dating. (Actually, I'm surprised they even attempted carbon dating in this environment.)

Comment Re:you have the source (Score 5, Insightful) 566

The random driver has changed significantly since July 2012, which is when we were given a heads-up about the paper described at http://factorable.net/ and also when I took back maintainership of the /dev/random driver. We gather entropy at every single interrupt and mix it into the entropy pool. This is done unconditionally; you can't disable it, unlike what happened with the SA_SAMPLE_RANDOM flag.

The thing about entropy pools is that when you combine entropy sources, the result gets better, not worse. So the best thing would be if we had hardware random number generators sourced from China, Russia, and the USA. Since presumably the MSS, the KGB, and the NSA mutually distrust each other, if we combine the entropy from those three sources, the result will be stronger than any one alone.
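
Here's a minimal sketch of that mixing argument (the source names are hypothetical, and os.urandom merely stands in for three independent hardware generators):

```python
# A minimal sketch of the mixing argument: hashing independent sources
# together is at least as strong as the strongest single source, so one
# honest HWRNG is enough. os.urandom stands in for the hypothetical
# US/Russian/Chinese hardware generators.
import hashlib
import os

def mix(*sources: bytes) -> bytes:
    h = hashlib.sha256()
    for s in sources:
        h.update(s)    # a dishonest source can't cancel out an honest one
    return h.digest()

rng_us, rng_ru, rng_cn = os.urandom(32), os.urandom(32), os.urandom(32)
seed = mix(rng_us, rng_ru, rng_cn)
```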

This is why I don't recommend using RDRAND directly. Sure, an honest (emphasis on honest) hardware random number generator will always be able to source higher quality entropy than anything we can do by sampling OS events such as interrupts. But the problem is that it's hard to guarantee that a HWRNG is really honest, especially given the Snowden revelations, which seem to indicate the NSA has successfully leaned on at least one chip manufacturer. If you must use RDRAND, I'd recommend generating a random key via some other means, and then encrypting the output of RDRAND with that random key before using the resulting randomness for session keys, etc. Or better yet, do what we do in /dev/random, which is to mix RDRAND with other sources of entropy.
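
A minimal sketch of that suggestion, with os.urandom standing in for RDRAND (reading the instruction itself needs inline assembly or a helper library) and HMAC-SHA256 as the keystream:

```python
# Sketch of the RDRAND-whitening suggestion above: derive a key from an
# independent source, then mask the HWRNG output with an HMAC-based
# stream before using it. os.urandom stands in for actual RDRAND reads.
import hashlib
import hmac
import os

key = hashlib.sha256(os.urandom(32)).digest()  # key from "some other means"

def whiten(hwrng_block: bytes, counter: int) -> bytes:
    """XOR a 32-byte HWRNG block with an HMAC-SHA256 keystream block."""
    mask = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(hwrng_block, mask))

session_key = whiten(os.urandom(32), 0)  # backdoored or not, now masked
```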

Comment Re:you have the source (Score 2) 566

What I said is that /dev/urandom is much more important to get right than /dev/random. Realistically, far more programs use /dev/urandom than use /dev/random. GPG uses /dev/random for long-term key generation, but in terms of generating certs, creating session keys, etc., /dev/urandom is far more important.

If you trust Intel not to have gimmicked RDRAND, by all means, feel free to use it. Please do it in open source, though, so I can fix said program not to.....

Comment Re:Marital/Money problems??? (Score 4, Informative) 566

I think it's more likely that the RDRAND thing has been an ongoing argument/flamewar for a long time. See this thread for an example.

BTW, Linus is right. According to what we know about randomness, even if RDRAND is hacked, mixing it with other entropy can't hurt: at worst it's a no-op and achieves nothing, because XORing attacker-known output into a pool that already holds entropy independent of it leaves the pool just as unpredictable. And even if RDRAND is backdoored, the NSA is not the world's only adversary. Given that mixing it with other randomness doesn't hurt, it's still better to use it against all the other adversaries out there than not.

Linus' point is, exclusive reliance on RDRAND would be bad, but the kernel doesn't/shouldn't do that.

Comment Eh.... (Score 5, Informative) 59

Two datacenters owned by the same company using MPC is a really dumb use case. That won't help at all. The point of Google encrypting cross-dc communications is a forcing manoeuvre - it forces intelligence agencies to go via Google Legal to get information where the request can be analyzed and pushed back on. Even in countries where the legal system is flimsy and corrupt, that's an issue that can be improved significantly just with a single act of Congress or Parliament, whereas undoing their wiretapping infrastructure will prove somewhat harder because there's no adversarial lawyer standing in the way.

A better example might be two datacenters owned by different companies, where they don't mutually trust each other. Or, to give an actual use case, the OTR chat encryption protocol uses MPC to authenticate connections. They call it the socialist millionaires protocol. The two parties agree on a secret word (typically by one user posing a question to the other), and then a variant of MPC is used to verify that both parties selected the same word. The word itself never transits the wire and it's only used for authentication, so it's relatively strong even if the secret word is short or predictable.
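
For the curious, here's a stripped-down sketch of the comparison at the heart of SMP, in the honest-but-curious setting --- the real protocol wraps every step in zero-knowledge proofs, and the toy group below stands in for the 1536-bit prime group OTR actually uses:

```python
# Stripped-down socialist millionaires comparison (honest-but-curious;
# real SMP adds zero-knowledge proofs at every step). Both sides learn
# only whether x == y, never the other side's value.
import secrets

p = 2**127 - 1                      # a Mersenne prime, toy-sized
g = 3

def rand() -> int:
    return secrets.randbelow(p - 2) + 1

x = 1234                            # hash of Alice's answer to the question
y = 1234                            # hash of Bob's answer

# Step 1: build shared generators g2 = g^(a2*b2), g3 = g^(a3*b3) via DH.
a2, a3, b2, b3 = rand(), rand(), rand(), rand()
g2 = pow(pow(g, a2, p), b2, p)
g3 = pow(pow(g, a3, p), b3, p)

# Step 2: each side publishes its secret blinded by fresh randomness.
ra, rb = rand(), rand()
Pa, Qa = pow(g3, ra, p), pow(g, ra, p) * pow(g2, x, p) % p
Pb, Qb = pow(g3, rb, p), pow(g, rb, p) * pow(g2, y, p) % p

# Step 3: jointly raise Qa/Qb to a3*b3. The blinding terms line up with
# Pa/Pb, and the leftover g2^(x-y) factor vanishes only when x == y, so
# the check below succeeds exactly when the words match.
QaQb = Qa * pow(Qb, -1, p) % p
Rab = pow(pow(QaQb, a3, p), b3, p)
print("secrets match:", Rab == Pa * pow(Pb, -1, p) % p)
```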

Now, for some background. The paper can be found here if you want to skip the million+1 links and registration crap.

The basic idea behind MPC is that you write your shared computation in the form of a boolean circuit, made up of logic gates as if you were designing an electronic circuit. The inputs to the program are represented as if they were electronic signals (i.e., as one and zero bits on wires). Once that's done, there are two protocols you can follow. The original one is due to Andrew Yao. Each wire in the circuit is assigned a pair of keys. I'll gloss over the details, but basically, given the circuit (program) as a template, party A creates lots of random keys, then sends the entire "garbled circuit" to party B, who will run it. Party A also selects the keys for his own input wires and sends them to party B, who doesn't know whether they represent 0 or 1; only party A knows that.

Now party B wants to run the program with his input, but he doesn't want party A to know what his input is. So they use a separate protocol, called an oblivious transfer protocol, to get party A to cough up the right keys for B's input wires without A finding out which ones they were. Finally, party B can run the program by progressively decrypting the wires until he arrives at the output.
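
To make the garbling concrete, here's a toy single AND gate (conceptual only: real schemes use proper encryption plus tricks like point-and-permute, and B's input-wire key would arrive via the oblivious transfer above rather than a dictionary lookup):

```python
# A toy garbled AND gate. Conceptual sketch, not a secure construction.
import hashlib
import os
import random

def mask(k1: bytes, k2: bytes, msg32: bytes) -> bytes:
    """XOR a 32-byte message with SHA-256(k1 || k2); self-inverse."""
    pad = hashlib.sha256(k1 + k2).digest()
    return bytes(a ^ b for a, b in zip(msg32, pad))

# Party A picks a random key for each (wire, bit) combination.
keys = {(w, b): os.urandom(24) for w in ("a", "b", "out") for b in (0, 1)}

# The garbled table: for each input combination, the matching output
# key (zero-padded so the evaluator can recognise a correct decryption)
# encrypted under the two input keys. Shuffled so position leaks nothing.
table = [mask(keys["a", x], keys["b", y], keys["out", x & y] + b"\0" * 8)
         for x in (0, 1) for y in (0, 1)]
random.shuffle(table)

# Party B evaluates, holding one opaque key per input wire. Here A's
# input bit is 1, and B's key for his own bit (also 1) would really
# arrive via oblivious transfer rather than a direct lookup.
ka, kb = keys["a", 1], keys["b", 1]
for row in table:
    plain = mask(ka, kb, row)             # trial-decrypt each row
    if plain.endswith(b"\0" * 8):         # only the right row pads out
        out_key = plain[:24]
        # A (not B) can map the key back to a bit; we peek to check.
        print("AND(1, 1) =", int(out_key == keys["out", 1]))  # -> 1
```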

What I described above is Yao's protocol. There is also a slightly different protocol called GMW (after Goldreich, Micali and Wigderson). In GMW you don't send the entire program all at once. Instead, as the parties run through the program, each time they encounter an AND gate they do an oblivious transfer. XOR gates are "free" and don't require any interaction. I forget what happens for other kinds of gates. Basically, GMW involves both parties interacting throughout the computation; however, it can result in much less network traffic if your OT protocol is cheap, because there is no monolithic garbled program that must be transferred in full before evaluation can start.
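
A tiny sketch of why XOR is free in this model while AND forces interaction:

```python
# Each wire value v is held as XOR shares: v = sA ^ sB, one share per
# party. XOR gates are local; AND gates are where OT comes in.
import random

def share(bit: int) -> tuple[int, int]:
    a = random.getrandbits(1)
    return a, a ^ bit              # neither share alone reveals the bit

xa, xb = share(1)                  # wire x = 1
ya, yb = share(0)                  # wire y = 0

# XOR gate: each party XORs its own shares locally, no messages needed.
za, zb = xa ^ ya, xb ^ yb
assert za ^ zb == (1 ^ 0)

# AND gate: (xa ^ xb) & (ya ^ yb) expands into cross terms xa & yb and
# xb & ya that mix the two parties' private shares; evaluating those
# without revealing the shares is exactly what the per-gate oblivious
# transfer is for.
```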

From what I can tell, most of the best results in MPC these days are coming from GMW-style protocols coupled with new, highly efficient OT protocols. SPDZ appears to work on yet another design (secret sharing plus a preprocessing phase), but the basic reliance on circuit form remains.

Comment Re:Meaningless ... (Score 1) 248

Eh, you realise that Google has lots of engineers who don't live in the USA, have no ties to the USA, and in some cases even strongly dislike the US government, right? Some of them are even working in China or Russia.

The idea that every Google employee is a slave to the NSA is absurd. The vast majority wouldn't even qualify for basic security clearance.
