Comment Re:BTW... (Score 1) 166

A belief in GW is entirely compatible with having a beach front house. The problem is that it is slow moving but inexorable.

Personally, I'm with the vast, vast majority of scientists who claim it's real and extremely dangerous. From what I've seen of the human race, we won't do anything until we get badly burned.

I guess everyone will know for sure one way or the other in a few decades. I just hope we can live with it.

Comment Re:BTW... (Score 1) 166

I thought we already covered this in the Linux RDRAND story. It's called unauditable because it whitens the raw entropy output using on-chip encryption, making even quite non-random source data appear random. It is not called unauditable because it's a black-box design; the paper states that the design is very well known.

The attack described in this paper is to modify both the entropy source output "c" and the post-processing encryption key "K", undetectably setting a fraction of them to constant bit values. This weakens the effective random number generation to some chosen n bits of entropy, instead of 128 bits. But because the AES encryption post-processing stage does a very good job of making its output appear random, it will still pass random number tests.

If we had access to the raw entropy source, we could see that it was not providing nearly enough entropy to the encryption post-processing stage.
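
To make the attack above concrete, here's a toy sketch of my own (not the paper's code): an entropy source forced down to 16 bits of real entropy, whitened by a hash (SHA-256 standing in for the AES post-processing stage, purely to keep the example dependency-free). The whitened output still passes a crude randomness test, yet the whole state is brute-forceable.

```python
# Toy illustration, NOT the paper's attack code: a "backdoored" source
# with only 16 bits of real entropy, hidden behind a whitening stage.
import hashlib
import secrets

REAL_ENTROPY_BITS = 16  # attacker has fixed all other source bits constant

def backdoored_rng() -> bytes:
    seed = secrets.randbelow(1 << REAL_ENTROPY_BITS)  # the only real entropy
    # Whitening stage (stand-in for the chip's AES post-processing):
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()

out = backdoored_rng()

# Crude monobit test: roughly half of the 256 output bits should be set.
# The whitened output passes, despite only 16 bits of input entropy.
ones = sum(bin(b).count("1") for b in out)
print(ones, "of", len(out) * 8, "bits set")

# But an attacker who knows the backdoor can enumerate all 2**16 states:
candidates = {hashlib.sha256(s.to_bytes(4, "big")).digest()
              for s in range(1 << REAL_ENTROPY_BITS)}
print(out in candidates)  # the "random" output is in a tiny searchable set
```

With access to the raw source you'd see the constant bits immediately; after whitening, statistical tests tell you nothing.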

Comment Re:Help them realize they're the asshole, with a b (Score 1) 356

Interesting. I once managed a very bright and young developer, whose coding was exceptional. He was very often (but not always) right. But he was also rude and completely lacking in any social graces. And it wasn't enough that he was right - he also made everyone else feel stupid and frustrated.

I had to find solo projects for him, as the rest of the team ended up flat out refusing to work with him - and I didn't blame them.

Comment Whitening on chip (Score 1) 566

I believe one of the issues with this instruction as a source of random numbers is that it whitens the output, with no access to the raw entropy data. Any physical process that acts as an entropy source will have some (possibly small) biases - its raw output won't appear completely random.

That raw output can be audited to check that it conforms to the physical process described.

If the instruction whitens the output through some algorithmic transform (e.g. hashing) to give apparently random numbers as output, there is no way to distinguish that from, say, encrypting a counter with a secret key - whose output will also appear to be random, but is trivially crackable if you know the secret key.
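
A quick sketch of that point (my own toy, not Intel's design): "whitening" a plain counter with a keyed hash (HMAC standing in for encryption) produces output that looks random, yet anyone holding the key can reproduce the entire stream.

```python
# Toy counter-based "RNG" with zero entropy: each output is just an
# obfuscated counter value, fully predictable to whoever holds the key.
import hashlib
import hmac

SECRET_KEY = b"known-only-to-the-manufacturer"  # hypothetical backdoor key

def fake_rng(counter: int) -> bytes:
    # Deterministic: no entropy at all, just a whitened counter.
    return hmac.new(SECRET_KEY, counter.to_bytes(8, "big"),
                    hashlib.sha256).digest()

stream = [fake_rng(i) for i in range(4)]
# The outputs look like unrelated random 32-byte blocks...
print([blk.hex()[:16] for blk in stream])
# ...but with the key, the whole "random" stream is reproducible:
print(fake_rng(2) == stream[2])  # True
```

No black-box statistical test can tell this apart from a real entropy source, which is exactly the auditing problem.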

So it becomes an exercise in trust in Intel, rather than something which can be independently verified. There was a good comment on the cryptography mailing list about this - that it would be better to have raw hardware entropy sources, leaving the final steps of random number generation to software.

Comment Re:RSA = out of date (Score 1) 282

Easy to confuse all this crypto stuff! I work with it regularly and still have to look quite basic stuff up if I haven't touched it for a while! Yes, I am that Matt Palmer, but no longer at the National Archives...I'm now doing contract security architecture for a consultancy.

The issues with IBE are kind of like trusting a CA, except there are no certificates and therefore no CA. There is a very powerful trusted party who can decrypt anyone's information. The way it works is, there are some all-powerful master secrets, from which some public parameters are generated.

Anyone with the public parameters can generate a new public key for anyone (e.g. using your email address as the public key) and encrypt a message for you. The issue is that to decrypt the message, you have to ask the trusted party for a valid private key for that public key, which it can automatically generate for you given knowledge of the public key, using the master secrets.

One security issue of this system is how does the trusted party authenticate that you really are who you claim to be, and how does it distribute that private key to you. Another, possibly more serious objection, is that the trusted party can fundamentally generate private keys for anyone using their parameters, so they can decrypt everyone's data. You have to *really* trust that trusted party.
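
Here's a structural sketch of that trust model - emphatically NOT real IBE math (real schemes like Boneh-Franklin use pairings on elliptic curves). HMAC of the identity stands in for private-key extraction; the only point is that whoever holds the master secret can derive *anyone's* private key on demand.

```python
# Structural toy of the IBE trust model, not a real cryptosystem.
import hashlib
import hmac
import secrets

class TrustedParty:
    """Holds the all-powerful master secret."""

    def __init__(self) -> None:
        self.master_secret = secrets.token_bytes(32)

    def extract_private_key(self, identity: str) -> bytes:
        # Derived on demand from the identity string alone - the user
        # never has to enrol or generate anything themselves.
        return hmac.new(self.master_secret, identity.encode(),
                        hashlib.sha256).digest()

tp = TrustedParty()

# Alice authenticates herself and asks for her key:
alice_key = tp.extract_private_key("alice@example.com")

# But nothing stops the trusted party deriving Bob's key too,
# and quietly decrypting his traffic:
bob_key = tp.extract_private_key("bob@example.com")
print(alice_key != bob_key)
```

That on-demand derivation is exactly why the Voltage payment use case works - the payment provider is *supposed* to decrypt everything - and exactly why IBE is alarming everywhere else.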

The only place I've seen IBE commercially used is by Voltage Security. One use case is to allow payment terminals to automatically generate a new public key for each payment. Since the payment provider is supposed to be able to decrypt all of these communications (they are the trusted party), then this works quite nicely.

Comment Re:RSA = out of date (Score 1) 282

Sorry, but this is just wrong.

The whole point of public key encryption is that you don't need to do a key exchange. You have the public key, which is, well, public. The problem then becomes trusting that you have the correct public key. Signatures provided by some other trusted party are used for this, usually in certificates. There still needs to be some pre-established trusted root or web of trust to enable this. Identity-based public key encryption even does away with the need for this, allowing the generation of arbitrary public keys for someone (although there are other security issues with this sort of encryption scheme which I won't go into here).

Diffie-Hellman key exchange is unauthenticated and completely vulnerable to a man-in-the-middle attack. It is used to create a shared secret between two parties, which becomes a shared key, usually for symmetric encryption. It's very old now but still amazingly cool - I love the somewhat counter-intuitive fact that two parties can create a shared secret between themselves using only public communications. As long as you accept that neither party has any idea at all who they are creating the shared secret with.

Comment Re:Uh , since around 1998? (Score 1) 371

I think you may be suffering from the same confirmation bias problem. In some cases the Java optimizer can determine that out-of-bounds access is not possible and it optimizes the additional bounds checks out. For example, a loop that goes from zero to array length - 1 can have its array bounds checks removed. So not every array access in Java is necessarily bounds checked.

But most of them probably are, and in practice C mostly runs faster than Java. But not always...

Comment Re:Security through obscurity? (Score 1) 168

I think we're actually in violent agreement. I completely agree that obscurity doesn't give you any real security, and yes people need to understand this.

But in the specific situation where something in widespread use turns out to have a security flaw, disclosing the vulnerability before there has been a reasonable amount of time for a fix to be prepared doesn't make anyone safer.

If you agree with that, then you are also acknowledging that the obscurity may be providing very temporary security for some people. If you don't agree with that, then you seem to be saying that revealing vulnerabilities immediately, before a fix can be prepared, does not weaken anyone's security...?

Comment Re:Security through obscurity? (Score 1) 168

Sorry, replying to my own post, but I forgot to make the point I wanted to!

Obscurity definitely doesn't give you real security. But if all you have is obscurity, then it is better to have that than nothing.

It might confer no actual security, but taking the obscurity away immediately definitely makes no-one safer. The possibility exists that some people will be protected by the obscurity, at least in the short term. It just can't be relied upon.

Comment Re:Security through obscurity? (Score 1) 168

Well, I can't say that I speak for the entire crypto and security community, but I do work in the field and I have thought about this a bit.

"No security by obscurity" isn't meant to inform how we approach the entire process of vulnerability disclosure. It just makes the point that relying on obscurity for security will give you no real security. This is what we need people owning, building and maintaining things with security requirements to understand.

When thousands or millions of fielded products are already out there with a vulnerability, then giving the manufacturers time to fix the issue is just responsible disclosure.

Disclosing after some reasonable period of time is also responsible, as an incentive to actually fix it. We take obscurity away after some time, so they can't argue that the obscurity is all their customers need. We don't start with revealing everything when there isn't yet a fix. That makes no one more secure.
